A good colleague and friend of mine, Ted Vosk, introduced me to the world of metrology in the courtroom in 2008. He has introduced a lot of us to it. Since then I have learned more about metrology, statistics, and validation, having taken a lot of courses and read a lot of material on the subjects. To me it is fascinating. But like most things in science, and like most things that are interesting, it can become too complex and intimidating to folks who have little or no exposure.
The following is a deliberately over-simplified explanation that I like to use to illustrate the very basic principles of metrology and to answer the question of why it matters. The real question is how we quickly and meaningfully explain all of this to octogenarian judges who have no interest in it, and to ordinary folks on juries who are worrying about picking Suzie Peesherpants up from daycare. I offer this explanation in that context alone, not in a rigorous scientific context.
I am a Bayesian and not a frequentist.
A frequentist believes (in a nutshell) that repeated testing will lead to an acceptable expression of the uncertainty of the measurement. This is problematic in that the approach ignores certain types of error, particularly systematic error.
A Bayesian believes in the use of mathematical tools like the propagation of errors and the Monte Carlo method to arrive at a value expressed with a range, stated to a confidence/predictive interval (e.g., Tube 1 has a BAC of 0.091 g/mL +/- 0.005 g/mL to 2 standard deviations, or 95% confidence), based upon identifying and evaluating the different components of error that comprise the expanded uncertainty budget.
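For the curious, the Monte Carlo idea can be sketched in a few lines of code. This is a toy illustration with made-up error components, not anything from a real laboratory: we simulate many plausible values consistent with the instrument's reading and its error components, and read the 95% interval off the simulated distribution.

```python
import random

random.seed(42)

# Hypothetical numbers for a single BAC measurement (g/mL).
true_reading = 0.091      # the instrument's reported value
u_calibration = 0.0020    # systematic component: calibrator uncertainty (std. dev.)
u_repeatability = 0.0015  # random component: run-to-run scatter (std. dev.)

# Monte Carlo: simulate many plausible values consistent with
# the reading and the assumed error components.
simulated = []
for _ in range(100_000):
    value = (true_reading
             + random.gauss(0, u_calibration)
             + random.gauss(0, u_repeatability))
    simulated.append(value)

# The middle 95% of the simulated values gives the interval.
simulated.sort()
lower = simulated[int(0.025 * len(simulated))]
upper = simulated[int(0.975 * len(simulated))]
print(f"BAC = {true_reading:.3f} g/mL, 95% interval: [{lower:.4f}, {upper:.4f}]")
```

The point is not the particular numbers (which are invented) but the shape of the answer: a value with a range, not a bare number.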
This is a debate in metrology with the better part of the evidence in favor of the Bayesians, in my opinion.
What everyone in metrology can agree on is that the simple expressions we typically see in a courtroom, where a measurement is expressed as an absolute, are wholly and totally wrong. Equally wrong is that when the defense bar pushes the laboratory doing the testing to express some sort of uncertainty in its measurement, that laboratory simply expresses a "stated" (as opposed to proven) error. Both are unscientific.
What the hell did I just write?
Let me make it too simple.
So simple that metrologists will likely respond that it is ridiculous. However, I suggest that we have to understand it in a simplistic way to get the proverbial foothold on the large mountain that is Bayesian-based metrology before we can all stand on the top like my great friend and colleague Ted.
First, I suggest that we understand and agree on the definitions of accuracy (which relates to bias, or systematic error) and precision (which relates to random error).
The precision of a measurement is how closely a number of measurements of the same quantity agree with each other. Precision is largely limited by random errors, and it can usually be determined by repeating the measurements.
Think about something that is more or less inherently unpredictable.
An example would be: fluctuations in the air pressure when we go to weigh something on a sensitive seven-place balance (scale).
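To see how precision falls out of repeated measurements, here is a toy sketch. The weighings are made up; the idea is simply that the scatter among repeats, summarized as a standard deviation, is the precision.

```python
import statistics

# Ten hypothetical repeated weighings of the same object (grams).
# The scatter comes from random effects like air-pressure fluctuations.
weighings = [10.0012, 10.0008, 10.0011, 10.0015, 10.0007,
             10.0010, 10.0013, 10.0009, 10.0011, 10.0014]

mean = statistics.mean(weighings)
precision = statistics.stdev(weighings)  # sample standard deviation

print(f"mean = {mean:.4f} g, precision (std. dev.) = {precision:.5f} g")
```

Notice that repeating the measurement tells you about the scatter, but it can never reveal a bias that pushes every weighing in the same direction.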
The accuracy of a measurement is how close the measurement is to the true value of the quantity being measured. Accuracy is often reduced by systematic errors, which under certain circumstances are difficult to detect even for experienced researchers.
Think about something that is identifiable and correctable.
An example would be: the calibrators are all running high. We can identify it and correct for it (shift the calibration curve to correct for the bias).
It is this combination of systematic (correctable) error and random error (knowable, but inherently difficult to predict and correct for) that gets us to a metrologically responsible expression such as: Tube 1 has a BAC of 0.091 g/mL +/- 0.005 g/mL to 2 standard deviations, or 95% confidence.
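The standard way to combine those two components is the root-sum-of-squares of the GUM (the Guide to the Expression of Uncertainty in Measurement), scaled by a coverage factor of k = 2 for roughly 95% confidence. A minimal sketch, with assumed component values chosen purely for illustration:

```python
import math

# Hypothetical uncertainty components for Tube 1 (g/mL, as std. dev.).
u_systematic = 0.0020  # e.g., residual calibrator bias after correction
u_random = 0.0015      # e.g., repeatability of the instrument

# Combined standard uncertainty: root-sum-of-squares of the components.
u_combined = math.sqrt(u_systematic**2 + u_random**2)

# Expanded uncertainty: coverage factor k = 2 gives roughly 95% confidence.
k = 2
U = k * u_combined

bac = 0.091
print(f"BAC = {bac:.3f} +/- {U:.4f} g/mL (k = 2, ~95% confidence)")
```

With these assumed component values the expanded uncertainty works out to 0.005 g/mL, the same form as the example expression above: a value, a range, and a confidence level.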
Why is all of this scientifically important?
A measurement is only relevant and useful if we can assess the risk of being wrong. Each measurement is a unique event that will never be exactly repeated. We always run the risk of being wrong in expressing it.
Suppose that you and I are friends, but you have never met me in person and instead have just "met" over this blog. You are at a concert where my favorite band is playing, but I could not go because I was stuck here in Harrisburg. You, being a good friend, want to buy me a t-shirt at the concert. So you want to know how tall I am so you can buy the right size. You ask me.
If I were to tell you that I took a measurement of my height with a measuring stick and I was 7'2", an immediate mental image comes to mind: he's very tall; I need a very big shirt. Those of you who have met me and have seen me know this to be totally false (I am about 5'5"). But if you had never seen me before, so that my true height was unknown to you, you would make certain decisions based upon the information I gave you (I am 7'2"). You would buy the wrong shirt.
However, if I were to tell you all of the proper information about the measurement of my height that resulted in the conclusion I stated to you (that I am 7'2"), it would be fully expressed as follows: I measured myself with a result of 7'2" using a measuring stick that was +/- 2 feet with a 99% confidence level.
If I told you this full expression (I measured myself with a result of 7'2" using a measuring stick that was +/- 2 feet with a 99% confidence level), you would have no idea of my height. The measurement that was simply expressed as 7'2" is meaningless for the decision you have to make. You wouldn't know what t-shirt to buy and would get me some other trinket.
As we can see, the measurement is totally dependent upon the measuring instrument (the measuring stick), whose error is largely systematic, and to a degree upon random error (e.g., how much I slouch versus standing straight up).
You need all of the information to buy the right T-shirt.