Prosecutor: “The instrument was calibrated.”
This phrase is uttered in courtrooms all across the United States many times every day.
But is it true?
Calibration is the imperfect act of teaching the machine how a response from an unknown typically relates to a concentration. It should be done
- using Certified Reference Materials (CRMs or SRMs) as calibrators that adhere to ISO Guides 30-35,
- in a 5×5 method (5 tests conducted at 5 concentrations, so 25 data points),
- distributed through the most likely range of response in the testing of unknowns,
- with the extreme datapoints set at ±20% of that expected response,
- where zero is not a datapoint,
- without forcing the resultant calibration curve through the origin.
All datapoints are evaluated with no points discarded. The result is then statistically evaluated using linear regression, lack-of-fit testing, Pearson’s coefficient, or whatever method you like that evaluates the data as a whole. We should see r² values of 0.999 or better. If we don’t, then we have to adjust or re-calibrate. It is this robust testing AND adjustment that differentiates it from the next concept: a calibration check.
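The evaluation described above can be sketched in a few lines of Python. Everything here is hypothetical: the five concentration levels, the replicate responses, and the 0.999 acceptance threshold are invented for illustration, not taken from any laboratory's protocol.

```python
# Hypothetical sketch of evaluating a 5x5 calibration (5 replicates at each
# of 5 concentrations) by ordinary least squares, NOT forced through zero.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept (origin not forced)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Coefficient of determination for the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# 5 concentration levels spanning the expected range of unknowns;
# zero is not a datapoint. Replicate readings are invented for illustration.
replicates = {
    0.05: [0.049, 0.051, 0.050, 0.048, 0.052],
    0.10: [0.101, 0.099, 0.100, 0.102, 0.098],
    0.15: [0.149, 0.151, 0.150, 0.148, 0.152],
    0.20: [0.199, 0.201, 0.200, 0.202, 0.198],
    0.25: [0.251, 0.249, 0.250, 0.248, 0.252],
}

# Flatten to 25 (concentration, response) pairs and evaluate them as a whole.
xs = [c for c, ys in replicates.items() for _ in ys]
ys = [y for ys in replicates.values() for y in ys]

slope, intercept = fit_line(xs, ys)
r2 = r_squared(xs, ys, slope, intercept)
print(f"slope={slope:.4f} intercept={intercept:+.5f} r^2={r2:.5f}")

if r2 < 0.999:
    print("Fails the r^2 criterion: adjust and re-calibrate.")
else:
    print("Meets the r^2 >= 0.999 criterion.")
```

Note that the fit is evaluated over all 25 points at once; no point is discarded, and a poor r² triggers adjustment rather than being ignored.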
A calibration check is a scheme of introducing a standard (from whatever source the operator obtains it), at whatever concentrations and as many times as the operator deems appropriate, and typically not evaluating the resultant data as a whole (by linear regression) but rather as discrete datapoints (e.g., the 0.10 standard may read no lower than 0.095 and no higher than 0.105). As long as there is “agreement” within this acceptable zone (which seems to be arbitrarily set), then, according to them, you do nothing. Nothing is done even if the results across the tested range show a positive or negative bias (systematically over-reporting or under-reporting).
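The weakness of discrete-window checking can be made concrete. In this hypothetical sketch (the targets, readings, and ±5% window are all invented), every reading sits about 4% high, yet every individual check "passes" because no one ever looks across the datapoints:

```python
# Hypothetical calibration-check scheme: each standard is judged only against
# its own +/-5% window, with no regression or bias evaluation across the set.

checks = {          # target concentration -> measured result (all read high)
    0.05: 0.052,
    0.10: 0.104,
    0.15: 0.156,
    0.20: 0.208,
}

tolerance = 0.05    # +/-5% acceptance window, arbitrarily set

# Each datapoint is evaluated in isolation, as the check scheme prescribes.
for target, measured in checks.items():
    low, high = target * (1 - tolerance), target * (1 + tolerance)
    verdict = "PASS" if low <= measured <= high else "FAIL"
    print(f"target {target:.2f}: measured {measured:.3f} -> {verdict}")

# Looking across the set reveals what the discrete windows hide:
bias = [(m - t) / t for t, m in checks.items()]
print(f"every result reads {min(bias):.0%} to {max(bias):.0%} high: systematic positive bias, yet nothing is adjusted")
```

A regression over these four points would show an elevated slope immediately; the discrete-window scheme never asks the question.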
It is this lack of inter-datapoint evaluation and, most importantly, the lack of adjustment for bias that differentiates a true calibration from a calibration check.