Quality Control in GC-FID: Teaching the machine right from wrong

It is a shocking but true statement: even the most sophisticated instruments, such as a Gas Chromatograph (GC) paired with a mass spectrometer (MS) or flame ionization detector (FID), are incapable of producing meaningful results "straight out of the box." Inherently, these machines "know" nothing. They have to be "taught" what they are looking for and how much of it is present. This method of teaching is generally referred to as Quality Control (QC). In GC-FID, QC is really a process of teaching the machine to identify the target analyte (which, for EtOH determination for purposes of Blood Alcohol Content, is ethanol, otherwise known as EtOH) and then, once it has (hopefully) isolated the target analyte, teaching it to measure how much is there. Teaching the machine what it is looking for is, in practice, largely a process of teaching it what is not the target analyte.

The easiest way of thinking about this process from a global perspective is to think about colors.

When we are born, we do not know our colors. They must be taught, and just as in all things in life, our teacher is vitally important. Every time we are asked "What color is this?", the "this" shown to us is an unknown we are asked to analyze. We know what red is because we were taught, through some process, that the hue our eyes detect (our eyes are a detector, just like a FID or MS) is "red." From that point on, the "red" we were taught becomes our standard against which all future unknowns are compared to see if they are "red." We further "know" that something is "red" because it is not "yellow," "orange," "blue," or "violet." The more colors we learn, the stronger our confidence in the conclusion that "red" is "red" and not something else.

But what if our teacher taught us "red" wrong when we were little? Suppose we were taught what we believe is "red" by being shown something that is objectively green (the complementary, or opposite, color of red). We would then learn, incorrectly, that what is objectively green is in fact "red." So when we are presented with an unknown and asked "What is this?", and that unknown hue is objectively green, we honestly but mistakenly interpret it as "red" and report it out with confidence as "red," because that is what we were taught. When we do so, we are not lying or intentionally deceiving the person asking; we were simply taught incorrectly. The moral of the color example: if we are taught the wrong way, the result we report will be wrong.

Well, a GC is no different. If you teach the machine wrong, it will report the wrong result. The machine no more inherently knows what anything is, qualitatively, than an infant who is being taught his or her colors.

The goal of all analytical chemistry is to produce a valid result. A valid result has two separate, distinct, and important aspects: a specific qualitative result, and a quantitative result as close to the true value as can be produced, meaning as free from calibration and bias-related error as possible.

We covered the concept of Quality Control (QC) before, but let's do so in more detail with this post. Strictly speaking, QC is a process used to try to ensure the validity of results. QC is the procedure or method used to demonstrate that the machine has achieved the two chromatographic commandments.

The 2 commandments of chromatography

This necessary act of QC is best performed by a series of tests that (1) prove, through verifiable data, the qualitative strength of the testing regime, and (2) demonstrate the quantitative sensitivity of the method. As there is no universal nomenclature for forensic chromatography, this qualitative testing regime should comprise, at a minimum, the following:

  1. A vial is prepared that purposefully contains several different compounds. This addition is called spiking (adding) the compounds into the vial. The vial is then sampled and injected into the instrument to see what it produces. If the method is valid, then the purposefully spiked compounds must be shown to have complete chromatographic separation in the resulting chromatogram. Otherwise, commandment number one (thou shall separate) has not been satisfied. This step is the proverbial Achilles heel of any analytical chemistry method; it is a test of specificity. In the case of GC-FID, some crime laboratories skip this step altogether, and others use only a single-column analysis and prove separation between only 4 or 5 analytes. Four or five analytes is a forensically indefensible number in the context of testing for EtOH in human blood. Laboratories have different names for this proof of resolution: volatile mix, separation matrix, separation control, resolution mix, resolution control, or resolution matrix.
  2. A series of blanks must be used. Blanks are an essential component of QC, and there are different types: true blanks, internal standard blanks, and target analyte blanks. Blanks serve two essential purposes. The first is to establish the retention time of a given analyte so it can serve as a standard, an essential part of this teaching process. The second is to demonstrate that there is no carry-over. Carry-over is a form of contamination in which the conditions of one injection carry over into another. It is a contributory error that leads to an invalid result. (For more on the carry-over effect, see generally: The Carryover Effect: Lack of Blanks between tests leads to false positive or inflated BAC results, Carryover effect part Deux: Autodilution may be part of the problem for false blood results in DUI and Carryover effect part 3: Flushing of inert gas is not enough to prove there is no carryover)
  • A true blank is a sample in a vial designed to contain nothing, meaning one in which there are no detectable analytes. If the method is valid and the preparation of the sample is perfect and according to design, then the resulting chromatogram should be just the baseline signal and nothing else. If any substance is detected at all, then the true blank is invalid.
  • An internal standard blank is a sample in a vial designed to contain only the internal standard (usually n-propanol), meaning one in which there is only one detectable analyte. The resulting chromatogram should feature one peak at a retention time characteristic of the internal standard under the method and its chromatographic conditions. If the method is valid and the preparation of the sample is perfect and according to design, then the resulting chromatogram should be just the baseline signal with one peak and nothing else. If any other substance is detected at all, then the internal standard blank is invalid.
  • A target analyte blank is one in which there is only one detectable analyte: the target itself. The resulting chromatogram should feature one peak at a retention time characteristic of the true target analyte (in the case of GC-FID for EtOH determination, this would be ethanol, otherwise known as EtOH) under the method and its chromatographic conditions. If the method is valid and the preparation of the sample is perfect and according to design, then the resulting chromatogram should be just the baseline signal with one peak and nothing else. If any other substance is detected at all, then the target analyte blank is invalid.
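The separation and retention-time checks described above can be sketched in code. This is a minimal illustration, not any laboratory's actual software: the peak names, retention times, peak widths, and the ±2% identification tolerance are all hypothetical assumptions, while Rs ≥ 1.5 is the conventional threshold for baseline separation.

```python
# Sketch of two qualitative QC checks: chromatographic resolution between
# adjacent peaks in a spiked separation mix, and identification of a peak
# by comparison of its retention time to a known standard.

def resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """Classical resolution formula: Rs = 2*(t2 - t1) / (w1 + w2),
    with retention times t and baseline peak widths w in the same units."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical peaks from a spiked volatile mix:
# (name, retention time in minutes, baseline width in minutes)
peaks = [
    ("methanol",     1.20, 0.10),
    ("acetaldehyde", 1.45, 0.10),
    ("ethanol",      1.80, 0.12),
    ("acetone",      2.30, 0.12),
    ("isopropanol",  2.75, 0.14),
    ("n-propanol",   3.40, 0.14),
]

# Baseline separation is conventionally taken as Rs >= 1.5.
for (n1, t1, w1), (n2, t2, w2) in zip(peaks, peaks[1:]):
    rs = resolution(t1, t2, w1, w2)
    status = "OK" if rs >= 1.5 else "NOT RESOLVED"
    print(f"{n1} / {n2}: Rs = {rs:.2f} ({status})")

def matches_standard(t_unknown: float, t_standard: float,
                     tol: float = 0.02) -> bool:
    """Identify a peak by retention time within a relative tolerance
    (here a hypothetical +/-2% window around the taught standard)."""
    return abs(t_unknown - t_standard) <= tol * t_standard
```

The point of the sketch is the logic, not the numbers: if any adjacent pair in the spiked mix fails the resolution test, commandment number one has not been satisfied, and identification by retention time alone is only as trustworthy as the standards the machine was taught with.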

The second part of QC is the proper construction of the calibration curve against which our knowns, and then our unknowns, are tested. This is the teaching component of the quantitative measurement. It is typically performed at the beginning of the run; if not, then there are legitimate issues about the validity of the quantitation. See generally, Is it legitimate for a crime laboratory to use ‘historical data’ to prove its test results are valid? For information generally about calibration and calibration curves, I offer to you the following posts: When is a straight line a curve: Calibration curve and Why do instruments need to be calibrated? The only metrologically acceptable method for establishing the calibration of any device is the 5×5 and 120% method: test at least five different concentrations, five times each at those specific concentration points, along a wide linear dynamic range that sets the limit of quantitation as low as legitimate validation studies allow, with the last calibrator at 120% of the highest value expected in the unknown samples to be tested.
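The 5×5 calibration idea described above can be sketched as follows. This is a minimal illustration with simulated numbers, not real laboratory data: the calibrator concentrations, the replicate response ratios, and the choice of EtOH-to-internal-standard peak area ratio as the response are all hypothetical assumptions.

```python
# Sketch of a 5x5 calibration: five calibrator concentrations, five
# replicate injections each, an ordinary least-squares line of response
# ratio vs. concentration, and quantitation of an unknown off that line.
from statistics import mean

# Five calibrators (g/100 mL); the top calibrator is set at ~120% of the
# highest expected unknown (e.g. expecting up to ~0.33, so top is 0.40).
concs = [0.02, 0.08, 0.16, 0.24, 0.40]

# Five replicate response ratios per calibrator (simulated, roughly linear).
replicates = {
    0.02: [0.041, 0.039, 0.040, 0.042, 0.038],
    0.08: [0.159, 0.161, 0.158, 0.162, 0.160],
    0.16: [0.320, 0.318, 0.321, 0.322, 0.319],
    0.24: [0.480, 0.479, 0.482, 0.481, 0.478],
    0.40: [0.801, 0.799, 0.800, 0.802, 0.798],
}

xs = [c for c in concs for _ in replicates[c]]   # 25 concentration points
ys = [r for c in concs for r in replicates[c]]   # 25 response ratios

# Ordinary least squares: slope and intercept of response vs. concentration.
xbar, ybar = mean(xs), mean(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

def quantify(response_ratio: float) -> float:
    """Invert the calibration line to estimate a concentration."""
    return (response_ratio - intercept) / slope

print(f"slope = {slope:.3f}, intercept = {intercept:.4f}")
print(f"unknown with ratio 0.320 ~ {quantify(0.320):.3f} g/100 mL")
```

Note what the unknown's result inherits from this process: if the calibrators themselves were prepared wrong, the fitted line, and every unknown read off it, will be wrong with them, which is exactly the "taught wrong, reports wrong" problem from the color example.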

The most important aspect of all of QC is that the materials used for it must be derived from certified reference materials (CRMs) or from United States Pharmacopeia (USP) grade or American Chemical Society (ACS) grade raw materials. (See generally our posts on the definitions of CRMs and USP grade calibrators, standards and controls: Standards, Controls, Calibrators, and Verifiers, Oh my…) Even if purchased from reputable sources such as the NIST SRM line of products, these materials, when purchased or made, must be verified before they are placed into the QC process for the testing of unknowns.
