What is a Tune Report and Why is it important in GC-MS work?

Would Eric Clapton ever play a concert or for a recording on an out-of-tune guitar? Of course not.

The tune on a GC-MS instrument is as important as, if not more important than, the tune on Eric Clapton’s guitar. When the result matters, Clapton makes sure that his guitar is in tune prior to performing so that he can have confidence in the results of using his instrument. When the results matter in a criminal case (which is every case), and a GC-MS instrument is used, the GC-MS instrument must likewise be proven to be in tune or the results may be compromised.


We have written on this blog a lot about the Gas Chromatograph-Mass Spectrometer (GC-MS) instrument because it truly is the workhorse in forensic science. You can see our main post on how it works here:

How does a GC-MS machine know that there’s a drug in the blood or urine?

And we have an entire category devoted to this great piece of technology here:

GC-MS posts

In all sincerity, I state emphatically that it is one great instrument. When I write about GC-MS, I am referring to the classic single quadrupole device. I think it is just scientifically the cat’s pajamas. In fact, in a laboratory, the GC-MS device and the LC-MS-MS are by far my favorite devices to operate, maintain, and run. When the 5 Q’s [i.e., Design Qualification (DQ), Installation Qualification (IQ), Operational Qualification (OQ) with scheduled repeat OQ, Performance Qualification (PQ), and Re-Qualification after Repair (RQ). See more about the 5 Q’s in this post “A Forensic Measurement Device is Not an X Box”] of putting a GC-MS into service are followed, it can be a very powerful device of discovery. There is a reason it has earned its reputation as “the gold standard.” Sadly, not many Quality Assurance Managers or Quality Risk Management (QRM) officers in forensic laboratories have ever heard of the 5 Q’s. True equipment qualification, which is required in the pharmaceutical field, is lacking in all but a few forensic science laboratories.

So, in a GC-MS context with or without adherence to the 5 Q’s, how can we all be sure that we have good quality results?

The sad truth is without adherence to the 5 Q’s, you may not have a valid result… ever.

One of the 5 Q’s is PQ or Performance Qualification. PQ refers to the test or validation protocol carried out by the user, offering documentary evidence that the instrument is maintaining the agreed values.

In the GC-MS world, a big part of any sort of validated instructions (sometimes called Standard Operating Procedures or SOPs) is the tune of the Mass Spectrometer.

Is the tune and the tune report evidence that the system is working well and will develop valid results every time?

No. The tune of the MS is a check of the MS system only. It is vital to note that even if an instrument passes the tune and is functional, that does not mean the Gas Chromatograph is working appropriately. The tune compound is injected directly into the MS and does not pass through the GC system at all. While a tune is important, it is only one part of many that comprise quality control (QC). And proper QC itself is only a small part of what comprises a valid result.

What is a tune, and why do we need to perform one on the MS?

The power of the GC-MS system lies in the multiple points of information it gathers. From the GC, we get the retention time. From the MS, we get ions and a spectrum. The key to qualitative identification is the ability to take the retention time, combine it with the fragmentation pattern that makes up the spectrum, and compare it against a library of results from Certified Reference Materials that have been tested on that instrument in that environment, producing some metric of similarity, whether a match factor, quality score, probability score, or something similar. As this is a comparison between a standard and an unknown, the key is the reproducibility of the results. If the results are not reproducible, then the computerized library identification may be wrong, or the metric used to determine similarity will be skewed.

This is why we need a means to assure that the fragmentation is performing as it should. We use a compound, usually PFTBA (perfluorotributylamine), that is injected directly into the MS device. PFTBA is almost the universal choice of tune compound because it is very stable with a very well known and uniform fragmentation pattern. It also produces fragment ions at low, medium, and high masses.
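To make the idea of a similarity metric concrete, here is a minimal sketch of a cosine-style match factor between two mass spectra. The spectra, the 0-999 scaling, and the function name are illustrative assumptions for this post, not any vendor’s actual library-search algorithm.

```python
import math

def match_factor(library: dict, unknown: dict) -> float:
    """Cosine-similarity "match factor" between two mass spectra,
    each given as {m/z: relative abundance}. Scaled 0-999 in the
    style of common library-search software (illustrative only)."""
    mzs = sorted(set(library) | set(unknown))
    a = [library.get(mz, 0.0) for mz in mzs]
    b = [unknown.get(mz, 0.0) for mz in mzs]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 999.0 * dot / norm if norm else 0.0

# Hypothetical spectra built on PFTBA-like ions (m/z 69, 219, 502)
ref = {69: 100.0, 219: 45.0, 502: 3.0}
same = {69: 100.0, 219: 45.0, 502: 3.0}
skewed = {69: 100.0, 219: 10.0, 502: 40.0}
print(round(match_factor(ref, same)))    # identical pattern: maximum score
print(round(match_factor(ref, skewed)))  # skewed abundances score lower
```

The point of the sketch is the one the post makes: if the instrument’s fragmentation pattern drifts, the abundances change, and the similarity score against the reference library drops even though the compound is the same.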

PFTBA container that is directly injected into the MS

A comprehensive tune of the MS evaluates the RF/DC settings at several masses, relative abundances, isotopic ratios, the Electron Multiplier voltage, and various temperatures and pressures, and checks the system for leaks. If needed, it refocuses the lenses or adjusts these variables so that the results are reproducible. At the end of the tune, a tune report and perhaps a tune evaluation is printed out. These tune reports and evaluations must be kept to prove adherence to the QC protocol and tracked over time to evaluate changes in the system.
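As a sketch of what evaluating a tune report against acceptance criteria can look like, the following checks hypothetical report values against acceptance windows. The field names and window values are invented for illustration; real acceptance criteria come from the instrument manufacturer and the laboratory’s SOP.

```python
# Illustrative acceptance windows -- NOT any manufacturer's actual criteria.
TUNE_CRITERIA = {
    "rel_abund_219": (30.0, 85.0),   # % relative to the m/z 69 base peak
    "rel_abund_502": (1.0, 10.0),
    "em_voltage":    (800.0, 2600.0),
}

def evaluate_tune(report: dict) -> list:
    """Return a list of failed checks for a (hypothetical) tune report;
    an empty list means every tracked value fell inside its window."""
    failures = []
    for key, (lo, hi) in TUNE_CRITERIA.items():
        value = report.get(key)
        if value is None or not (lo <= value <= hi):
            failures.append(f"{key}={value} outside [{lo}, {hi}]")
    return failures

report = {"rel_abund_219": 42.1, "rel_abund_502": 3.5, "em_voltage": 1650.0}
print(evaluate_tune(report) or "PASS")
```

Kept over time, the same values can be trended to spot a system that is drifting toward the edge of its windows before it fails outright.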

How frequently should a tune be performed?

When results matter and critical decisions are made, the QC system has to be comprehensive and frequent. A frequent QC regimen that is robust reduces the risk of being wrong.

A good guide to best practices for MS work can be found in “Best Practice Guide for Generating Mass Spectra” by Vicki Barwick, John Langley, Tony Mallet, Bridget Stein, and Ken Webb. This was a consensus document produced by mass spectrometrists who are part of the UK Department of Trade and Industry’s VAM Programme, which forms part of the UK National Measurement System. According to the authors:

The Guide arose from discussions held at the VAM Mass Spectrometry Working Group and was prepared by LGC in collaboration with the members. In addition to major contributions by the authors, other members of the Working Group provided suggestions and comments. The idea for this work came about during preparation of an earlier guidance document concerning accurate mass (“AccMass”) applications of mass spectrometry. It became clear that users of mass spectrometry instrumentation or services, including both specialists and research chemists, frequently have little understanding of the instrumentation or the meaning of the spectra they produce. Often, they will obtain or request an accurate mass determination for confirmation of identity on the basis of spectra which are meaningless or which could not possibly have originated from the target molecule. Discussion of this problem highlighted the changes which have taken place in teaching chemistry and analytical science and the rapid expansion in the application of mass spectrometry. The latter has been fueled by a number of factors, including advances in the automation and performance of instrumentation and recent rapid growth in the use of mass spectrometry for the biosciences. The outcome has been widespread use of complex instrumentation, often as a “walk up” service, by staff with little education or training relevant to the task. The main aim of the Guide is to enable those unfamiliar with mass spectrometry to generate mass spectra that are fit for purpose, primarily for qualitative analysis of small molecules. We have done this by providing a clear and concise summary of the essential steps in obtaining reliable spectra.

Later in the Guide, we find the following outline of the essential steps in obtaining reliable spectra:

5.1 General sequence

• The general sequence of actions when acquiring a mass spectrum is as follows: tune instrument, mass calibrate, acquire a background spectrum, analyse a test compound, analyse the sample. This sequence requires a number of essential checks on aspects of instrument performance before acquiring a mass spectrum, to ensure that spectra obtained for samples will be of acceptable quality. Initial instrument performance should be checked by acquiring the mass spectrum of a test compound using a defined protocol.

So in short, a tune must be performed (along with other aspects of QC) every single time before a batch is run. Best practices dictate a bracketed approach whereby a tune is run before a batch and then after. We evaluate the two tunes and look for meaningful differences. If there are none, then we can infer that the instrument was in tune during the batch testing. If there are meaningful differences, then we can infer that there may have been an error during the analysis, and the validity of all results is compromised.
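The bracketed comparison described above can be sketched in a few lines: take the pre-batch and post-batch tune values and flag any drift beyond a tolerance. The field names, values, and tolerances here are hypothetical; in practice the tolerances would be defined in the laboratory’s validated SOP.

```python
def bracket_check(pre: dict, post: dict, tolerances: dict) -> list:
    """Compare pre- and post-batch tune values and return the
    parameters whose drift exceeds its tolerance (illustrative)."""
    drifts = []
    for key, tol in tolerances.items():
        delta = abs(post[key] - pre[key])
        if delta > tol:
            drifts.append((key, delta))
    return drifts

# Hypothetical bracketing tunes: m/z 219 relative abundance drifted
tolerances = {"rel_abund_219": 10.0, "em_voltage": 200.0}
pre  = {"rel_abund_219": 42.0, "em_voltage": 1650.0}
post = {"rel_abund_219": 55.5, "em_voltage": 1700.0}
print(bracket_check(pre, post, tolerances))  # flags the 219 drift
```

An empty result supports the inference that the instrument stayed in tune across the batch; any flagged drift puts every result in the bracket in question.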

The laboratory says that it did a tune, is that good enough?

Not all tunes are alike. There are three major types: (1) Quicktune, (2) Autotune, and (3) Standard tune. A Quicktune only examines the Electron Multiplier and the peak width. No lens corrections are made. It can be thought of as an act of verification, not a true robust tune. A Standard tune does everything the Quicktune does, but it adds standard response values over the mass range. The Autotune performs a total system verification and adjustment if needed. It is the most comprehensive of the three. It is the Autotune that should be performed and evaluated every day, as well as evaluated over time. And beyond the simple fact that a tune was performed is the larger question: is this particular tune any good? Contrary to courthouse and courtroom folklore, the MS will operate even if it is out of tune. A good number of GC-MS operators that I have encountered have no idea how to interpret an Autotune report or what acceptable values are.
In conclusion, don’t let your crime laboratory be like this guy:

and then have them try to tell your jury that it played like Eric Clapton or Kirk Hammett or some other really talented guitar player who is in tune.

Tuning matters.


Like Bill Clinton’s famous re-framing of the issue in this video:

It depends on what your definition of “positive” is: TLC in marijuana testing

We have posted here before about the Thornton-Nakamura protocol that is used throughout the United States for the prosecution of illegal possession of marijuana (in its solid drug dose pre-consumption form). A fair examination of the question reveals that there is no validity to the notion that the three-test regimen produces a valid conclusion that the unknown examined in fact contains THC.

Here are those posts:

  1. What is the goal and the purpose of testing of unknowns generally? How do we best design a test for marijuana?
  2. How is most marijuana testing conducted in the United States?
  3. What is microscopic morphological examination? Is it a “good” test?
  4. What is the modified Duquenois-Levine test? Is it a “good” test?
  5. What is Thin Layer Chromatography? Is it a “good” test?
  6. Does the combination of all three tests create a “good” testing scheme?
  7. Is there a better way to test for marijuana?

We looked in great detail at Thin Layer Chromatography in marijuana testing in the following post: What is Thin Layer Chromatography? Is it a “good” test?

There are also many publications that describe known false positives such as coffee, tobacco, and basil.

Beyond that, the test is to a large degree totally subjective in its interpretation. Case in point: consider this TLC plate:



Is the sample labelled B2 a positive?

Different people can answer this differently, I suppose.

But what makes it worse is that few if any laboratories preserve their plates or take pictures of the results. So what you are left with is the unverified subjective opinion of someone who is in an adversarial position to the accused, with no data to support the “positive” call by the analyst. Doesn’t seem very scientific to me.

It’s time for SWGDRUG and drug seizure laboratories to come out of the 1950s and 1960s and get with modern, objective, and valid science. It’s time to stop the non-validated Thornton-Nakamura protocol and move to GC-MS (it’s not that hard: just put the sample in a methanol suspension and shoot away), verified by another Category A or Category B technique. Is it really that hard to do?


Even the government continues to acknowledge the difficulty of proving impairment in a DUID case.

Just a day ago….

NHTSA’s Understanding the Limitations of Drug Test Information, Reporting, and Testing Practices in Fatal Crashes by Amy Berning & Dereece D. Smither was released.
DOT HS 812 072

This recent publication reads as follows:

In addition, while the impairing effects of alcohol are well-understood, there is limited research and data on the crash risk of specific drugs, impairment, and how drugs affect driving-related skills. Current knowledge about the effects of drugs other than alcohol on driving performance is insufficient to make judgments about connections between drug use, driving performance, and crash risk (Compton, Vegega, & Smither, 2009).

The authors, in referencing the Compton, Vegega, & Smither 2009 study, are pointing to another NHTSA publication (non-peer-reviewed), “Drug-Impaired Driving: Understanding the Problem & Ways to Reduce it” by Compton, R., Vegega, M., and Smither, D. That was a report to Congress and is part of the Congressional record.

R. Compton is Dr. Richard P. Compton, MD. He is a psychiatrist and psychologist by training. He currently is the Director of the Office of Behavioral Safety Research at US DOT / National Highway Traffic Safety Administration.

Yes, he published the LAPD DRE study [formally known as Field Evaluation of the Los Angeles Police Department (DOT HS 807 012), December 1986]. The LAPD DRE study has been used by many who have not fully examined the data and the writing as validation of the DRE process and of the ability to validly opine impairment based upon the DRE protocol. [As an aside, a fair reading of the paper and its data does not support such a broad conclusion.]

Dr. Compton has also published other studies that consistently state the scientific truth that DUID impairment calls are very difficult, including:

  • Compton, R.P. (1988). Use of controlled substances and highway safety: A report to Congress (DOT HS 807 261). Washington, DC: National Highway Traffic Safety Administration.
  • Compton, R. P., Preusser, D. G., Ulmer, R. G., & Preusser, C. W. (1997). Impact of drug evaluation and classification on impaired driving enforcement. In the Proceedings of the 14th International Conference on Alcohol, Drugs and Traffic Safety, Annecy, France.
  • Shinar, D., & Compton, R. P. (2002, August). Detecting and identifying drug impaired drivers based on observable signs and symptoms. In the Proceedings of the 16th International Conference on Alcohol, Drugs and Traffic Safety, Montreal, Canada.

There are lots of quotes in the “Drug-Impaired Driving: Understanding the Problem & Ways to Reduce it” paper that continue to point out how difficult opining impairment in DUID cases truly is. For all involved in DUID, it is a paper well worth the time invested in reading. Here are a few quotes from it.

Since the effects of alcohol on driving performance are relatively well understood, it is useful to review and contrast what is known about alcohol with what is known and not known about other drugs.


Unfortunately, the behavioral effects of other drugs are not as well understood as the behavioral effects of alcohol. Certain generalizations can be made: high doses generally have a larger effect than small doses; well-learned tasks are less affected than novel tasks; and certain variables, such as prior exposure to a drug, can either reduce or accentuate expected effects, depending on circumstances. The ability to predict an individual’s performance at a specific dosage of drugs other than alcohol is limited.

Most psychoactive drugs are chemically complex molecules whose absorption, action, and elimination from the body are difficult to predict. Further, there are considerable differences between individuals with regard to the rates with which these processes occur. Alcohol, in comparison, is more predictable. A strong relationship between BAC level and impairment has been established, as has the correlation between BAC level and crash risk.

Factors that make similar prediction difficult for most other psychoactive drugs include:

  • The large number of different drugs that would need to be tested (extensive testing of alcohol has been undertaken over many decades; whereas relatively little similar testing has occurred for most other drugs);
  • Poor correlation between the effects on psychomotor, behavioral, and/or executive functions, and blood or plasma levels (peak psychomotor, behavioral, and executive function effects do not necessarily correspond to peak blood levels; detectable blood levels may persist beyond the impairing effects or the impairing effects may be measurable when the drug cannot be detected in the blood);
  • Sensitivity and tolerance (accentuation and diminution of the impairing effects with repeated exposure);
  • Individual differences in absorption, distribution, action, and metabolism (some individuals will show evidence of impairment at drug concentrations that are not associated with impairment in others; wide ranges of drug concentrations in different individuals have been associated with equivalent levels of impairment);
  • Accumulation (blood levels of some drugs or their metabolites may accumulate with repeated administrations if the time-course of elimination is insufficient); and
  • Acute versus chronic administration (it is not unusual to observe much larger impairment during initial administrations of drugs than is observed when the drug is administered over a long period of time).

The result of these factors is that, at the current time, specific drug concentration levels cannot be reliably equated with effects on driver performance.


Congress requested that an assessment of methodologies and technologies for measuring driver impairment resulting from use of the most common illicit drugs (including the use of such drugs in combination with alcohol) be conducted. The measurement of driver impairment is challenging since driver performance is a product of manual, cognitive, and perceptive skills and the range of performance reflected in the normal driver population is large.

Current knowledge about the effects of drugs other than alcohol is insufficient to allow the identification of dosage limits that are related to elevated crash risk.


The development of a method of measuring driver impairment due to the use of drugs would greatly enhance the ability of law enforcement to investigate suspected drug-impaired driving cases. However, there is currently no accurate and reliable way to measure the level or degree of driving impairment associated with the use of drugs.

All of this supports our blog posts from long ago in our six-part series “Pharmacology for Lawyers”:

Part 1. Introduction
Part 2. Pharmacokinetics
Part 3. Pharmacodynamics
Part 4. Bioavailability
Part 5. “Free versus Bound Drug”
Part 6. Elucidating Pharmacodynamic Effect from an Analytical Chemistry Result


What does reprocessing mean on a chromatogram?


Reprocessing of the data generally means that after the data is acquired, the chromatograms are subjected to some sort of manipulation, meaning data processing. I know that to defense attorneys, reading the word “manipulation” is inherently like nails on a chalkboard. It is a justified alert, but certainly not an automatic condemnation. We need to look at the method and the integration and data processing procedures.

It could be an unintuitive, error-prone, and repetitive task, or it could be something well defined, justified, and utterly scientific. It is for us to discover.


Some programs, like TotalChrom, allow the user to batch reprocess the data so that every chromatogram is subjected to the same manipulation equally. All programs allow for manipulation of an individual chromatogram (remember what Josh does in the integration talk in the GC class). Some programs, like ChemStation, allow the user to click one button to “AutoIntegrate” a specific chromatogram.


It all depends on how the reprocessing is used. It can affect quantitation and, in the extreme, even qualitative identification.

The bottom line is that if there is reprocessing, we have to make very sure that the calibrators are subjected to the same reprocessing events as the unknowns. If they are different, then the results may be different.
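To see why this matters, here is a toy illustration of how a single integration parameter (a flat baseline) changes a peak area, and therefore any quantitation built on it. The numbers are invented, and real chromatographic integration is far more involved, but the principle is the same.

```python
def peak_area(signal, baseline=0.0):
    """Summed detector response above a flat baseline -- a toy
    stand-in for chromatographic peak integration."""
    return sum(max(y - baseline, 0.0) for y in signal)

# The same toy peak integrated with two different baseline settings
peak = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]
print(peak_area(peak, baseline=0.0))  # 19.0
print(peak_area(peak, baseline=1.0))  # 14.0

# If the calibrator is integrated one way and the unknown the other,
# the reported concentration inherits that area discrepancy.
```

One parameter changed the area of the identical peak by roughly a quarter, which is exactly why calibrators and unknowns must be processed identically.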

For additional information about integration and reprocessing of data in a GC system, please read:

The case for raw data: “Integration” in Gas Chromatography: How to make an innocent person guilty in a DUI case by manipulating the software