Judge: “What else smells like marijuana, counsel?”

Attorney: “Is ‘smells like marijuana’ a valid perception by police?”

The problem is that a “smells like marijuana” report is an entirely unverifiable, subjective opinion. It is unverifiable because even if marijuana is found, the find might be a function of random chance rather than genuine olfactory detection.

Smell is one of our most individually variable senses. It is also hugely subject to confirmation bias.

The ability to validly and reliably detect the presence of cannabis through one’s nose is a very difficult thing to measure. Unlike specially trained canines, humans have no standardized methods or training (such as those outlined in SWGDOG) and no mechanism to track and verify an officer’s olfactory performance. It becomes an act of faith and a true ipse dixit.

So what are we left with? A human. With a nose. Who sniffs.


For a human with no specialized training, no developed method, and no way of compiling statistics, the validity of the perception depends on so many variables that it becomes an exercise in futility just to try to list them all: wind, environment, length of alleged exposure, maturity of the plant, age of the perceiver, gender of the perceiver, distance of the perceiver, physical health of the perceiver, the perceiver’s experience with marijuana, whether the perceiver thought the smell was burnt or unburnt marijuana, the perceiver’s individual odor threshold for marijuana, and so on, ad nauseam, with no clear end in sight.

Although I have no data to support it, I strongly suspect that “smells like marijuana” suffers not simply from inter-rater issues, but also from intra-rater issues. That is, two people exposed to the same smell may not come to the same conclusion. And the same person exposed to the same smell at a later time may not come to the same conclusion.

Remember, “smells like marijuana” is not a perception based upon the olfaction of pure chemicals. The “cannabis smell” most likely comes from terpenes such as beta-caryophyllene (BCP), which is what canine trainers target when training dogs to detect marijuana. But terpenes are very common. Researchers have identified more than 200 terpenoids in Cannabis. The most common and most studied include limonene, myrcene, alpha-pinene, linalool, beta-caryophyllene, caryophyllene oxide, nerolidol, and phytol. Some of the smells may not even come from the plant itself, but instead from aspects of its cultivation, collection, or transportation through commerce, ranging from bat guano to heavy-salt hydroponic fertilizers, and perhaps even chlorophyll.


This does not even take into account the human-factors aspect of cognitive bias. In a 1989 study of the olfactory detection of PURE chemicals (N.B., cannabis is not a pure chemical), van Langenhove and Schamp found and modeled the risk of false-positive detections (Van Langenhove, H., & Schamp, N., Encyclopedia of Environmental Control Technology, Vol. 2, pp. 935–963, Houston: Gulf Publishing). That is, subjects definitively reported smelling an aroma during what was supposed to be the study’s control (blank) period, when no aroma was present. Similarly, there is the famous experiment that Professor James published in his 1890 book “The Principles of Psychology.” James told a classroom of students that he was about to open a small bottle with a very strong odor in it, and that they were to raise their hands when they first noticed the smell. A few minutes after removing the bottle top, students in the first rows of the lecture hall began raising their hands, and in a matter of minutes the whole room was filled with raised hands. In fact, the bottle had no discernible smell, containing only distilled water and a dye used to make the water look dark. An amazing false-positive rate of 100%.

I have been told that many products and natural phenomena “smell like marijuana,” such as Axe Body Spray, garbage, marigolds, skunks, skunk cabbage, hops (as used to make lager), and many other things.

 

There is a peer-reviewed paper on this topic. It is entitled “Marijuana Odor Perception: Studies Modeled From Probable Cause Cases,” published by Professor Doty et al. in Law and Human Behavior, Vol. 28, No. 2 (April 2004). The authors write: “The present findings throw into question, in two specific instances, the validity of observations made by law enforcement officers using the sense of smell to discern the presence of marijuana. Although these instances reflect a very small set of studies with very specific constraints, they do suggest that a blanket acceptance of testimony based upon reported detection of odors for probable cause is questionable and that empirical data to support or refute such testimony in specific cases is sorely needed.”

So before we all take a police officer’s reported perception that “it smells like marijuana” as gospel, we should consider these things.

 

Are results expunged from the CODIS or SDIS databases and why should I care one way or another?

Maybe you are like me. I like it when crimes are solved. I like it when the true perpetrator is arrested, fairly tried and convicted. I like that there are prisons to house the truly dangerous.

But what cost are we, as a society and as a criminal justice system, willing to pay for this?

As a society we have been struggling with this for quite some time. One of the most valuable and powerful tools for crime solving is the database. Databases allow for computerized searching. Computerized searching means quick results. Quick results lead to quicker arrests. Quicker arrests may prevent further crime by someone who is heck-bent on being a serial criminal.

Admittedly, one of the very best computerized databases is the DNA Identification Systems both at the federal and state level. But at what cost are we expanding the population of these databases?

What one has to understand is the way this data is used. Once a genetic profile is developed, it becomes part of the SDIS (the State DNA Index System). For example, in my state, the Pennsylvania State Police maintains the Pennsylvania SDIS. Once in SDIS, that genetic profile is uploaded to the Combined DNA Index System (CODIS), provided that the state has a Memorandum of Understanding (MOU) in place (most states do) and the profile meets the published quality criteria for acceptance.


Figure 1: An example of the route, in Florida, for getting data into CODIS.

I am not convinced that records that should be deleted from the databases actually are being deleted. Here is why:

  1. How many criminal defense lawyers go through the process of obtaining an expungement of state court records once charges have been dismissed or there has been an acquittal? Not all of us. Certainly not many Public Defenders or appointed counsel.
  2. How many times have criminal defense attorneys seen what we thought was an expunged record, by way of a signed court order, still show up on either the NCIC (National Crime Information Center) or a state police criminal history check? I sure have.
  3. How many criminal defense lawyers file and follow through to get a separate order removing data from CODIS or SDIS after an overturned felony conviction, or when a sample was developed pursuant to a search warrant and the person was later found not guilty or had the charges dismissed? Almost none.
  4. Even if counsel remembered to do all of this at the state court level, no state court judge can force a CODIS expungement. Does the local SDIS repository notify CODIS, as it is supposed to do per the MOU and per the DNA Identification Act of 1994 (42 U.S.C. §14132)? Do they honor it as a matter of comity? Beats me. You will never know. Why? CODIS is not subject to auditing.

So it seems to me that only in the rarest of circumstances will a record ever be truly expunged from CODIS or an SDIS. Unless expungement becomes a regular part of defense practice, CODIS and SDIS will continue to grow and hold records they should not. Logically, the only way to prevent this unintended expansion is to get a state court order directing the state’s SDIS to expunge the profile, and then go into federal court and get a federal court order to expunge the CODIS record.

My simple theory is this: once a profile is in the National DNA Index System (NDIS), it is like a diamond: it lasts forever. Even past death. There is no current mechanism to remove a genetic profile even after the person is dead and no longer physically capable of committing crime.

Why is it that law enforcement wants to keep the genetic profile of someone even past death? (This is a question that I have been pondering and asking people for quite some time.)

For certain, a small part of that is to solve past crime. Say, for example, that a crime occurred some time ago. The cold case is re-opened, and it is discovered that, for whatever reason, genetic evidence was present but a profile was never developed. In 2014, the genetic profile is developed and it goes into the forensic (unsolved crime) database. In that case, searching the Offender Index or the Arrestee Index may prove useful. It comes up with a “match.” The case is closed. The victim or the victim’s surviving family is notified. And it also forecloses the possibility of a false arrest and a false conviction. This seems to be the prevailing argument for survival of DNA profiles in CODIS and SDIS after death. However, what no one will admit is that there is a far more useful reason why law enforcement wants to keep DNA profiles in SDIS and CODIS well past death: familial searching. This, I suggest, is the real reason.

A familial DNA search is a search by law enforcement in DNA databases for genetic information indicating a relative of a person they seek to identify. While a parent and offspring will share one allele at every locus, full siblings may share two, one, or zero alleles at a locus. Siblings have a 25% chance of having no alleles in common, a 50% chance of having one allele in common, and a 25% chance of having two alleles in common. For more distant familial relationships, allele sharing decreases, and the uncertainty in the probative value of the identification increases. First-order relatives will share more genetic data than unrelated individuals. There are metrics, such as the relationship (kinship) index, that can be combined to arrive at a likelihood ratio of relatedness (the combined relationship index), and these have been used for years in court to establish paternity or lineage in estate cases for probate purposes. Certainly these CRI and LR values are not without flaw, as unrelated people may show higher LRs due to chance allele sharing, but they remain tools used in civil and family court. However, if you ever want to see a swift and angry response from a state DNA analyst, simply mention the phrase “familial search.”
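The sibling allele-sharing probabilities quoted above reduce to simple arithmetic. Here is a minimal Python sketch of that arithmetic; it is an illustration only, not any database's actual search or kinship logic:

```python
# Full siblings share 0, 1, or 2 alleles identical-by-descent (IBD) at a
# locus with probabilities 1/4, 1/2, and 1/4, as stated above.
ibd_probs = {0: 0.25, 1: 0.50, 2: 0.25}

# Expected fraction of the two alleles at a locus shared by full siblings:
# (0*0.25 + 1*0.50 + 2*0.25) / 2 alleles = 0.5
expected_shared = sum(n * p for n, p in ibd_probs.items()) / 2

# A parent and child always share exactly one of the two alleles per locus:
parent_child_shared = 1 / 2

print(expected_shared, parent_child_shared)  # 0.5 0.5
```

On average, then, full siblings and parent-child pairs share half their alleles, which is why a moderate-stringency or partial-match search tends to surface close relatives.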

On a daily basis, thousands of computerized searches are completed without a warrant and without probable cause to try to “match” the Convicted Offender and/or Arrestee Database and/or Legal Indices of the various SDIS databases and the CODIS to the Forensic (casework or unsolved crime) database.

Although the FBI claims that it does not do “familial searching” (also called brother’s-keeper searching or family DNA searching), it absolutely does. In its own fact sheet (http://www.fbi.gov/about-us/lab/biometric-analysis/codis/codis-and-ndis-fact-sheet), the FBI admits that it conducts CODIS searches at moderate stringency. Moderate stringency is defined as a search that requires all alleles to match, but the target and candidate profiles can contain a different number of alleles.

For those who know DNA and CODIS searching, that is by definition a familial search. The FBI even admits to “partial match” reporting, which is also code for familial searching. Later in the same document, it tries to refine what it believes familial searching is per SWGDAM: the purposeful searching for family members, as opposed to an untargeted general search of everyone that just so happens to end up identifying closely related people (i.e., family). The end result is exactly the same. Same pants, different pocket. In many states, the same is true: they report partial matches or do moderate-stringency searches that produce a list of the most closely associated hits.

Now that is scary stuff to me and should be to you.

Why?

Because with a “partial match” or “moderate stringency” result, it becomes the horrible “round up the usual suspects” dragnet that sweeps up the innocent with the not-so-innocent, like the dolphins in old-fashioned tuna nets.

All the way back in 1969, our SCOTUS found such a method totally unacceptable in Davis v. Mississippi, 394 U.S. 721, 727 (1969). In Davis, a rape victim described her assailant as an African American youth. Police rounded up dozens of local youths who met that profile, processed them, and took fingerprints until a match with a print on the victim’s window was found. In that case, nearly 50 years ago, the United States Supreme Court held that the Fourth Amendment protects against fingerprinting without individualized suspicion. The government can never conduct blanket searches for the sole purpose of solving crimes; yet it does this with CODIS every single day, and familial searching, by whatever name, “moderate stringency” or “partial matches,” does exactly this.

But this problem will compound when all states go to an arrestee DNA collection scheme. In this post-Maryland v. King world, thirty-two states have legalized collecting DNA from people arrested for crimes before they are convicted.


The defense community needs to be diligent in getting people out of SDIS and CODIS for the sake of justice and the innocent.

[Off soapbox]

 


Congratulations to the graduates of the 12th class of the American Chemical Society Hands-on Gas Chromatography class. This makes for 210 graduates in 40 states.

 

Today’s graduates include:

First name Last Name State
Tim Bussey CO
Gregory Willis GA
Thomas Addair KS
John Thurston KS
Barton Morris MI
Neil Rockind MI
Steven Hernandez NJ
Jeff Meadows OH
Alan Woodland OK
John Arose PA
Rich Roberts PA
TC Tanski PA
Leslie Johnson TX
Mark Kelley TX
Jessica Phipps TX
Courtney Stamper TX
Michael Cohen WI
Nathan Dineen WI
Sarah Schmeiser WI
 

[10/16/2014 Second update: NCDD Regent Don Ramsell has taken down his ad hominem attack, but the error on the NCDD website and its quiz still exists. In a bizarre attempt to re-frame the quiz, he now explains what I can only assume he meant the quiz to mean. (As a trained lawyer, he knows that language and words are important; I took him at his literal word, but here it is.) He writes:

#1

#2

Here is the answer again:

The chemicals which are added to many blood samples taken by police, sodium fluoride and potassium oxalate, are salts. If too much is added, or the amount of blood placed into the tube is less than the desired amount, then the ratio of salt to blood can become too high. This can cause extra alcohol to leach into the air above the blood sample (i.e. headspace) in the blood vial. If the laboratory uses the headspace method of analysis, the results will become artificially high.

I do not know of a single assay used today for testing human blood for ethanol for criminal prosecution in the United States, in an ante-mortem sample, that does not rely upon some form of higher alcohol as an internal standard (generally either n-propanol or t-butanol), both of which behave as described below in great detail and as documented in the peer-reviewed literature from at least 1994. In 1964, G. Machata of the University Forensic Institute in Vienna first wrote about HS-GC-FID for blood ethanol analysis. Machata, G., Mikrochim Acta, 1964, 262–271. Three years later, he wrote the first application note for automated HS-GC-FID analysis for BAC determinations. The Model F-40 (Bodenseewerk Perkin-Elmer & Co.), the first automated and integrated headspace GC system based upon these principles, was introduced in the spring of 1967 at the International Exhibition-Congress on Chemical Engineering (ACHEMA) in Frankfurt am Main, Germany. Machata, G. (1967), Über die gaschromatographische Blutalkoholbestimmung, Blutalkohol 4: 252–260. (You can read it here.) As noted in the contemporaneous literature of the time, these methods were published using the internal standard technique.

#9

Here is one of the published chromatograms:

#20

Here is the published English summary (the main text is in German), noting the use of an internal standard:

#10

Specifically, as noted on page 207 of the seminal book on the subject, "Static Headspace-Gas Chromatography: Theory and Practice" by Bruno Kolb and Leslie S. Ettre:

#6

Various internal standards were used through the years, including tert-butanol, n-propanol, MEK, acetonitrile, and others. So, going as far back as 1964 or 1967, internal standards were being used in headspace GC. Again, I know of no assay used today for testing human blood for ethanol for criminal prosecution in the United States, in an ante-mortem sample, that does not rely upon some form of higher alcohol as an internal standard (generally either n-propanol or t-butanol). So one has to wonder if he will ever concede that he is wrong or that the question was inartfully written. Let's put aside pride and get the accurate information out there, please.

I do find it interesting that the source he cites for the claim that the answer as written below is correct is the Handbook of GC/MS (which I have read cover to cover); it does not support his proposition. Here is his link as it appears:

#3

It is interesting to note that his reference does not mention ethanol. It comes from a wonderful book about GC-MS, or gas chromatography with mass spectrometry, which (with the notable exception of two labs in the United States) is not used in blood alcohol analysis at all. The passage is in an application note in the appendix, "Static Headspace Analysis of Volatile Priority Pollutants." What is briefly described there is the deuterated internal standard technique. It is a fantastic way to conduct GC-MS analysis and quantitation, but we do not use deuterated ethanol as an internal standard in GC-FID work because, with the typical column and the Flame Ionization Detector, we are incapable of differentiating the ethanol in the samples from deuterated ethanol.

He wrote: "As has been written, the salting out problem disappears 'when n-propanol is used as the ISTD.'"

I wonder where it has been written other than here. It certainly still is not on that quiz. I think it is a disservice to the bar and to justice to propagate error for the sake of "winning more cases." It's ok to be wrong. It is not ok to leave the error standing once it is pointed out. Confession of error is a good thing. Time to confess error. Nevertheless, this remains a great illustration of the thrust of this blog post. Be careful who your coach is.]

[10/16/2014 Update to this post: As I mention below, I think there can be no possible insult in pointing out a source of error. Those who are offended by the questioning and pointing out of error need to re-examine whether they are out for personal or institutional glory or for accurate information dissemination. Error is wrong. Everyone makes errors, myself included. What is not excusable is continuing to propagate an error once you are alerted to it. Shortly after this post was issued, there was a public response by NCDD Regent Don Ramsell on his Facebook page (it is unclear whether his post was an authorized, Board-directed post; I will presume not). Within that response, Regent Don Ramsell and Regent Paul Burglin do not mention me specifically by name; however, they post details that make it clear that I am the one at whom they direct their anger. I pray for them and ask that you do too. But what is most important is not the ad hominem attacks, which sadly I suppose in their minds I invite when I point out error that involves them or their institution, but that they now clearly know of the error on their "quiz" and have not fixed it. Be careful who your coach is. Check out the "quiz" for yourself. Hopefully they fix it soon. I doubt it. https://ncdd.com/we-help-win-more-cases-answers ]

 

I have written before about one of the things I think is plaguing the Continuing Legal Education system when it comes to criminal defense lawyer education: the widespread practice of lawyers teaching lawyers science, which breeds and foments the propagation of error. This problem is widespread. It is epidemic. There are some notable exceptions, lawyers who know and have rigorous training in science, but this method of education carries actual and great potential problems.

It leads to mythology and propagation of error.

Error is error. There is no shame in pointing out error. There is great shame in presenting something wrong as empirical fact and then continuing to propagate that error. Certainly, I am not free from error. If I am wrong, I like to have that pointed out to me. I research it. If I am wrong, I acknowledge it, correct it, and, most importantly, learn from it. I have been wrong in the past. I will likely be wrong again in the future. But we have to be careful in what we teach others. This is especially so with institutions that serve as conduits of knowledge and information dissemination in the legal world. (The defense bar is not the only group with this problem; it also affects the prosecution and its experts.)

In that prior post (The Ethics of CLE Agenda: A Jury Argument or a Science Argument), I remarked:

It is a small part of the reason that I stopped speaking at lawyer CLE programs. Too often, I would find myself on stage presenting fair, balanced, and objective information on the science (such as the limitations of an assay or its specificity and selectivity) just to be followed by someone next who would present the science “very loosely” (and that is putting it mildly) and as such, totally teaches the audience faux science to pass off to a jury. There never is a caveat to this faux science presentation. They sometimes back up their beliefs not with data, but with a line out of context in a trade journal or a line in a peer reviewed journal. When you ask them for sources, they point you to a ream of information and when it is looked into with an educated eye, their contentions are not supported at all. I have even asked the faux science presenters to consider putting in a disclaimer. But they will not, the faux science is presented as scientific truth to the audience. The proportion of faux science presentations to legitimate science at CLEs has exploded over the years. About a year ago, in my travels across the United States, it came to the point where in good conscience I could not and would not participate in some of the seminars any more. If I had already accepted to teach at them, I always asked to be first or near first and put up a slide that had a single word, “Bias” in white letters on a black background. Even in conferences where there is only good and legitimate science presented, I would so disclose my bias to every audience.

And later I wrote:

The titles [such as the NACDL/NCDD "DWI means Defends with Ingenuity(R)" seminar held every year in Vegas] aren’t the only problem, it’s the content. Words are just words. The content is what matters the most. People, lawyers, come to these conferences to learn and then to apply their learning in the courtroom to defend an accused. The content is what matters. Over the most recent years, the content has focused less and less and less on legitimate scientific information and more on what I term a “Jury argument.” A jury argument is one that “works”  on a jury (read as “tricks the jury”), but holds little or no water in the scientific world. I personally do not teach or propagate jury arguments. In fact, I like to expose them. I think the faux science jury argument presentations are harmful to the public, harmful to the lawyers listening and are harmful to anyone who is accused. In fact, I believe they are borderline unethical.

Is there a place to teach jury arguments? Perhaps there is, perhaps not. It is debatable for some. Not so much for me.

What isn’t debatable is one thing: if we present legitimate science side-by-side with fast-and-loose faux-science jury arguments, we as a defense community, as presenters, and as organizers must declare which is which. There is great potential harm to the audience if the two are not distinguished. A lawyer instructed in faux-science jury argument may very well come into court and face an expert who will expose the non-scientific aspects of the presentation, leading perhaps to an unjust guilty verdict.

All of this leads me to an important issue that as a defense bar we just ignore: Bias. If this continues, it is a long-term path to infecting a whole generation of lawyers with invalid science. It is not sustainable.

We, as a defense community, need to do better for our future and so we don’t get discounted as extremists. Let’s hit the reset button and present the science legitimately and if we have to present jury arguments, let’s state as much upfront to the audience rather than teaching a whole generation of lawyers to present faux science. The truth is that when science is presented accurately the citizen accused often wins – as the science is often on his side.

Here I add more specific examples of objectively wrong science presented as fact by lawyers teaching other lawyers:

This is a picture of a slide that was provided to me from a lawyer’s presentation at the 2014 NCDD Mastering Scientific Evidence seminar entitled “Blood Testing: Fermentation and Facts”:

2014-03-20 09.07.19

It is not a clear picture. I did not take it. In case you cannot read it, the slide clearly reads:

Fermentation the Fatal Flaw to EVERY BREATH TEST

Perhaps this was just a typo and the presenter meant to write “BLOOD,” so let’s operate under that premise. But to claim to lawyers that it is a legitimate scientific defense in EVERY case? I think not.

But let’s also consider this as well: NCDD1

Let’s leave aside the “We help you win more cases” as that is likely just marketing to help them sell memberships (perhaps they should consider rephrasing it to “we help you learn legitimate science to promote justice”), but when one takes the quiz, one encounters the following question and what they believe is the right answer:

Wrong Answer

While it is partially right about the role of inorganic salts, it is totally wrong about the end result for BAC analysis. This has been known and published for many, many years in the peer-reviewed literature. The most prominent and most cited article is Jones, A.W., “Salting-out effect of sodium fluoride and its influence on the analysis of ethanol by headspace gas chromatography,” J Anal Toxicol 18: 292–293 (1994). Since 1994, it has been well-settled science that a short draw (the most common way that salting out happens) or the differential use of salts (more salt in the defense lawyer’s client’s sample than the instructions call for) will always result in an under-reported BAC when n-propanol is used as the ISTD.

I wrote before on this blog at What’s Salt Got to Do With It? Salting Out Effect (October 26, 2010) about salting out and the use of inorganic salts.

But just to amplify: the addition of inorganic salts affects the partition coefficient (K) at equilibrium, which, together with the phase ratio (β), determines how much of each analyte ends up in the vapor phase.

The phase ratio (β) is defined as the relative volume of the headspace compared to volume of the sample in the sample vial.

Calculating the Phase Ratio

Phase Ratio (β) = Vg / Vs

where Vs is the volume of sample phase;

Vg is the volume of gas phase

Sensitivity is increased when β is minimized

Lower values of β (i.e. larger sample size) will yield higher responses for volatile compounds.

Decreasing the β value will not always yield the increase in response needed to improve sensitivity. When β is decreased by increasing the sample size, compounds with high K values partition less into the headspace compared to compounds with low K values and yield correspondingly smaller changes in Cg. Samples that contain compounds with high K values need to be optimized to provide the lowest K value before changes are made in the phase ratio.
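The relationships above can be put in one line of standard static-headspace theory: the equilibrium headspace concentration is Cg = C0 / (K + β). Here is a short Python sketch of that equation; the vial and sample volumes are hypothetical, and the K values are the ones this post quotes for ethanol and n-propanol in aqueous solution:

```python
def headspace_conc(c0, k, beta):
    """Equilibrium headspace concentration in static headspace sampling:
    Cg = C0 / (K + beta), where K is the liquid/gas partition coefficient
    and beta = Vg / Vs is the phase ratio."""
    return c0 / (k + beta)

# Phase ratio for a (hypothetical) 20 mL vial holding a 2 mL sample:
vial_ml, sample_ml = 20.0, 2.0
beta = (vial_ml - sample_ml) / sample_ml  # Vg / Vs = 9.0

# Partition coefficients quoted in this post (aqueous, ~50 degrees C):
K_ETOH, K_NPROP = 1220.0, 520.0

# For equal starting concentrations, the lower-K compound (n-propanol)
# ends up with the higher headspace concentration:
print(headspace_conc(1.0, K_NPROP, beta) > headspace_conc(1.0, K_ETOH, beta))  # True
```

Note that when K is large, as it is for both alcohols here, β contributes little to the denominator, which is why changes in K (for example, from added salt) dominate the headspace response.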

Now applying the above to the case of the typical ISTD (n-propanol) and the target analyte (ethanol), the following is revealed:

Ethanol at 50 degrees C in aqueous solution has a relatively high partition coefficient compared to n-propanol (1220 versus 520). Remember: the higher the partition coefficient, the more the analyte resists moving into the vapor (gas) phase. This means that, at this temperature, n-propanol at equilibrium will escape into the headspace more readily than ethanol.

As mentioned above, the addition of inorganic salts affects this K value for both compounds. This effect is not equal, so the responses will differ. Because the EtOH K is higher, it is harder to reduce (it has further to go); the impact of the inorganic salts is greater on n-propanol than on EtOH. So by adding inorganic salts, proportionally more n-propanol than EtOH will move into the gas phase. Stated differently, the n-propanol will escape into the gas phase by a disproportionate amount versus the EtOH. This results in proportionally more n-propanol in the gas phase than there would be if no inorganic salt were used.

When we use the ISTD method, if the ISTD area count is larger than expected, the ethanol amount is ratio-corrected DOWNWARD.

When we use the ISTD method, if the ISTD area count is smaller than expected, the ethanol amount is ratio-corrected UPWARD.

 

Now, when we operate static headspace GC-FID with infinitely precise and infinitely accurate volumetric delivery of the ISTD, and the only variable is the amount of inorganic salt in the tube, between a short draw (where the proportion of inorganic salt is higher) and a long draw (where the proportion of inorganic salt is lower), the following results:

Short draw = proportion of inorganic salt in the tube is higher = n-propanol partitions into the headspace proportionally more readily than EtOH = ISTD area larger than expected = ratio correction pushes the result down = BAC under-reported.

Long draw = proportion of inorganic salt in the tube is lower = n-propanol partitions into the headspace proportionally less readily than EtOH = ISTD area smaller than expected = ratio correction pushes the result up = BAC over-reported.
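The two chains above reduce to simple ratio arithmetic. Here is a toy Python sketch of why an inflated ISTD area drags the reported BAC down; all peak areas and the calibration constant are made-up numbers for illustration, and real single-point calibration is more involved:

```python
def reported_bac(etoh_area, istd_area, istd_expected_area, cal_factor=1e-6):
    """Toy single-point ISTD quantitation: the reported concentration is
    proportional to the analyte/ISTD peak-area ratio. cal_factor is a
    made-up calibration constant for illustration only."""
    return cal_factor * (etoh_area / istd_area) * istd_expected_area

istd_expected = 100_000.0  # ISTD area expected with the proper salt-to-blood ratio

# Proper draw: the ISTD behaves exactly as it did during calibration.
proper = reported_bac(etoh_area=80_000.0, istd_area=100_000.0,
                      istd_expected_area=istd_expected)

# Short draw: the excess salt boosts the n-propanol (ISTD) area
# proportionally more than the ethanol area, inflating the denominator.
short = reported_bac(etoh_area=84_000.0, istd_area=112_000.0,
                     istd_expected_area=istd_expected)

print(short < proper)  # True: the short draw under-reports the BAC
```

The long-draw case is the mirror image: a deflated ISTD area shrinks the denominator, and the ratio correction pushes the reported value up.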

In conclusion, scientifically, that quiz answer is objectively wrong.