
FBI Forensics Scandal

Hello internet, and welcome to The Science They Don't Want You to Know.  As I mentioned in the first post of this series, I am researching the statistical viability of currently unconfirmed conspiracies (no leaked documents) by way of currently known conspiracies (documents have been leaked).  The primary purpose of this initial research is to gather specific information: how many people were involved in each actual conspiracy, and the length of time over which it took place.  If you have not read the first post, you should read it here.


There are instances where the people in charge of accumulating information and data fudge the numbers or outright lie.  This is one point on which the conspiracy theorists are correct.  There are people in the position of collecting information who feel that the possible rewards of lying about that information outweigh the cost of getting caught.  As a chemist, I do not understand this mentality, but I do accept the empirical fact that it happens.  It happens in science more often than I would like, and among government officials of all political persuasions.

In the FBI, this problem has been amplified in the past.  Dr. Frederic Whitehurst, who was a special agent of the FBI for 12 years, became a whistle-blower in 1998, alleging that the FBI Crime Labs had a culture of intentionally biasing evidence in favor of the prosecution.  This came on the tail end of the O.J. Simpson murder trial, in which Simpson's lawyer Johnnie Cochran had told the court three years earlier that the FBI was fudging the science and that they had a person inside the FBI who could prove it.

But these charges were dropped after Whitehurst was demoted and eventually let go from the Bureau.  While Dr. Whitehurst did bring the issue to public light, no real investigation was done on the matter and no real evidence was brought forth.  The matter was simply dropped with no explanation.

So the conspiracy was pushed back into the darkness.  Years passed before it was brought back to light by the Washington Post in 2012.  The Post came forward with evidence to support the whistle-blower's claims, and a formal investigation finally revealed the extent of the forensic assistance to prosecutors.  By the time this came out, 14 people who had been put on death row in cases where the FBI knowingly presented flawed science "were executed or killed in prison".  The rubbish science behind these cases was performed by 26 of the 28 members of the FBI's Elite Forensics Team.  At least the FBI is publicly admitting it now.  It would have been very nice indeed if they had not done such piss-poor work in executing their jobs to begin with.

So why did the FBI fabricate data?  Why did they want to increase guilty verdicts so badly?  No one knows for certain.  

Which goes to show how important forensic science is in the court of law, and how easily it can go awry.  So what is the solution to this problem?  The best bet is to have multiple organizations perform forensic analysis on any given case.  Yes, have the FBI continue doing forensic analysis, but also have city, county, and state forensic labs work the case.  If money allows, bring private forensic labs into the mix as well, not as a replacement, but as an additional assessment.  The more, the merrier.

The biasing of reports went beyond the realm of explosives, the division in which Dr. Whitehurst worked.  Yes, there was bias in the realm of explosives -- the 1993 World Trade Center bombing and the 1995 Oklahoma City bombing being the two most well-known cases -- but there were other cases as well, including the convictions of Donald E. Gates (falsely charged with rape and murder), Santae A. Tribble (falsely charged with murder), and Kirk L. Odom (falsely charged with rape), all based on faulty hair analysis.

The questions that come to mind are when this began and how many people in the agency were involved in this particular conspiracy.  The first known instance of the FBI using its labs to influence federal cases is the aforementioned Tribble case, which closed in 1978.  So that's 34 years from the first case until the investigation revealed it to the public, involving 26 people.

For those of you keeping track, this means the per-person-per-year probability of the conspiracy being revealed is 9.992E-4, which is the highest to date by two orders of magnitude.  This brings the running average up to 2.005E-4 and the standard deviation up to 3.945E-4, which makes this data point a probable statistical anomaly.  After everything is input, I'll check whether it is.
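For anyone who wants to follow the bookkeeping, here is a minimal sketch of how numbers like these could be tracked.  It assumes the naive estimate p ≈ 1/(N·t) for N people over t years, which is only a stand-in for the exact formula laid out in the first post, so its output will not match the quoted figures exactly; the case list is a placeholder containing just this entry.

```python
# Minimal sketch (assumed approach, not the exact formula from the first post):
# estimate a per-person-per-year revelation probability for each confirmed
# conspiracy as p ~ 1 / (people involved * years until revealed), then keep a
# running mean and standard deviation across every conspiracy entered so far.

from statistics import mean, stdev

# (people_involved, years_until_revealed) for each confirmed conspiracy;
# the FBI forensics case uses the 26 analysts and 34 years cited above.
cases = [
    (26, 34),   # FBI forensics scandal, 1978 Tribble case to 2012 investigation
]

probabilities = [1.0 / (people * years) for people, years in cases]

print("per-person-per-year estimates:", probabilities)
print("running average:", mean(probabilities))
if len(probabilities) > 1:          # stdev needs at least two data points
    print("standard deviation:", stdev(probabilities))
```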

So until next time, take that as you will.
K. "Alan" Eister Δαβ

