
Types of Statistical Error: Basic Statistics Lecture Series Lecture #12

As promised last time, I will cover types of statistical error this time.  Conveying the magnitude and the type of error is important with any hypothesis test.  This also happens to be why, in science, it is said that nothing can ever truly be proven, only disproven.



First, it is important to understand that error typing is an integral part of hypothesis testing and of no other part of statistics, much like the relationship between the human brain and the person it belongs to.  The human brain cannot fit into any other species, and a human cannot live without it.  The same concept applies to these types of error and hypothesis testing: they fit nowhere else, and hypothesis testing cannot succeed without them.

So what specifically is statistical error?  It is the chance that the conclusion of a hypothesis test is incorrect: namely, the chance that the null hypothesis is rejected when it's true (Type I Error, a false positive) and the chance of failing to reject the null hypothesis when it's false (Type II Error, a false negative).  In statistical hypothesis testing, the alternative hypothesis is always the positive result.  In statistical analysis, a positive result doesn't mean the same thing it does in common language; it means that the result which differs from the proposed mean is the true value.

How would we find how big these errors are?  For Type I errors, that's easy: the probability is simply the level of α we've chosen to compare the p-value to.  Type II errors are a little more complicated to calculate, but we've already done this calculation before: take the difference between the sample mean and the proposed population mean, divide it by the standard error, and obtain the p-value from the resulting z-value.  Remember from the hypothesis testing posts that the value for z is given by $z_{calc}=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}$.  A worked sketch of this calculation appears below, followed by a table containing the types of errors and their calculations.
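If you want to check such a calculation with software, here is a minimal sketch in Python; the sample numbers are hypothetical, and it assumes SciPy is installed:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical numbers: is a sample mean of 5.2 (n = 40, known sigma = 0.8)
# consistent with a proposed population mean of 5.0?
x_bar, mu, sigma, n = 5.2, 5.0, 0.8, 40
alpha = 0.05  # chosen significance level = Type I error probability

z_calc = (x_bar - mu) / (sigma / sqrt(n))  # z = (x-bar - mu) / (sigma / sqrt(n))
p_value = 2 * norm.sf(abs(z_calc))         # two-sided p-value

print(f"z = {z_calc:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis (the risk now is a Type I error).")
else:
    print("Fail to reject the null hypothesis (the risk now is a Type II error).")
```

With these made-up numbers, z ≈ 1.58 and p ≈ 0.11, so at α = 0.05 we would fail to reject the null hypothesis, and the relevant risk is a Type II error.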

| Statistical Conclusion | True Statement: Null Hypothesis True | True Statement: Alternative Hypothesis True |
| --- | --- | --- |
| Null Hypothesis Rejected | Type I Error, α | Correct Decision |
| Null Hypothesis Not Rejected | Correct Decision | Type II Error, β |

Here β is obtained as the p-value from $z_{calc}=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}$.
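We can also check the whole decision table empirically: simulate many samples, run the same z-test on each, and count how often each wrong decision occurs.  The sketch below (hypothetical numbers again, using NumPy) estimates the Type I error rate when the null hypothesis is true and the Type II error rate under one particular alternative mean:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
mu0, sigma, n, alpha = 5.0, 0.8, 40, 0.05  # same hypothetical setup as above
z_crit = 1.96                              # two-sided critical value for alpha = 0.05
trials = 100_000

def rejection_rate(true_mean):
    """Fraction of simulated samples whose z-test rejects H0: mu = mu0."""
    samples = rng.normal(true_mean, sigma, size=(trials, n))
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    return np.mean(np.abs(z) > z_crit)

# When H0 is true, the rejection rate is the Type I error rate (about alpha).
print(f"Type I error rate:  {rejection_rate(mu0):.3f}")

# When a specific alternative is true (here, a true mean of 5.3),
# the non-rejection rate is the Type II error rate, beta.
print(f"Type II error rate: {1 - rejection_rate(5.3):.3f}")
```

Note that β depends on which alternative mean is actually true: the further the true mean sits from the proposed one, the smaller the Type II error rate becomes.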

So those are the types of error and how to find them.  If you have any questions, please leave a comment.  Next time, I'll start on regression analysis.  Until then, stay curious.

K. "Alan" Eister has his bacholers of Science in Chemistry. He is also a tutor for Varsity Tutors.  If you feel you need any further help on any of the topics covered, you can sign up for tutoring sessions. here
