
Types of Statistical Error: Basic Statistics Lecture Series Lecture #12

As promised last time, I will cover types of statistical error this time.  Knowing the magnitude and the type of error is important to convey with any hypothesis test.  This also happens to be why, in science, it is said that nothing can ever truly be proven, only disproven.



First, it is important to understand that error typing is an integral part of hypothesis testing and of no other part of statistics, much like the human brain and the person it's in.  The human brain cannot fit into any other species, and a human cannot live without it.  The same concept applies to these types of error and hypothesis testing: they fit nowhere else, and hypothesis testing cannot succeed without them.

So what specifically is statistical error?  It is the chance that the conclusion of a hypothesis test is incorrect: namely, the chance of rejecting the null hypothesis when it is actually true (Type I Error, a false positive), and the chance of failing to reject the null hypothesis when it is actually false (Type II Error, a false negative).  In statistical hypothesis testing, the alternative hypothesis is always the positive result.  In statistical analysis, a positive result doesn't mean the same thing it does in common language; it means the data support a value different from the proposed mean.
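One way to see what a Type I error rate really means is to simulate it.  The sketch below uses made-up numbers (`mu0 = 100`, `sigma = 15`, `n = 36` are hypothetical, not from the lecture) and Python's standard-library `statistics.NormalDist` in place of a z-table: when the null hypothesis is actually true, a two-tailed test at α = 0.05 should reject (i.e., produce a false positive) roughly 5% of the time.

```python
import random
from statistics import NormalDist, mean

# Hypothetical setup: H0 is true, so every sample really does come
# from a population with mean mu0.
random.seed(0)
mu0, sigma, n, alpha = 100.0, 15.0, 36, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical z

trials = 10_000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    z = (mean(sample) - mu0) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        rejections += 1  # a false positive: H0 true but rejected

print(rejections / trials)  # should land near alpha = 0.05
```

The observed rejection rate hovers around α, which is exactly the claim in the text: the Type I error rate is the significance level we chose.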

[Image caption: "That's no baby, sir."]
How would we find how big the errors are?  For Type I errors, that's easy: the chance is simply the level of α we've chosen to compare the p-value to.  Type II errors are a little more complicated to calculate, but we've essentially done the calculation before.  You take the difference between the sample mean and the proposed population mean, divide it by the standard error, and obtain the probability from the resulting z-value.  Strictly speaking, β depends on what the true mean actually is, so it is calculated against an assumed alternative mean.  Remember from the hypothesis testing posts that the value for z is given by $z_{calc}=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}$.  Here's a table which contains the types of errors and their calculations:

| Statistical Conclusion \ True Statement | Null Hypothesis True | Alternative Hypothesis True |
|---|---|---|
| **Null Hypothesis Rejected** | Type I Error, probability α | Correct Decision |
| **Null Hypothesis Not Rejected** | Correct Decision | Type II Error, probability β, found from $z_{calc}=\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}$ |
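The two cells of the table that contain error probabilities can be sketched numerically.  This is a minimal example with hypothetical numbers (a proposed mean of 100, a known σ of 15, a sample of 36 with mean 103, and an assumed true mean of 105 under the alternative; none of these come from the lecture), again using the stdlib `statistics.NormalDist` as a z-table:

```python
from statistics import NormalDist

mu0, sigma, n, xbar = 100.0, 15.0, 36, 103.0  # hypothetical data
alpha = 0.05  # chosen significance level = Type I error probability

se = sigma / n ** 0.5                 # standard error
z_calc = (xbar - mu0) / se            # z statistic from the formula above
p_value = 2 * (1 - NormalDist().cdf(abs(z_calc)))  # two-tailed p-value

# Type II error (beta) is computed against an assumed true mean.
mu_true = 105.0                       # hypothetical true mean under H1
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
lo = mu0 - z_crit * se                # non-rejection region around mu0
hi = mu0 + z_crit * se
dist_true = NormalDist(mu_true, se)   # sampling distribution if H1 true
beta = dist_true.cdf(hi) - dist_true.cdf(lo)  # P(fail to reject | H1 true)

print(round(z_calc, 3), round(p_value, 4), round(beta, 4))
```

Note that β is the probability that the sample mean lands inside the non-rejection region even though the true mean is elsewhere; it shrinks as the sample size grows or as the true mean moves further from the proposed one.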

So that's the types of error and how to find them.  If you have any questions, please leave a comment.  Next time, I'll start on regression analysis.  Until then, stay curious.

K. "Alan" Eister has his Bachelor of Science in Chemistry. He is also a tutor for Varsity Tutors.  If you feel you need any further help on any of the topics covered, you can sign up for tutoring sessions here.
