

Showing posts from December, 2017

Multiple Regression: Basic Statistics Lecture Series Lecture #14

As promised last time, I am going to cover multiple regression analysis this time.  As mentioned last time, while correlation may not imply causation, causation does imply correlation, so correlation is a necessary but insufficient first step in determining causation.  Since this is a basic statistics lecture series, there is an assumption that students who take this course do not know matrix algebra, so this section will only work with the solutions obtained through a program such as Excel, R, SPSS, or MATLAB.  This is the regression case where there is more than one independent variable, or multiple independent variables, for a single dependent variable.  For example, I mentioned last time that there is a causal correlation between the number of wins a team in the MLB has and the ratio of the runs that team scored to the runs that team allowed.  The more runs they scored per run they allowed, the more wins they are likely to have.  After all, the…
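
Since the lecture leans on software for the matrix algebra, here is a minimal sketch of what that looks like outside of Excel or SPSS; the team numbers below are invented for illustration, and numpy's least-squares solver stands in for whichever tool you prefer.

```python
import numpy as np

# Hypothetical data: wins explained by two independent variables,
# runs scored and runs allowed (numbers invented for illustration).
runs_scored  = np.array([697, 858, 744, 669, 732])
runs_allowed = np.array([712, 660, 668, 820, 739])
wins         = np.array([80, 98, 87, 66, 78])

# Design matrix with an intercept column: wins ~ b0 + b1*scored + b2*allowed
X = np.column_stack([np.ones(len(wins)), runs_scored, runs_allowed])

# Ordinary least squares via numpy (the matrix algebra happens inside)
coef, residuals, rank, _ = np.linalg.lstsq(X, wins, rcond=None)
b0, b1, b2 = coef
print(f"wins = {b0:.1f} + {b1:.3f}*scored + {b2:.3f}*allowed")
```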

Regression Analysis: Basic Statistics Lecture Series Lecture #13

As promised last time, I am going to cover the basics of simple regression analysis.  It is simple because there is only one independent variable for the dependent variable. As a side note, most of you are likely familiar with the phrase "correlation does not imply causation".  This statement, while true, is misleading.  It is true that not all correlations are causal, but it is also true that all causations are correlated.  Causation cannot occur without correlation, so correlation is the necessary first step toward showing causation, but it is also an insufficient step. For example, if we look at the graph of MLB wins on the y-axis and the ratio of runs scored to runs allowed on the x-axis, it is easily apparent that the ratio and the number of wins are highly correlated.  This is a causal relationship as well.  The more runs you score and the fewer runs you give up, the more you'll win, because the winner is the team that scores more runs than they give up…
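
A minimal sketch of a simple regression fit, with invented ratio-and-wins numbers standing in for real MLB data:

```python
import numpy as np

# Hypothetical data: ratio of runs scored to runs allowed (x)
# versus season wins (y); values invented for illustration.
ratio = np.array([0.85, 0.93, 1.00, 1.08, 1.21])
wins  = np.array([68, 75, 81, 88, 97])

# Simple regression: one independent variable, fit wins = m*ratio + b
m, b = np.polyfit(ratio, wins, deg=1)
print(f"wins = {m:.1f} * ratio + {b:.1f}")

# Correlation coefficient, the quantity the
# "correlation vs. causation" discussion is about
r = np.corrcoef(ratio, wins)[0, 1]
print(f"r = {r:.3f}")
```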

Types of Statistical Error: Basic Statistics Lecture Series Lecture #12

As promised last time, I will cover types of statistical error this time.  Knowing the magnitude and the type of error is important to convey with any hypothesis test.  This also happens to be why, in science, it is said that nothing can ever truly be proven, only disproven. First, it is important to understand that error typing is an integral part of hypothesis testing and of no other part of statistics, similar to the human brain and the person it belongs to: the brain cannot fit into any other species, and humans cannot live without it.  The same concept applies to these types of errors and hypothesis testing; error typing fits nowhere else, and hypothesis testing cannot succeed without it. So what specifically is statistical error?  It is the chance that the conclusion is incorrect, namely the chance that the null hypothesis is rejected when it's true (Type I Error, false positive) and the chance of failing to reject the null hypothesis when it's false (Type II Error, false negative)…
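
A quick simulation can make the Type I error concrete: if the null hypothesis is actually true and we test at significance level α = 0.05, we should falsely reject about 5% of the time. This sketch assumes scipy is available; all the settings are arbitrary.

```python
import numpy as np
from scipy import stats

# Sketch: when the null hypothesis is true, a test at significance
# level alpha should commit a Type I error (false positive) about
# alpha of the time. Numbers here are arbitrary simulation settings.
rng = np.random.default_rng(0)
alpha, trials, n = 0.05, 10_000, 30

false_positives = 0
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # null is true: mean = 0
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:                 # rejecting a true null = Type I error
        false_positives += 1

print(f"observed Type I error rate = {false_positives / trials:.3f}")
```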

Confidence Interval: Basic Statistics Lecture Series Lecture #11

You'll remember that last time, I covered hypothesis testing of proportions, and the time before that, hypothesis testing of a sample with a mean and standard deviation.  This time, I'll cover the concept of confidence intervals. Confidence intervals are of the form $\mu_{1-\alpha} \in (a, b)_{1-\alpha}$, where a and b are two numbers such that a < b, α is the significance level as covered in hypothesis testing, and μ is the actual population mean (not the sample mean). This is the statement that there is a [(1-α)×100]% probability that the true population mean will be somewhere between a and b.  The obvious question is "How do we find a and b?".  Here, I will describe the process.

Step 1. Find the Fundamental Statistics. The first thing we need to find is the fundamental statistics: the mean, standard deviation, and the sample size.  The sample mean is typically referred to as the point estimate by most statistics textbooks.  This is because it is the point estimate of the population…
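
As a preview of the process, here is a sketch of the full (a, b) computation in Python, using a t critical value and invented sample statistics; the post itself walks through these steps by hand.

```python
from math import sqrt
from scipy import stats

# Sketch of the (a, b) computation for mu in (a, b) at confidence 1 - alpha.
# The sample statistics below are invented for illustration.
x_bar, s, n = 25.3, 4.2, 40      # sample mean, sample std dev, sample size
alpha = 0.05                     # 95% confidence

std_err = s / sqrt(n)                          # standard error of the mean
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # two-sided critical value
a, b = x_bar - t_crit * std_err, x_bar + t_crit * std_err
print(f"mu in ({a:.2f}, {b:.2f}) with {100 * (1 - alpha):.0f}% confidence")
```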

Basics of Statistics Lecture #10: Hypothesis Testing of Proportions

As promised last time, I am going to cover hypothesis testing of proportions.  This is conceptually similar to hypothesis testing from the mean and standard deviation, but the calculations are going to be different. A proportion is the percentage of successes in a sample or population, reported in decimal form.  This means that if heads comes up 50% of the time, then it is reported as p = 0.50.  Because of this, the calculations are different from those of the mean and standard deviation case from last time.  For instance, when we have proportion data, we don't know the standard deviation of either the sample or the population.  This means that the standard error calculation cannot be performed as usual.  We must instead use a proportion-specific version, which is given by $\sigma_{E}=\sqrt{\frac{p(1-p)}{n}}$, where p is the proportion and n is the sample size.  If we have the sample size and the number of successes, we can calculate the proportion by $p=\frac{n_s}{n}$, where $n_s$ is the number of successes…
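
Putting the two formulas together, here is a sketch of a one-sample proportion test; the coin-flip counts are invented, and I use the null proportion p0 in the standard error, a convention the excerpt doesn't spell out.

```python
from math import sqrt
from scipy import stats

# Sketch of a one-sample proportion test using the formulas above.
# The sample counts are invented: say 140 heads in 250 flips,
# testing against the fair-coin null p0 = 0.50.
n, n_s, p0 = 250, 140, 0.50
p = n_s / n                              # sample proportion p = n_s / n

std_err = sqrt(p0 * (1 - p0) / n)        # sigma_E = sqrt(p(1-p)/n), with the null p0
z = (p - p0) / std_err
p_value = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed
print(f"p = {p:.3f}, z = {z:.2f}, p-value = {p_value:.3f}")
```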

Basics of Statistics Lecture #9: Hypothesis Testing of Mean and Standard Deviation

As promised last time, I will introduce the concept of hypothesis testing.  This is finally getting to the meat and potatoes of statistical analysis. There are three things that need to be true in order for a hypothesis test to be valid:

1. The data needs to be sampled randomly.  A random sample is exactly what it says on the tin: a group from the population where each individual has an equal chance of being picked.
2. We need to know the sample mean and standard deviation.
3. One or both of the following:
   - The data needs to come from a normal distribution.
   - There needs to be a large sample size (for a lot of cases, at least 30, but the more the merrier).

There are five steps for hypothesis testing, using either of two methods (sketched in code below).  Statisticians typically distinguish between whether or not we know the population standard deviation.  Realistically, this means whether we know the standard deviation for the population (everyone) or only for the sample (the small portion of everyone under study).  For…
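
Here is the promised sketch of the two methods, with invented numbers: a z statistic when the population standard deviation is known, and a t statistic when only the sample standard deviation is.

```python
from math import sqrt
from scipy import stats

# Sketch of the two cases distinguished above, with invented numbers:
# testing H0: mu = 100 from a sample of size n = 36.
x_bar, n, mu0 = 103.2, 36, 100.0

# Case 1: population standard deviation known -> z statistic
sigma = 12.0
z = (x_bar - mu0) / (sigma / sqrt(n))
p_z = 2 * (1 - stats.norm.cdf(abs(z)))

# Case 2: only the sample standard deviation known -> t statistic
s = 11.4
t = (x_bar - mu0) / (s / sqrt(n))
p_t = 2 * (1 - stats.t.cdf(abs(t), df=n - 1))

print(f"z = {z:.2f} (p = {p_z:.3f}); t = {t:.2f} (p = {p_t:.3f})")
```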

Basic Statistics Lecture #0: Basic Terms and Calculations

This is an entry in the Basic Statistics Lecture Series which I realize I have failed to incorporate into the series so far, and I apologize for that.  This lecture is going to cover the basic calculations required for all of statistical analysis, on a fundamental level. When we collect statistical data, we typically collect it on a small part of the population as a whole.  The population is the entire group about which a statement is to hold true.  This small part of the population as a whole is called the sample of the population, or just the sample for short.  The number of data points we collect -- the number of items in the sample -- is called the sample size, which is typically denoted by the letter n. In statistics, we typically calculate the arithmetic mean, which is the sum of all values divided by the sample size.  For the population as a whole, this is denoted by the Greek letter μ (mu), and for a sample, it's denoted by $\overline{x}$ (x-bar). The sample mean is expressed by $\overline{x}=\frac{1}{n}\sum_{i=1}^{n}x_i$…
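
In code, the definitions above amount to very little, which is the point; the data values are invented for illustration.

```python
# Minimal sketch of the sample-size and sample-mean definitions above;
# the data points are invented for illustration.
sample = [4.0, 7.0, 6.5, 5.0, 8.5]

n = len(sample)                 # sample size n
x_bar = sum(sample) / n         # arithmetic mean: sum of values / n
print(f"n = {n}, x-bar = {x_bar:.2f}")
```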