
Experiments

All physical scientists, regardless of field of study and whether they work on the experimental side or the theory side, rely upon past experiments.  However purely theoretical the work of any particular scientist tends to be, the foundation of that theoretical work is solidified in experiment.  I challenge you to present a counterexample that is not philosophy.  Think very carefully about, and thoroughly background-check, any example you come up with before committing to it; chances are phenomenal (100%, actually) that it is either based on experiment or is philosophy.

Since I am a student of science, I wish to understand how experiments are constructed, run, and replicated.  I also wish for everybody else to understand the process of experimenting and why it is so important to the process of science.

First, we need to understand why experiments are performed.  Everything about any given experiment follows from the reason for experimentation.

Why do we experiment?  The obvious answer is to scientifically understand one bit of the world.  That's why any scientist does an experiment.  As an extension of this concept, we are looking for a reproducible understanding of that bit of the universe.  This means that if I hand you my experimental procedure and you go to another continent without me, you can reproduce my findings by reproducing my experiment without me hovering over you in any way, either in person or remotely.

The concept of reproducibility is very important, whether it's in industry or in pure science.  The reason for this reproducibility differs between industry and pure science, but it is equally important in both.  In industry, you want reproducibility in order to save money: the commercial process must yield the same outcome every time, regardless of who performs it or where.  In science, reproducibility is important to keep science (and scientists) honest.  If this check is not done before an experiment is published (via a scientific paper), then someone will try to run the experiment and either find that the procedures are incomplete or that the results are consistently something other than what is advertised, and I don't think anybody wants that.
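The idea of two labs independently arriving at the same answer can be sketched with a toy simulation.  This is a hypothetical, simplified stand-in for a real experiment; the "true value" of 10.0 and the noise level are arbitrary assumptions, not anything from a real procedure:

```python
import random
import statistics

def run_experiment(seed, n_trials=1000):
    """A stand-in 'experiment': repeatedly measure a quantity whose
    true value is 10.0, with Gaussian measurement noise (both arbitrary)."""
    rng = random.Random(seed)
    measurements = [10.0 + rng.gauss(0, 0.5) for _ in range(n_trials)]
    return statistics.mean(measurements), statistics.stdev(measurements)

# Two independent "labs" follow the same written procedure on their own.
mean_a, sd_a = run_experiment(seed=1)
mean_b, sd_b = run_experiment(seed=2)

# The finding reproduces if the two means agree within their uncertainties.
print(f"lab A: {mean_a:.3f} +/- {sd_a:.3f}, lab B: {mean_b:.3f} +/- {sd_b:.3f}")
```

Each lab uses its own random seed, standing in for the fact that no two real runs share the same noise; what must agree is the conclusion, not the raw data.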

With this in mind, how do scientists go about running an experiment?  First, the scientist has to determine what they wish to discover.  Once the question is present, the experiment to find the answer can be planned.  Planning the experiment will depend upon the question to be answered, and will rarely (if ever) be exactly the same.

Different people have different particular methods for designing an experiment.  My preferred method is to start directly with the question to be answered and to work backwards.  How do I answer the question?  When I figure out the method to do so, I ask what I need to do experimentally to get to that method.  I repeat this as far back as needed to reach a starting point with the materials I have available to me.
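Working backwards from the question amounts to walking a chain of prerequisites until you reach the materials on hand.  A minimal sketch, with step names invented purely for illustration:

```python
# prerequisite-of map: each step points to the step needed just before it
plan = {
    "answer the question": "analyze the data",
    "analyze the data": "collect measurements",
    "collect measurements": "prepare the sample",
    "prepare the sample": "gather starting materials",
}

# Walk backwards from the goal until no earlier step is required.
step = "answer the question"
ordered = []
while step in plan:
    ordered.append(step)
    step = plan[step]
ordered.append(step)

ordered.reverse()  # flip so the plan reads start-to-finish
print(ordered)
```

Reversing the walk at the end turns the backwards chain of "what do I need first?" questions into a forward plan you could actually follow in the lab.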

There are other means to plan an experiment, of course; another popular method is to list all of the relevant starting materials available and all of the possible final procedures, and then build bridges from start to finish.

There are many methods of designing an experiment, and so long as the resulting experiment answers the question indisputably, is itself reproducible, and produces reproducible results, the design method is perfectly valid; as long as those three criteria are met, the only difference between scientists is personal preference.

Once the experiment is designed and planned, the scientist goes into the lab to run it.  After the experiment is run, the scientist would do well to check its validity by running it multiple times to confirm that the results are reproducible.  If they are, the scientist writes up the procedures and the results and works to have them published.  If they are not, the scientist investigates what went wrong with the procedure and fixes the problem(s) until the results are reproducible.
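The run-and-recheck loop can be sketched as a simple reproducibility gate: repeat the experiment several times and accept it only if the spread of the results stays small.  The 5% tolerance and the simulated experiment clustering near 42 are arbitrary assumptions for the sketch:

```python
import random
import statistics

def reproducibility_check(run, n_repeats=5, tolerance=0.05):
    """Run the experiment `n_repeats` times; call it reproducible if the
    relative spread (stdev / |mean|) of the results is within `tolerance`.
    `run` is any zero-argument callable returning one numeric result."""
    results = [run() for _ in range(n_repeats)]
    spread = statistics.stdev(results) / abs(statistics.mean(results))
    return spread <= tolerance, results

# Hypothetical experiment: a noisy measurement that clusters near 42.
rng = random.Random(0)
ok, results = reproducibility_check(lambda: 42 + rng.gauss(0, 0.5))
print("reproducible:", ok)
```

In a real lab the "runs" are repeated physical experiments rather than function calls, and a failed gate sends the scientist back to debug the procedure rather than the code.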

With experiment planning, we can have an idea of how to get to an answer before setting foot in a lab.  This is important: if you're in a lab, you want to know your plan of action; otherwise, you're in everybody's way for no reason.  Knowing your plan of action while in the lab is the short-term purpose of pre-planning experiments.  The long-term purpose is knowing how to reproduce the results.
