
O. Chem I Lecture #1: Origin of Matter

Hello Internet, and welcome to the Organic Chemistry Lecture Series by The Science of Life.  This is the first lecture, so let's start at the very beginning.  That means starting with Mother Nature's synthesis of the basic units of chemistry, well before either the Earth or humans were even a thing.
The Big Bang happened about 13.8 billion years ago.  It was the first instance of space, energy, and time in our universe; it's where the universe itself banged into existence.  What caused it is a mystery; nobody knows how it happened.
Notice what was missing from that list: matter.  In the very first instants of the universe, there was no matter at all.  Everything was on the energy side of the mass-energy equation.  Within the first fraction of a second of cooling, the energy finally became cool enough to condense into matter in the form of quarks and leptons.  The quarks are the building blocks of the protons and neutrons required for the nucleus of an atom, and the most well-known of the leptons is the electron, which orbits that nucleus.
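To put a concrete equation behind "the energy side of the equation," the relation in play is Einstein's mass-energy equivalence.  As a quick worked example (the proton mass below is a standard textbook value, not something derived in this lecture):

$$E = mc^2$$

Creating a single proton of rest mass $m_p \approx 1.67 \times 10^{-27}$ kg costs $E = m_p c^2 \approx 1.5 \times 10^{-10}$ J, or about 938 MeV, which is why matter could only "freeze out" of the energy once the universe had cooled enough.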
As things cooled further, the quarks combined to form mainly protons, and those protons combined with electrons to form hydrogen atoms in vast clouds.  Gravity then takes over: eventually, these clouds of hydrogen in otherwise empty space condense enough to form big balls of gas, which in turn start the process of nuclear fusion.  The second that happens, we call that ball of gas a star.
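As a hedged sketch of that hydrogen-forming step, the capture of an electron by a proton to make a neutral atom can be written as a simple reaction (the 13.6 eV figure is the standard ground-state binding energy of hydrogen, quoted here for context):

$$p^+ + e^- \rightarrow \mathrm{H} + \gamma$$

where the emitted photon carries away the roughly 13.6 eV by which the bound atom is lower in energy than the free proton and electron.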
The nuclear fusion in a star converts hydrogen into helium, and helium into heavier elements.  This includes the carbon that is central to the study of Organic Chemistry.  Eventually the star begins producing iron-56, at which point fusion stops paying for itself (fusing iron consumes energy rather than releasing it), and the star begins the march toward fizzling out, like the end of a campfire without any fresh wood.  When this happens, it's only a matter of time before the star goes supernova, exploding all of its material into space.  This explosion helps accelerate the condensation of nearby gas clouds into star systems, some of which also include planets, because nature is messy like that and not the neat and sterile environment of a lab.
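For a concrete picture of what "converting hydrogen into helium" means, the net result of the proton-proton chain that powers hydrogen-burning stars can be summarized as follows (intermediate steps are omitted, and the values quoted are standard nuclear-physics figures rather than anything derived here):

$$4\,{}^{1}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + 2e^{+} + 2\nu_e + \text{energy}$$

releasing roughly 26.7 MeV per helium nucleus formed.  Iron-56 sits at (or very near) the peak of the binding-energy-per-nucleon curve, at about 8.8 MeV per nucleon, which is the quantitative reason the fusion payoff runs out there.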
After a few generations of stars, about 4.6 billion years ago, one of these supernovae helped condense one of these clouds of gas into what we now call "our solar system".  So that's an oversimplified version of how all the elements were synthesized by nature.
So that brings us to the end of this lecture.  Next time, I will cover some basic definitions that need to be known before carrying on further into the course.  Subscribe to stay up to date, and click on the bell to get notified when I upload new episodes.
