Consider the following question: someone takes a sample from a population and computes both the sample mean and the sample standard deviation. What can this sample mean tell him about the population mean?
This is an important problem, and it is addressed by the Central Limit Theorem. For now, let us not worry about exactly what this theorem states; instead, let us look at how it can help us answer our question.
The Central Limit Theorem tells us that if we take very many samples, the means of all these samples will lie in an interval around the population mean. Some sample means will be larger than the population mean, some will be smaller. The Central Limit Theorem goes on to state that 95% of the sample means will lie in a certain interval around the population mean. That interval is called the 95% confidence interval. Practically speaking, this means that whenever someone takes a sample and calculates its mean, he can be 95% confident that the mean of the sample he just took lies in the 95% confidence interval. More importantly, the statement can be turned around: if a sample mean lies within a certain distance of the population mean, then the population mean lies within that same distance of the sample mean. So if someone takes a sample from a population and calculates its mean, he can be 95% confident that the population mean lies in an interval of that width centered on his sample mean. Thus, the sample mean gives us an approximation of the population mean. The same holds true for a 90%, a 99%, or for that matter any percentage confidence interval. Depending on the situation we are in, we can easily calculate these intervals. There are three different situations which we will study, but let us first look at the general idea of a confidence interval.
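To see this idea in action, here is a minimal simulation sketch in Python (using NumPy). The population mean, standard deviation, sample size, and number of samples used below are illustrative assumptions, not values from the text. The sketch draws many samples from a normal population, checks what fraction of the sample means fall within 1.96 standard errors of the population mean, and then turns the statement around to check how often an interval centered on the sample mean covers the population mean; both fractions come out close to 95%.

import numpy as np

# Illustrative population parameters; these numbers are assumptions, not from the text.
mu, sigma = 50.0, 10.0        # population mean and standard deviation
n = 40                        # size of each sample
num_samples = 100_000         # number of samples we draw

rng = np.random.default_rng(seed=0)

# Draw many samples from the population and compute each sample's mean.
sample_means = rng.normal(mu, sigma, size=(num_samples, n)).mean(axis=1)

# By the Central Limit Theorem, about 95% of the sample means should lie
# within 1.96 standard errors (sigma / sqrt(n)) of the population mean.
half_width = 1.96 * sigma / np.sqrt(n)
inside = np.abs(sample_means - mu) <= half_width
print(f"Sample means within mu +/- {half_width:.2f}: {inside.mean():.1%}")

# Turning the statement around: the interval (sample mean +/- half_width)
# should contain the population mean for about 95% of the samples.
lower = sample_means - half_width
upper = sample_means + half_width
covers = (lower <= mu) & (mu <= upper)
print(f"Intervals that cover the population mean: {covers.mean():.1%}")

The factor 1.96 is simply the value from the standard normal distribution that cuts off the central 95% of the probability; a different confidence level would use a different factor.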
The General Idea of Confidence Intervals
Suppose that we have a population which is normally distributed. The population mean, usually denoted by μ, will thus be at the peak of the distribution. Assume that we plot the sample means on the horizontal axis. The 95% confidence interval is the interval that contains 95% of the sample means. Since the