The Central Limit Theorem
A long-standing problem of probability theory has been to find necessary and sufficient conditions for approximating the laws of sums of random variables. Chebyshev, Lyapunov, and Markov took up this problem, and their work produced the central limit theorem. The central limit theorem lets you measure the variability in your sample results by taking only one sample, and it gives a convenient way to calculate probabilities for the total, the average, and the proportion based on your sample information.
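As a minimal sketch of such a probability calculation, the Python snippet below uses made-up population parameters (mean 100, standard deviation 15, chosen purely for illustration) and asks for the chance that the average of a sample of 40 exceeds 105:

    import math

    def normal_cdf(z):
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    mu, sigma, n = 100.0, 15.0, 40   # hypothetical population parameters
    se = sigma / math.sqrt(n)        # standard error of the sample mean
    z = (105.0 - mu) / se            # z-score of a sample average of 105
    print(1.0 - normal_cdf(z))       # P(sample mean > 105), about 0.017

The same z-score recipe applies to a sample total or a sample proportion once you swap in the matching standard error (σ·√n for the total, √(p(1−p)/n) for a proportion).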
The central limit theorem is a statistical theorem stating that, given a sufficiently large sample size drawn from a population with finite variance, the mean of all sample means from that population will be approximately equal to the mean of the population. Furthermore, the sample means will follow an approximately normal distribution, with variance approximately equal to the variance of the population divided by the sample size. The central limit theorem therefore lets you find probabilities for each of these sample statistics without having to draw many samples.
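This claim is easy to check by simulation. The following sketch (assuming NumPy, and using an exponential population chosen purely for illustration, with mean 2 and variance 4) draws many samples of size 50 and compares the mean and variance of the sample means against the theorem's predictions:

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 50, 100_000
    # Exponential population, chosen for illustration: mean 2, variance 4.
    samples = rng.exponential(scale=2.0, size=(reps, n))
    means = samples.mean(axis=1)
    print(means.mean())   # close to the population mean, 2
    print(means.var())    # close to sigma^2 / n = 4 / 50 = 0.08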
The central limit theorem is a major probability theorem that tells you which sampling distribution to use for many different statistics, including the sample total, the sample average, and the sample proportion. Its main purpose is to justify a normal approximation as long as n, the size of your sample, is large enough. Let X be any random variable with mean µX and standard deviation σX (such as weight, height, or age). The amazing and counter-intuitive thing about the central limit theorem is that no matter what the shape of the original distribution, the sampling distribution of the mean approaches a normal distribution; for most distributions, the approach is quite fast as n increases. If the sample size is sufficiently large, then the mean of a random sample from the population has a sampling distribution that is approximately normal, with mean µX and standard deviation σX/√n.
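To see this shape convergence concretely, the sketch below (again assuming NumPy and an exponential population, which is strongly right-skewed with skewness 2) estimates the skewness of the sampling distribution of the mean for several sample sizes; it shrinks toward 0, the skewness of a normal distribution, roughly like 2/√n:

    import numpy as np

    rng = np.random.default_rng(1)
    for n in (1, 5, 50):
        # 100,000 sample means, each computed from n exponential draws (mean 2).
        means = rng.exponential(scale=2.0, size=(100_000, n)).mean(axis=1)
        z = (means - means.mean()) / means.std()
        print(n, round((z ** 3).mean(), 2))   # skewness: about 2.0, 0.89, 0.28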