According to Neutens and Rubinson (2010), the key to most significance testing is to establish the extent to which the null hypothesis is believed to be true. The null hypothesis refers to any hypothesis to be nullified; it normally presumes chance results only, that is, no difference between averages or no correlation between variables. For example, if we undertook a study of the effects of consuming alcohol on the ability to drive a car by asking a sample of people to perform basic driving skills while under the influence of large quantities of alcohol, the null hypothesis would be that consuming alcohol has no effect on an individual’s ability to drive.

In statistics, a result is said to be statistically significant if it is unlikely to have occurred by chance; in such cases the null hypothesis is rejected. The most common significance level for accepting a finding as trustworthy is 0.05, or 5%. This is the probability of rejecting the null hypothesis when it is in fact true.

When the null hypothesis is rejected even though it is actually true, a so-called type I error has occurred: there is no correlation between the variables, but the test indicates that there is (a false positive). A type II error occurs when a false null hypothesis is accepted, or not rejected. In most cases this means that the results are not down to chance alone and there is a correlation between the variables, but the test did not detect it and gives a false negative finding. There is a trade-off between type I and type II errors: the former can be reduced by setting a very low significance level, but this increases the likelihood that a false null hypothesis will not be rejected.
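The trade-off described above can be illustrated with a small simulation. The following is a sketch in Python (not drawn from Neutens and Rubinson; the sample sizes, effect size, and the use of a two-sample z-test are illustrative assumptions): a two-group comparison is run many times, and we count how often the test rejects the null hypothesis, first when the null is true (type I errors) and then when it is false (type II errors), at two different significance levels.

```python
import random
import statistics
from math import sqrt, erf

def p_value_two_sided(z):
    # Two-sided p-value from a standard normal test statistic:
    # 2 * (1 - Phi(|z|)), with Phi built from the error function.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def one_experiment(effect=0.0, n=50):
    # Two groups of n observations; under the null (effect = 0)
    # both groups are drawn from the same normal distribution.
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return p_value_two_sided(z)

random.seed(1)
alpha = 0.05
trials = 2000

# Type I error rate: the null is true (effect = 0), so every
# rejection is a false positive; the rate should sit near alpha.
type1 = sum(one_experiment(0.0) < alpha for _ in range(trials)) / trials

# Type II error rate: the null is false (effect = 0.5 standard
# deviations), so every non-rejection is a false negative.
type2 = sum(one_experiment(0.5) >= alpha for _ in range(trials)) / trials

# Tightening the significance level to 0.01 cuts type I errors
# but makes it harder to reject a false null, raising type II errors.
type2_strict = sum(one_experiment(0.5) >= 0.01 for _ in range(trials)) / trials

print(f"Type I rate  at alpha={alpha}: {type1:.3f}")
print(f"Type II rate at alpha={alpha}: {type2:.3f}")
print(f"Type II rate at alpha=0.01: {type2_strict:.3f}")
```

Running the simulation, the type I rate lands near the chosen 5% level, while the type II rate visibly rises when the significance level is lowered to 0.01, which is the trade-off the passage describes.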
Neutens, J. J., & Rubinson, L. (2010). Research techniques for the health sciences (4th ed.). San Francisco, CA: Pearson Benjamin Cummings.