Significance tests state two competing explanations, or hypotheses, about a parameter. One, called the null hypothesis, states that the parameter equals some value (usually 0). The other, called the alternative hypothesis, states that the parameter is greater than, less than, or not equal to the value stated in the null hypothesis. When the alternative hypothesis considers values both above and below the null value, the test is called two-sided; for example, H0: μ = 0 versus Ha: μ ≠ 0 is a two-sided pair of hypotheses.
Statistical significance is not about the importance of a hypothesis, but rather about how unlikely an observation would be if the null hypothesis were true. The P-value is the probability of obtaining an estimate at least as far from the value stated in the null hypothesis as the one observed, assuming that the null hypothesis is true. Statistical significance is commonly declared when the P-value is less than .05, or 5%; a P-value this small constitutes strong evidence against the null hypothesis. One common misstep is to accept the null hypothesis when there is merely insufficient evidence to reject it; failing to reject is not the same as accepting. There are also two types of errors that researchers can make. The first, called a Type I error, happens when a true null hypothesis is mistakenly rejected. The second, called a Type II error, involves failing to reject a null hypothesis even though it is false.
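To make the decision rule concrete, here is a minimal sketch in Python using SciPy's one-sample t-test. The data, the seed, and the null value of 0 are made up for illustration; the 0.05 cutoff follows the convention described above.

```python
# A minimal sketch of a significance test, assuming a one-sample t-test
# on hypothetical data with the null hypothesis "population mean = 0".
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
sample = rng.normal(loc=0.4, scale=1.0, size=30)  # hypothetical sample

# P-value: probability of an estimate at least this far from 0,
# assuming the null hypothesis is true.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print("Reject the null hypothesis: strong evidence against it.")
else:
    # We fail to reject; we do NOT accept the null hypothesis.
    print("Insufficient evidence to reject the null hypothesis.")
```

Note the comment in the else branch: a large P-value only means the data are compatible with the null hypothesis, not that the null hypothesis is true.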
Significance tests can also be one-sided or two-sided. A one-sided test rejects the null hypothesis only if the sample statistic falls in one specific end, or tail, of the sampling distribution, the direction stated in the alternative hypothesis. A two-sided test rejects the null hypothesis if the sample statistic falls far enough into either tail. Which significance test to use also depends on whether the variable is categorical (a test about a proportion) or quantitative (a test about a mean).
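The contrast between the two kinds of alternatives can be shown on the same sample. This sketch assumes SciPy version 1.6 or later, which added the `alternative` keyword to `ttest_1samp`; the data are again hypothetical.

```python
# One-sided vs. two-sided tests on the same hypothetical sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
sample = rng.normal(loc=0.5, scale=1.0, size=25)  # hypothetical sample

# Two-sided: reject if the sample mean falls far into EITHER tail.
two_sided = stats.ttest_1samp(sample, popmean=0.0, alternative="two-sided")

# One-sided: reject only if the sample mean falls far into the upper tail.
one_sided = stats.ttest_1samp(sample, popmean=0.0, alternative="greater")

print(f"two-sided P-value: {two_sided.pvalue:.4f}")
print(f"one-sided P-value: {one_sided.pvalue:.4f}")  # half the two-sided value here
```

Because the one-sided test concentrates all of the rejection probability in a single tail, its P-value here is half the two-sided one, which is why the direction must be chosen before looking at the data.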
The matched pairs method involves the use of dependent samples. It is commonly used when the same subjects are measured twice, for example before and after a treatment, so that the two sets of observations are paired rather than independent.
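A matched-pairs analysis can be run with SciPy's paired t-test. This is a sketch under the before/after setup described above; the measurements are invented for illustration.

```python
# A minimal sketch of a matched-pairs test: before/after measurements
# on the same six subjects (hypothetical data).
import numpy as np
from scipy import stats

before = np.array([72.0, 75.5, 68.2, 80.1, 77.3, 70.8])
after  = np.array([70.1, 74.0, 67.5, 78.2, 76.0, 69.9])

# ttest_rel treats the two samples as dependent (paired), which is
# equivalent to a one-sample t-test on the within-pair differences.
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")
```

Pairing matters because each subject serves as its own control: testing the within-pair differences removes subject-to-subject variation that an independent-samples test would leave in.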