Samantha Smith
BSHS/382- Research and Statistics
April 22, 2013
Angela Colistra
Introduction
Reliability and validity are two essential qualities of research in the human services field; without them, a researcher's results would be incomplete. This paper defines the major types of reliability and validity and gives examples of how each applies to human services research. It also provides examples of data collection methods and data collection instruments used in human services research.
Types of Reliability
Reliability refers to the consistency or stability of a measure; it may also imply dependability. The types of reliability discussed here are alternate-form reliability, internal-consistency reliability, item-to-item reliability, and test-retest reliability. Alternate-form reliability is the degree of relatedness of different forms of the same test (Rosnow & Rosenthal, 2008). For example, if a human services researcher gives a client two versions of the same assessment in which some of the wording and questions differ, the results should be the same; if they are not, the assessment needs to be revised. Internal-consistency reliability is the overall degree of relatedness of all items in a test or all raters in a judgment study, also called reliability of components (Rosnow & Rosenthal, 2008). An example would be two clients asked to categorize their issues based on their needs from the agency; a perfectly reliable result would be both clients identifying that they need help in the same ways. Item-to-item reliability is the reliability of any single item on average, analogous to judge-to-judge reliability, which is the reliability of any single judge on average (Rosnow & Rosenthal, 2008); in human services research, this would be reflected in two equivalent items yielding the same responses. Test-retest reliability is the degree of temporal stability (relatedness) of a measuring instrument or test, or of the characteristic it is designed to evaluate, from one administration to another; it is also called retest reliability (Rosnow & Rosenthal, 2008). This determines how reliable the instrument or test is over time.
Types of Validity
Validity takes several forms in human services research design: construct validity, content validity, convergent and discriminant validity, criterion validity, external validity, face validity, internal validity, and statistical-conclusion validity. Construct validity is the degree to which the conceptualization of what is being measured or experimentally manipulated is what is claimed, such as the constructs that are measured by psychological tests or that serve as a link between independent and dependent variables (Rosnow & Rosenthal, 2008). Content validity is the adequate sampling of the relevant material or content that a test purports to measure (Rosnow & Rosenthal, 2008). Convergent and discriminant validity are the grounds established for a construct based on the convergence of related tests or behavior (convergent validity) and the distinctiveness of unrelated tests or behavior (discriminant validity) (Rosnow & Rosenthal, 2008). Criterion validity is the degree to which a test or questionnaire is correlated with outcome criteria in the present (its concurrent validity) or the future (its predictive validity) (Rosnow & Rosenthal, 2008). External validity is the generalizability of an inferred causal relationship over different people, settings, manipulations (or treatments), and research outcomes (Rosnow & Rosenthal, 2008). Face validity is the degree to which a test or other instrument "looks as if" it is measuring something relevant (Rosnow & Rosenthal, 2008). Internal validity is the soundness of statements about whether one variable is the cause of a particular outcome, especially the ability to rule out plausible rival hypotheses (Rosnow & Rosenthal, 2008).
Statistical-conclusion validity is the accuracy of drawing certain statistical conclusions, such as an estimation of the magnitude of the relationship between an independent and a dependent variable (the effect size) or an estimation of the degree of statistical significance of a particular statistical test (Rosnow & Rosenthal, 2008).
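The effect size mentioned in that definition can be illustrated with a short, hypothetical computation (the group scores are invented for illustration, not drawn from the course text). One common effect-size measure, Cohen's d, expresses the difference between two group means in pooled standard-deviation units:

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group1, group2):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * stdev(group1) ** 2 +
                  (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

# Hypothetical outcome scores for a treatment group and a comparison group.
treatment = [22, 25, 27, 24, 26]
comparison = [20, 21, 23, 19, 22]
d = cohens_d(treatment, comparison)  # larger |d| = larger standardized effect
```

Estimating a quantity like d accurately, rather than relying on significance alone, is part of what statistical-conclusion validity demands.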
Data Collection Methods and Data Collection Instruments
Examples of data collection methods and data collection instruments used in human services research are interviews, assessments, observations, research, and tests.
Examples of data collection methods and data collection instruments used in managerial research are experiments, surveys, and monitoring. It is important to ensure that these methods and instruments are both reliable and valid, because human services and managerial research must be accurate in order to support sound decisions.
Conclusion
According to Bradburn (1982) and Schuman and Presser (1996), "specific questions appear to be less affected by what preceded them than are general or broadly stated questions." In other words, human services and managerial researchers must attend to how their data are collected; applying the appropriate reliability and validity measurements helps ensure that the outcome of the research is accurate.
References
Bradburn, N. M. (1982). Question-wording effects in surveys. In R. Hogarth (Ed.), New directions for methodology of social and behavioral science: Question framing and response contingency (No. 11, pp. 65–76). San Francisco: Jossey-Bass.
Rosnow, R. L., & Rosenthal, R. (2008). Beginning behavioral research: A conceptual primer (6th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall.
Schuman, H., & Presser, S. (1996). Questions and answers in attitude surveys: Experiments on question form, wording, and content. Thousand Oaks, CA: Sage.