Dr. Nolan
May 16, 2013
6. There are four specific methods for implementing survey research: on-site surveys, telephone surveys, computer-based surveys, and mail surveys. Each has pros and cons, listed as follows:
On-site surveys:
Pros—great for probing; best for non-vested markets; the survey can be kept to no more than 6 pages; physical props can be used for demonstrations; generally the highest response rate, at around 75%.
Cons—people may not answer questions about sensitive information (age, income, marital status) accurately or at all; the most costly of the four types; labor intensive; follow-up is usually impossible because addresses and phone numbers are not collected.
Telephone surveys:
Pros—can be conducted from a central location; less costly now than in the past due to lower phone rates; data is collected quickly; costs only about 40% as much as on-site or mail surveys; respondents are anonymous, so they are more likely to share sensitive information.
Cons—some up-front investment is required; products cannot be physically tested or demonstrated.
Computer-based surveys:
Pros—inexpensive way to reach a lot of people; sensitive questions are more likely to be answered; respondents can take their time; faster data collection; ability to track respondents.
Cons—people may ignore requests for surveys thinking they are spam; technical problems may arise.
Mail surveys:
Pros—good for sensitive questions because most don’t require a name or address; bulk mail can be used to keep costs down.
Cons—internet surveys are reducing the importance of mail surveys; can be costly due to postage, printing, and paper; not recommended if the survey is under a time constraint.
5. There are two major sources of error in survey research: random sampling error and systematic error. Random sampling error is a statistical fluctuation that occurs because of chance variation in the elements selected for the sample. Systematic error results from some imperfect aspect of the research design that causes respondent error, or from a mistake in the execution of the research. Other errors include respondent error, a category of sample bias resulting from some respondent action or inaction, such as nonresponse or response bias. Nonresponse error is the statistical difference between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond. Nonrespondents are people who are not contacted or who refuse to cooperate in the research: no contacts are people who are not at home or who are otherwise inaccessible on the first and second contact, while refusals are people who are unwilling to participate in the research project. Self-selection bias occurs because people who feel strongly about a subject are more likely to respond to survey questions than people who feel indifferent about it. Response bias occurs when respondents consciously or unconsciously answer questions with a certain slant that misrepresents the truth.
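A minimal Python sketch of how nonresponse and self-selection bias can distort a survey estimate. Everything here is an invented illustration (the satisfaction scores, the assumption that unhappy people answer more often), not data or figures from the text:

```python
# Sketch: self-selection / nonresponse bias with made-up illustration values.
import random
import statistics

random.seed(42)

# Hypothetical population: satisfaction scores from 1 (very unhappy) to 5 (very happy).
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

def responds(score: int) -> bool:
    """Assumption for illustration: unhappy people feel strongly and answer more often."""
    response_prob = 0.6 if score <= 2 else 0.2
    return random.random() < response_prob

# Draw a simple random sample, then keep only the self-selected respondents.
sample = random.sample(population, 1_000)
respondents = [score for score in sample if responds(score)]

print("True population mean:  ", round(statistics.mean(population), 2))
print("Mean among respondents:", round(statistics.mean(respondents), 2))
print("Response rate:         ", f"{len(respondents) / len(sample):.0%}")
```

The respondent mean comes out noticeably lower than the true population mean, which is exactly the kind of slant nonresponse error describes.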
4. Descriptive statistics describe data about everyone who was actually measured, so the group involved is fully known. Inferential statistics use data drawn from a sample to make a judgment about a larger population whose members are not all observed. Population parameters are measured characteristics of the population, for example the average price paid, while sample statistics are the corresponding measures computed from the sample and used to estimate those parameters. A frequency distribution is one of the most common ways to summarize a set of data: it shows how often each value occurs, either as frequency percentages (what is) or as probabilities (what may be), where probability is the long-run relative frequency with which an event will occur. A significance level is the critical probability associated with a statistical hypothesis test; it indicates how much risk of a wrong inference the researcher will accept when concluding that an observed value differs from some statistical expectation, and it is therefore the acceptable level of Type I error.
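A minimal Python sketch tying these terms together: a frequency distribution for a small sample, and a significance level used as the acceptable Type I error rate in a one-sample t-test. The prices, the hypothesized mean of $22.00, and alpha = 0.05 are all assumptions made up for illustration:

```python
# Sketch: frequency distribution + significance level (alpha) with invented data.
from collections import Counter
import math
import statistics

sample_prices = [19.99, 24.99, 19.99, 29.99, 24.99, 19.99, 34.99, 24.99, 19.99, 29.99]

# Frequency distribution: how often each value occurs, as counts and percentages.
counts = Counter(sample_prices)
for price, count in sorted(counts.items()):
    print(f"${price:>6.2f}: {count} responses ({count / len(sample_prices):.0%})")

# Inference: is the mean price different from a hypothesized $22.00?
hypothesized_mean = 22.00
alpha = 0.05                      # significance level = acceptable Type I error rate
mean = statistics.mean(sample_prices)
std_err = statistics.stdev(sample_prices) / math.sqrt(len(sample_prices))
t_stat = (mean - hypothesized_mean) / std_err

# With n - 1 = 9 degrees of freedom, the two-tailed critical t at alpha = 0.05 is about 2.262.
critical_t = 2.262
print(f"sample mean = {mean:.2f}, t = {t_stat:.2f}")
print("Reject H0 (difference is significant)" if abs(t_stat) > critical_t
      else "Fail to reject H0 at the 0.05 level")
```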
3. The sampling frame is the list of elements from which a sample may be drawn. It is also called the working population because it supplies the units that will eventually be involved in the analysis. A simple example of a sampling frame would be a list of all members of the American Medical Association. A sampling unit is a single element or group of elements subject to selection in the sample. A sampling frame error occurs when certain sample elements are not listed or are not accurately represented in the sampling frame. The four types of sampling methods are the simple random sample, systematic random sample, stratified random sample, and cluster sample. Simple random sampling is a procedure that assures each element in the population an equal chance of being included in the sample. Systematic sampling is a procedure in which a starting point is selected by a random process and then every nth item on the list is selected; its main con is periodicity, a form of bias that arises when the list has a cyclical pattern that coincides with the sampling interval. Stratified sampling is a probability sampling procedure in which simple random subsamples that are more or less equal on some characteristic are drawn from within each stratum of the population; a pro is that stratification reduces random sampling error, and a con is that the method can be costly if a list of population elements broken out by stratum cannot be obtained. A cluster sample is an economically efficient sampling technique (pro) in which the primary sampling unit is not an individual element of the population but a large cluster of elements; the clusters themselves are selected randomly. The con of cluster sampling is that the attitudes and characteristics of the elements within a cluster may be too similar and not diversified enough.
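A minimal Python sketch of the four sampling methods, drawn from a hypothetical sampling frame of 1,000 numbered members split into regions. The frame, the region labels, the sample size of 100, and the proportional allocation are all assumptions for illustration:

```python
# Sketch: simple random, systematic, stratified, and cluster sampling from one frame.
import random

random.seed(7)

# Hypothetical sampling frame: (member_id, region) pairs.
frame = [(i, random.choice(["North", "South", "East", "West"])) for i in range(1000)]
n = 100  # desired sample size

# 1. Simple random sample: every element has an equal chance of selection.
simple_sample = random.sample(frame, n)

# 2. Systematic sample: random starting point, then every k-th element on the list.
k = len(frame) // n
start = random.randrange(k)
systematic_sample = frame[start::k]

# 3. Stratified sample: a simple random subsample drawn within each stratum (region).
strata = {}
for member in frame:
    strata.setdefault(member[1], []).append(member)
stratified_sample = []
for region, members in strata.items():
    share = round(n * len(members) / len(frame))   # proportional allocation
    stratified_sample += random.sample(members, share)

# 4. Cluster sample: randomly pick whole clusters (regions) and take everyone in them.
chosen_regions = random.sample(list(strata), 2)
cluster_sample = [m for m in frame if m[1] in chosen_regions]

print(len(simple_sample), len(systematic_sample), len(stratified_sample), len(cluster_sample))
```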
1. There are several extraneous variables that can affect the validity of experimental studies:
History—specific events occurring between the first and second measurement in addition to the experimental variable.
Maturation—changes within respondents that occur with the passage of time, such as growing older, hungrier, or more tired.
Testing—the effects of taking a test upon the scores of a second testing.
Instrumentation—changes in the calibration of the measuring instrument, or changes in the observers or scorers, may produce changes in the obtained measurements.
Statistical regression—regression toward the mean, operating where groups have been selected on the basis of their extreme scores (see the sketch after this list).
Selection—biases resulting from the differential selection of respondents for the comparison groups.
Experimental mortality—loss of respondents from the comparison groups.
Selection-maturation interaction, etc.—in certain multiple-group quasi-experimental designs, the interaction of these factors might be mistaken for the effect of the experimental variable.
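A minimal Python sketch of the statistical regression threat flagged above. The scores, the noise level, and the bottom-10% cutoff are invented for illustration: people selected for extreme pretest scores drift back toward the average on a second measurement even with no treatment at all.

```python
# Sketch: regression toward the mean when selecting on extreme pretest scores.
import random
import statistics

random.seed(3)

def observed_score(true_ability: float) -> float:
    """An observed score is assumed to be true ability plus random measurement noise."""
    return true_ability + random.gauss(0, 10)

# Hypothetical group of 1,000 people with varying true ability.
people = [random.gauss(100, 15) for _ in range(1000)]
pretest = [(ability, observed_score(ability)) for ability in people]

# Select the "worst performers" on the pretest (roughly the bottom 10%) and retest them, untreated.
cutoff = sorted(score for _, score in pretest)[100]
selected = [(ability, score) for ability, score in pretest if score <= cutoff]
retest = [observed_score(ability) for ability, _ in selected]

print("Mean pretest score of selected group:", round(statistics.mean(s for _, s in selected), 1))
print("Mean retest score (no treatment):    ", round(statistics.mean(retest), 1))
```

The retest mean is higher than the selected group's pretest mean purely by chance, which is why an apparent "improvement" in an extreme group can be mistaken for a treatment effect.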
There are 6 recognized experimental design options. The first 3 are pre-experimental designs, and the other 3 are true experimental designs.

The pre-experimental designs start with the One-Shot Case Study, in which the experimental group is exposed to the independent variable (X) and observations of the dependent variable (O) are then made. No observations are made before the independent variable is introduced. Disadvantages include a total absence of control, no valid reference data for comparison, and the fact that standardized tests provide only limited help.

The One-Group Pretest-Posttest Design uses a single group: the group is measured (pretest), exposed to the treatment, and then measured again (posttest), and the difference between the two measurements is attributed to the treatment. A common example is in medicine, where patients are assessed, given a drug, and assessed again to judge whether the drug worked; without a control group, however, other explanations for the change cannot be ruled out. Its disadvantages are history, maturation, testing, reactivity, instrumentation, and statistical regression.

The Static Group Comparison attempts to make up for the lack of a control group but falls short in showing whether a change has occurred. Two groups are chosen, one of which receives the treatment while the other does not, and a posttest score is then used to measure the difference between the two groups after treatment. Because this design includes no pretesting, any differences between the two groups prior to the study are unknown. Its disadvantages are that there is no formal means of certifying that the groups would have been equivalent had it not been for X, there is no control over the selection of participants in each group, and mortality (drop-outs) can differ between the groups.

The next group are the true experimental designs. The Pretest-Posttest Control Group Design is a true experimental design in that participants are randomly assigned, a control group is used over the same period of time and undergoes exactly the same tests, and internal validity is therefore greater; statistical analysis of the pretest-to-posttest changes in the two groups can then determine whether the intervention had a significant effect. It is the most widely used of the 3 true experimental designs. Its main remaining disadvantage is that the pretest itself may interact with the treatment.

The Solomon Four-Group Design combines the standard pretest-posttest two-group design with the posttest-only control group design. The various combinations of tested and untested groups, with treatment and control groups, allow the researcher to ensure that confounding variables and extraneous factors have not influenced the results.

Finally, the Posttest-Only Control Group Design is a design in which the experimental and control groups are measured and compared only after implementation of an intervention, since random assignment is assumed to make the two groups equivalent beforehand. Between-group differences are then used to determine treatment effects.
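A minimal Python sketch of a pretest-posttest control group design, using invented numbers: participants are randomly assigned, both groups are pretested and posttested, and the treatment effect is estimated from the difference in change scores between the groups. The baseline scores, noise, and the assumed 5-point treatment effect are illustrative assumptions:

```python
# Sketch: pretest-posttest control group design with simulated (made-up) scores.
import random
import statistics

random.seed(1)

participants = list(range(40))
random.shuffle(participants)                    # randomization
treatment_group = participants[:20]
control_group = participants[20:]

def pretest(_participant: int) -> float:
    """Baseline score; the same process generates it for everyone."""
    return random.gauss(50, 10)

def posttest(pre: float, treated: bool) -> float:
    # Assumed effect for illustration: treatment adds about 5 points on average.
    return pre + random.gauss(5 if treated else 0, 5)

treatment_changes, control_changes = [], []
for p in treatment_group:
    pre = pretest(p)
    treatment_changes.append(posttest(pre, True) - pre)
for p in control_group:
    pre = pretest(p)
    control_changes.append(posttest(pre, False) - pre)

effect = statistics.mean(treatment_changes) - statistics.mean(control_changes)
print(f"Estimated treatment effect: {effect:.1f} points")
```

Because both groups experience the same history, maturation, and testing effects, the difference in their change scores isolates the contribution of the treatment itself.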
References
http://allpsych.com/researchmethods/trueexperimentaldesign.html
http://www.hsrmethods.org/Glossary/Terms/P/Posttest%20Only%20Control%20Group%20Design.aspx