The answers:
1a. Before defining and explaining science as product, I would like first to explain the purpose behind studying research methods. First, we study research methods to consume research evidence: as students, to satisfy curiosity or to fulfill research requirements; as communication scholars, to keep up with the latest developments in the field; and as critical consumers, to make useful decisions about everyday life. Second, we study research methods to produce research evidence: to gather data or opinions from a particular research group, to acquire reliable data, and eventually to draw conclusions. In addition, there are two criteria for social science research: (1) it must involve social behavior or performance, that is, how people interact, think, and feel with each other; and (2) it must be scientifically observable. There are four basic methodological approaches to the social world: experimental research, survey research, field research, and available
data. In short, multiple methods are available, no single method is superior to the others, and the researcher should choose the one most appropriate for answering specific questions accurately. Now we can move to the target part of the question. Generally, the aim of science is to produce knowledge and to understand and explain some aspects of the world. Knowledge is judged as scientific based on two criteria: science as product and science as process. First, science as a product: it produces knowledge that gives us explanations and predictions, which help us satisfy our curiosity about the research subject. It must also be observable, meaning it must be possible to address topics or study relationships by making appropriate observations. Nonscientific, by contrast, means we cannot identify, through observation, the conditions under which a certain event occurs. For example, can we actually observe pornography? Additionally, in science as a product, answers take the particular form of descriptions, explanations, and predictions. By this I mean the researcher starts formulating propositions, hypotheses, and laws related to the study: for example, empirical laws that test or observe relationships, or statistical laws that define the proportion of the time we expect something to happen. Moreover, there is understanding: our theories can clearly interpret and describe the underlying causes of phenomena. Always keep in mind that in science we do not look for proof; we find a degree of support. Scientific knowledge is tentative, uncertain, and unending, and humans always evolve and change. I can also think about science-as-product knowledge as dealing with defining our concepts clearly so that we can later measure them. The descriptions we provide of our concepts appear at two different stages of the research paper: first, as conceptual definitions, where our concepts are defined clearly in the literature; second, as operational definitions, where our concepts are interpreted clearly in the method section: how we measure them, with which scale, what the results are, and how they link back to the conceptual definitions.
1b. Science as process. This part articulates how we conduct research to get a project done. It is a cyclical interplay between theory and research. To get this done, we have to use logical reasoning about the theory and the research subject. Logical reasoning can be inductive or deductive. The first evaluates an inductive argument: it goes from observation to eventually developing a theory, and it is constructed as a form of reasoning that draws a generalization from individual instances or observations. Deductive reasoning, in contrast, goes from theory to observation: it constructs or evaluates a deductive argument, by which I mean the researcher attempts to show that a conclusion necessarily follows from a set of premises or hypotheses. So the conclusion must be true provided that the premises are supported or true. Next, empiricism plays a crucial role in science as a process. Empiricism means using our senses (sight, hearing, etc.), directly or indirectly, to observe and understand the world around us. Simply put, empirical means observable. Non-empirical, however, means the research subject cannot be observed through our senses; it rests instead on tradition, authority, or revelation. Additionally, science as a process must draw upon objectivity and control. By objectivity I mean that a researcher must be objective in conducting the research, following the scientific process to reduce bias. Control means ruling out biases, or variables other than the manipulated variables, that could affect our outcomes; simply, it is a way to eliminate bias from the study. For example, a double-blind experimental study (treatment vs. sugar pills) controls bias by design. We can also use a nonexperimental study and statistically isolate a third variable C from the equation, as in the sketch below. Finally, I would like to end the answer to this question by explaining science as ideal versus reality. By this I mean that ideal theoretical knowledge will never be fully developed in social science; as researchers we keep striving to develop theory because humans evolve very quickly. Theories are evaluated and compared based on the accuracy of their predictions.
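To illustrate the idea of statistical control mentioned above, here is a minimal sketch in Python; it is my own illustration, not from the course materials, and the variable names and data are entirely hypothetical:

```python
# A minimal sketch (hypothetical data) of statistical control in a
# nonexperimental study: isolating a third variable C by including it
# in a multiple regression.
import numpy as np

rng = np.random.default_rng(0)
n = 200
C = rng.normal(size=n)              # confound: influences both X and Y
X = 0.8 * C + rng.normal(size=n)    # predictor, contaminated by C
Y = 0.5 * X + 1.0 * C + rng.normal(size=n)

def ols(design, y):
    """Least-squares coefficients for a design matrix."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

ones = np.ones(n)
b_naive = ols(np.column_stack([ones, X]), Y)          # C not controlled
b_controlled = ols(np.column_stack([ones, X, C]), Y)  # C held constant
print("naive X coefficient:     ", round(b_naive[1], 2))       # inflated by C
print("controlled X coefficient:", round(b_controlled[1], 2))  # approx 0.5
```

Including C in the model removes its contribution from the estimated effect of X on Y, which is the nonexperimental analogue of holding conditions constant in an experiment.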
1c. The type of science we are studying in class is science as process, because we are focusing on the operationalization part of the research paper. By this I mean we are studying measurement and sampling, the types of each, how to collect data correctly, and what the best way to collect the data might be. Now, for example, I can decide which method is suitable for my research paper, such as an experiment or a survey. We are also learning how to process our data using SPSS, conduct statistical analyses, and eventually analyze our data and provide better interpretations of the results of the study. 2a. Conceptualization as concept and process: the ultimate goal of conceptualization as concept is to make sure that you have clearly defined your concept before moving to the operationalization stage. As a process, it is the formulating and clarifying of concepts, and it is linked to theory testing and construction. In addition, the researcher should consider several points related to concepts: first, concepts and variables are not interchangeable; second, a concept may designate a single category or several categories; also, a concept is not always directly observable.
2b. Operationalization as product and process: operational definitions describe research operations that specify the values or categories of variables. Many operational definitions may be possible, but the researcher chooses or develops the one that corresponds best to the concept under study. In other words, the researcher determines the operational definition that describes what the concept is and what is not part of it. When creating operational definitions, we must consider many different empirical representations, indicators, or items. An indicator consists of a single observable measure, for example a single questionnaire item in a survey. It is important to know that no two indicators measure a concept or variable in the same way, and no two indicators correspond perfectly to the underlying concept, for two reasons: first, they often contain errors of classification; second, they rarely capture all the meanings of a concept. In short, because of these errors, researchers usually rely on more than one measure when they operationalize. As for operationalization as a process, how is the selection of operational definitions accomplished? (1) The researcher decides on the overall research strategy (methods, measurements, and hypotheses) and then chooses the proper scale, focusing on the strengths and limitations of the available tools and methods. (2) The researcher selects definitions that fit based on level of measurement, reliability, and validity. By this I mean the researcher must consider the level of measurement: measurement means the assignment of numbers or labels to units of analysis to represent variable categories, and the level of measurement refers to the various meanings of these numbers or labels, which reflect basic empirical rules for category assignment. There are four general levels of measurement: nominal, ordinal, interval, and ratio. Now let's explain them one by one.
Nominal level of measurement: it assigns labels and identifies categories of variables. Codes are numerals and arbitrary; they are just labels used in statistical programs. The only numbers you can generate from such codes are counts and percentages; you cannot calculate an average or other numerical statistic. It is considered the lowest level and yields qualitative variables. Cases are classified into two or more categories, for example by gender or race. Cases placed in the same categories must be equivalent in kind: for example, male and female, NOT male and feminism. Categories of a variable must have two qualities. First, they must be exhaustive and inclusive, so that every object or person fits into one of the categories. Second, the categories must be mutually exclusive, so that no case fits into more than one category.
Ordinal level of measurement: here we are still dealing with qualitative variables. Numbers indicate only the rank order of cases on some variable, for example the order of stones along a continuum of hardness. Here we can only rank things: we can make an accurate judgment about one thing compared to another even if we cannot make an absolute judgment. It shows progression but does not indicate how much greater rank one is than rank two, for instance. Interval level of measurement: here we can start to think about quantitative variables. Equal distances (intervals) between the numbers represent equal distances in the variable being measured: the difference between 20 and 30 is the same as the difference between 90 and 100. We can attach numbers to things, analyze them quantitatively, and make mathematical comparisons. Examples include the Likert scale and the semantic differential scale. An interval scale can have an arbitrary zero.
Ratio level of measurement: it has an absolute zero point, and the presence of an absolute zero makes it possible to multiply and divide the scale numbers meaningfully, thereby forming ratios. Here we cannot have a negative value. This level is the highest, most specific level of measurement.
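To make the four levels concrete, here is a minimal sketch in Python (my own illustration; the variables and values are hypothetical) showing how each level might be represented and which statistics each level supports:

```python
# A minimal sketch of the four levels of measurement using pandas.
import pandas as pd

df = pd.DataFrame({
    # Nominal: labels only; codes are arbitrary, so only counts/percentages make sense.
    "race": pd.Categorical(["white", "black", "asian", "white"]),
    # Ordinal: ordered categories; ranks are meaningful, distances are not.
    "hardness": pd.Categorical(["soft", "medium", "hard", "medium"],
                               categories=["soft", "medium", "hard"], ordered=True),
    # Interval: equal distances, arbitrary zero (e.g., a 7-point Likert score).
    "likert": [2, 5, 7, 4],
    # Ratio: absolute zero, so ratios are meaningful (e.g., age in years).
    "age": [20, 25, 30, 22],
})

print(df["race"].value_counts(normalize=True))  # percentages: valid for nominal
print(df["hardness"].min())                     # rank comparisons: valid for ordinal
print(df["likert"].mean())                      # means: valid for interval
print(df["age"].max() / df["age"].min())        # ratios: valid only for ratio
```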
2c. I think this question asks how we practically apply conceptualization and operationalization in the research process. First, the researcher encounters a problem and starts thinking deeply about it. The outcome of this thinking is identifying the concepts that underlie the study. These concepts or constructs include an explanation of the concept, predictions in the form of propositions or hypotheses, and an understanding of the cause of the problem based on theoretical knowledge. This process of conceptualization defines the concepts abstractly. Then the problem moves from conceptualization (the abstract level) to operationalization (the practical or empirical level), where we start to measure the variables of the study empirically using the different levels of measurement: nominal, ordinal, interval, and ratio. After that, we will be able to conduct statistical analyses of the results and interpret the causal relationships in the problem. It is also important to mention the two general kinds of operational definitions in social research: manipulated and measured. The first aims to change the value of a variable (as in an experiment), whereas a measured operational definition estimates the existing value of a variable, such as a verbal or self-report measure or a composite scale (used to measure complex concepts).
2d. I believe that the processes of conceptualization and operationalization are not interconnected; rather, they follow a sequence or a specific order. This tells us, for example, that we need first to define our concept clearly before we move to operationalization, so that we can later measure the concept accurately.
3a. Reliability means consistency, or the extent to which a measure does not contain random error. It allows us to obtain a stability estimate based on consistency, so that a researcher can eventually get accurate results. When we say a scale is reliable, we mean the scale allows us to replicate the study. Keep in mind that a very unreliable measure cannot be valid. We have different ways to assess reliability. One of them is Cronbach's alpha, a measure of internal consistency: it captures the relationships among all the items simultaneously, that is, the extent to which they measure the same concept. It takes no negative values, and .70 or higher is considered acceptable internal consistency for a measure in social science. Another reliability assessment is test-retest reliability, in which the same persons or units are tested on two separate occasions. The usefulness of this method is limited for three reasons: (1) respondents may remember their answers, inflating the reliability estimate; by this I mean that if the time gap between T1 and T2 is short, participants can recall the information. (2) Real, natural change may occur in the concept being measured between T1 and T2. (3) The first application of the measure may bring conceptual changes in the persons under study; by this I mean people may think deeply about the questions, which produces new effects on the participants.
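As an illustration of the two reliability estimates just described, here is a minimal sketch in Python; the formula for Cronbach's alpha is the standard one, but the item data are made up:

```python
# Two reliability estimates: Cronbach's alpha and test-retest correlation.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering a three-item Likert scale (hypothetical data).
items = np.array([[4, 5, 4],
                  [2, 2, 3],
                  [5, 5, 5],
                  [3, 4, 3],
                  [1, 2, 2]])
print(f"alpha = {cronbach_alpha(items):.2f}")  # .70 or higher is conventionally acceptable

# Test-retest reliability: correlate the same measure at T1 and T2.
t1 = items.sum(axis=1)
t2 = t1 + np.array([0, 1, -1, 0, 1])           # hypothetical retest scores
print(f"test-retest r = {np.corrcoef(t1, t2)[0, 1]:.2f}")
```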
3b. Validity is concerned with accuracy, or the goodness of fit between an operational definition and the concept a researcher wants to measure. It cannot be assessed directly; instead, we either subjectively evaluate whether operational definitions measure what was intended, or compare the results with those of other measures. Subjective validity consists of two criteria. Face validity refers to a personal judgment that, on its face, the operational definition measures what it is supposed to; few would dispute such a measure. It includes what we call expert validity, which seeks out experts in the field who know the relevant social science literature to assess the measure's validity. Content validity is defined as the extent to which a measure adequately represents all facets of a concept and accurately measures what it is intended to measure; it focuses on the concept in depth. To demonstrate content validity, the researcher should clearly identify all the components of the concept under study and then show that the test items or indicators adequately represent those components.
3c. The relationship between reliability and validity is interdependent. A highly unreliable measure cannot be valid, and a very reliable measure can still fail to be valid. I base this response on the target (dots) figures used to illustrate reliability. When the dots cluster closely together and hit the center, you have a highly reliable and valid measure. As the dots cluster tightly but away from the center, you start having validity problems. And when the dots spread randomly around the center, some close to it and others not, you have a measure that is valid on average but not highly consistent; in this case you cannot trust any single individual's score, but you do get a rough overall estimate of the concept. Because of all this, having a highly reliable and valid measure is crucial in the scientific process.
4a. Probability sampling is scientifically more acceptable, although not always feasible or economical. It involves random selection carried out according to a set plan, rather than ad hoc choices, giving cases equal chances of selection. It allows us to estimate how closely the sample mean reflects or represents the entire population. There are many types of probability sampling: simple random sampling, systematic random sampling, stratified random sampling, random cluster sampling, and multi-stage cluster sampling. In nonprobability sampling, by contrast, the essential characteristic is that samples are selected based on the subjective judgment of the researcher rather than by random selection, and the samples are gathered in a process that does not give all individuals in the population equal chances of being included; there is no sampling frame and no random sampling. Nonprobability sampling has two weaknesses: first, it does not control for investigator bias; second, its pattern of variability cannot be predicted from probability theory (e.g., at p = .05), and sampling theory therefore makes it impossible to calculate sampling error or estimate sample precision. There are many types of nonprobability sampling, such as convenience sampling, purposive sampling, quota sampling, and referral (snowball) sampling. It saves time and money and is easy to use.
4b. First I need to give a full and complete explanation of the sampling method, and then I will explain how the procedure is applied. Simple random sampling is a type of probability sampling; it means that every possible combination of cases has an equal chance of being selected into the sample. How does simple random sampling work in the real world? Suppose I am going to survey CSU undergraduate communication students about their satisfaction with their communication classes using simple random sampling. I would follow these steps. First, I identify the sampling frame, which is the undergraduate communication students. Second, I construct a numbered list of all undergraduate communication students. Then I select cases at random, either by drawing names out of a hat or, more practically, by generating random numbers and selecting the correspondingly numbered elements from the ordered list. Finally, I contact the selected students and send them the surveys. The disadvantages of this method are that it consumes time and money and can be difficult to accomplish. Its advantages are that it is scientifically ideal: it is highly representative of the entire population, and it reduces human bias in selection by giving every participant an equal chance of being selected into the sample.
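Here is a minimal sketch in Python of that procedure; the roster is hypothetical:

```python
# Simple random sampling with Python's standard library.
import random

# Steps 1-2: the sampling frame, a numbered list of all undergraduate
# communication students (hypothetical roster of 1,200 students).
frame = [f"student_{i:04d}" for i in range(1, 1201)]

# Step 3: draw the sample at random; every combination of 100 students
# has an equal chance of being selected.
random.seed(42)                       # fixed seed for reproducibility
sample = random.sample(frame, k=100)

print(sample[:5])  # the first few selected students to contact
```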
4c. Quota sampling is a type of nonprobability sampling. It is essentially a form of purposive sampling, and it bears a superficial resemblance to stratified random sampling; it is, in effect, the nonprobability version of that design. Now let's apply quota sampling to CSU undergraduate communication students in order to conduct a survey about their satisfaction with their communication classes. First, I divide the CSU undergraduate communication students (the population) into relevant strata or quotas, such as age and race. I do not use random sampling; instead I set my quotas and then fill them conveniently. Say my quotas are defined by age and biological sex: my quota sampling then requires that representative individuals be chosen from each specific subgroup, for example 150 females and 150 males whose ages range from 20 to 30. Here I have the choice of how each quota of cases is filled; in my case, I chose to survey only males and females aged 20 to 30.
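Here is a minimal sketch in Python of that quota procedure; the roster fields and data are hypothetical:

```python
# Quota sampling: quotas are fixed in advance and then filled conveniently,
# without random selection.
import random

# Hypothetical roster of available students: (name, sex, age).
random.seed(1)
roster = [(f"student_{i:04d}",
           random.choice(["male", "female"]),
           random.randint(18, 34))
          for i in range(1, 1201)]

quotas = {"male": 150, "female": 150}   # 150 males + 150 females, aged 20-30
sample = []
for name, sex, age in roster:           # take cases as they are encountered
    if 20 <= age <= 30 and quotas[sex] > 0:
        sample.append((name, sex, age))
        quotas[sex] -= 1
    if not any(quotas.values()):        # stop once every quota is filled
        break

print(len(sample))  # 300 when both quotas are filled
```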
5a. True experiment: it is considered the most accurate type of experimental design, providing better evidence about causal relationships than other designs. It has three basic features or requirements.
First, a manipulated independent variable is followed by a measured dependent variable. Second, it consists of at least two groups: (A) the treatment group, which receives the treatment or the manipulation of the experiment; I mean exposing participants to the actual active ingredients of the drug, for example.
(B) The control group (placebo, or the sugar pills) is defined as the group that stays constant; it does not receive the treatment or the manipulation of the experiment. It is used as a benchmark against which changes in the dependent variable can be clearly measured.
(C) The comparison group may represent different levels of the independent variable; it is exposed to all of the conditions of the study except the variable being tested, the manipulated variable. This group receives either no treatment or an alternative treatment. Its purpose is to see whether there is a different intervention or impact than in the experimental group: for example, the researcher can compare outcomes between the treatment group and the comparison group to determine whether any effect is due to something other than the actual treatment.
Third, it requires random assignment, which means that each person has an equal chance of being assigned to any condition. This is done by randomly assigning participants to the different groups in the experiment (treatment group versus control group), as in the sketch below.
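Here is a minimal sketch in Python of random assignment; the participant IDs are hypothetical:

```python
# Random assignment: shuffling participants so each has an equal chance
# of landing in either condition.
import random

participants = [f"P{i:02d}" for i in range(1, 41)]  # 40 volunteers
random.seed(7)
random.shuffle(participants)

half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print("treatment group:", treatment[:3], "...")
print("control group:  ", control[:3], "...")
```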
5b. I chose the pre-experimental design. It is the simplest form, and it lacks one or more requirements of the true experimental design. A researcher can use either a single group or multiple groups. There is no randomization, and sometimes no control group, comparison group, or pretest. Here the independent variable is the actual treatment X, and the effect of the treatment is observed on the dependent or measured variable Y. In this case we study the effect of the treatment (IV) on the group's DV without a comparison or control group and without randomization; this is represented in the one-shot case study (X O). The pre-experimental design is good at providing exploratory information about a case, but it has many validity threats, it becomes difficult to rule out rival explanations, and generalizing the results becomes nearly impossible.
5c. The three major factors a researcher should consider when assessing the validity of an experimental study are measurement validity, internal validity, and external validity. Measurement validity means that measures are valid when they correspond well to the theoretical concepts under study. It shows the accuracy of the measure: how well it assesses what it is intended to measure in the experiment, to what extent our results are accurate and correct, and whether we can apply those results to the real world (generalizability). In terms of manipulation, the IV must be validly manipulated and the DV must validly measure what we intend to measure; we can run a pretest or pilot test to evaluate the IV, the manipulated variable. Next, internal validity helps us rule out rival explanations of the experiment; it focuses on the procedure and what happens during the experiment. Internal validity provides sound, strong evidence of a causal relationship, meaning that the change in the independent variable caused the change in the dependent variable. When this happens, we say the study is high in internal validity: we manipulated what we intended to measure. In other words, the more control we have over the experiment, the more internal validity we have. Any experiment must meet some requirements for internal validity: first, random assignment, which establishes equivalency before we start; second, manipulation of the independent variable and measurement of the dependent variable; third, at least one comparison or control group for assessing the treatment effects; and finally, I believe it is important to have constancy of conditions across groups (excluding the experimental manipulation). Finally, external validity: here the focus is on, for example, how a tested message performs beyond the lab. Its main concern is whether we can apply the results to the rest of the world; here questions of generalization or generalizability arise: do the experimental results hold outside the experimental context? We often sacrifice external validity to have control over the experimental environment (internal validity). External validity is essentially established over time through repeated studies (replication). In short, experimentation is really about testing theories and causal relationships, not necessarily external validity; but the more we can replicate the experimental results, the higher the external validity we gain over time.
6a. Pre-experimental design versus true experimental design:
Purpose: both designs test causal relationships, and in both a manipulated IV is followed by a measured DV.
Groups: a pre-experimental design may use a single group or multiple groups; a true experimental design requires at least two groups, a treatment group and a control group, and sometimes a comparison group.
Randomization: a pre-experimental design has no randomization and no appropriate selection method for the sample, which hinders the generalization of its results; a true experimental design requires random assignment, meaning equal chances of being assigned to each group, which strengthens internal and external validity and our ability to generalize the results to the rest of the world.
Validity: a pre-experimental design faces threats to internal validity, such as selection (because there is no random assignment, the experiment starts with existing differences between the groups), sometimes testing threats (if there is a pretest), and maturation and history effects (if there are a pretest and posttest); there is also a threat to external validity, since without random assignment we cannot generalize the results, and it is very hard to rule out rival explanations. A true experimental design adequately controls threats to internal validity, although some threats such as attrition, maturation, interaction, and history effects can remain, and a pretest, in some cases, allows us to check whether random assignment worked.
Information: the pre-experimental design is good at providing exploratory information about a case, whereas the true experimental design is good at providing explanatory information about a case.
6b. Static group comparison:
X  O1
   O2
Here there is little control. The design provides a set of data with which to compare the post-treatment scores of the two groups: a group that has experienced some treatment is compared with one that has not. There are no testing effects between observations because there is no pretest. Threats to internal validity include maturation, defined as any psychological or physical changes within the subjects that occur with the passing of time, regardless of the experimental manipulation. A good example: during an experiment, subjects may become hungry or tired over a short period; over a long period, as in longitudinal studies that follow people over time, subjects may become more rigid or more tolerant. Attrition, which I can define as the subjects' dropout rate, or the loss of subjects from an experiment, is another threat; differential attrition means the groups are off balance, with different dropout rates, so we no longer have equal numbers of participants. Selection is also a threat, and it follows from the absence of random assignment: naturally existing, pre-formed groups can influence the dependent variable, creating a systematic difference in the composition of the control and experimental groups. For example, if we divide subjects into supervisors and subordinates rather than assigning them randomly, we start the experiment with built-in group differences. In short, the disadvantage of the pre-experimental design is its threats to internal validity, which make it difficult or impossible to rule out alternative or rival explanations; its problems extend to generalizability and external validity as well.
6c. Pretest-posttest control group design:
R  O1  X  O2
R  O3     O4
As I mentioned above, true experimental designs require at least two groups, and subjects are randomly assigned to ensure approximate equivalence. The pretest-posttest control group design involves measuring the experimental group before and after the experimental treatment; the control group is measured at the same times but never receives the treatment. Random assignment rules out the selection threat. Time moves from left to right in the notation. This design adequately controls most threats to internal validity, but some internal validity threats can remain, such as history effects, maturation, attrition, and testing interacting with the independent variable (for example, the pretest may amplify the effect of the treatment on the DV); there are external validity threats as well.
Continuing 6c. Posttest-only control group design:
R  X  O1
R     O2
Since we do not have a pretest, we cannot see whether randomization really worked; we just assume it did. It is a simple design that has the basic elements of an experimental design: random assignment of subjects to treatment and control groups, and a post-treatment measure of the dependent variable for both groups, one of which got the treatment while the other serves as the control and got no treatment. It adequately controls the common threats to internal validity. There are no test-sensitivity or testing effects because there is no pretest. It is also more economical, and it eliminates the possibility of an interaction between the experimental manipulation and a pretest.
To conclude 6c: I think I would choose the pretest-posttest control group design because, as I said, it measures the experimental groups both before and after the experimental treatment. That is what is really useful about having a pretest in your experimental design: it lets the researcher learn more about the treatment's impact and gives the chance to compare pretest and posttest scores to evaluate the effects of the treatment.
6d. Factorial designs (2x2 as an example): a factorial design is used when two or more independent variables are studied in a single experiment. We refer to the IVs as factors, and within each factor there are different levels or values. In the "2x2" notation, each slot represents an independent variable (factor), and each number tells us how many levels that factor has. A factorial design allows us to assess the effect of each individual factor on the dependent variable, and each factor can have a unique impact on the DV. In a factorial design we can assess two kinds of effects: main effects and interaction effects.
Main effects: the individual effect of each IV or factor on the dependent variable; I can also interpret it as the overall effect of that IV on the DV. It allows us to assess the overall effect of an individual factor by comparing mean scores across all the levels of the other independent variables. We benefit from the main effect when we interpret significant differences between these means, for example how a treatment mean differs significantly from the control group's mean; if there are no significant differences, the IV has no impact on the DV. It is always a good idea for researchers to assess the interaction between the IVs on the DV as well. Ultimately, main effects help us determine whether a particular factor, overall, is making a difference.
Now let's explain how the interaction effect differs from the main effect and why it is more complicated to assess.
Interaction effects: the joint effects of two or more factors on the dependent variable, such that the effect of one factor on the dependent variable varies according to the value or level of the other. So it is always best to check first for interaction between factors. A good example involves health status, age, and exercise as factors that could interact: the effect of exercise on health may depend on age, so that, for instance, older people who exercise are healthier than younger people who do not. Also think about whether a pretest has an impact on the IV. Always assess the interaction first, and think of it this way: the effect of X on Y may depend on Z, rather than X consistently affecting Y. Is there an interaction between the independent variables that produced the effect on the DV, rather than the manipulated IV alone? In the book's example, there may be an interaction between the pretest and the movie, thereby enhancing the movie's impact. A sketch of how main effects and an interaction are computed in a 2x2 design follows below.
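Here is a minimal sketch in Python of how main effects and an interaction contrast are read off a 2x2 table of cell means; the means are made up for illustration:

```python
# Main effects and interaction in a 2x2 factorial design, from cell means.
import numpy as np

# Rows = factor A (e.g., frame: gain vs. loss); columns = factor B
# (e.g., point of view: personal vs. impersonal). Values = mean DV per cell.
cells = np.array([[5.0, 4.0],    # gain/personal,  gain/impersonal
                  [3.0, 3.8]])   # loss/personal,  loss/impersonal

main_A = cells.mean(axis=1)      # marginal means across B: effect of frame
main_B = cells.mean(axis=0)      # marginal means across A: effect of view
# Interaction: does the effect of A differ across the levels of B?
simple_effects = cells[0] - cells[1]         # effect of frame at each view level
interaction = simple_effects[0] - simple_effects[1]

print("main effect of A (frame):", main_A[0] - main_A[1])
print("main effect of B (view): ", main_B[0] - main_B[1])
print("interaction contrast:    ", interaction)  # nonzero -> interaction present
```

A nonzero interaction contrast signals that the effect of one factor depends on the level of the other, which is exactly why we check the interaction before interpreting the main effects.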
7. RQ: What effect do framing and point of view have on physicians' intentions and behavior to test their patients' level of kidney functioning? (A question of difference.)
H: It is hypothesized that physicians in the experimental groups will demonstrate greater intentions and behavior to test their patients' kidney functioning than physicians in the control group. (Directional; group difference.)
8. In the experimental part of the study, a 2x2 design was applied. The independent (manipulated) variables were the frame of the message (positive/gain vs. negative/loss) and the point of view (personal vs. impersonal).
Conceptual definitions:
Loss-frame messages "focus on the negative consequences of continuing or adopting an unhealthy behavior," whereas gain-frame messages "focus on the positive aspects of continuing or adopting a healthy behavior." Point of view is another way to vary messages, by changing how personalized or immediate the message is through the use of second- or third-person pronouns. A message high in personalization directly addresses the recipient as "you," whereas messages low in personalization use words such as "one" or "people."
In the study, they manipulated the perceived threat of CKD to physicians' patients. It is also important to mention that there were four different cover letters. All cover letters were high in both threat and efficacy but differed in the manner in which the threat was addressed, based on the manipulation of the two dimensions mentioned above: the frame of the message (positive/gain vs. negative/loss) and the point of view (personal vs. impersonal). There were four experimental groups, each receiving one of the manipulated cover letters, plus one control group that received a generic letter. The results were based on two surveys: (1) an initial survey, in which each contact received one of the five randomly assigned cover letters, and (2) a follow-up survey sent four months later to all physicians who completed the initial survey. The level of measurement in both the initial and follow-up surveys was interval: a 7-point Likert scale (strongly disagree to strongly agree).
9. The measured (dependent) variables in the study were physicians' intentions and behavior to test their patients' level of kidney functioning; by this I mean examining whether there was any effect on the DV (physicians' intentions and behavior) from the manipulation of the IVs (the frame of the message and the point of view). In the initial survey, the researchers also measured perceived threat and efficacy using the RBD scale; the measured variables in this survey were susceptibility, severity, response efficacy, and self-efficacy, along with behavioral intention and behavior, using procedures outlined by Ajzen and Fishbein (1980) and Witte et al. (1996). The level of measurement was interval: a 7-point Likert scale (strongly disagree to strongly agree).
10. Yes, it was a 2x2 factorial design, but it is called an incomplete factorial design. The tricky part was identifying the position of the control group, which got a generic letter: it was an independent, separate cell from the four experimental groups. This was new to me, something I had not encountered before, so I read about it, gathered some internet knowledge, and finally decided that it is an incomplete factorial design. I hope my answer is correct. Another thought: it might be a posttest-only control group design, but I am not sure we can have:
R  X1  O1
R  X2  O2
R  X3  O3
R  X4  O4
R      O5
The independent variables, or factors, were, first, the frame of the message, with two levels (positive/gain vs. negative/loss), and second, the point of view, with two levels (personal vs. impersonal).
11. Female patients will report greater behavioral intentions to seek out physicians' consultations about their illness uncertainty than male patients.
12. Male patients and female patients differ in their perception of physician empathy.
13. As the number of appointments with a primary care provider increases, patients' satisfaction increases too.
14. Patients' health anxiety and patients' age are related.