Saturday, January 31, 2009

Week-4 Readings

What distinguishes quantitative from qualitative designs? What is the difference between validity and reliability, and what is meant by the terms "probability" and "significance"?

Quantitative vs. Qualitative Research Designs

Both quantitative and qualitative research designs are empirical designs used in the social sciences and recently adopted in technical and business communication research. As empirical designs, they investigate particular inconsistencies or problems through inductive processes. However, these two methods, as the names suggest, differ significantly in the processes they use and the nature of the conclusions they draw. The labels "quantitative" and "qualitative," contrary to what we initially tend to assume, are quite misleading. The essential difference between these methods is not the use of quantity or numbers; both methods use numbers, though in varying degrees. What distinguishes them is that quantitative research is experimental and qualitative research is descriptive.
Because quantitative research design is experimental, it divides the subjects into two comparable groups, one a control group and the other an experimental group. (The treatment the researcher manipulates is the independent variable; the outcome observed is the dependent variable.) The researcher then applies a treatment to the experimental group and observes the changes in it, compared to the control group, that result from the treatment. So, the second important element of quantitative research is comparison. The next important aspect is its emphasis on cause-and-effect relationships: it tries to establish that the change occurred as a result of the treatment applied and, therefore, that the treatment is the cause of the effect the experiment displays. The other two features are random sampling (randomization) of the subjects and the generalizability of the research's conclusions, though generalizations must be made with some caution. One major drawback of this method is that it isolates the variables from their surrounding circumstances, which makes the method unnatural; its results may not be applicable to real situations.
Qualitative research design is basically about the "process and description" of a certain phenomenon or situation; it is thus descriptive research. The researcher observes a specific situation and "identifies key variables in a given situation" (Goubil-Gambrell, p. 584). Unlike quantitative research, it does not divide subjects into two groups; rather, it studies a situation and the environment around it, or the subjects themselves, to find a solution to a specific research problem. Unlike in the quantitative method, research subjects are not randomly chosen; the researcher tries to choose the most representative one(s), and even when sampling is done, it is purposeful sampling. In this research, no treatment is applied; the subjects are observed and studied in their naturalistic situation. Because there is no randomization of subjects and no treatment, the researcher is required to provide a complete description of the situation, the modality of data collection, the position and role of the researcher, the management and recording of data, and the strategies of data analysis. Equally important is the difference in the researcher's participation: an active observer in qualitative research versus a detached analyst in quantitative research. One important notion in qualitative research is triangulation (comparing information gathered at different times, having different researchers analyze the data, and using different perspectives to analyze the data), which is used to establish the validity of qualitative research. The benefit of this method is that it reflects real situations; its drawback is that its findings are less generalizable than those of quantitative research.
Now, for example, let us assume that most teachers, students, and employees do not make optimal use of Clemson University's Blackboard system. If quantitative research is to be done, the researcher should begin with a certain hypothesis, for instance, that the reason behind this underuse of the Blackboard system is lack of training. The researcher first draws a random sample (in this case randomly choosing faculty, students, and employees), divides them into a control group and an experimental group, gives training to the experimental group, and observes the result. If the participants who received training start making more use of the system than the other group, the hypothesis is supported (though probability measures and other factors have to be considered).
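To make this design concrete, here is a minimal sketch in Python of the comparison just described. The participant counts, the login figures, and the use of a t-test from scipy are illustrative assumptions of mine, not details from the readings.

```python
# A minimal sketch of the two-group comparison described above.
# All numbers are invented for illustration; a real study would use
# measured Blackboard usage (e.g., weekly logins) for each participant.
import random
from scipy import stats

random.seed(42)

# Randomly assign a pool of 40 hypothetical participants to two groups.
participants = list(range(40))
random.shuffle(participants)
control_ids = participants[:20]       # no training
experimental_ids = participants[20:]  # receives training

# Hypothetical outcome: weekly Blackboard logins after the training period.
control_usage = [random.gauss(5, 2) for _ in control_ids]
experimental_usage = [random.gauss(8, 2) for _ in experimental_ids]

# Compare the group means with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(experimental_usage, control_usage)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, the difference is unlikely under the null hypothesis
# that training has no effect, so the research hypothesis gains support.
if p_value < 0.05:
    print("Training group uses Blackboard significantly more.")
```

Because the experimental group's mean is set higher in these invented numbers, the test will typically report a small p-value, mirroring the supported-hypothesis outcome described above.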
If one is to conduct qualitative research on the same phenomenon, he or she needs to observe a few representative students and faculty members (or employees) in their workplaces, conduct interviews, and administer surveys to explore the nature of the problem. Here, unlike in the previous example, comparison, treatment, and random sampling are not used. Through observation, interviews, and surveys, the researcher tries to describe the phenomenon and identify the nature of the problem.

Validity and Reliability

These two concepts are central to empirical research design, and both are concerned with measurement. Though they are related, there are certain differences between them. Validity is concerned with accuracy: how accurately the research methods measure what they intend to measure. Its focus, then, is on the truthfulness of the result. Reliability, on the other hand, concerns the consistency of the result across different times. It refers to how much the result of a test would remain the same if the same "measures were applied and reapplied under precise replication of conditions" (Williams, p. 22). For instance, we can ask how well the "speak test," conducted to measure the oral language competency of foreign students, measures their real capacity, or, similarly, how well a TOEFL test measures language skills. If the test exactly reflects the language ability of the candidate, it has good validity; if there is very little correspondence, it does not. In the same example, if the same test administered again under the same conditions gives a similar result, it is reliable, though here the influence of practice or familiarity with the test has to be controlled.
Hence, validity concerns the truthfulness of the result of the measurement; reliability concerns its consistency. Williams distinguishes these concepts as follows: "While there are many and more complex approaches to assessing validity and reliability than this illustration provides, the ideas that evaluation of validity requires some type of outside standard, and that evaluation of reliability requires some way of comparing a measure with itself, remain basic considerations" (p. 23). In other words, to check validity, the result is measured against an outside standard, whereas to check reliability, the measure is compared with itself, as when one administration of a test is compared with another.
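These two checks can be illustrated numerically. The following is a minimal Python sketch, with invented scores, that treats reliability as the correlation of a test with itself across two administrations and validity as its correlation with an outside standard (here, hypothetical instructor ratings); both the data and the correlation-based approach are my assumptions for illustration.

```python
# A minimal sketch of the two checks described above, with invented scores.
from scipy import stats

# Hypothetical scores for eight test-takers on a language test,
# administered twice under the same conditions (test-retest).
first_administration = [78, 85, 62, 90, 71, 88, 65, 80]
second_administration = [80, 83, 65, 91, 70, 86, 63, 82]

# Reliability: compare the measure with itself across administrations.
reliability_r, _ = stats.pearsonr(first_administration, second_administration)
print(f"test-retest reliability r = {reliability_r:.2f}")

# Validity: compare the measure against an outside standard, e.g.,
# instructors' independent ratings of the same students' ability.
instructor_ratings = [75, 88, 60, 93, 68, 85, 70, 78]
validity_r, _ = stats.pearsonr(first_administration, instructor_ratings)
print(f"validity (vs. outside standard) r = {validity_r:.2f}")
```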

Probability and Significance

Probability and significance are related to statistical analysis. Probability refers to the likelihood that a certain event or phenomenon will occur. The most common example is tossing a coin and predicting the probability that it will land heads or tails. If the coin is tossed once, the probability of heads is 0.5, and the same is true of tails. Complete certainty (one hundred percent probability) is expressed as 1, and all other probability values fall between 0 and 1. The notion of probability is mostly used when conducting quantitative research and testing hypotheses. There are two kinds of hypotheses: the research hypothesis and the null hypothesis. The research hypothesis is regarded as "valid" or supported when the probability of obtaining the observed result, assuming the null hypothesis is true, falls below a certain preset level (generally 0.05). In other words, the null hypothesis is rejected, and the research hypothesis gains support, when that probability is below, for example, 0.05. This level can vary in different cases.
"Significance" is closely related to probability. To reject the null hypothesis, a certain level of probability has to be set as the criterion. For instance, as stated in the previous paragraph, if a probability level of 0.05 is taken as sufficient for rejecting the null hypothesis and supporting the research hypothesis, this level is called the "rejection region or the significance level" (Williams, p. 61). Williams defines it as "a level of probability set by the researcher as grounds for the rejection of the null hypothesis" (Williams, p. 61).
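As a concrete illustration of these ideas, here is a minimal Python sketch using the coin example. The scenario of observing 9 heads in 10 tosses, and the one-sided calculation, are assumptions of mine rather than anything from Williams.

```python
# A minimal sketch of probability and significance using the coin example.
# The "9 heads in 10 tosses" scenario is an illustrative assumption.
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of at least k successes in n fair-coin tosses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Single toss: heads has probability 0.5, and all probabilities lie in [0, 1].
print(prob_at_least(1, 1))  # 0.5

# Null hypothesis: the coin is fair. Suppose we observe 9 heads in 10 tosses.
alpha = 0.05                    # the preset significance level
p_value = prob_at_least(9, 10)  # about 0.0107
print(f"p = {p_value:.4f}")

# The probability falls below the significance level, i.e., inside the
# rejection region, so we reject the null hypothesis.
if p_value < alpha:
    print("Reject the null hypothesis: the coin is unlikely to be fair.")
```

Here the chance of seeing nine or more heads from a fair coin is roughly 0.011, which is below the preset 0.05 level, so the result lands in the rejection region described above.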

1 comment:

  1. Wonderful post, Hem. The way you approach this material is very helpful. Examples, definitions...you clearly have a strong background as a teacher. Thank you for providing a very insightful post.

    One thing we may want to consider is the bi-directional nature of research methodology. Consider this metaphor. In the dice game "craps," bettors are occasionally called "right" or "wrong" way bettors depending on whether they bet with or against the roller. I'll skip the primer on game strategy and simply say the action reflects the strategy (and vice versa). Based on the readings, I believe this metaphor is applicable to the study of research methodology.

    Why do I bring this up? Empirical researchers seem to fall into one style of research. Applying the frame where strategy and action are closely interconnected, Kirsch's commentary regarding Methodological Pluralism is particularly helpful. Does it make sense to design a novel methodology from both approaches based on the specifics of the question and its contexts? Should we view this as a third type of practitioner, a happy marriage, or a threat to research validity?