Saturday, April 4, 2009


Vitanza on Historiographies:
Vitanza divides historiographies into three categories: traditional, revisionary, and subversive. He notes that these categories are fluid and can overlap; even so, they remain largely distinct.
Traditional historiography:
Histories that take time (narrative) as a major category
Histories that do not emphasize time or man as a major category
In short, traditional historiography attempts to discover and present historical “facts” and events in an objective manner (though the other types of historians question this objectivity). Some of the models under this category are the “documentary model,” the “archival model,” and the “objectivist model.” For these historians, data are of primary importance because they are taken to speak objectively about the events of the past. This kind of history is positivistic.
Revisionary Historiography:
Revisionary history as full disclosure: trying to recover the things excluded in mainstream histories.
Revisionary history as self-conscious critical practices: “… all writing (thinking) self-deception, . . . all facts are always already interpretations.”
In short, revisionary historiography revisits history to recover the details and facts left out in the traditional histories or provides history with an awareness that it is always already ideological.
Sub/versive Historiography:
This kind of historiography is anti-fascist (in any form) and expresses distrust of “mastery and authority.” It is completely opposed to traditional historiography’s treatment of history as a repository of truth and knowledge. This sort of historiography is more Derridean, Deleuzian, and Guattarian, with a focus on uncertainty and infinite possibilities. I think it can be seen as an extension, or a radical form, of the second type of revisionary historiography. Vitanza’s taxonomy of historiographies is compelling but at times confusing. This may be because he writes in a fashion that follows, to a large extent, the poststructuralist tendency he advocates.
Corbett: Traditional Historiography
The main argument of this essay is that business and professional communication has ignored many aspects of classical rhetoric that can be used to make communication effective. Corbett attempts to show the importance of classical rhetorical strategies and notions like ethos and pathos, as well as other aspects of communication like tone, persona, and image. Classical rhetoric, he argues, is a rich source for learning diverse aspects of communication.
Corbett’s is a traditional historiography: he simply lists some aspects of classical rhetoric to justify his point that classical rhetoric is a good source for enhancing communication practices in the professional and business communication field. I don’t find any revisionary tendency in his text. He is neither trying to disclose previously hidden facets of classical rhetoric, nor is he self-consciously critical of his own practice. The only thing he says is that the importance of classical rhetoric for business and professional communication has been largely ignored.
Howard: Traditional/Revisionary
A problem with classifying Howard’s piece is that it does not identify a specific mainstream history of the copyright system that overlooks certain aspects of copyright’s history in the West. However, Howard clearly states that he wants to present a notion of copyright, based on its history, different from the one generally understood. He demonstrates how the copyright principle emerged not as a natural right of authorship but from an “ignoble desire for censorship.” He shows that copyright is a privilege, like getting a license to drive a car. His major purpose is to show the complexities that new technologies have introduced into understanding and interpreting copyright laws.
His history of copyright law is traditional in the sense that his purpose is to present how copyright law first emerged and how it evolved to its present state. However, he is fully aware that copyright laws arose in their specific forms as the product of politico-historical factors, and that they are subject to varying interpretations. In this sense his historiography can be called revisionary.
Zappen: Revisionary (self-conscious critical practices)
He says that Bacon’s texts have been taken variously to advocate different, sometimes even incompatible, views of science and scientific writing style. His point is that the varying interpretations of Bacon stem from the richness of Bacon’s writings and from the ideological positions of individual historians. I believe Zappen falls under the second type of revisionary historiography. He clearly says that “each of these interpretations, including my own, reflects a different ideology, a different perception of the good of the scientific community, and an alternative vision of what science and scientific rhetoric might and ought to be.”

Saturday, March 28, 2009

Quantitative Descriptive Research, True/Quasi Experiments

Very rough notes.
Lauer and Asher, “Quantitative Descriptive Research.”
Goes one step beyond descriptive research designs like ethnography and case studies. It not only identifies variables but also isolates the most important ones, quantifies them to some extent, and interrelates them. However, no control groups are created and no treatment is applied.
Class comment:
Correlational research (Morgan).
Actually, it is still qualitative research, as it does not establish causal relationships.
And a relationship can be either negative or positive.
Subject Selection:
More subjects than in both case studies and ethnography, and more subjects than variables: at least 10:1, for representativeness.
Variable Selection: independent and dependent variables. Independent variables do not depend on treatment; dependent variables change due to treatment.
Hypothesis: alternative hypotheses help understanding but are also important statistically. The null hypothesis is key in chi-square analysis; you have to assume the null hypothesis.
Statistical analysis of two variables:
class comment:
N:k = 10:1 (at least)
(Significance: the likelihood of occurrence by chance; the typical threshold is 0.05. Significant does not mean important.)
What sense does it make in such correlational research to differentiate dependent and independent variables?
Try to stay away from differentiating.
Variance is important: the spread of the data from the mean. Which of the other variables might be accounting for that variance?
Moderator variables
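The correlational analysis sketched in these notes can be made concrete in a few lines of Python. The study-hours and score numbers below are invented purely for illustration; the sketch also shows why the class comment about not differentiating dependent from independent variables makes sense here: the Pearson correlation is symmetric in its two arguments.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient: symmetric in xs and ys,
    so labeling one variable 'independent' changes nothing."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: hours spent revising vs. essay score.
hours = [1, 2, 3, 4, 5]
score = [52, 58, 61, 65, 70]

r = pearson_r(hours, score)          # strong positive relationship
r_swapped = pearson_r(score, hours)  # identical value: correlation is symmetric
```

A negative r would indicate a negative relationship; whether r is "significant" at the 0.05 level is a separate question from whether it is important.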

Faber’s “Popularizing Nanoscience: The Public Rhetoric of Nanotechnology, 1986-1999”
Purpose: how new subjects in science and technology are represented in the popular media; how discourse works to interpret and translate technical material and to build public recognition and awareness of science, technology, and other specialized academic fields.
Subject Selection: Faber studies how new subjects in science and technology are represented in public media, so subject selection here becomes his selection of articles published in popular media. His first search for "nanotechnology" and "nanoscience" in his university database returned 885 articles; from these he later separated out the 203 articles published in popular media. He collected articles published between 1986 and 1999.
Data Collection:
ABI/ProQuest database
From 1986 to December 1999
Use of keywords nanotechnology and nanoscience
Article texts as the search criteria
‘All’ for publication type
Result: 885 articles
Popular media: 203
Data Analysis:
· Categorized articles by month and year.
· Analysis of both propositional and grammatical structure (combined give meaning)
· Use of the descriptive categories theme and rheme (before the verb and after the verb) and topic (a content-based interpretative summary of the propositional content)
· Correlated these categories with the propositional content to create representation (a fourth category)
· 39 topics (termed these topics “representations,” as that is how nanoscience was represented)
· Categorized the representations temporally
· Created a three-part hierarchy by frequency of mention in the data set, consisting of “high-occurring,” “average-occurring,” and “low-occurring”
· Temporal findings: the representations manufacturing, medical applications, and science fiction endured across the entire data-collection time.
· Limits on generalization: “this study is limited by the temporal choices of articles and my own decision to restrict my analysis to the general representations of nanoscale science technology presented by each article.” Lack of third-party testing.
Conclusion: the process of presenting technical information for general audiences can be enabled by combining social and technical approaches.

Golen, “A Factor Analysis of Barriers to Effective Listening”
Purpose/Research Question:
· To determine which barriers are perceived as the most frequently encountered ones affecting listening effectiveness among business college students, and to expand the Watson and Smeltzer study
· How frequently do barriers to effective listening affect the listening process as perceived by business college students?
o What specific listening barriers do students perceive as the most frequent?
o How do the listening barrier factors differ based on selected student demographic variables?
Subject Selection:
· Three large business communication lecture sections of approximately 400 students each; there were 33 breakout sections with 35 students each.
· Random sample of 10 sections was selected from 33 sections
Data Collection:
· Each student completed a questionnaire containing 25 barriers to effective listening
· A total of 279 questionnaires were collected
Data Analysis:
· Variable identification: 25 barriers
· Then obtained the most common barriers through literature review, feedback from advanced students, and interviews with professors (for reliability)
· Independent variables: major, age, sex
· Dependent variables: listening for details; distraction by noise; daydreaming; detour due to the speaker’s ideas; lack of interest in the subject …. (mean).
· Relationship between independent and dependent variables: significant only for gender.
Lauer and Asher, “True/Quasi Experiments.”
· True Experiments:
o Treatment
o Cause and effect relationship between treatments and later behaviours
o Randomization to avoid threats to internal and external validity
o Hypothesis
o Measurements and statistical analyses
· Quasi-experiments
o Useful when researchers cannot randomize groups
o Enables researchers to make cause-effect inferences
o Three distinguishing features:
§ No randomization
§ Should have at least one pretest or prior set of observations to examine whether the groups are initially equal or unequal
§ There must be research design hypotheses to account for ineffective treatments and threats to internal validity
o Strong and weak quasi-experiments
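The randomization step that distinguishes a true experiment from a quasi-experiment can be sketched as follows (a minimal Python sketch; the subject IDs are arbitrary):

```python
import random

def randomize(subjects, seed=0):
    """Shuffle subjects and split them into two equal-sized groups.
    Random assignment is what lets later differences be attributed
    to the treatment rather than to pre-existing group differences."""
    rng = random.Random(seed)  # fixed seed only to make the sketch repeatable
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

control, treatment = randomize(range(20))
```

A quasi-experiment skips this step, which is why it needs at least one pretest to check whether the intact groups were initially equal.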
Carroll et al, “The Minimal Manual.”
Purpose: to examine whether the Minimal Manual “affords more efficient learning progress” than a standard self-instruction manual.
Subject Selection:
· Two experiments:
o Experiment 1: performance on 3 days of simulated office work experience
o 19 subjects, 10 in MM (Minimal Manual) and 9 in SS (standard self-instruction manual); screened on the basis of their prior experience
o Experiment 2: 32 subjects (same selection criteria)
Data Collection:
· Experiment 1
o Periodic performance tests
o Subjects in a simulated atmosphere
o At the end of every day, an interview and administrative sessions were held; 8 performance tasks
· Experiment 2
o An observer made detailed notes about the activities and outcomes during the hands-on portion
o Six performance tests
Data Analysis:
· Experiment 1
o Two dependent measures were collected and analyzed in this experiment: a) time to complete training and performance tasks; b) performance on eight word-processing tasks (scored by summing the number of correct activities)
· Experiment 2
o Same process: summing the number of correct activities
Their analysis supported their hypothesis that the MM affords a better learning experience than the SS.
This research seems to have problems, like the lack of interrater reliability and the lack of randomization. It is quasi-experimental research; however, the groups are not kept intact.
Notarantonio, “The Effects of Open and Dominant Communication Styles on Perceptions of the Sales Interaction.”
· Whether or not communicator styles affect perceptions of the sales interaction
· The study hypothesizes that a) openness of the salesperson adds effectiveness to selling; b) the more dominant the salesperson, the more effective she or he would be
Subject Selection:
· 80 subjects (undergraduate business administration students), 41.3% male and 58.7% female; freshmen 92.1%
Data Collection:
· A complete self-report
· After viewing the tape, subjects were asked to complete a questionnaire consisting of 42 items (Likert scale; number of points not specified)
Data Analysis:
· No analysis on self-reports
· For data received from the questionnaire, separate two-way ANOVAs were run with Openness and Dominance as independent variables for the measures of openness, dominance, and the six composite measures. All measures ranged from 1 to 7.
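A rough sketch of the cell structure behind such a two-way design (the 1-7 ratings below are invented, not Notarantonio's data): a two-way ANOVA tests whether main-effect means like these differ by more than chance, and whether the two factors interact.

```python
import statistics

# Hypothetical 1-7 ratings for a 2x2 Openness x Dominance design.
ratings = {
    ("open", "dominant"):      [6, 5, 6, 7],
    ("open", "nondominant"):   [5, 5, 6, 5],
    ("closed", "dominant"):    [4, 3, 4, 4],
    ("closed", "nondominant"): [3, 4, 3, 3],
}

def main_effect_means(data, factor_index):
    """Average each level of one factor across the levels of the other --
    the comparison a two-way ANOVA tests for statistical significance."""
    levels = {}
    for cell, scores in data.items():
        levels.setdefault(cell[factor_index], []).extend(scores)
    return {level: statistics.mean(vals) for level, vals in levels.items()}

openness_means = main_effect_means(ratings, 0)   # open vs. closed salesperson
dominance_means = main_effect_means(ratings, 1)  # dominant vs. nondominant
```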

Kroll’s “Explaining How to Play a Game.”
· One goal was to examine changes in the informational adequacy of the explanations as a function of grade level

Subject Selection:
· 24 students in grade 5, 26 in grade 7, 19 in grade 9, 27 in grade 11, and 27 college freshmen
Data Collection:
· Students were asked to view a film about playing a game and then write explanations
· Then a ten-item multiple-choice quiz was given to test their knowledge of the game
· The game quiz was validated by testing it with advanced students who did not know the game
· Two raters independently scored all explanations from students
· One rater was vaguely aware of the purposes of the study whereas the other was not
· However, the correlation between their scores was strong
Data analysis:
· The effect of grade level on game information scores was analyzed with a one-way ANOVA
· In addition to total scores for overall informativeness, scores for each of the ten individual game elements were examined
· Z statistics were used to order the ten game elements from those exhibiting the strongest developmental trends to those with the weakest trends
· Finally, chi-square tests were performed to assess the statistical significance of the associations between grade level and students’ listing of the game pieces, between grade level and students’ mentioning of the object of the game, and between grade level and students’ use of the three explanatory approaches
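The chi-square test mentioned above can be sketched in plain Python; the contingency table here is hypothetical, not Kroll's data. The statistic measures how far observed counts depart from the counts expected under the null hypothesis that the row and column variables are independent.

```python
def chi_square(table):
    """Chi-square statistic for a contingency table (a list of rows):
    the sum over cells of (observed - expected)^2 / expected, where
    the expected count assumes rows and columns are independent."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: grade level (rows) by whether students
# mentioned the object of the game (columns: yes, no).
stat = chi_square([[10, 14], [20, 7]])
```

For a 2x2 table (one degree of freedom) the 0.05 critical value is about 3.84, so a statistic above that would lead to rejecting the null hypothesis of independence.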

Saturday, February 28, 2009

Week 8, Ethnographies

What distinguishes ethnographies from case studies, how does triangulation impact data collection and analysis, and what must ethnographers do to ensure their work is both reliable and valid?
Ethnography, unlike case studies, “examines entire environment, looking at subjects in context,” instead of concentrating only on the subjects. So, “context” is the single most important factor in ethnographic research.
Triangulation gives validity and reliability to ethnographers’ work. Data collection has to be done from multiple perspectives (using different methods) over a long period of time (repeatedly) to accurately identify and correlate the variables in the rich context and to offer a “thick description.” The collection of data through multiple methods gives validity to the work, and the collection and analysis of those data by more than one observer-researcher makes it more reliable. The choice of site and the sample selection also affect validity (they have to be representative to be generalizable).
Analysis of sample research studies
Purpose/Research Questions:
1. In a given non-academic setting, how are writers’ conceptions of rhetorical situation formulated over time and how are they affected by their perceptions of their social and organizational setting?
2. What are the social/organizational elements of writers’ composing processes and how do these elements influence those processes?
3. How do writing processes shape the organizational structure of an emerging organization?
So, the writer’s overall purpose of this research is to examine how contexts and writing processes in a nonacademic setting influence each other.
Subject/setting selection
Microware, Inc., an emerging organization. The writer makes a detailed study of the writing processes of Microware’s 1983 business plan.
Data Collection
Regular collection (three to five days a week) of data for eight months. He collected data from both formal and informal discussions. The data collection methods were 1) field notes, 2) tape-recorded meetings, 3) open-ended interviews, 4) discourse-based interviews
Data analysis
He established analytical categories and the properties of those categories and linked them to form the major theme and subtheme. (where is interrater reliability?)
The writer is careful enough not to make a sweeping generalization. His findings are:
a. Social and organizational contexts influence rhetorical contexts.
b. Rhetorical activity influences the company’s organizational structure.
However, the writer clearly says that this finding is provisional. He also suggests the necessity of further research.
Farina’s study is a good example of ethnographic research. His selection of setting helps him clearly demonstrate how writing also affects organizational structure. However, he is quite aware that his model may not fully apply to more established organizations. His use of multiple methods of data collection over a long period of time gives validity and reliability to his work. One weakness, if it is one, is that he does not seem to involve other observers and analysts in creating the analytical categories, which would strengthen reliability.
Research Questions:
How do the social roles of (novice) writers affect their socialization process and their learning? How do writers handle transitions from one writing context to another? The particular set of questions the writer raises is:
a. What differentiated simpler from more complex (and higher status) writing tasks?
b. What determined writers’ social roles in this particular community of practice?
c. What methods of socialization were used for writers new to this organization, and to what effect?
Subject/site selection
The site of study is the Job Resources Center (JRC). The writer is studying writing in this non-profit organization located in the heart of an urban area; it offers training and English-language classes to nonnative speakers. The writer concentrates on the learning processes of Pam and Ursula, newcomers to the organization from the academy (from universities ranked in the top 10).
Data Collection:
1. Weekly interviews with Pam and Ursula, audio-taping the conversation and photocopying all of the writing each did week to week, including drafts of texts and the subsequent revisions.
2. Observing client programs, talking to other staff members, and watching the activities going on like collaborative writing sessions, …
3. Interviewed the executive directors and more experienced writers both inside and outside JRC to get their perspectives on similar writing tasks.
Data Analysis
The writer studied her data iteratively to find out patterns and themes in relation to social roles of the writers within the discourse community of JRC. She operationalized the concept of discourse community in terms of its essential elements and used triangulation to assure validity.
The writer is aware of the difficulty of testing reliability in her ethnographic study and of the limited generalizability of her findings. However, towards the end of her essay, while talking about writing instruction, she seems to be making a more generalized claim about the importance of collaboration and the social context of writing.
Subject/setting selection
Seventh graders’ science and social studies classroom at SMS, the only middle school in a big city. Most of the students were from economically disadvantaged families. The subjects were 30 students, the teachers Jade, Audrie, and Sarah, and an adult community organizer. The writer also provides the racial and gender composition of the subject population. She defines her role as participant observer, though sometimes she would change her role.
Data collection
An eight-month period. The writer collected data from the whole class and from focus groups (she chose two focus groups based on her rapport with them). Data collection methods were: a. field notes, b. audio- and video-tapings, c. transcriptions of them, d. participant interviews, e. community surveys, f. speech drafts, g. other texts used for projects, h. students’ notes and journaling. The writer has triangulated her data very well, though her selection of focus groups may raise some problems.
Data Analysis
She develops two levels of analysis. The first level involves charting production, consumption, and distribution as articulation of the building project. The second level of analysis involves centripetal and centrifugal forces in writing.
The writer does not make a broader generalization and is aware of the limitations of her findings.
Problems: selecting those with whom she has good rapport.
Also, she fails to code the data well. Her categories, like production, consumption, and distribution, are not clear. So, the study is methodologically flawed.
Carolyn Ellis
Her writing gave me relief from the numbers and charts of the other readings of the week. We should not judge her writing by the framework used to analyze the other research studies. Her autoethnography offers a different perspective on events like September 11 than data-oriented studies, which hide, or fail to account for, the intensity of the suffering and pain undergone by the victims. The subject of her study fits the methodology of autoethnography well.
Question: What about scholarship?
Replication: methodologically, yes.

Saturday, February 21, 2009

Sampling and Surveys

What are appropriate purposes for survey, how are subjects selected, how is data collected and analyzed, and what kinds of generalizations are possible?
Unlike case studies and ethnography, the survey is intended to “obtain descriptive information about readily observed or recalled behavior of very large populations” instead of making an in-depth study of a phenomenon to identify certain variables. A survey done with random sampling can be used to “achieve representativeness of large population.” Its purpose is to reduce the cost and effort of conducting research on a large population while still obtaining representativeness. The researcher has to define the large population and the appropriate size of the sample (the subjects). The sample size has to be balanced (neither too big nor too small) to achieve representativeness without losing data quality. The best way to achieve representativeness is random sampling, so that every member of the large population has an equal chance of being selected into the sample.
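Simple random sampling as described above amounts to the following (a minimal Python sketch; the population of student IDs is hypothetical):

```python
import random

def simple_random_sample(population, n, seed=42):
    """Draw n members without replacement, each member having an
    equal chance of selection -- the property that makes the sample
    representative of the large population (N)."""
    return random.Random(seed).sample(list(population), n)

population = list(range(1000))  # hypothetical IDs for the large population
sample = simple_random_sample(population, 100)
```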
In surveys, different data collection methods can be used, from questionnaires to interviews, but careful attention is required in their construction: they have to be clear and unambiguous. As far as possible, methods already tested in the field of research should be used, and if any new method or instrument is used, it has to be pretested, edited, and reviewed to establish its validity and reliability. The main consideration is the method’s capability of eliciting a high response rate. After data are collected, the major variables are determined and tabulated as nominal, interval, or rank-order data. The researchers may also use measures of central tendency or dispersion to analyze the data. The results of a sampling survey are largely generalizable to the population (N). However, it is difficult to claim a cause-effect relationship, as this is descriptive research. Overall, sampling and surveys are important in making the study of large populations both manageable and representative. But the researcher needs to be careful in subject selection, in the choice of methods/instruments of data collection, and in data analysis.
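The measures of central tendency and dispersion mentioned above can be computed directly with Python's statistics module (the survey responses below are invented):

```python
import statistics

# Hypothetical responses on a 5-point scale.
responses = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]

center = statistics.mean(responses)    # central tendency: mean
middle = statistics.median(responses)  # central tendency: median
common = statistics.mode(responses)    # central tendency: mode
spread = statistics.stdev(responses)   # dispersion: sample standard deviation
```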
Let’s examine Blokzijl and Naeff’s research. They conclude that the results of their survey suggest keeping PowerPoint presentations sober. They are careful enough not to make a sweeping generalization about students’ preferences in PowerPoint presentations. However, the researchers could have taken a much larger sample (N), incorporating students with varying levels of familiarity with technology. Their results are not generalizable to, or representative of, any students other than the ones they studied. They do not define the large population or the proportion of the sample size to that population. The five-point scale may also have affected the results, as it has a tendency to draw responses towards the center.

Saturday, February 14, 2009

Case Studies

What are appropriate purposes of case studies, how are subjects selected, how is data collected and analysed, and what kinds of generalizations are possible?
Research designs have to be chosen in terms of the purpose of the research. Sometimes quantitative research will be more appropriate than qualitative research, if the purpose is to identify the reasons behind a certain problem and to make a generalization about that phenomenon applicable to a larger population. However, in certain cases qualitative research designs can be more appropriate, if the purpose is to make an in-depth study of a specific phenomenon and to investigate the variables affecting it. In the field of writing and communication, where complex issues like the writing process have to be studied, qualitative research designs become mostly unavoidable. Quantitative research can be a next step after qualitative research identifies the major variables.
Case study is a major kind of qualitative descriptive research. Its major purpose is to identify the variables affecting a certain phenomenon and to “investigate a few cases in great depth” (Farina). It tries to determine the key variables that are important to study a certain issue. In addition, it also studies the relationship between different variables, but the relationship is not that of cause and effect.
Subject selection in case studies is quite different from quantitative research, where subjects are selected through random sampling. In case studies, a few subjects are selected to represent the major sections of the population to be studied. For instance, to study the composing process of students (the phenomenon), Emig selected eight “twelfth graders” from “a variety of types of schools: an all-white upper-middle-class suburban school, an all-black ghetto school, a racially mixed lower-middle-class school, an economically and racially mixed good school.” Graves chose eight students who were regarded as “normal” by their instructors. What this shows is that, because only a few subjects are selected in case studies, they have to be chosen carefully to best represent the phenomenon at hand.
Data are collected and analyzed differently in case studies according to the purpose of the research. Data collection methods can range from direct observation to protocol analysis, from interviews to institutional records. After data are collected from different sources, the major task of analysis is to identify the important variables. For this purpose, the researchers have to label and divide the data into different categories, which become the variables of the study. In addition, to maintain reliability, coding has to be done by more than one observer.
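The reliability of coding by multiple observers is often checked with an agreement statistic; here is a minimal sketch of Cohen's kappa (the codes and segments below are invented, not drawn from any of these studies):

```python
def cohens_kappa(rater_a, rater_b):
    """Agreement between two coders corrected for chance agreement:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two observers to ten data segments.
coder_1 = ["plan", "plan", "revise", "plan", "revise",
           "plan", "revise", "revise", "plan", "plan"]
coder_2 = ["plan", "plan", "revise", "revise", "revise",
           "plan", "revise", "revise", "plan", "plan"]
kappa = cohens_kappa(coder_1, coder_2)
```

A kappa near 1 indicates strong agreement beyond chance; a kappa near 0 means the coders agree no more often than chance would predict.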
However, due to its focus on depth rather than breadth, the results of a case study are not normally generalizable to a larger population. In other words, case studies enable researchers to know “how some users act.” Their results can lead to further quantitative research. But even case studies can at times provide generalizable results to some extent. For instance, Janet Emig took subjects from various backgrounds, making her results generalizable to a reasonably larger population. Yet, since case studies do not have as broad a data base as quantitative research, their generalizations are limited.

Saturday, February 7, 2009

Week 5: Internet Research

First, my personal feeling: I have found the regulations concerning research with human subjects really important. I was not aware of these provisions at all. This is first because I used to do research only on literary issues, and second because I don’t know whether my country has such provisions for research with human subjects in the humanities and social sciences.

Internet research provides both unique opportunities for research with human subjects and a heightened possibility of violating the principles of privacy and confidentiality. It can make surveys less expensive, include diverse populations, and maintain more anonymity than a normal research situation. However, as Curtis’s quote suggests, there are several ways that important personal and confidential information can be manipulated and misused in the internet environment. Information collected through email can leak if the researcher does not make sure that proper security provisions are adopted. There is equally the possibility that information on the research subject’s computer can be accessed and monitored by family members or others. So, the researcher should clearly tell the participants that it is their responsibility to protect their information at their location.

One puzzling thing is the idea of private versus public regarding information kept on open websites. The module says: “One view is that the act of posting to an open site, accessible to millions, constitutes public behavior and may be observed and recorded without consent. According to this view, if no identifiers are recorded, such observations may not even meet the definition of research with human subjects. An opposing view is that, in spite of the accessibility of their communications, people participating in some of these groups make certain assumptions about privacy, and that investigators should honor those assumptions. If one subscribes to this second view, either consent would be required or it would have to be waived in accordance to the regulations.” I think that when we can access data or information about persons in an open place, no consent should be required. How can someone claim privacy for something publicly displayed? That sounds strange to me. Due to the nature of the online environment, this is quite different from individuals carrying out some personal activity in public places like parks or restaurants.

Another issue I often find intriguing in the internet environment is that of obtaining subjects’ consent by telling them to click on “I agree,” where the subject may find it quite monotonous to read long statements. I think in such cases the researchers should clarify the conditions the subjects must comply with in some other mode.

Saturday, January 31, 2009

Week-4 Readings

What distinguishes quantitative from qualitative designs? What is the difference between validity and reliability, and what is meant by the term "probability" and "significance"?

Quantitative vs. Qualitative Research Designs

Both quantitative and qualitative research designs are empirical research designs used in the social sciences and recently adopted in technical and business communication research projects. As empirical research designs, they investigate certain inconsistencies or problems through inductive processes. However, these two methods, as the names suggest, are significantly different in the processes they use and the nature of the conclusions they draw. The nomenclatures “quantitative” and “qualitative,” unlike what we initially tend to predict, are quite misleading. The essential difference between these methods is not the use of quantity or numbers: both methods use numbers or quantities, though in varying degrees. What distinguishes them is that quantitative research is experimental and qualitative research is descriptive.
Because quantitative research design is experimental, it divides the subjects into two comparable groups, a control group and an experimental group. (Strictly speaking, the treatment is the independent variable and the measured outcome the dependent variable.) The researcher then applies a treatment to the experimental group and observes the changes in it, compared to the control group, that result from the treatment. So the second important element of quantitative research is comparison. The next important aspect is its emphasis on cause-and-effect relationships: the design tries to establish that the change occurred as a result of the treatment applied and that the treatment is therefore the cause of the effect the experiment displays. The other two features are random sampling (randomization) of subjects and the generalizability of the research conclusions, though generalization must be made with some caution. One major drawback of this method is the isolation of variables from their surrounding circumstances, which makes the method unnatural; its results may not be applicable to real situations.
Qualitative research design is basically about the “process and description” of a phenomenon or situation; it is thus descriptive research. The researcher observes a specific situation and “identifies key variables in a given situation” (Goubril, p. 584). Unlike quantitative research, it does not divide subjects into two groups; rather, it studies a situation and the environment around it, or the subjects themselves, to find a solution to a specific research problem. Unlike in the quantitative method, research subjects are not randomly chosen; the researcher tries to choose the most representative one(s), and even when sampling is done, it is purposeful sampling. No treatment is applied; the subjects are observed and studied in their natural(istic) situation. Because there is no randomization of subjects and no treatment, the qualitative researcher is required to provide a complete description of the situation, the mode of data collection, the position and role of the researcher, the management and recording of data, and the strategies of data analysis. Equally important is the difference in the researcher's role: an active observer in qualitative research versus a detached analyst in quantitative research. One important notion in qualitative research is triangulation (comparing information gathered at different times, having different researchers analyze the data, and using different perspectives to analyze the data), which is used to establish the validity of qualitative research. The benefit of this method is that it reflects real situations; its drawback is its more limited generalizability compared to quantitative research.
Now, for example, let us assume that most teachers, students, and employees do not make optimum use of Clemson University's Blackboard system. If quantitative research is to be done, the researcher begins with a hypothesis, for instance, that the reason behind this underuse of the Blackboard system is lack of training. The researcher then draws a random sample (in this case randomly choosing faculty, students, and employees), divides them into a control group and an experimental group, gives training to the experimental group, and observes the result. If the participants who received training start making more use of the system than the other group, the hypothesis is supported (though probability measurement and other factors have to be considered).
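The comparison at the heart of such a hypothetical experiment can be sketched in a few lines of Python. The group sizes, login counts, and function name below are invented purely for illustration; a real study would use an established statistical package and attend to sample size, assumptions, and p-values.

```python
import math
import statistics

# Hypothetical weekly Blackboard logins after the study period.
# All numbers are invented for illustration only.
control = [3, 2, 4, 3, 2, 3, 4, 2]        # no training
experimental = [5, 6, 4, 7, 5, 6, 5, 6]   # received training

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / se

t = welch_t(control, experimental)
```

A t value well above the usual critical values (roughly 2 for samples of this size) would suggest that the difference between the groups is unlikely to be due to chance alone.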
If one is to conduct qualitative research on the same phenomenon, he/she needs to observe a few representative students and faculty members (or employees) in their workplace, conduct interviews, and administer surveys to explore the nature of the problem. Here, unlike in the previous example, comparison, treatment, and random sampling are not used. Through observation, interviews, and surveys, the researcher tries to describe the phenomenon and identify the nature of the problem.

Validity and Reliability

These two concepts are central to empirical research design and are concerned with measurement. Though they are related, there are important differences between them. Validity is concerned with accuracy: how accurately the research methods measure what they intend to measure. Its focus is thus on the truthfulness of the result. Reliability, on the other hand, concerns the consistency of the result over time: how much the result of a test remains the same if the same “measures were applied and reapplied under precise replication of conditions” (Williams, p. 22). For instance, we can ask how well the “speak test” conducted to measure the oral language competency of foreign students measures their real capacity, or, similarly, how well a TOEFL test measures language skills. If the test exactly reflects the language ability of the candidate, it has good validity; if there is little correspondence, it does not. In the same example, if the test administered again under the same conditions gives a similar result, it is reliable, though the influence of practice or familiarity with the test has to be controlled for.
Hence, validity concerns the truthfulness of the result of a measurement; reliability concerns its consistency. Williams distinguishes the concepts as follows: “While there are many and more complex approaches to assessing validity and reliability than this illustration provides, the ideas that evaluation of validity requires some type of outside standard, and that evaluation of reliability requires some way of comparing a measure with itself, remain basic considerations” (p. 23). In other words, to check validity, the result is measured against an external standard, whereas to check reliability, the measure is compared with itself, for example by administering the same test again.
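The test-retest idea in Williams's formulation can be illustrated with a short sketch. The scores below are invented, and the Pearson correlation used here is only one common way of comparing “a measure with itself”: a correlation near 1 between two administrations indicates a reliable measure.

```python
import statistics

# Hypothetical scores from the same ten test-takers on two
# administrations of the same test (all values invented).
first  = [70, 85, 60, 90, 75, 80, 65, 95, 55, 88]
second = [72, 83, 62, 91, 74, 79, 66, 94, 57, 87]

def pearson(x, y):
    """Pearson correlation, a common test-retest reliability index."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(first, second)   # close to 1: the measure is consistent
```

Note that this addresses only reliability: the two administrations could agree perfectly and still fail to measure what they claim to measure, which is the separate question of validity.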

Probability and Significance

Probability and significance are related to statistical analysis. Probability refers to the likelihood that a certain event or phenomenon will occur. The most common example is tossing a coin and predicting the probability of its landing heads or tails. If a fair coin is tossed once, the probability of heads is 0.5, and the same is true of tails. Certainty (hundred percent probability) is taken as 1, and all other probability values lie between 0 and 1. The notion of probability is mostly used when conducting quantitative research and testing hypotheses. There are two kinds of hypotheses: the research hypothesis and the null hypothesis. The research hypothesis is regarded as supported when the probability of obtaining the observed result under the null hypothesis is below a certain preset level (generally 0.05). So the research hypothesis is likely true when the probability of the data arising under the null hypothesis falls below, e.g., 0.05. This level can vary from case to case.
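The coin-toss arithmetic above can be checked with a few lines of Python. This is only a sketch using the standard library; the function name is mine.

```python
import math

def binom_prob(n, k, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with P(heads)=p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

one_head = binom_prob(1, 1)   # a single toss: probability of heads is 0.5
# The probabilities of all possible outcomes together equal 1 (certainty).
total = sum(binom_prob(10, k) for k in range(11))
```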
“Significance” is closely related to probability. To reject the null hypothesis, a certain probability level has to be set as the criterion. For instance, as stated in the previous paragraph, if a probability level of 0.05 is taken as sufficient for rejecting the null hypothesis and thereby supporting the research hypothesis, this level is called the “rejection region or the significance level” (Williams, p. 61). Williams defines it as “a level of probability set by the researcher as grounds for the rejection of the null hypothesis” (p. 61).
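As a sketch of how a significance level works in practice, suppose a coin is tossed 10 times and lands heads 9 times. Under the null hypothesis of a fair coin we can compute the probability of a result at least that extreme and compare it with a preset level of 0.05. The numbers and function name here are illustrative only.

```python
import math

def prob_at_least(n, k, p=0.5):
    """P(k or more heads in n tosses) for a coin with P(heads)=p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

ALPHA = 0.05                      # significance level set by the researcher
p_value = prob_at_least(10, 9)    # 9 or 10 heads: 11/1024, about 0.011
reject_null = p_value < ALPHA     # True: the result falls in the rejection region
```

Because 0.011 is below the chosen level of 0.05, the observed result is called statistically significant and the null hypothesis of a fair coin is rejected.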