
8. Convincing Other People: The Issues Formerly Known as Reliability, Validity, and Generalizability

YOU HAVE JUST USED our data analysis procedure to construct a theoretical narrative. How should you evaluate the work you have done? Qualitative and quantitative methodologies answer this question differently. Quantitative methodology tries to exclude subjectivity, interpretation, and context from scientific practice. It requires that data analysis procedures be “objective” and that theories be universally applicable. The requirements of objectivity and universality are translated into statistical concepts. Objectivity corresponds to the statistical concepts of reliability and validity, and universality corresponds to the statistical concept of generalizability.

As qualitative researchers we strongly disagree with the quantitative approach to evaluating research. We believe, instead, that subjectivity, interpretation, and context are inevitably interwoven into every research project. Furthermore, we believe that these elements of research practice are essential and should not be eliminated even if it were possible to do so. However, we agree with quantitative methodologists that standards for evaluating research are essential. We do not think that qualitative research is an area in which “anything goes.”

In this chapter we will recommend standards for evaluating research that are consistent with the qualitative research paradigm, and therefore take into account subjectivity, interpretation, and context. In place of the quantitative concepts of reliability and validity, we suggest the qualitative concept of justifiability of interpretations. In place of the quantitative concept of generalizability we suggest the qualitative concept of transferability of theoretical constructs. You should know, however, that there are many different qualitative approaches to these issues.
For alternatives to ours, you can consult Smith and Deemer (2000).

Pursuing the Unreachable Ideal: A Skeptical Look at Reliability, Validity, and Generalizability

When you studied the concepts of reliability, validity, and generalizability in statistics or research design courses, they were probably presented to you in the language of mathematics and statistical theory. Such a formal presentation is certainly necessary for learning how to do statistical computations. In focusing on the mathematical details, however, students often lose sight of the philosophical issues involved in these concepts. Because it is precisely the philosophical issues that we want to explore, we will deal with the concepts simply, without the mathematical details. In the discussion that follows, we are going to assert that more is claimed for the statistical tools of reliability, validity, and generalizability than they actually deliver. We will show you that these tools can work only in an ideal situation that does not, and indeed cannot, obtain in practice.

The Trouble with Reliability and Validity

Reliability and validity are important criteria for evaluating quantitative research because they are intended to assure the reader that the measuring scales are objective. Objectivity is difficult to define precisely; generations of philosophers have devoted their lives to the task with no end to their labors in sight. For our purposes, however, the definition is straightforward: Objectivity simply means the absence of subjectivity. If our measuring scales are objective then we are studying the phenomenon as it really is, excluding our subjective biases about what we would like it to be.

What is the connection between objectivity, reliability, and validity? We begin considering this question by defining reliability. The way to determine whether a scale is reliable is to administer it twice.
If the numerical score you get from the second administration of the scale is the same, or almost the same, as the numerical score you got from the first administration, then the measure is reliable. Conversely, if the numerical scores on the first and second administrations are wildly different, then the scale is not reliable. For example, if you measure satisfaction with fathering and get a value of 7 the first time and a value of 6.5 the second, then the satisfaction with fathering scale is reliable. But if the first value is 7 and the second value is 2, then the scale is not reliable. The reliability of a scale is a necessary condition for the scale to be objective. If you get a value of 7 the first time you measure something, and a value of 2 the second time you measure it, you clearly do not know the true value of what you are measuring. If you have...
