Rigor in empirical research encompasses both reliability and validity. This short submission examines validity as it applies to the methodological options for research into professional learning communities (PLCs).
There are three types of validity, and they are affected by two different factors. Construct validity is a function of the study instrument (Creswell, 2009). This is the most familiar aspect of validity: whether the study actually measures what it purports to measure. Were a completely original quantitative instrument to be formulated for research on PLCs, due consideration would have to be given to Creswell's construct of "…convergent and discriminant validity as two interlocking propositions" (2009, p. 64). For example, it would be useful to correlate shared values, team teaching, commitment, collaborative learning on the part of teachers, universal learning where students are concerned, and a culture of continuous improvement with result/criterion variables such as scoring well on state achievement tests. If a mixed-methods research program is to advance the concept of PLCs, quantitative measurement is needed to satisfy the criterion of observed convergent validity and to downplay those PLC components that, while theoretically sensible (and possibly convergent), are empirically revealed to be divergent. Administrators and teachers could then redirect time and other resources away from the latter and toward those components that relate more directly to desired student achievement goals.
Equally important, and often more within the control of the researcher, who must frequently rely on tests or measurement instruments devised by others, is a robust research design that optimizes internal and external validity (Creswell, 2009).
When research is purely qualitative, the standard of rigor more likely rests on both internal and external validity. This is because acceptance of the findings depends largely on how the researchers, study participants, and readers of the report together assess the accuracy of the results.
When a mixed-methods strategy is called for, as seems likely for this research into professional learning communities, there is unlikely to be a single, overarching measure of validity for the study as a whole. Rather, the researcher must prudently assess how valid each method is in itself and how each might affect the validity of the next method in a contemplated series. For instance, validity for a quantitative study employing a standardized questionnaire might be adversely affected if a preceding qualitative study was the sole determinant of the target respondent profile and sampling approach.
Nonetheless, one way of enhancing validity and empirical insight when studying the potential for PLCs is to collect multiple forms of qualitative data (e.g., observation, focus groups, depth interviews) or to employ several quantitative methods (e.g., experimental and quasi-experimental designs, surveys). Or, as Trochim and Donnelly suggest, one may "…integrate the qualitative and quantitative approaches into a new synthetic method…combine qualitative brainstorming and quantitative rating approaches into a new method" (2008, p. 144). Construct and internal validity then become a question of how well the combination reflects current "best practice"; breathes life into, and gives emphasis to, certain aspects of PLCs; and, in general, affirms which of the theoretical constructs relate best to the desired achievements of students.
Trochim and Donnelly also put forward the view that coding qualitative data, while relying heavily on researcher judgment, prepares the findings for quantitative treatment. In turn, the very act of designing rating scales requires qualitative-style judgment and assumptions about what respondents could mean when they select a particular point on the scale. Thus, we reiterate the point made earlier in this submission: qualitative and quantitative methods complement one another and enhance the validity of a mixed-methods approach to investigating what makes for successful PLCs.
References
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage Publications.
Trochim, W., & Donnelly, J. (2008). The research methods knowledge base (3rd ed.). Mason, OH: Cengage.