[ NB: "Annotations" are occasional posts that explore selections from my research reading—articles or books—in rhetoric, technical and professional communication, and related fields. ]
Wiles, R., Crow, G., & Pain, H. (2011). Innovation in qualitative research methods: A narrative review. Qualitative Research, 11(5), 587–604.
There's a particular genre of academic article that I highly value, and Wiles et al. have produced a good example of the form. This article, a "narrative review," is similar to what MacNealy (1999) describes as meta-analysis, a literature review "that is conducted empirically and analyzed statistically" (p. 109). While MacNealy's discussion of meta-analysis largely relies on statistical measures of a group of previous experimental studies, the methodologically rigorous manner in which a meta-analysis is carried out can provide researchers in a variety of fields with a very useful approach to exploring an existing body of research literature.
In short, the meta-analysis or systematic literature review provides a valuable service to scholars in a given area; such articles are difficult to do well and fairly infrequent (Carliner et al., 2011, have recently published a similar piece in the IEEE Transactions on Professional Communication, and I'll review that soon...). At the most basic level, the feature that distinguishes a meta-analysis or systematic literature review from a bibliographic essay, annotated bibliography, or conceptual literature review is the systematicity involved in sampling and analysis. In any of these genres, a purposeful sample must be identified from the broader range of available articles in a given subject area.
Exactly how a researcher moves from the total population to this purposeful sample is explicitly articulated in a meta-analysis or systematic lit review; this is important because such transparency means that the analysis is replicable. A solid meta-analysis clearly defines the total population and criteria for inclusion, describes how and why articles were selected or excluded, and proceeds along a clearly defined schema for analysis. Performing this kind of rigorous, systematic review of literature in a given area can be tremendously useful for other researchers; this is the primary reason I appreciate a strong meta-analysis or systematic lit review.
Wiles et al. (2011) provide qualitative researchers with a narrative review—a systematic literature review—of 57 articles published between 2000 and 2009 that make claims of innovation in qualitative research methods. Overall, this article is useful for two important reasons: first, it acts as a good model of how a researcher might conduct a systematic literature review—an approach that differs from MacNealy's (1999) meta-analysis in that it relies on a qualitative coding schema rather than statistical measures as the primary method of analysis; and second, the article explores how and why researchers make claims to methodological innovation and examines the implications of doing so.
"A claim to innovation," they argue, "should be rooted in genuine attempts to improve some aspect of the research process" (p. 588). But in order to explore a given researcher's claims to methodological innovation, the authors first set out to define what constitutes such innovations (p. 588). Ultimately, the authors seek evidence for "Travers' (2009) assertion that there is an increasing tendency for authors to claim innovation, that such claims are exaggerated and that they are detrimental to qualitative social science" (p. 589). Wiles et al. list five research questions that act as an analytic schema for exploring the articles included in their sample.
Wiles et al. describe how their project "used traditional methods of qualitative systematic review for summarizing data as well as qualitative analysis of text to explore the narratives of the claims being made" (p. 589). They next describe their sampling procedures, moving through a systematic search of all the articles published in 22 relevant journals during the 2000s. Wiles et al. provide a table of the journals where relevant papers were identified (based on their sampling criteria); more importantly, they have published a full list of these papers as an annotated bibliography at the National Centre for Research Methods, so that other researchers can see which papers were identified from the population but excluded from the sample.
The article also explores "sites for innovation," the actual research sites where innovation claims were carried out. This is important, as new media environments were related to innovation claims, often because a researcher was adapting or extending methods intended for traditional (that is, non-digital) sites. Of the 57 papers in the sample, "The majority of innovations claimed were at the level of methods, techniques, or tools with only a minority (10 papers) focusing on methodology" (p. 592). Wiles et al. then describe six different categories of innovation claimed in the sample: creative methods, narrative methods (including autoethnography), mixed methods, online methods, software tools, and focus group methodology (p. 592).
After establishing their procedures and sample and identifying major categories of innovation claims, Wiles et al. define three analytic categories, which emerged from their analysis of the articles in the sample, for understanding innovations; they suggest that innovations can be categorized at the level of inception, adaptation, or adoption. They note that "None of the authors defined innovation when they used the term," and that overall, "much innovation in social science research methods involves adapting established methods rather than inventing completely new methods" (p. 593). Claims at the inception level, then, "are those in which the authors claim to be using a new method, approach, or tool" (p. 593). Claims at the level of adaptation "are when an author claims an established method has been adapted or changed in order to improve the method or to meet specific needs within the research context" (p. 593). Finally, claims at the level of adoption "are when an author claims they are taking an established method, relatively unchanged, and applying it into a new discipline or sphere of study" (p. 594). Given these categories, Wiles et al. note that "the majority of papers appeared to claim innovation at the inception level (32 papers)" (p. 594).
In their next section, Wiles et al. explore why authors make claims to innovation, identifying three key reasons: theoretical, moral/ethical, and practical. These reasons were distributed fairly uniformly across the sample. Finally, the authors look at the uptake of innovations—are other scholars citing and deploying the innovations claimed by the researchers in the sample? A little more than half of the papers had 0–3 citations in Google Scholar; however, nine papers had 12 or more citations (p. 599). It is interesting to note that "there was a markedly higher citation rate of papers on online software innovations" (p. 599).
In their discussion section, Wiles et al. argue that "This study indicates the majority of innovation or novelty claims in these papers might more appropriately be called developments in that they involve adapting methods either to meet the needs of a particular project or to meet some moral, ethical or theoretical standpoint" (p. 600). Most of the papers, therefore, draw from and adapt existing methods, and "there is little evidence of paradigmatic shifts in qualitative research methods within these 'innovations'" (p. 600). Wiles et al. suggest that "many of the 'innovations' identified in these papers are little more than the day-to-day adaptation that researchers have always, rightly, undertaken in applying methods to a specific research context" (p. 600). Trends toward over-claiming innovation, therefore, can be "potentially detrimental to qualitative social science" (p. 601).
Too many claims of innovation may dilute the appreciation and understanding of the ways in which contemporary researchers adopt and adapt well-established methods and methodologies.