1.20.2012

Annotations | "What Does the Transactions Publish?"

[ NB: “Annotations” are occasional posts that explore selections from my research reading—articles or books—in rhetoric, technical and professional communication, and related fields. ]

Carliner, S., Coppola, N., Grady, H., & Hayhoe, G. (2011). What does the Transactions publish? What do readers want to read? IEEE Transactions on Professional Communication, 54(4), 341–359.

As I mentioned in my previous Annotations post, I’m a big fan of meta-analyses and systematic lit reviews. This recent piece by Carliner et al. isn’t exactly a meta-analysis, but it’s in the same ballpark,[1] and their analysis of what major journals in technical and professional communication have published in the last 5 years is really interesting to me (and, I hope, to you as well).

In addition to exploring the major genres and methods comprising their sample of articles, Carliner et al. report findings from a survey of Transactions on Professional Communication (TPC) readers, conducted to determine what kinds of congruencies and gaps exist between what the TPC publishes and what its readers would like to read. One of the most interesting gaps they identified was a desire among readers for more case studies, lit reviews, and tutorials; this gap is especially striking because the TPC primarily publishes experimental and survey-based studies.

The impetus for the study[2] was a change in TPC editorship, and Carliner et al. “sought empirical evidence on which to base decisions about the future direction of the journal” (pp. 341–342). To do so, the authors placed findings from the readership survey in conversation with their analysis of 5 years’ worth of peer-reviewed scholarship from four journals: the TPC, the Journal of Business and Technical Communication, Technical Communication, and Technical Communication Quarterly.

One handy subsection of this article is the lit review, which amounts to a review of previously published systematic lit reviews and meta-analyses. If I wanted to know more about this methodology, the brief lit review provided here would get me off to a strong start.

As Carliner et al. note near the end of their lit review, the study actually contrasts “the content published by one peer-reviewed journal [the TPC] and other journals in the discipline with the preferences of readers” (p. 343, emphasis added). I think the italicized bit is key; if you’re an academic researcher in technical and professional communication, this article provides a very useful snapshot of the major journals in the field, the kinds of articles they publish, and the relationship among them.

Also useful, if you have an interest in conducting or evaluating systematic lit reviews, is the methodology section. I think the decision to adopt STC’s Technical Communication Body of Knowledge classification scheme is sound, and the authors provide a clear description of where and why they amended the scheme. They contend that the “full disclosure of the methodology and the dual coding[3] of each article ensures the trustworthiness of the data” (p. 345).

The authors provide findings for each journal, and then detail (in both table and narrative forms) findings across the journals—“the three topics that received the most coverage were: (1) Information Design and Development, (2) Deliverables, and (3) Academic Programs” (p. 347). “The three most common categories of research methods” across the articles in their sample were: “(1) Critical—Document Review, (2) Experience Report, and (3) Quantitative—Survey” (p. 347). The “critical document review” category, it should be noted, uses critical methods to explore texts without disclosing a sampling method or systematic methodology. My assumption is that most of the articles in this category were rhetorical analyses of one form or another.

The discussion of what TPC readers want is less interesting overall, but it contains some surprising tidbits. For one, though the sample was small (n = 88, a 9.4% response rate), it was professionally diverse: only 33% of respondents, for example, were academics, while 56% worked in industry (p. 349). The survey used the classification scheme described above and asked respondents to rank the three topics of most interest and the three of least interest (p. 351). The authors did the same with research methods.[4]

As I mentioned briefly above, readers were most interested in case studies and lit reviews, while survey-based studies and document analyses were of least interest. The TPC rarely publishes case studies, yet it publishes a lot of survey-based work.

The authors note that this finding is somewhat problematic, as the term “case study” is interpreted broadly across journals. Personally, I follow MacNealy (1999) here and see case studies as both systematic and rigorous; otherwise, we’re not talking about a case study but a case history or experience report. I realize that I’m probably in the minority, though, so if you’re conducting and writing about systematic qualitative case studies, it’s a good idea to explain clearly how your work is systematic, well triangulated, and rigorous, so as to avoid the sometimes pejorative perception of the genre. But I digress…

Carliner et al. suggest that the survey findings indicate a “strong interest in research on communication in engineering workplaces,” and that “readers have a preference for applied research rather than the basic research that the Transactions has emphasized in recent years,” hence the preference for more in-depth (dare I say?) qualitative work (p. 352).

Overall, this was a fun read. The article provides a really nice snapshot of current scholarship in technical and professional communication. And to me, it suggests a lot of opportunity for new kinds of work.


  1. Carliner et al. include a detailed report of their methods, which means that their study is replicable, or better yet, extensible (by covering a decade’s worth of articles from these and related journals, for example).  ↩

  2. Really, Carliner et al. conducted two different studies and compared them in this article; for the sake of clarity, I’m referring to the article itself as a “study.”  ↩

  3. Two of the authors coded each article by first conducting a series of “norming sessions” where they jointly coded two full issues of each publication under scrutiny. Because of this approach, the authors didn’t provide any measures of inter-coder reliability.  ↩

  4. Interestingly, they did not include ethnography, usability testing, and experience reports in the survey…  ↩
