Reviewing Article Abstracts: Content Analysis and Synthesis
Author: B. Jane Scales
After accepting new duties within Washington State University (WSU) Libraries’ public services department in the summer of 2013, I took responsibility for evaluating the quality of our instant messaging chat services. Instant messaging is one of several ways students and staff can receive assistance with their research. When patrons request assistance, librarians work with them by exchanging messages online. These exchanges are saved automatically as transcripts that can be reviewed and evaluated later. It is quite common, in fact, for academic libraries to offer such services and to study the transcripts in order to improve them.
My immediate problem, however, was not in analyzing the chat transcripts, but with learning: a) how other academic librarians had approached the task, b) what exactly they had examined with their analysis, and c) whether their methods and findings could shape our analysis. Only after getting a handle on these issues could I make a proposal to colleagues regarding the evaluation of our own institution’s services.
Essentially, I needed to review and digest a large body of literature quickly, then organize and summarize it. A quick search in the database “Library, Information Science & Technology Abstracts” retrieved almost two hundred articles covering the topic of chat transcript content analysis. From this set, I selected the thirty-one articles that appeared most recent, relevant, and thorough in their analysis of reference chat transcripts.
Project Preparation and Planning
I proceeded by implementing these steps:
- Created a Word document (.docx) containing the article citations and abstracts
- Added this document as a primary document (PD) in ATLAS.ti
- Applied in-vivo coding to key abstract phrases and identified emerging code families
- Associated each in-vivo code with a family
- Synthesized this information into a one-page summary
Figure 1. Article abstract primary document
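The steps above can be sketched in miniature as plain data structures. This is a hypothetical illustration only (the abstract titles and code names are invented), not a representation of ATLAS.ti’s internal data model:

```python
# Hypothetical sketch: one primary document (PD) holding all abstracts,
# in-vivo codes tied to the ABSTRACT they came from, and code families
# grouping related codes under an overarching concept.

# The single PD, broken into ABSTRACT units.
primary_document = [
    "Abstract 1: A content analysis of chat reference transcripts ...",
    "Abstract 2: Assessing virtual reference service quality ...",
]

# In-vivo codes: exact phrases lifted from the text, keyed by the
# index of the abstract they were coded in.
in_vivo_codes = [
    {"code": "content analysis", "abstract": 0},
    {"code": "virtual reference service quality", "abstract": 1},
]

# Code families collect related codes under one concept.
code_families = {
    "Assessment of services": [
        "content analysis",
        "virtual reference service quality",
    ],
}

# A quick consistency check: every in-vivo code should belong to a family.
assigned = {c for family in code_families.values() for c in family}
unassigned = [c["code"] for c in in_vivo_codes if c["code"] not in assigned]
print(unassigned)  # an empty list means every code has a family
```

The point of the sketch is the relationship between the three layers: one PD, many codes, a few families.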
Rather than treat each article abstract as its own PD, I included all of the abstracts in a single PD and coded each one as an ABSTRACT to break the document into manageable units. Because each ABSTRACT would be tied to the subsequent content coding, this approach worked well. I could always go deeper into any article, since citations to each were included. Moreover, as far as I was concerned, the project was finite: I did not plan to add further article abstracts or continue the analysis after my stated objectives had been met. The PD, in single-spaced text, ran approximately thirteen pages.
During the next stage of my analysis, I scanned through the abstracts and applied in-vivo coding to any phrases, terms, or sentences pertinent to my inquiry. King (2008:473-474) provides a good background and summary of in-vivo coding, noting that it is often used in the “earlier stages” of coding because it is so immediate. In-vivo coding uses the exact language of the text as the code itself, allowing the researcher to capture important terms without first trying to fit the data into more abstract categories; that abstraction can certainly happen later.
Figure 2. Examples of in-vivo coding
As I went through the process of in-vivo coding, I noticed that some of the passages I selected were too long. Friese notes that in-vivo codes over 100 characters are unwieldy in the right-hand margin and lose effectiveness as meaningful codes because they become too complex. After highlighting a lengthier selection of text, I could edit the in-vivo code down to a shorthand version before moving on (see Figure 3). This process of establishing in-vivo and modified in-vivo codes gave me a solid foundation of data on which to base my literature analysis.
Figure 3. A passage recoded to improve manageability
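Friese’s 100-character guideline can be expressed as a simple rule. The helper below is a hypothetical sketch of that rule (it is not an ATLAS.ti feature; in practice I edited the codes by hand):

```python
def shorten_code(in_vivo: str, limit: int = 100) -> str:
    """Return a shorthand version of an overly long in-vivo code.

    Codes at or under the limit are kept verbatim; longer ones are
    cut at the last word boundary before the limit, with an ellipsis
    marking the edit.
    """
    if len(in_vivo) <= limit:
        return in_vivo
    cut = in_vivo[:limit].rsplit(" ", 1)[0]
    return cut + " ..."


# An invented example of a too-long in-vivo passage.
long_code = (
    "librarians analyzed chat transcripts to determine whether reference "
    "interview behaviors recommended for face-to-face interactions also "
    "appear in online chat sessions"
)
print(shorten_code(long_code))
```

The design choice here mirrors the manual process: keep the researcher’s exact wording where possible, and only abbreviate when a code would overflow the margin.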
After concluding this modified in-vivo coding, my project contained 191 codes, which captured the essence of the article abstracts.
Emerging Code Families
ATLAS.ti assists researchers by facilitating the grouping of codes into “code families.” (The term “family” in ATLAS.ti can apply to collections of codes, memos, or PDs.)
I next looked for meaningful overarching concepts into which my codes might fit, and created code families for these concepts. For example, my colleagues had asked how many chat transcripts we should examine to get a representative sense of their qualities. There are, of course, general guidelines for qualitative researchers contemplating sample size; in this situation, however, it was important to see what others in our discipline had done. The codes I identified as relevant to the sample sizes analyzed at other institutions fit nicely into my “Number of Transcripts” code family.
Figure 4. Examples of Code Family Comment for “Number of Transcripts”
After reviewing the remaining codes, I identified seven additional code families and used the Code Family Manager screen to assign each code to one of the eight Code Families. Note that this is not the approach Friese (2012:184-6) recommends in chapter 5 of her book Qualitative Data Analysis with ATLAS.ti; she suggests using code-name prefixes to collect related codes before proceeding to Code Families. Had this project been destined to become more detailed, that is the approach I would have taken. Given the context of the situation, however, the Code Families were sufficient for my immediate analysis needs.
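The prefix convention Friese recommends can be illustrated with a toy example. The code names below are invented for demonstration; the sketch shows only the grouping logic, not ATLAS.ti’s interface:

```python
from collections import defaultdict

# Hypothetical Friese-style code names: a short prefix marks the
# concept each code belongs to.
codes = [
    "COMM: exchange of information between librarian and patron",
    "COMM: communication theories",
    "SAMPLE: number of transcripts analyzed",
    "ASSESS: holistic evaluation of services",
]

# Collecting codes by prefix approximates what Code Families do:
# related codes sort together and can later be promoted to a family.
families = defaultdict(list)
for code in codes:
    prefix, _, label = code.partition(": ")
    families[prefix].append(label)

print(dict(families))
```

The advantage of prefixes is that related codes stay adjacent in any alphabetical code list, which is why Friese suggests them as a first pass before formal families.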
I was careful to develop a Comment for each Code Family to define its scope. For example, the Code Family “Communication” has an attached Comment that reads, “Elements of communication, communication theories, or exchange of information between librarian and patron.” From past experience, I knew that it is absolutely essential to write comments for Families and other conceptual elements that organize one’s research. Comments describing the purpose or parameters of coding elements can play a crucial role in ensuring one’s research approach remains consistent.
Figure 5. Code family “Communication”
Synthesis of Analysis Data
The literature analysis I completed helped clarify a number of issues for my colleagues and me. For example, although the most heavily grounded code family pertained to communication issues, we were not interested in delving into those more abstract theories. We also learned that the more traditional models librarians have used to assess reference transactions did not transfer well to an online chat environment. As we looked at the in-vivo and modified in-vivo codes under the “Assessment of services” family, we found a number of ideas espousing a “holistic evaluation,” which struck us as the way to go.
This analysis solved my immediate problem: getting a grasp on the current issues likely to shape the direction of future research on our institutional chat reference transcripts. As we move forward, my colleagues and I have a well-mapped hermeneutic unit covering the literature on analyzing chat reference transcripts and assessing online reference services. The project of examining and mapping the article abstracts can continue to provide a basis for our work and help us quickly plug into the issues that most concern us. Moreover, if we decide to publish in the future, much of the literature review will already be in place.
Still, a number of loose ends remain. Would it be worth my time to go back and correct the structure of my codes and code families? What would the advantages be, and where would it lead me? These questions make me wonder how other professionals use ATLAS.ti to manage work projects that lack a clear end point or goal, such as publication.
References
King, Andrew. (2008). In vivo coding. In The SAGE Encyclopedia of Qualitative Research Methods (pp. 473-474). Thousand Oaks, CA: Sage Publications. http://dx.doi.org/10.4135/9781412963909.n240
Friese, Susanne. (2012). Qualitative Data Analysis with ATLAS.ti. London: Sage Publications.
About the Author
B. Jane Scales is the Reference Team Leader and E-Projects Librarian at the Washington State University Libraries. She holds a bachelor’s degree in Russian Language from Indiana University, a master’s in German Language and Literature from Ohio State University, and a master’s in information science (MLIS) from the University of Kentucky. Her research focus includes information literacy, online learning theories, and academic reference services.