The Process of Computer-Assisted Qualitative Data Analysis

August 31, 2019

Dr. Susanne Friese
Author of the book Qualitative Data Analysis with ATLAS.ti


Computer-assisted qualitative data analysis consists of several consecutive phases, which at the most general level are: preparing the data and creating a project file, coding the data, using the software to sort and structure the data, and querying the data with the aim of discovering patterns and relations. The emphasis on coding will differ depending on the chosen methodological approach. The logic of the software, though, is built around coding: none of the analysis tools for querying the data can be used before you have coded the data. In coding the data, you describe what is in the data. These might be people, artifacts, organizations, emotions, attitudes, actions, strategies, consequences of actions, contextual factors and the like. Depending on the chosen methodological approach, this may mean that you are tagging the data at a nominal level, or that you are developing code labels based on a more detailed interpretation of data segments. Once the data are coded and a code system is developed, it can be interrogated. Both phases are presented below and described in more detail in the book “Qualitative Data Analysis With ATLAS.ti”.

Phase 1: description of the data – creation of a code system

The aim of descriptive-level analysis is to explore the data, to read or to look through them and to notice interesting things that you begin to collect during first-stage coding (cf. Saldaña, 2015). This requires reading transcripts, field notes, documents, reports, newspaper articles, etc., viewing video material or images, or listening to audio files. Generating word clouds and word lists in ATLAS.ti may be a starting point when you have lots of data. To capture the interesting things that you notice, you may write down notes, mark the segments you find interesting, write comments or, as is most common, attach labels (= coding). These labels are referred to as ‘codes’ in the software, for historical reasons. You may also think of them as ‘tags’. At this point in the analysis process, the labels can be descriptive or already conceptual, lower- or higher-order. Developing codes and a code system is a process, and the labels you create at this stage of the analysis are likely to change. Thus, you do not have to worry too much about whether a label is right or wrong. Reading further, you will very likely notice a few things that are like others you have noticed before. If they fit under a label that you already have, you apply it again. If an issue is similar but does not quite fit a tag that you already have, renaming it may allow you to subsume the data segments.

The labels do not have to be perfect yet. You can continue to collect more similar data segments and later, when you review them, it will be easier to think of better and more fitting labels to cover the substance of the material you have collected. The intellectual work that you do at this stage is the same as described in the past for manual ways of analysis. As Strauss and Corbin wrote in 1998:

As the researcher moves along with analysis, each incident in the data is compared with other incidents for similarities and differences. Incidents found to be conceptually similar are grouped together under a higher-level descriptive concept. (p. 73)

The way you code and what you code can be manifold depending on the underlying research questions, research aim and overall methodology you are using. To name just a few of the various procedures that you find in the literature: descriptive or topic coding (Miles, Huberman and Saldaña, 2014; Saldaña, 2015; Richards and Morse, 2013; Wolcott, 1994); process coding (Bogdan and Biklen, 2007; Charmaz, 2002; Corbin and Strauss, 2008); initial or open coding (Charmaz, 2006; Corbin and Strauss, 2008; Glaser, 1978); emotion coding (Goleman, 1995; Prus, 1996); values coding (Gable and Wolf, 1993; LeCompte and Preissle, 1993); narrative coding (Cortazzi, 1993; Riessman, 2008); provisional coding (Dey, 1993).

You may choose to follow just one of the suggested procedures or combine them. The things you collect in your data may include themes, emotions and values at the same time. You can code the data using deductively derived codes as in provisional coding; or you can develop codes inductively (e.g. initial or open coding) or abductively, which is often the case when developing categories. See chapter 5 in Qualitative Data Analysis With ATLAS.ti.

Often there is a lack of methodological understanding of what a code is. Software like ATLAS.ti “just” offers the tools to perform coding, manifest in functions for creating, deleting, renaming, grouping, merging and splitting codes. Thinking of coding as an act of collecting will help you to better understand that a properly developed code is more than just a descriptive label for a data segment, and that it does not make sense to attach a new label to everything one notices in the data.

The aim of the first phase of coding is to develop a code list that describes the issues, aspects, phenomena and themes that are in the data, naming them and trying to make sense of them in terms of similarities and differences. This results in a structured code list which you can apply to the rest of the data during second-stage coding. Very likely the code list will need to be refined further, and there will be a few more cycles of noticing and collecting until all the data are coded and the coding schema is fully developed. In parallel, you can comment on data segments and begin to write memos.
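The idea of coding as an act of collecting can be made concrete with a small sketch. The following Python model is purely conceptual: it is not ATLAS.ti's API, and every name in it (the CodeSystem class, its methods, the sample quotations) is a hypothetical illustration of collecting segments under labels, renaming a label that no longer fits, and subsuming several codes under a higher-level concept.

```python
# Conceptual sketch of "coding as collecting": a code system modeled as a
# mapping from code labels to the data segments (quotations) collected
# under them. NOT ATLAS.ti's API -- all names here are hypothetical.

class CodeSystem:
    def __init__(self):
        self.codes = {}  # label -> list of quotation strings

    def code(self, label, quotation):
        """Collect a data segment under a code label (creates the code if new)."""
        self.codes.setdefault(label, []).append(quotation)

    def rename(self, old, new):
        """Rename a code, e.g. to a better-fitting label found later."""
        self.codes[new] = self.codes.pop(old)

    def merge(self, labels, into):
        """Subsume several similar codes under one higher-level concept."""
        merged = []
        for label in labels:
            merged.extend(self.codes.pop(label, []))
        self.codes.setdefault(into, []).extend(merged)

# Invented example data, for illustration only:
cs = CodeSystem()
cs.code("worry: finances", "I don't know how we'll pay the rent.")
cs.code("worry: health", "The diagnosis changed everything for us.")
cs.merge(["worry: finances", "worry: health"], into="sources of worry")
```

The point of the sketch is the second-stage move: the two lower-order labels disappear as separate entries, but the quotations collected under them are kept together under the higher-level concept, which is what makes later review and relabeling easy.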

Phase 2: querying data – finding answers – identifying relationships

At some point, all data are coded, and you can enter the next phase of analysis. So far, you have been working at the data level. The aim now is to look at the data from a different angle: the perspective of the research questions. Starting from one of your questions, you begin to query the data based on your coding. ATLAS.ti offers a variety of analysis tools such as the Code Document Table, code co-occurrence analyses, the Query Tool and networks. The results of queries can be displayed in the form of numbers, the coded quotations or as a visualization. However, the actual analysis takes place during the process of writing comments and memos, by summarizing and interpreting the results that you see. While writing, you move the analysis further step by step, dig deeper, look at details and begin to understand how it all fits together. You find recommendations on how to work with comments and memos in ATLAS.ti throughout the book Qualitative Data Analysis With ATLAS.ti.
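To illustrate what a co-occurrence query computes, here is a minimal, hypothetical sketch: two codings co-occur when they are applied to overlapping segments of the same document, and the query counts how often each pair of codes does so. This is a simplified conceptual model, not ATLAS.ti's actual implementation; the sample data and function names are invented for illustration.

```python
# Conceptual sketch of a code co-occurrence count (hypothetical model,
# not ATLAS.ti internals). Each coded segment is recorded as
# (document, start, end, code) with invented sample data.

from itertools import combinations
from collections import Counter

codings = [
    ("interview1", 0, 50, "coping strategy"),
    ("interview1", 30, 80, "family support"),
    ("interview1", 100, 140, "coping strategy"),
    ("interview2", 10, 60, "family support"),
    ("interview2", 40, 90, "coping strategy"),
]

def overlaps(a, b):
    """Two codings co-occur if they sit in the same document and overlap."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

# Count each unordered pair of distinct codes applied to overlapping segments.
cooccurrence = Counter()
for a, b in combinations(codings, 2):
    if overlaps(a, b) and a[3] != b[3]:
        pair = tuple(sorted((a[3], b[3])))
        cooccurrence[pair] += 1

print(cooccurrence)  # ('coping strategy', 'family support') co-occurs twice
```

The resulting counts are only the starting point: as the text above stresses, the analysis proper happens when you read the overlapping quotations behind each cell and interpret them in a memo.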

When beginning to see how it all fits together, visualization tools like the network function can be used. Working with networks stimulates a different kind of thinking and allows further explorations. Networks can also be used as a means of talking with others about a finding or about an idea to be developed. Before you reach the last step of the analysis, several networks will probably have been drawn, redrawn, deleted and created anew. The aim is to integrate all the findings and to gain a coherent understanding of the phenomenon studied; or, if theory building was your aim, to visualize and to present a theoretical model.

The N-C-T method of computer-assisted analysis

The starting point for your data analysis should be the methodology you have chosen. Based on this, you need to ask yourself which function or tool in the software you need to use to advance your analysis. It is important that you always start with your research goals and do not let a software tool drive the analysis. Ask yourself step by step along the way what you want to achieve; think about which functions and tools in the software can help you to achieve it. This is the process of translating your methodology into executable steps in ATLAS.ti (see also Woolf, 2014; Woolf and Silver, 2017). The N-C-T method described below is a generic approach to analysis. I recommend making it the core of your computer-assisted analysis, varied and adapted according to the chosen methodological approach. This will ensure that you make the best use of the tools that ATLAS.ti offers.

The N-C-T model of computer-assisted analysis consists of three components: N stands for Noticing, C for Collecting and T for Thinking.

Figure 1: The N-C-T model as navigation tool to guide you through the process of qualitative data analysis (Friese, 2019)

Noticing refers to the process of finding interesting things in the data when reading through transcripts, field notes, documents, reports, newspaper articles, etc.; or when viewing video material or images; or when listening to audio files. In order to capture these, the researcher may write down notes, mark the segments or attach preliminary codes. Codes may be derived inductively or deductively. At this point, the level of a code does not play a role. Codes may be descriptive or already conceptual. The important point is to mark those things that are interesting in the data and to name them.

Collecting: Reading further, you will very likely notice a few things which are similar to some you have noticed before. They may even fit under the same code name. If a similar issue does not quite fit under the same heading as the first issue you noticed, you can simply rename the code to subsume the two. Even if the term is not yet the perfect code label, it does not matter. It will be easier over time when you continue to collect more similar data segments to think of a better and more fitting label.

Thinking about things: We need to think when noticing things, when coming up with good names for codes, or when developing categories and subcategories. We need to do some more thinking when it comes to finding patterns and relations in the data. This mostly takes place after coding when asking, ‘How do the various parts of the puzzle fit together? How can we integrate the various aspects of the findings in order to develop a comprehensive picture of the phenomenon studied?’ A lot of the thinking then happens while writing memos.

Figure 2: The recursive process of noticing, collecting and thinking (Friese, 2019)

Figure 2 shows that noticing, collecting and thinking go hand in hand, back and forth. It is not a straight path through your data. You will rework your codes and the code system several times, and you may come to the realization that you need more data even in the second analysis phase. While querying your data, you may still be adapting codings. With every cycle you go through, the changes will become smaller and you will understand your data better, until you are satisfied with the insights you have gained. Overall, I can assure you, it is a rewarding process. Figure 3 summarizes the process of computer-assisted N-C-T analysis.

Figure 3: The process of computer-assisted qualitative data analysis (Friese, 2019).

References and further reading

Bazeley, Pat (2013). Qualitative Data Analysis: Practical Strategies. London: Sage.

Bogdan, Robert C. and Biklen, Sari Knopp (2007). Qualitative Research for Education: An Introduction to Theories and Methods (5th ed.). Boston, MA: Pearson Education.

Bong, Sharon A. (2002, May). Debunking myths in qualitative data analysis. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 3(2).

Bourdon, Sylvain (2002, May). The integration of qualitative data analysis software in research strategies: Resistances and possibilities [30 paragraphs]. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 3(2), Art. 11.

Charmaz, Kathy (2002). Qualitative interviewing and grounded theory analysis, in Jaber F. Gubrium and James A. Holstein (eds), Handbook of Interview Research: Context & Method. Thousand Oaks, CA: Sage. pp. 675-84.

Charmaz, Kathy (2006/2014). Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London: Sage.

Corbin, Juliet and Strauss, Anselm (2008/2015). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (3rd and 4th ed.). Thousand Oaks, CA: Sage.

Cortazzi, Martin (1993). Narrative Analysis. London: Falmer Press.

Dey, Ian (1993). Qualitative Data Analysis: A User-friendly Guide for Social Scientists. London: Routledge.

Elo, Satu and Kyngas, Helvi (2008). The qualitative content analysis process. Journal of Advanced Nursing, 62(1), 107-15.

Friese, Susanne, Soratto, Jacks and Pires, Denise (2018). Carrying out a computer-aided thematic content analysis with ATLAS.ti. MMG Working Paper 18-02.

Friese, Susanne (2019). Qualitative Data Analysis With ATLAS.ti (3rd ed.). London: Sage.

Friese, Susanne (2018). Computergestütztes Kodieren am Beispiel narrativer Interviews, in Christian Pentzold, Andreas Bischof and Nele Heise (Hrsg.) Praxis Grounded Theory. Theoriegenerierendes empirisches Forschen in medienbezogenen Lebenswelten. Ein Lehr- und Arbeitsbuch, S. 277-309. Wiesbaden: Springer VS.

Friese, Susanne (2016b). CAQDAS and grounded theory analysis. Working Papers WP 16-07, October 2016 (MMG Working Papers Print).

Friese, Susanne (2014). On methods and methodologies and other observations. In: Friese, Susanne and Ringmayr, Thomas (eds) ATLAS.ti User Conference 2013: Fostering Dialog on Qualitative Methods. Berlin: University Press, Technical University Berlin.

Gable, Robert K. and Wolf, Marian B. (1993). Instrument Development in the Affective Domain: Meaning Attitudes and Values in Corporate and School Settings (2nd ed.). Boston, MA: Kluwer Academic.

Gibbs, Graham (2007). Analysing Qualitative Data (Qualitative Research Kit). London: Sage.

Glaser, Barney G. (1978). Theoretical Sensitivity: Advances in the Methodology of Grounded Theory. Mill Valley, CA: Sociological Press.

Goleman, Daniel (1995). Emotional Intelligence. New York: Bantam Books.

Kelle, Udo (2004). Computer-assisted qualitative data analysis, in Clive Seale, Giampietro Gobo, Jaber F. Gubrium and David Silverman (eds), Qualitative Research Practice. London: Sage. pp. 473-89.

Rambaree, Komalsingh (2014). Three methods of qualitative data analysis using ATLAS.ti: ‘A posse ad esse’. In: Susanne Friese and Thomas Ringmayr (eds) ATLAS.ti User Conference 2013: Fostering Dialog on Qualitative Methods. Berlin: University Press, Technical University Berlin.

LeCompte, Margaret Diane and Preissle, Judith (1993). Ethnography and Qualitative Design in Educational Research (2nd ed.). San Diego: Academic Press.

Miles, Matthew B., Huberman, Michael and Saldaña, Johnny (2014). Qualitative Data Analysis (3rd ed.). Thousand Oaks, CA: Sage.

Prus, Robert C. (1996). Symbolic Interaction and Ethnographic Research: Intersubjectivity and the Study of Human Lived Experience. Albany, NY: SUNY Press.

Richards, Lyn and Morse, Janice M. (2013). Readme First: For a User’s Guide to Qualitative Methods (3rd ed.). Thousand Oaks, CA: Sage.

Richards, Lyn (2009). Handling Qualitative Data: A Practical Guide (2nd ed.). London: Sage.

Riessman, Catherine K. (2008). Narrative Methods for the Human Sciences. Thousand Oaks, CA: Sage.

Saldaña, Johnny (2009/2013/2015). The Coding Manual for Qualitative Researchers. London: Sage.

Silver, Christina and Lewins, Ann (2014). Using Software in Qualitative Research: A Step-by-step Guide, chapter 7. London: Sage.

Starks, Helene and Brown Trinidad, Susan (2007). Choose your method: A comparison of phenomenology, discourse analysis, and Grounded Theory. Qualitative Health Research, 17 (10), 1372-80.

Wolcott, Harry F. (1994). Transforming Qualitative Data: Description, Analysis, and Interpretation. Thousand Oaks, CA: Sage.

Woolf, Nick (2014). Analytic strategies and analytic tactics. In: Susanne Friese and Thomas Ringmayr (eds) ATLAS.ti User Conference 2013: Fostering Dialog on Qualitative Methods. Berlin: University Press, Technical University Berlin.

Woolf, Nick and Silver, Christina (2018). Qualitative Analysis Using ATLAS.ti: The Five-Level QDA® Method. New York: Routledge.
