Interview with Dr. Gerda Casimir

October 25, 2021

We had the great pleasure of interviewing Dr. Gerda Casimir, an Independent Certified ATLAS.ti Professional Junior Trainer based in the Netherlands, about her recent publication in Quality & Quantity.

Thank you for your time! Can you please tell our readers about yourself?

My name is Gerda Casimir. I studied home economics – also called domestic science or, in later programmes, consumer studies – at Wageningen University, the Netherlands. After a career in an institute of higher education, I returned to Wageningen for a PhD and was offered a job as an assistant professor in the chair group Sociology of Consumption and Households. Wageningen is originally an agricultural university, currently focusing on healthy food and the living environment, including forestry and the maritime world. Consequently, it has a long tradition of mixed-methods research, and our group was no exception. In our research we always used different methods: from direct measurement (for instance, installing water meters to register water use), through quantitative methods such as surveys and diaries, to qualitative research in the form of interviews, case studies, and focus group discussions.

I got involved in qualitative data analysis. One reason was that we wanted to support those analyses with computer software, and since I was an advanced computer user, thanks to my former job, I was asked to take part. That is how I came to know ATLAS.ti, and I introduced it at the university. I retired in 2015, but I am still involved once or twice a year in courses on qualitative data analysis, in particular the practicalities of ATLAS.ti.

My PhD research was partly based on qualitative methods: interviews and focus group discussions on the changes in the division of household labour when people start working from home. A decade later, I analysed newspaper clippings from the preceding ten years addressing the same topic. Furthermore, I published on consumer and household behaviour, often related to gender aspects, and I supervised students at different stages of their studies and in different research areas.

Many types of qualitative data were addressed in those projects: interviews, case studies, and focus group discussions, but also scientific articles (for literature reviews) and newspaper clippings, plus the answers to open questions in a larger survey. For all these analyses, I, or my students, used ATLAS.ti, with the exception of my PhD research, since we did not yet have it then (1995-2001).

Congratulations on your recent publication in Quality & Quantity! Can you please tell us more about it?

In the courses and in some of the publications, I co-operated with colleagues from the research methodology group. We teach students that qualitative research is as valuable as quantitative research, and that you can at least try to treat qualitative data as systematically and transparently as possible. A clear separation of the (first) coding phase from the data analysis phase is one strategy to achieve transparency. However, we observed a lack of good textbooks and articles in this respect. In contrast to quantitative research, there are few guidelines for presenting qualitative analysis within mixed-methods and interdisciplinary research, and qualitative researchers do not show the same kind of consensus about data analysis as quantitative researchers do.

These observations were the reason we wrote this article. The emphasis was on methods of analysis and the corresponding presentation of results. To demonstrate our intentions with a concrete example, we used transcripts of interviews with international graduate students (masters and PhD). The content was about the use of ICT to maintain ties with household members back home, and the meaning of 'household' as experienced by those students.

How did ATLAS.ti help?

We used ATLAS.ti both for the literature review and to analyse the interviews. For the interviews, we elaborated four different methods of analysis: content analysis, domain analysis, metaphor analysis, and membership categorisation analysis. For each method, we distinguished a coding phase and an analysis phase.

For the content analysis, we applied a top-down coding strategy, using a coding scheme derived from an earlier article. The analysis consisted of an overview of the frequencies of the codes allotted, which we derived from a code-document table.
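
ATLAS.ti generates this table for you; conceptually, it is a cross-tabulation of codes against documents. As a minimal sketch of that same computation in Python (the document and code names are invented for illustration, not taken from the study):

```python
import pandas as pd

# Hypothetical codings: one row per (document, code) assignment.
codings = pd.DataFrame({
    "document": ["interview_01", "interview_01", "interview_02",
                 "interview_02", "interview_03"],
    "code": ["ties_family", "ict_video", "ties_family",
             "ict_chat", "ict_video"],
})

# A code-document table is, in essence, a cross-tabulation:
# rows are codes, columns are documents, cells count codings.
code_document_table = pd.crosstab(codings["code"], codings["document"])
print(code_document_table)
print(code_document_table.sum(axis=1))  # total frequency per code
```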

The other three methods called for bottom-up coding. There, the analysis consisted of categorising the codes into larger concepts. We used both the group function and the network function.

The metaphor analysis did not yield enough metaphors to categorise them. Therefore, we decided to present the results as a list of the metaphors found, using the report function of the code manager and indicating that we wanted all quotations coded with a metaphor in the output.

The domain analysis consisted of two parts: the meaning of the concept 'household', and the content of communication with household members. In both cases, the first coding phase resulted in low-level codes, so-called 'folk terms': expressions used by the interviewees. Many of those codes were generated with in-vivo coding, converting text into codes, though we often adjusted spelling and/or word order to obtain codes that could be applied to similar situations in the rest of the transcripts. In the analysis phase, we related the codes to cover terms, which themselves were part of a higher-level concept. The concept 'household' resulted in a network, a bitmap export of which we included in the article. For the content of communication, we also used the network function; here we added the quotations connected to the codes to the network.
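
The structure behind such a network is essentially a small hierarchy: folk terms link to cover terms, which link to the higher-level concept. A rough sketch of that structure outside ATLAS.ti, using Python's networkx library and illustrative stand-in terms rather than the actual ones from the article:

```python
import networkx as nx

# Hypothetical fragment of a domain-analysis hierarchy as a directed
# graph: folk terms -> cover terms -> higher-level concept.
g = nx.DiGraph()
g.add_edge("people I live with", "co-residence")    # folk term -> cover term
g.add_edge("my parents back home", "family ties")   # folk term -> cover term
g.add_edge("co-residence", "household")             # cover term -> concept
g.add_edge("family ties", "household")              # cover term -> concept

# The folk terms are the leaves: nodes nothing else points to.
folk_terms = [n for n in g.nodes if g.in_degree(n) == 0]
print(folk_terms)
```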

The Membership Categorisation Analysis consisted of a list of criteria (Membership Categorisation Devices) by which individuals classified others as being part of their household. We presented this list in a table, where the first column contained the Devices: being family, sharing, emotional criteria, and cultural criteria. The second column contained the terms used by the interviewees, i.e. the initial codes; examples are having blood ties, sharing a roof, feeling responsible, or 'in my country…'. In a third column, we added explanatory remarks, such as: "Being present is a criterion to include someone; being absent is not necessarily a criterion to exclude them."
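
The logic of the table's first two columns boils down to a mapping from initial codes to Devices. A minimal sketch in Python using the examples above (a simplification of the grouping we did in ATLAS.ti, not the full coding scheme):

```python
# Each initial code (column two) maps to a Membership Categorisation
# Device (column one); remarks (column three) are omitted here.
device_of = {
    "having blood ties": "being family",
    "sharing a roof": "sharing",
    "feeling responsible": "emotional criteria",
    "'in my country...'": "cultural criteria",
}

# Group the initial codes by Device, as in the table's first two columns:
by_device = {}
for code, device in device_of.items():
    by_device.setdefault(device, []).append(code)

for device, codes in by_device.items():
    print(device, "->", ", ".join(codes))
```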

It is not possible to single out a favourite or most useful feature of ATLAS.ti. What I do like is the flexibility of the program: it does not assume a particular approach. The concepts behind the program are clear, and it is easy to learn. It is very useful for both top-down and bottom-up coding, where the latter can be in-vivo coding as well as coding by the researcher. The network function is handy and beautiful, and its interactivity with the data is a real plus.

Students often find the distinction between code relations in a network and code groups – first-class and second-class links – confusing. Once it is clear, you can play with it. For instance, importing a group into a network and then creating links between the members of that group and a higher-level concept works fast and flawlessly.

I very much like the code co-occurrence and code-document tables. For this particular article, we did not make use of the former, but I used it extensively in my newspaper-clippings analysis. For that article, I also applied the auto-coding function. That was not only time-saving, it also ensured I did not overlook things. Using code groups and document groups, I could present overall graphics with the occurrences of certain concepts per year and per type of publication medium.
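
A code co-occurrence table counts how often codes appear together. As a simplified, hedged sketch in Python (it counts only codes applied to the very same quotation, whereas ATLAS.ti also considers overlapping quotations; the code names are invented):

```python
import itertools
from collections import Counter

# Hypothetical quotations, each with the set of codes applied to it.
quotations = [
    {"ict_video", "ties_family"},
    {"ict_chat", "ties_family"},
    {"ict_video", "ties_friends", "ties_family"},
]

# Count how often each pair of codes occurs on the same quotation:
pairs = Counter(
    tuple(sorted(pair))
    for q in quotations
    for pair in itertools.combinations(q, 2)
)
for (a, b), n in pairs.most_common():
    print(f"{a} x {b}: {n}")
```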

In a systematic literature review, the query function proved indispensable: which concepts are addressed in the theoretical framework or the data collection section, and which in the results section? Or in two or three of these sections?
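
Such queries are, at bottom, set operations over the codes found in each section. A hedged sketch in Python with invented concept names, not the actual codes from the review:

```python
# Hypothetical coding of one reviewed article: which concepts were
# coded in which section of the paper.
sections = {
    "theoretical_framework": {"household", "ICT use"},
    "data_collection": {"household"},
    "results": {"household", "ICT use", "gender"},
}

# Concepts addressed in the theory or data-collection sections
# that also reappear in the results section:
early = sections["theoretical_framework"] | sections["data_collection"]
print(early & sections["results"])  # followed through to results
print(sections["results"] - early)  # appearing only in results
```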

Is there anything else you would like to add?

Students often ask how many interviews it takes to make using the program worthwhile – in other words, when does the return outweigh the time needed to learn the program? For me, the answer is easy: I can always use it, no matter how small or large my dataset is. But that is because I know the program and like to make use of its many features.


Thank you, Dr. Gerda Casimir!

You can contact Dr. Gerda Casimir by writing to [email protected]
