Newsletter 2013-1


Welcome to the first issue of INSIDE ATLAS.ti this year.

We would like to start this year with an invitation: This September 12-14, join us in Berlin and take part in the first ATLAS.ti User Conference: Fostering Dialog on Qualitative Methods.

What initially began as a Train-the-Trainers workshop has now grown into plans for a full-blown conference. We are very excited about this great opportunity to meet researchers from all over the world and from a variety of fields—after all, there is virtually no academic discipline in which ATLAS.ti is not put to productive use.

In this issue, you'll find the Call for Papers and some preliminary information about the conference. We are planning sections on teaching qualitative methods, using ATLAS.ti in the context of various methodologies (ethnography, discourse analysis, phenomenology, grounded theory, etc.), and on the use of ATLAS.ti in actual research projects. We are also soliciting topic ideas for round tables. Your input and your proposals will be highly appreciated and will be crucial to making this event a success. We hope to hear from you, and we hope to see you in Berlin later this year!

Happy coding!


ATLAS.ti User Conference 2013 – Call for Papers

Deadlines

We are accepting abstract submissions for three categories of presentations: workshops, individual papers, and roundtable presentations. Abstracts will be accepted between March 1st and April 19th 2013 at 6:00 pm CET. Information on registration will be provided on the conference website starting on March 1st.

All submitted abstracts will be evaluated with regard to their suitability for the conference. The contact author of a submitted abstract will be notified of acceptance or rejection by Friday, May 17, 2013. It is mandatory that authors of accepted presentations attend the ATLAS.ti conference to present their work.

Topics and Proposals

You are invited to submit abstracts in the following categories:

  • Workshops: two-hour sessions.
  • Individual papers: 20-minute presentations.
  • Roundtable presentations: 15-minute presentations.

Workshops

Workshop proposals can be submitted around the following topics:

  • Specific applications of ATLAS.ti, such as literature reviews, geo-coding, multi-media analysis, concept mapping, and survey analysis.
  • Using ATLAS.ti in association with other tools, such as geographic information system (GIS) software, statistical analysis software, and social network analysis software.
  • Using ATLAS.ti in the context of specific methodologies, such as photo voice, community asset mapping, participatory research, ethnography, discourse analysis, and others.

Individual Papers

Proposals for individual papers can be submitted on any of the following broad topics:

  • Teaching qualitative methods - Are you teaching a qualitative methods class that includes the use of ATLAS.ti as an analysis tool? We invite you to share your experience, the dos and don'ts, the successes and failures, and anecdotes and outcomes.
  • Multidisciplinary use of ATLAS.ti - ATLAS.ti users come from a multiplicity of academic disciplines, use the software to research a variety of subject-matters, and approach their analysis guided by different methodological traditions within the qualitative paradigm. What are you studying with the aid of ATLAS.ti? What methodologies are guiding your research? How does your disciplinary background shape the way you approach data analysis with ATLAS.ti? Tell us about your experience.
  • Doing research with ATLAS.ti - We invite presentations that focus on the use of ATLAS.ti throughout the research process. We would like presenters to discuss how research questions, hypotheses, and theories inform analysis, how ATLAS.ti users approach the literature review and how they integrate it into the data analysis process, as well as how they triangulate different methods of data collection in the context of an analysis project. Further, we are interested in how you ensured quality criteria and how you reported and presented your findings.

Roundtable Presentations

This is the "joker" category – you are free to suggest any topic for a round table that relates to ATLAS.ti and that you think interests other people as well. Surprise us with a good idea!

Abstract Submission

Please use the submission form to prepare and submit your proposal:

http://www.emailmeform.com/builder/form/s66xQ9cdFw00dVj03a

Conference Web Site

To learn more about the conference, including speakers, session formats, venue, and registration, visit the conference website:

http://conference.atlasti.com

This web site will be updated continuously with news and scheduling information as it becomes available.


Best Practice:
How to save your ATLAS.ti project

This article aims to clear up some of the prevailing misconceptions about how best to save and back up your ATLAS.ti projects, and to provide some easy-to-follow instructions.

First of all, remember that your ATLAS.ti project consists of your HU file AND your documents as separate entities. This is still the case in version 7. The HU file does not “contain” your documents; it only contains the work you do on your documents, such as the comments you write, all codes and coded segments you create, the quotations, the memos, the families, networks, and all relations.

The only exception to this rule is if you work exclusively with internal documents, for example if you have imported survey data and added no other documents, or if you work with internal text documents only. In all other cases, the HU file contains references to your primary document files, which are stored either in one of the libraries or externally via links. Even if stored in a library, these documents remain external in the sense that they are still separate entities, i.e., they never become a physical part of the HU file.

Thus, if you only make sure that your HU file is safe, you are saving only one part of your project. In case of a computer crash or other disaster, the HU file by itself will be of little use as a backup. Likewise, if you want to work on a different computer, it is not sufficient to take only your HU file along. You need to take both: the HU file AND your documents.

The only way to safely back up your HU file and your documents together is via a Copy Bundle file. (Exception: If you really work with internal documents exclusively, the HU file is indeed sufficient, see above.)
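
To make the "two parts" idea concrete, here is a minimal sketch in Python of what bundling means conceptually. The file names are hypothetical, and this is of course not ATLAS.ti's actual Copy Bundle format; it merely illustrates that a complete backup packs the HU file and all referenced documents into a single file:

    import zipfile
    from pathlib import Path

    # Conceptual illustration only -- NOT ATLAS.ti's actual Copy Bundle format.
    def bundle_project(hu_file, documents, bundle_path):
        """Pack the HU file AND all referenced documents into one archive."""
        with zipfile.ZipFile(bundle_path, "w") as zf:
            # The HU file alone only holds your coding work ...
            zf.write(hu_file, arcname=Path(hu_file).name)
            # ... the documents it references must travel with it.
            for doc in documents:
                zf.write(doc, arcname="documents/" + Path(doc).name)

    # Hypothetical example:
    # bundle_project("MyProject.hpr7",
    #                ["interview1.rtf", "interview2.rtf"],
    #                "MyProject_backup.zip")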

Here is what you need to do in order to fully save your ATLAS.ti project:

Select Project / Save (or Save As...)

Select Project / Save Copy Bundle

The following window opens:

Click the Create Bundle button.

If you want to exclude documents from the bundle, you can either uncheck individual documents, or set a PD family as a global filter first and then select the option "Apply current PD filter."

Copy bundle files can also be used to preserve a project at various stages, e.g., the open coding phase, or the first-cycle or second-cycle coding phase, etc.

If you work with externally linked documents – i.e., you have used the option Documents / New / Assign External Documents – you can still move the HU file and your documents manually. You may want to consider this option if you work with video files and the size of all files together inflates the copy bundle file (> 2 GB).

Unpacking The Copy Bundle File

When To

You need to create and unpack a copy bundle file:

  • When transferring a project to a different computer for the first time.
  • When you work in a team and you want to provide your team members with a copy of the Master project. The team members need to unbundle the file.
  • When you have added new data sources to a project and want to continue working on the second computer (again).
  • When you have edited your data sources and want to continue working on the second computer (again).
  • When you have lost your project and want to restore it from a copy bundle backup.

How To

This is how you unbundle a copy bundle file:

Either double-click on the file in Windows Explorer, or open ATLAS.ti first and select Project / Unpack Copy Bundle.

Select a copy bundle file to unpack and the Unpack Copy Bundle window opens.

There is one "little" issue that often causes confusion – the color of the box behind the field HU Path:

By default, ATLAS.ti lists as HU Path the location where the HU file was stored when the copy bundle file was created. In the figure above, it is a path under my user name, Susanne. If you were to open the bundle on your computer, the colored box at the end of the line would probably be red, indicating that the path cannot be created. This is also the case if the copy bundle file was created with the HU stored on a server and you now want to unpack it to your local drive.

The simple solution is to click on the file loader icon and select a location that does exist on your computer. The box then turns green and, voila!, you can click on the Unbundle button and all is well 🙂

The box remains yellow if the HU file already exists in the selected folder. In Migrate mode, the HU file will be overwritten only if the copy bundle file includes a newer version of the HU. In Restore mode, it will be overwritten regardless.

One more detail

Now, why does it say: “X number of documents will be excluded as identical files exist”?

The column "Target Location" indicates where the documents will be stored when unpacking the file. <Local Managed> means that they will be upacked into the library.

Other possible locations for externally linked files are <HUPATH>, <TBPATH>, or an absolute path reference.

If the documents already exist at this location, there is no need to unpack them. This is why you see the message that 0 documents will be unbundled, or that a total of x documents will be excluded.

There is nothing to worry about if you see this message!
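
The decision ATLAS.ti makes for each bundled document is simple in principle. The following Python sketch is purely illustrative and not ATLAS.ti's implementation; a real check would also verify that the files are truly identical, not merely that a file of the same name exists:

    from pathlib import Path

    # Illustrative sketch of the skip logic -- not ATLAS.ti's implementation.
    def plan_unpack(bundled_docs, target_dir):
        to_unpack, excluded = [], []
        for doc in bundled_docs:
            if (Path(target_dir) / doc).exists():
                excluded.append(doc)   # identical file already present -> skip
            else:
                to_unpack.append(doc)  # missing -> will be unbundled
        return to_unpack, excluded

    # plan_unpack(["interview1.rtf"], "C:/Users/Susanne/Library")
    # -> ([], ["interview1.rtf"]) if the file is already there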


IIQM Dissertation Award
Winning thesis of the 2011/2012 competition

The International Institute for Qualitative Methodology (IIQM) at the University of Alberta offers an annual award to the best master's-level dissertation and the best PhD-level dissertation, from any academic discipline, containing research based on qualitative methodology. This award is sponsored by ATLAS.ti. We would like to congratulate last year's winner, Justin Page, and introduce his dissertation: Power, science and nature in the Great Bear Rainforest: an actor-network analysis of an integrated natural resource management project.

Justin Page’s research focuses on the relationship between society and natural resources, spanning studies of forestry, aquaculture, mining and resource-dependent communities. He earned his PhD in Sociology from the University of British Columbia, where he drew on environmental sociology and science studies to explain the creation of the Great Bear Rainforest. Post-doctoral positions have focused on the public acceptability of environmental remediation and the resilience of coastal BC communities. Justin now works as a social scientist at a Vancouver-based environmental consulting firm.

Justin Page about his work with ATLAS.ti:

“I have used ATLAS.ti extensively in a number of projects, including a study of resilience in resource-dependent communities, public acceptability of waste treatment technologies in the mining sector, and environmentalists' establishment of the Great Bear Rainforest. I find ATLAS.ti's coding feature to be essential when working with qualitative data. Coding enables me to manage large amounts of data by teasing it apart into discrete categories, which I then subject to conceptual analysis. As my work is informed by a theoretical style of research called actor-network theory, I find ATLAS.ti's network view to be a particularly useful tool. This tool enables me to graphically represent relationships among different entities that have come together to form a network. Wherever I am working within ATLAS.ti - whether selecting individual quotations, coding, or reflecting on the coded material - I write extensive memos. I find the memo function to be integral to qualitative analysis as, in my experience, analysis takes place through the process of writing.”

Dissertation Abstract

This dissertation explores the potential contribution of actor-network theory to the investigation of power and hierarchy, science and politics, and the relationship between nature and society in integrated natural resource management (INRM) projects. INRM consists of natural resource management approaches that seek to devolve power and authority from governments and experts to stakeholders, take account of people as part of ecosystems, and directly link conservation and development. While INRM projects represent an important evolution in resource management, they come with particular sets of problems. Specifically, (1) the devolution of decision-making authority to communities provokes issues of power and hierarchy as groups vie to ensure that their interests are adequately taken into account, (2) critiques of expert-led processes shift responsibility for knowledge production to stakeholder groups, thus raising questions about the relationship between science and politics, and (3) attempts to link ecology and economy require a difficult re-conceptualization of the link between nature and society. Actor-network theory (ANT) avoids presuppositions about power, science, nature, and society in order to study how they are produced as effects of networks, thus offering unique conceptual tools to study INRM as a complex, contingent, and innovative network-building process.

A qualitative case study of the “Great Bear Rainforest” agreement on British Columbia’s west coast is undertaken to explore these issues in INRM. Analysis of interviews with 34 individuals from environmental organizations, forestry companies, First Nations, consultancies and local and provincial governments, as well as analysis of textual material, reveals how environmentalists (1) generated power by building a network of activists, bears, forest products customers and forestry companies, (2) simultaneously deployed science and politics in their network-building activities and (3) moved away from attempts to purify networks into “nature” and “society,” working instead to directly link ecosystem integrity and human well-being in a new, common “collective” of humans and nonhumans.

The research provides significant detail and analysis of a particular case of INRM that will be of use to INRM practitioners, advocates and activists. Additionally, the research demonstrates the applicability of ANT to the investigation of power, science, and nature in INRM projects.

Application of ATLAS.ti

I used the qualitative analysis software ATLAS.ti to code the interviews. Since I did not come to the field with predefined theories or conceptual frameworks, I chose an “open coding” process (Strauss et al., 1990) in which I read the interviews and developed codes from the text as I went along. Codes were primarily organizational and descriptive, with primary codes including: actors, agreements, EBM, ecology, economy, governance, groups, ideas, knowledge, law, land use planning, people/nature, relationships, and “vision.” Each code included sub-codes. For example, “ecology” encompassed: conservation management, ecological integrity, old growth, operating areas, protection, and risk. Some of these sub-codes were further subdivided. For example, “protection” encompassed: conservancies, moratoria, and precaution. With the large number of codes, sub-codes, sub-sub codes, and so on, I developed over 260 codes in total. However, there are only 15 first order organizational and substantive codes. One first order code – “ANTcodes” – functioned as a theory code, seeing as it included 15 concepts related to ANT. Since the purpose of the coding exercise was not to rearrange the data into categories in order to facilitate comparisons between things in the same category but to group information into descriptive categories so as to learn how elements are connected, a finer degree of precision and a larger-than-normal number of codes is warranted. 3185 codings were applied to 1885 quotations.

I used ATLAS.ti’s “network view” function to visually arrange and connect the descriptive codes with one another. This function is primarily designed as a theory-building device, in which codes (representing concepts) are connected with one another. However, as my codes are primarily descriptive, the linking function allows me to visually represent elements that are linked in the field. These network views complemented my sketches of relationships, allowing me to trace the formation of the GBR network and, in ATLAS.ti, to link that formation to quotations that talked about it. Finally, I also engaged in writing as I coded. First I attached comments to quotations. The comments contextualized the quote and highlighted issues that I believed to be important about the quote. Second, I wrote five (self-defined) categories of memos: application, commentary, method, queries, and theory. In the first category, I wrote memos that applied ANT concepts to the empirical material. The “commentary” memos commented on particular aspects of the case, such as the market campaign and protected areas.

“Method” memos focused on issues like the work plan, things I needed to do, thoughts about how I might structure the report, reflections on the coding process, and so forth. “Queries” included memos on questions that I had about the material and case, and things that I needed further information on. Finally, “theory” memos reflected on ANT concepts, often with the empirical material serving as a source of prompts and examples. As for the memos written during the collection of textual and audio-visual material, they will also serve as sources of data.

The full dissertation can be downloaded from https://circle.ubc.ca/handle/2429/28974


New Analysis Tools

Code Cooccurrence Explorer

The following example analysis follows up on an article published in the July 2012 issue of INSIDE ATLAS.ti. As before, the example is based on the ATLAS.ti 7 sample project "Children & Happiness stage II," which can be accessed via the Help menu.

Open the Memo Manager by clicking on the Memo button. You will find a description of the sample project in the first memo, along with a few research question memos. This time we want to take a look at RQ 1 again, but from a different perspective. The question is: Is happiness defined differently by those who have children as compared to those who do not have children?

We first examined this question in the July newsletter using the network view function. Below, I explain how this question can also be explored using the Code Cooccurrence Table.

Open the Code Manager.

In the side panel for Families, you find a code family with the name "def happiness + attribute codes".

Right-click and select Set global filter.

If you have successfully activated a global filter, the family icon changes to a filter and the name turns dark red:

Select the Code Cooccurrence Table option from the Analysis menu.

Add the two attribute codes (families who don't have children and families who have children) to the columns of the table:

Select the "def happiness" codes to be displayed in the rows of the table:

This results in the following table:

The first number in the cell shows the frequency of cooccurrence. The second number is the c-coefficient, which is similar to a correlation coefficient. However, since the data set is small, this coefficient is not meaningful here, and it is best to deactivate it. Read more about the c-coefficient and when it makes sense to use it in the program manual.

To deactivate the c-coefficient, click on the button with the letter C and the two red triangles that make it look like a C wearing a bow-tie.
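
For the curious: as described in the manual, the c-coefficient relates the cooccurrence frequency of two codes (n12) to their individual frequencies (n1, n2). A quick sketch:

    def c_coefficient(n12, n1, n2):
        """c = n12 / (n1 + n2 - n12), ranging from 0 (the codes never
        cooccur) to 1 (they always cooccur)."""
        return n12 / (n1 + n2 - n12)

    # Example: two codes used 10 and 9 times that cooccur 9 times:
    # c_coefficient(9, 10, 9) -> 9 / (10 + 9 - 9) = 0.9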

Further display options can be set via the colored circle and the settings buttons:

My favorite setting is to use no color for the cells (none) and to use the code colors as header background:

You can also adjust the default width for column and row headers. With the options described above, the table is displayed as follows:

Clicking on a cell opens the list of cooccurring quotations. Click on a quotation to view it in the context of your data.

Note: For an explanation of the number of quotations that are shown when you click on a cell, please read the detailed explanation in the manual.

Reading the table

The numbers in the table are exploratory in character. In most cases they cannot be interpreted directly. A high frequency may simply be the result of one person talking a lot about one issue. You always need to inspect the data behind the numbers.

What the table provides is a quick overview of where there might be something interesting in the data. To achieve the same result using the query tool, it would have been necessary to click through 10 (2 x 5) queries, three of which would have returned no results.

What we see from the table is that fewer non-parents in the sample provide a definition of happiness at all. It is an issue parents mention more often, and their definitions include a greater variety of aspects. We do, however, need to keep in mind that the sample contains far fewer non-parents than parents, and thus a higher occurrence of parent responses is to be expected.

In inspecting the table, your task is to spot where there might be something interesting in the data, e.g., the nine responses from parents and the one from the non-parent about happiness meaning fulfillment. In the Code Cooccurrence Table you can click through the quotations and view them in context, but you cannot output them as text. Therefore the next logical step is to open the query tool. Now we have a specific question in mind and we know what we are looking for.

In the next issue we will explain how to find an answer to this more specific question and explore the Query Tool in some detail.

If you want to export the table to Excel, click on the Excel button in the toolbar of the Code Cooccurrence Table.


The Word Cruncher Reloaded

With the update to version 7.0.85, the Word Cruncher has received a new look, and the various options related to counting words in your primary documents are now integrated much more smoothly. This allowed us to streamline the menu options, i.e., some menu options are gone altogether and others have changed. Under the Analysis menu, you now only find the entry Word Cruncher. The word cloud option has been integrated into the Word Cruncher tool.

More Options Around Word Clouds

For easy access, the word cloud option is now included in the context menu for primary documents. Open the primary documents manager and right click on a document:

In addition to removing words from the cloud view, you can immediately add them to the stop (exception) list. Another new option is to sort words by length.

If you are curious how often a word occurs in your text, just hover over the word with your mouse:

Additional Options Related To Word Counts

Let's now take a look at the Word Cruncher Tool:

The Excel table now includes relative counts for each PD. This option was previously only available in the built-in tool. It is also possible to get aggregated counts, for instance for all documents of a PD family.
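
Conceptually, a relative count is just a word's absolute count divided by the total number of words in that document. A minimal sketch of the idea in Python (naive tokenization, for illustration only, not the Word Cruncher's implementation):

    from collections import Counter

    def relative_counts(text):
        """Absolute and relative word counts for one document."""
        words = text.lower().split()   # naive tokenization, for illustration
        counts = Counter(words)
        total = sum(counts.values())
        return {word: (n, n / total) for word, n in counts.items()}

    # relative_counts("the cat sat on the mat")["the"]  ->  (2, 0.333...)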

The Word lengths option lets you sort all words by length (in Excel) and delete very short words (e.g., words up to 3 characters).

If you are interested in an aggregated count for a document group:

Create an appropriate PD family (if it does not yet exist).

Set this PD family as global filter (right-click on the family in the side panel of the PDocs Manager).

Open the Word Cruncher tool.

Deactivate the option "Separate counts for each PD".

Improved Word Boundary Processing

The new Word Cruncher uses a better procedure to recognize word boundaries. Higher accuracy also means that more checks are performed, which in turn may slow down the process. If the performance of the old Word Cruncher was sufficient for your needs and you are happy with the way it recognizes word boundaries, activate the option "Use legacy word recognition".
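
Why word-boundary recognition matters can be seen by comparing a naive whitespace split with a boundary-aware tokenization, as in this illustrative sketch (not ATLAS.ti's actual procedure):

    import re

    text = "Well-being matters; so does one's peace of mind."

    naive = text.split()
    # ['Well-being', 'matters;', 'so', 'does', "one's", 'peace', 'of', 'mind.']
    # -> punctuation sticks to the words

    better = re.findall(r"[A-Za-z]+(?:['-][A-Za-z]+)*", text)
    # ['Well-being', 'matters', 'so', 'does', "one's", 'peace', 'of', 'mind']
    # -> words are recognized cleanly, at the cost of more work per character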

Stop- and Go-Lists

The "stop-list" is no longer just a list to exclude words from being counted. You can also use it as go-list. This means only words that are included in the list are counted.

The new Edit List button provides immediate access to the list. You may want to work with dictionaries that include only special terms (e.g., psychological dictionaries). You can copy and paste the terms into the list and count only those words in your data.
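
In effect, the two modes differ only in whether the list excludes or includes words. A small sketch of the idea (illustration only):

    from collections import Counter

    def count_words(words, word_list, mode="stop"):
        """Use word_list as a stop-list (exclude listed words) or as a
        go-list (count ONLY the listed words)."""
        if mode == "stop":
            kept = [w for w in words if w not in word_list]
        else:  # mode == "go"
            kept = [w for w in words if w in word_list]
        return Counter(kept)

    words = "anxiety and stress and coping".split()
    # Stop-list: drop "and"           -> anxiety:1, stress:1, coping:1
    print(count_words(words, {"and"}, mode="stop"))
    # Go-list: count only listed terms -> anxiety:1, stress:1
    print(count_words(words, {"anxiety", "stress"}, mode="go"))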