Inter-coder Agreement (ICA)

You can test inter-coder agreement in text, audio, and video documents. Image documents are not supported; the same applies to image quotations in text PDF documents.

As you need to start from a common Master project when setting up a project for ICA analysis, you cannot check for inter-coder agreement if one of the coders is using the Web version. All coders need to use the desktop version (Mac or Windows). For a more detailed explanation, see Merging Projects.

Why It Matters

The purpose of collecting and analyzing data is to find answers to the research questions that motivated the study in the first place. The data are thus the trusted ground for any reasoning about and discussion of the results. Researchers should therefore be confident that their data have been generated with precautions taken against distortions and biases, intentional or accidental, and that the data mean the same thing to anyone who uses them. Reliability grounds this confidence empirically (Krippendorff, 2004).

Richards (2009) wrote: "But being reliable (to use the adjective) beats being unreliable. If a category is used in different ways, you will be unable to rely on it to bring you all the relevant data. Hence, you may wish to ensure that you yourself are reliably interpreting a code the same way across time, or that you can rely on your colleagues to use it in the same way" (p. 108).

There are two ways to rationalize reliability. One is rooted in measurement theory and is less relevant for the type of data that ATLAS.ti users typically work with. The second is an interpretivist conception of reliability. When collecting interview data or observations, the phenomenon of interest usually disappears right after it has been recorded or observed. The analyst's ability to examine the phenomenon therefore relies heavily on a consensual reading and use of the data that represent it. Researchers need to presume that their data can be trusted to mean the same thing to all of their users.

This means "that the reading of textual data as well as of the research results is replicable elsewhere, that researchers demonstrably agree on what they are talking about. Here, then, reliability is the degree in which members of a designated community agree on the readings, interpretations, responses to, or uses of given texts or data. [...] Researchers need to demonstrate the trustworthiness of their data by measuring their reliability" (Krippendorff, 2004, p. 212).

Testing the reliability of the data is a first step. Only after establishing that reliability is sufficiently high does it make sense to proceed with the analysis of the data. If there is considerable doubt about what the data mean, it will be difficult to justify the subsequent analysis and its results.

ATLAS.ti's inter-coder agreement tool lets you assess the extent to which multiple coders agree when coding a given body of data. In developing the tool we worked closely with Prof. Klaus Krippendorff, one of the leading experts in this field, author of the book Content Analysis: An Introduction to Its Methodology, and the originator of Krippendorff's alpha, a coefficient for measuring inter-coder agreement.
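
To give a feel for what such a coefficient expresses, the following Python sketch computes Krippendorff's alpha for nominal data in its simplest form, alpha = 1 - observed disagreement / expected disagreement. It is only an illustration of the standard formula: the function, the toy data set, and the "yes"/"no" values are made up for this example and do not reflect how ATLAS.ti represents or calculates anything internally.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        # `units` is a list with one entry per coded unit; each entry lists
        # the values assigned by the coders who rated that unit.
        coincidences = Counter()
        for values in units:
            m = len(values)
            if m < 2:
                continue  # units rated by fewer than two coders are not pairable
            for a, b in permutations(values, 2):
                coincidences[(a, b)] += 1.0 / (m - 1)

        n = sum(coincidences.values())  # total number of pairable values
        categories = {c for pair in coincidences for c in pair}
        totals = {c: sum(v for (a, _), v in coincidences.items() if a == c)
                  for c in categories}

        # Observed disagreement: proportion of mismatched value pairs.
        d_observed = sum(v for (a, b), v in coincidences.items() if a != b) / n
        # Expected disagreement if the same values were paired by chance.
        d_expected = sum(totals[a] * totals[b]
                         for a in categories for b in categories if a != b) / (n * (n - 1))
        if d_expected == 0:
            return 1.0  # all coders used a single value; no disagreement possible
        return 1.0 - d_observed / d_expected

    # Two coders judging whether a code applies to five quotations.
    decisions = [["yes", "yes"], ["no", "no"], ["yes", "no"],
                 ["no", "no"], ["yes", "yes"]]
    print(round(krippendorff_alpha_nominal(decisions), 2))  # 0.64

In this toy example the coders agree on four of five quotations, but alpha (0.64) is lower than the raw 80% agreement because the coefficient discounts agreement that could have occurred by chance.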

Such a tool has long been needed as an integrated element of ATLAS.ti and has been frequently requested by users. By its nature, however, it could not and cannot be a magic "just click a button and hope for the best" kind of solution. If you click on any of the choices that ATLAS.ti offers for calculating an inter-coder agreement coefficient, ATLAS.ti will calculate something. Whether the number you receive is meaningful and useful depends on how you have set up your project and the coding.

Testing for inter-coder agreement therefore requires at least a minimal willingness to delve into the basic theoretical foundations of inter-coder agreement: what it is, what it does and can do, and what it cannot do. This manual provides some basics, but they cannot replace reading the literature and understanding the underlying assumptions and requirements for running an inter-coder agreement analysis.

Please keep in mind that the inter-coder agreement tool crosses the qualitative-quantitative divide. Establishing inter-coder agreement has its origin in quantitative content analysis (see, for instance, Krippendorff, 2018; Schreier, 2012). If you want to apply it while adhering to scientific standards, you must follow rules that are much stricter than those for qualitative coding.

If you want to develop a code system as a team, you can of course start coding independently and then see what you get. This approach, however, is at best an initial brainstorming exercise; it cannot be used for testing inter-coder agreement.

A good time to check inter-coder agreement is when the principal investigator has built a stable code system and all codes are defined. Two or more additional coders who are independent of the person who developed the code system then code a subset of the data. In practice, this point falls somewhere in the middle of the coding process. Once a satisfactory ICA coefficient is achieved, the principal investigator can be confident that the codes can be understood and applied by others and can continue working with the code system.

Literature

Guba, Egon G. and Lincoln, Yvonna S. (2005). Competing paradigms in qualitative research, in Denzin, N. and Lincoln, Y.S. (eds) The Sage Handbook of Qualitative Research, 191–225. London: Sage.

Harris, Judith, Pryor, Jeffry and Adams, Sharon (2006). The Challenge of Intercoder Agreement in Qualitative Inquiry. Working paper.

Krippendorff, Klaus (2004). Content Analysis: An Introduction to Its Methodology. 2nd edition. Thousand Oaks, CA: Sage.

Krippendorff, Klaus (2018). Content Analysis: An Introduction to Its Methodology. 4th edition. Thousand Oaks, CA: Sage.

MacPhail, Catherine, Khoza, Nomhle, Abler, Laurie and Ranganathan, Meghna (2015). Process guidelines for establishing intercoder reliability in qualitative studies. Qualitative Research, 16(2), 198-212.

Richards, Lyn (2009). Handling Qualitative Data: A Practical Guide. 2nd edition. London: Sage.

Rolfe, Gary (2006). Validity, trustworthiness and rigour: quality and the idea of qualitative research. Methodological Issues in Nursing Research, 304-310.

Schreier, Margrit (2012). Qualitative Content Analysis in Practice. London: Sage.