Principles of Analysis

=Introduction to Qualitative Data Analysis=

Congratulations, you have data! So... now what? Are you relieved to finally be through transcribing hours and hours of text? Were you lucky enough to find someone else’s data you can jump right into? Either way, it is important to understand the overarching principles of qualitative data analysis before you begin. These steps, tips, and tools can help ease the analysis process for first-timers or offer additional insights to more seasoned researchers.

=So what is Qualitative Data Analysis?=

Qualitative data analysis involves searching for patterns in the data and trying to explain those patterns. Figure 1 illustrates some common ways that students understand qualitative analysis, with coding and interpretation figuring prominently. Overall, the process of analysis begins with your research questions, continues through the data collection phase, and remains integral throughout the research project.

Figure 1: //Words that a qualitative analysis class said came to mind when thinking of qualitative data analysis//


Data analysis is an iterative process, meaning that we constantly assess and re-assess the data in an effort to locate emergent themes throughout the process. When conducting qualitative data analysis we may have specific research questions in mind; however, the data can offer additional insights not previously identified. These new or emergent themes can expand our understanding of the research and allow greater depth in understanding human nature. This process of discovering new information and tailoring the research objectives is what makes qualitative data analysis iterative. So, when engaging in data analysis, embrace the circular nature of the research and the potential for new and interesting findings.

Figure 2: I have the data, now what? Qualitative data analysis.


=Methods Made Easy=

This section describes four major steps, illustrated in Figure 2, that comprise the qualitative data analysis process. It is important to remember that in practice you will go backwards and forwards during your analysis (it’s an iterative process) and that sometimes these steps will overlap.


 * First, you will start by reading your data. As you become familiar with the data you will be able to notice and identify initial patterns. Nuances between working with primary versus secondary data are discussed below.
 * Second, you will begin coding your data. This will require the development of a coding scheme. An important outcome of this step will be the codebook, which contains the codes, definitions, and exemplary quotes.
 * Third, you will verify the data by looking at aspects such as the trustworthiness of the data, issues related to coder bias, and potentially calculating inter-coder reliability.
 * Fourth, you will “describe the big picture” of the research by interpreting your data. Here you will answer the question of what it all means by reducing and synthesizing your findings. Common outcomes of this step include themes, taxonomies, and theory.

Below we describe these four steps in more detail, with examples to help you along your own qualitative data analysis journey.

=Reading=

First, let’s establish a couple of definitions:

//Iterative process//: We talk about this a lot in qualitative data analysis, and it basically means that you go back and forth a lot in the process. It’s more cyclical in nature than it is linear. Think of it as “Lather, rinse, repeat”.

//Emergent themes//: These are themes that come up or “emerge” from the data while you’re reading or coding it. Basically, they’re themes that you weren’t clever enough to come up with before you started analyzing the data.

The very first step in the analysis process is to “be one” with the data. Take some time to meditate with it, if that’s your kind of thing. Or take a nap with it and hope you’ll absorb it by osmosis. Alternately, you can get really technical and read it. This process can be quick and easy or long and laborious, depending on your relationship to the data.

If you have been a part of the research process from inception through data collection, then this step may be rather easy. Reading (or re-reading) the data gives you a chance to step back from the analysis and take in the thick descriptions provided by the participants. This fresh approach to the data can allow for a deeper understanding of emergent themes. It is important to begin with this step, rather than jumping straight into coding, so that you are familiar with all of the data before you start tagging it.

On the other hand, you may be analyzing secondary data. If this is the case, then a few other steps should precede the reading step. It is important to understand the who, what, when, and where of the initial research project:


 * Who conducted the research?
 * What were the research questions?
 * What was the research design?
 * When were the data collected?
 * Who were the research participants?
 * What analysis plan was used?
 * What were the findings?

After answering these questions, and fully understanding the background of the initial research process, you can dive into the text. That way, once you understand what questions can be answered by the data you have on hand, it will be easier to formulate new research questions.

Taking the time to familiarize oneself with the data will help ease researchers into the second step, coding.

=Coding=

During this step you are again going to be reading over your data in a detailed fashion. As you read, you should be looking for important and significant segments of text and tagging/marking them. A more detailed discussion of software that can help with the coding process is available on the wiki.

There are two major types of codes: inductive and deductive. Inductive codes come from the data or ideas that are brought by the participants, while deductive codes come from your interview guide, theory, and the literature. Another way to think of it is that inductive codes are those that are identified by the participants, whereas deductive codes are identified by the researcher/research teams.
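As a toy illustration of this distinction (not a workflow from the original text), deductive codes can be written down before coding begins and applied to transcript segments, with anything left uncoded flagged for inductive review. All code names and keywords below are hypothetical:

```python
# Hypothetical deductive coding scheme: codes defined up front, before
# reading the data, each with a few trigger keywords.
deductive_codes = {
    "software": ["atlas.ti", "nvivo", "software"],
    "people": ["team", "colleague", "advisor"],
}

def code_segment(segment):
    """Return the deductive codes matching a segment, or flag it for inductive review."""
    text = segment.lower()
    hits = [code for code, keywords in deductive_codes.items()
            if any(kw in text for kw in keywords)]
    return hits or ["UNCODED: review for inductive codes"]

print(code_segment("My advisor showed me NVivo last week."))
```

In practice, of course, coding is a judgment call made while reading, not a keyword search; the sketch only shows where deductive codes come from (the researcher's prior scheme) versus inductive ones (whatever falls through).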

An essential analytic tool that you will develop and use during your research project is the Codebook. The codebook is where you define your codes and start your explanatory analysis of the data.

Figure 3. Sample codebook
(Adapted from p. 54)
|=Code name |=Short definition |=Full definition |=When to use |=When not to use |
|Software |Software used for qualitative analysis |Computer software that is used by researchers to help with the transcription, coding, or analysis process |When a participant mentions a software they use to help in their process |When a participant brings up a software unrelated to the process |
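If you keep your codebook electronically, each row of a table like Figure 3 could be represented as a small record. This is a sketch only; the field names and the appended quote are illustrative assumptions, not part of the original codebook:

```python
from dataclasses import dataclass, field

@dataclass
class CodebookEntry:
    """One codebook row, mirroring the columns of the sample codebook (Figure 3)."""
    name: str
    short_definition: str
    full_definition: str
    when_to_use: str
    when_not_to_use: str
    exemplary_quotes: list = field(default_factory=list)  # filled in during coding

software = CodebookEntry(
    name="Software",
    short_definition="Software used for qualitative analysis",
    full_definition=("Computer software that is used by researchers to help "
                     "with the transcription, coding, or analysis process"),
    when_to_use="When a participant mentions a software they use to help in their process",
    when_not_to_use="When a participant brings up a software unrelated to the process",
)

# A hypothetical exemplary quote collected while coding:
software.exemplary_quotes.append("We ran everything through Atlas.ti.")
```

Keeping quotes attached to each code entry makes the later step of supporting themes with participants' own words much easier.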

A critical exercise during the coding process is to identify exemplary quotes (great examples) that emerge from the data. These quotes can represent the theme, but more importantly they connect your interpretation with the participants’ own words, thus providing insight to your data and validating your findings.

An example:

A qualitative study was undertaken to explore and understand the everyday experiences of young people with attention deficit/hyperactivity disorder and autism spectrum disorders. The following segment of data is an exemplary quote for the subtheme “stress and rest” identified by the authors in their research project. In particular, it shows what the authors interpreted as common social situations that caused stress and obsessive thoughts in the participants.

“Carina: You know what it is like, some idiot asks you to hold someone’s child. You know what absolutely must not happen and that takes up all your thoughts. One should not throw the kid on the floor, or throw it out of the window. One should not say mean things about the kid. Carina: It is a total stress factory.” P. 4

=Verifying=

Study design may directly affect the strategies used in interpreting the data and subsequent analysis. Yeh and Inman demonstrated that, depending on the study design, special considerations for culture, self, collaboration, circularity, trustworthiness, and evidence deconstruction may be needed. For example, they suggest that including culture in analysis may be approached in many ways: grounded theory may lend itself to including researchers from a cultural group similar to the study participants, whereas a narrative analysis may emphasize the need for cultural self-awareness. Further considerations for understanding the data by study design can be found in the work of Yeh and Inman.

Early on in analysis, and often throughout the data collection process, it is important to establish the trustworthiness of your data; often this means establishing reliability and validity. Reliability means that replicating the study would yield similar results. Validity refers to how correct the study findings are, or whether the measure is actually measuring what the researcher thinks it is.

Ulin and colleagues present four categories to test the credibility, dependability, confirmability, and transferability of the qualitative data.
 * Credibility: Researchers suggest that you measure your data credibility by determining if interpretations of the data are consistent with the data collected, and if the findings can be understood by the study population. Key techniques to establish credibility include: looking for the unexpected within the data, testing other explanations of the data, and searching for any explanations for data that appears inconsistent after triangulation.
 * Dependability: Dependability of the data can also be tested. This includes replicating both the study results and the methods used to obtain these results.
 * Confirmability: Confirmability reminds researchers to be aware of their own subjectivity and to track the work that they are completing. The goal is to reduce research bias.
 * Transferability: Transferability refers to the ability to apply study findings in the same context, or in another context. In qualitative research, transferability of the data may be difficult to apply, but is most frequently applied to studies designed to test a model or build a theory.

One way to test for data reliability and validity is through triangulation of the data – comparing your research findings with other similar studies, secondary data sources, or what your study population thinks of your findings. It is important to start these processes early in the interpretation phase. Additional ways to ensure validity and reliability of your data include: using multiple coders with intercoder agreement checks, creating an audit trail, and supporting themes and interpretations with quotes.
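Intercoder agreement is often quantified with a chance-corrected statistic such as Cohen's kappa, which compares observed agreement between two coders against the agreement expected by chance. A minimal sketch follows; the two coders' labels are invented for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of segments where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten transcript segments:
a = ["software", "people", "people", "software", "people",
     "software", "software", "people", "people", "software"]
b = ["software", "people", "people", "software", "software",
     "software", "software", "people", "people", "people"]
print(round(cohens_kappa(a, b), 2))  # prints 0.6
```

Raw agreement here is 8/10, but kappa is lower (0.6) because with only two evenly used codes, quite a lot of agreement would happen by chance alone.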

=Interpretation=

According to Bradley, when it comes to the interpretation of qualitative data analysis, there are three Ts to keep in mind: taxonomies, themes, and (sometimes) theory.


 * Taxonomies: Taxonomies are overarching principles that tie different themes together. The same way in which taxonomies are used to help classify different types of plants or animals, taxonomies in qualitative research are used to help classify different themes that emerge from the data.
 * Themes: Themes are smaller groupings of codes that are used to help sort out the codes that are used in analysis. Several themes can come together to help make up a taxonomy.
 * Theory: Theory is used to help predict or explain a particular behavior. It can be, although it is not always, the outcome of qualitative data analysis. The themes and taxonomies can help to create new or enhance existing theories.

Figure 4: An example of the taxonomy - theme - code relationship
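The taxonomy - theme - code relationship can be represented as a simple nested structure: a taxonomy groups themes, and each theme groups codes. The entries below are illustrative guesses, not the actual contents of Figure 4:

```python
# Hypothetical hierarchy: one taxonomy ("tools"), two themes, a few codes each.
taxonomy = {
    "tools": {
        "software": ["Atlas.ti", "NVivo", "transcription software"],
        "people": ["team members", "advisor", "peer debriefer"],
    }
}

# Walk the hierarchy: each theme lists the codes it groups together.
for theme, codes in taxonomy["tools"].items():
    print(f"{theme}: {', '.join(codes)}")
```

Reading the structure bottom-up mirrors the analysis itself: codes are grouped into themes, and themes are tied together under an overarching taxonomy.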


An example: The authors of this entry asked an advanced qualitative methods class to text them their responses to the question of what comes to mind when they think of qualitative data analysis (see Figure 1). There were several different words that kept coming up, like the names of specific software, and the roles that different research team members can play. Out of this, two distinct but related concepts emerged: people and software.

On their own, these themes did not appear to have much in common. However, they could both be classified under the taxonomy of “tools”. Keeping in mind that tools can come in the form of people (team members) and of things (software), this taxonomy helps to account for a major aspect of what students in a qualitative methods research class think of when they think of qualitative data analysis.

Qualitative data interpretation assists researchers in making sense of data beyond coding.

Figure 5: What does the data mean? Data interpretation

=Tips from the Field=

Ever feel like you want to pull out your hair? Here are some tips from the field to help keep you sane! As Guest et al. said, “[q]ualitative data analysis is both an art and science”:
 * When lost or overwhelmed: go back to your research questions or simply take some time away from the data. Literally stand up and walk away from it!
 * Monitor the time you spend coding. Find your sweet spot.
 * Be as rested as you can.
 * Find a comfortable area to do your work.
 * Draw diagrams: these help you “see” relationships in your data (software such as Atlas.ti can help).
 * Talk to your research team. Discussing ideas with your team is an essential part of the analysis.
 * Let your brain do its work!