Trustworthiness in Qualitative Research
This year I am taking a class taught by my advisor Miriah Meyer on ontologies, epistemologies, and research paradigms. I will have to write another post on what those words mean at some point, but for now I want to focus on an interesting conversation we have been having: how can researchers communicate rigor if their work is qualitative?
Learning to Trust: a Positivist Perspective
Positivism is a research paradigm often associated with the natural sciences. In the world of positivism, there are objective and generalizable truths that can be measured. Under this paradigm, research results are judged according to the following criteria:
Internal Validity: how certain are you that the independent variable has affected the dependent variable?
External Validity: how representative is the sample population (and by extension your results) of the general population?
Reproducibility & Reliability: how likely are the results to show up again in another experiment? How easily could another researcher run your experiment?
Objectivity: are the measurements value-free?
These criteria are often associated with quantitative studies, but they become problematic when there is no single answer to your research question, no singular truth, and when the social, historical, and political contexts of the experiment can greatly affect your results.
Enter Lincoln, Guba, and interpretivism.
Establishing Trustworthiness
The following section is my take on organizing the work presented in Establishing Trustworthiness. Written in 1985, this document is, from my understanding, a seminal piece in interpretivist research methods. Lincoln and Guba set out both to make a case for interpretivism (the research paradigm holding that multiple realities exist and that it is the researcher's role to interpret them) and to provide practical methods for establishing trust and rigor in qualitative research.
Credibility
How congruent are the findings with reality?
Positivist Analogy: internal validity
Methods
Adopting appropriate research methods
Establishing a rapport with the participants through long-term engagement
Expanding participants to include randomly selected individuals
Analyzing edge cases and how they fit or don't fit within your research conclusions
Debriefing with peers
Checking your conclusions with the research participants
Triangulating results using different sources of data, methods of collection, and interpretations
Transferability
To what extent are the findings applicable to other contexts? It is important to note that the onus of transferring research results falls on the consumer of the research.
Positivist Analogy: external validity
Methods
Using thick descriptions to communicate the context
Including information regarding the type of data used (or omitted), length of study, participants, and other relevant details
Dependability
Have you provided enough information so that other researchers could repeat the work?
Positivist Analogy: reproducibility & reliability
Methods
Explaining the research design in rich detail
Enumerating the methods of gathering data
Reflexive journaling
Confirmability
Does your data support your interpretations?
Positivist Analogy: objectivity
Methods
Including an audit trail for review
Maintaining transparency about your methods, research design, and conclusion development
As you can see, even with these methods, it is hard to pin down exactly what makes research trustworthy. Especially in a non-positivist paradigm, we rely on individual interpretations, and there will not be one single correct answer. For me, this is the beauty of science that is often overlooked: as humans we have the innate ability to observe the world around us and communicate our observations to others to build collective knowledge, knowing that we may not be 100% correct all the time.