Validity is a topic of ongoing discussion among mixed methods researchers. While many use the term in their work, its use is continually challenged. Read more about the background and the basics of reliability and validity in mixed methods research in this article.
Over time, discussions on reliability and validity in mixed methods research have evolved to address the unique challenges of integrating qualitative and quantitative approaches. Early efforts sought to align validity with the established criteria used in both traditions. Tashakkori and Teddlie (1998) were among the first to examine how validity considerations could bridge these methodological paradigms, setting the foundation for assessing rigor in mixed methods research.
By the mid-2000s, scholars expanded the discourse to include validation strategies specific to mixed methods. Onwuegbuzie and Johnson (2006) introduced a framework of nine types of legitimation, identifying quality concerns unique to mixed methods studies. These included sample integration legitimation, which evaluates how different participant groups contribute to the findings, and paradigmatic mixing legitimation, which considers the compatibility of underlying philosophical assumptions.
Subsequent contributions refined these perspectives. Dellinger and Leech (2007) incorporated validity into a broader construct validation framework, categorizing strategies across quantitative, qualitative, and mixed methods perspectives. Teddlie and Tashakkori (2009) advanced this work by introducing concepts such as design quality, which assesses the appropriateness of research design for addressing specific questions, and interpretive rigor, which evaluates how well findings align with theoretical frameworks and real-world applications.
As mixed methods research matured, scholars emphasized the importance of tailoring validation strategies to specific study designs. O’Cathain (2010) proposed a structured framework for maintaining validity across all stages of a mixed methods study, from planning to dissemination. Creswell and Plano Clark (2011) reinforced this perspective by stressing the need to align validity considerations with different mixed methods approaches. Ivankova (2014) applied these principles in practice, demonstrating how researchers can ensure rigor in explanatory sequential designs by integrating qualitative and quantitative data in a structured manner.
These evolving discussions highlight the increasing sophistication of validation strategies in mixed methods research. From early conceptualizations of validity to contemporary frameworks addressing study-specific concerns, researchers have developed nuanced approaches to ensuring rigor. As the field continues to advance, further refinements will likely emerge, strengthening the quality and credibility of mixed methods research.
In qualitative analysis, validity is emphasized more than reliability. Qualitative validity concerns the accuracy of findings, as assessed by researchers, participants, and peer reviewers. Over time, alternative terms for qualitative validity, such as trustworthiness and authenticity, have emerged. Establishing qualitative validity can be challenging because of the wide range of approaches available, but it is critical for ensuring credible research (Creswell & Plano Clark, 2018).
Checking for validity in qualitative data analysis involves assessing the accuracy of the collected data. This includes ensuring credibility, transferability, dependability, and confirmability. Strategies for validating qualitative research findings include triangulation of data sources, member checking, and peer debriefing.
Reliability in qualitative research plays a secondary role compared to validity. It primarily concerns the consistency of coding and interpretation among researchers. One key aspect of qualitative reliability is intercoder agreement, in which multiple coders apply the same codes to text passages and then compare their results. To maintain reliability, researchers should document code definitions, apply them consistently across the dataset, and cross-check codes developed by different coders.
While reliability is less emphasized in qualitative research, ensuring consistency in coding enhances the trustworthiness of findings. Together, qualitative validity and reliability contribute to the overall rigor of qualitative research.
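As a minimal sketch of how intercoder agreement can be quantified, the snippet below computes Cohen's kappa for two hypothetical coders who coded the same ten passages. The coder labels and the use of scikit-learn are illustrative assumptions, not part of any particular study protocol.

```python
# Minimal sketch: quantifying intercoder agreement with Cohen's kappa.
# The coded passages below are hypothetical illustration data.
from sklearn.metrics import cohen_kappa_score

# Codes applied by two coders to the same ten text passages
coder_a = ["barrier", "support", "support", "barrier", "neutral",
           "support", "barrier", "neutral", "support", "barrier"]
coder_b = ["barrier", "support", "neutral", "barrier", "neutral",
           "support", "barrier", "support", "support", "barrier"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values close to 1 indicate strong agreement
```

Kappa-style statistics are often preferred over raw percent agreement because they correct for the agreement that would be expected by chance alone.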
In quantitative studies, validity and reliability are the key data quality measures. Validity refers to whether a measurement actually measures what it is intended to measure. In other words, a valid tool in quantitative analyses accurately captures the phenomenon being studied without being influenced by external or unrelated factors. Different types of validity are associated with different analysis methods, such as content validity, construct validity (of which convergent validity is one facet), criterion-related validity, and face validity.
Reliability refers to the consistency or stability of a measurement over time or across different raters or instruments. A reliable measurement tool produces consistent quantitative findings when the research is repeated under similar conditions. Several types of reliability correspond to different statistical procedures, such as test-retest reliability (stability over time), inter-rater reliability (agreement across raters), and internal consistency (coherence among items, commonly estimated with Cronbach's alpha).
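To make one of these procedures concrete, the sketch below estimates Cronbach's alpha for a small, hypothetical four-item scale using the standard formula; the response matrix is invented for illustration and is not drawn from any real instrument.

```python
# Minimal sketch: internal-consistency reliability via Cronbach's alpha.
# The response matrix is hypothetical (5 respondents x 4 Likert items).
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = responses.shape[1]                         # number of items
item_vars = responses.var(axis=0, ddof=1)      # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)  # variance of summed scale scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # ~0.70 or above is often treated as acceptable
```

Test-retest reliability, by contrast, would be checked by correlating scores from two administrations of the same instrument.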
Validity threats in mixed methods research arise from the integration of qualitative and quantitative approaches. Each mixed methods design presents unique challenges that can compromise the study’s credibility, requiring specific strategies to mitigate these risks.
A common validity threat in a convergent mixed methods design is the failure to use parallel concepts for both the quantitative and qualitative data collection. When concepts are not aligned, the design can compromise convergent validity, a facet of construct validity, and make the integration of findings problematic. To address this, researchers should create parallel questions that ensure consistency across data strands.
Another challenge is the use of unequal sample sizes in quantitative and qualitative data collection. This can skew results, especially if the comparison is at the individual level. To minimize this threat, researchers should use equal sample sizes when comparing data for each participant or acknowledge different sampling intents, such as using qualitative data for in-depth insights while quantitative data provides broader generalization.
Keeping results from different databases separate is another threat that undermines the purpose of a convergent design. Without a proper integration strategy, the results remain isolated, reducing the study’s overall coherence and internal consistency. Researchers should apply a convergent data analysis strategy, such as joint displays or side-by-side comparisons, to merge and compare the findings effectively.
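As a minimal sketch of what such a convergent integration strategy might look like in practice, the example below builds a simple joint display that places each participant's quantitative score next to the qualitative theme assigned to them; the participants, scores, and themes are all hypothetical.

```python
# Minimal sketch: a joint display merging quantitative scores with
# qualitative themes for the same participants (hypothetical data).
import pandas as pd

quant = pd.DataFrame({
    "participant": ["P01", "P02", "P03", "P04"],
    "satisfaction_score": [82, 45, 90, 55],
})
qual = pd.DataFrame({
    "participant": ["P01", "P02", "P03", "P04"],
    "dominant_theme": ["supportive staff", "long wait times",
                       "supportive staff", "confusing paperwork"],
})

# Merge the two strands and band the scores so quantitative and
# qualitative results can be compared side by side.
joint_display = quant.merge(qual, on="participant")
joint_display["score_band"] = pd.cut(joint_display["satisfaction_score"],
                                     bins=[0, 60, 100], labels=["low", "high"])
print(joint_display.to_string(index=False))
```

Reading across the rows then makes convergent and divergent cases easy to spot, which is the point of a side-by-side comparison.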
Additionally, failing to resolve disconfirming results weakens the study's face validity, that is, the extent to which the study appears to capture what it intends to analyze. When discrepancies arise, researchers should engage in further analysis, re-examine the data, or consider new interpretations to ensure the validity of the integrated findings.
In an explanatory sequential design, one significant threat to meaningful triangulation is failing to identify key quantitative results that require explanation. If researchers overlook important findings, the qualitative follow-up may not adequately address the core issues. To prevent this, researchers should consider all possible explanations for both significant and nonsignificant predictors in the quantitative phase.
Another challenge is not using qualitative data to explain surprising or contradictory quantitative results, which can undermine the content validity and internal consistency of the combined methods. Gaps in interpretation weaken the study's credibility. To mitigate this, qualitative data collection should be designed to probe unexpected results through open-ended questions that explore underlying reasons. In addition, researchers may use qualitative strategies such as member checking and peer debriefing.
Another issue is failing to connect the initial quantitative results with the qualitative follow-up. If the two phases are not meaningfully linked, the study loses its explanatory power. Purposefully selecting qualitative participants based on quantitative results ensures that the qualitative data provides relevant and targeted explanations, strengthening the integration of findings.
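A brief sketch of this kind of linkage, under the assumption that the phase-one results are survey scores: the code below selects follow-up interview candidates from the extremes of a hypothetical score distribution so the qualitative phase targets the quantitative results that most need explanation.

```python
# Minimal sketch: selecting qualitative follow-up participants from the
# extremes of hypothetical phase-one survey scores (explanatory sequential).
import pandas as pd

survey = pd.DataFrame({
    "participant": [f"P{i:02d}" for i in range(1, 11)],
    "burnout_score": [12, 34, 55, 78, 91, 23, 67, 88, 45, 70],
})

# Interview the highest- and lowest-scoring respondents so the qualitative
# phase can explain both ends of the quantitative distribution.
follow_up = pd.concat([
    survey.nlargest(2, "burnout_score"),
    survey.nsmallest(2, "burnout_score"),
])
print(follow_up.to_string(index=False))
```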
An important validity threat in exploratory sequential design is failing to build the quantitative component based on qualitative findings. If the quantitative phase does not directly stem from qualitative insights, the study lacks internal consistency. Researchers should explicitly outline how each major qualitative finding informs the design of the quantitative phase.
Additionally, failing to develop rigorous quantitative features undermines the credibility of the study. Researchers should adopt systematic procedures, such as using psychometrically sound instruments or piloting intervention materials, to enhance the reliability of the quantitative phase.
Another common mistake is selecting participants for the quantitative phase from the same pool as the qualitative sample. This can limit generalizability and introduce biases. To counteract this, researchers should use a larger and independent sample for the quantitative phase to ensure broader applicability of the findings.
In transformative or participatory-social justice designs, the previously mentioned validity and reliability concerns for quantitative and qualitative methods still apply, but there are additional concerns unique to this kind of social and behavioral research. For one, failing to clearly identify the participatory focus or social justice lens weakens the study's impact. Researchers should establish this focus early and ensure that all methodological decisions align with the participatory framework.
Another issue is not specifying the core design embedded within the participatory approach. Without this clarity, the study lacks methodological coherence. Identifying the core mixed methods design and linking it to the social justice framework ensures rigor.
An additional threat is failing to connect integrated results to potential action and social change. Without explicit links to real-world applications, the study may not achieve its intended purpose. Researchers should develop joint displays that connect specific findings to actionable steps for social change.
Another common issue is marginalizing participants in the research process. This contradicts the participatory approach and can undermine the study’s credibility. Researchers should involve participants at all stages, from decision-making to implementation, ensuring their voices shape the study’s direction.
Ensuring reliability and validity in mixed methods research requires a thoughtful approach that integrates qualitative and quantitative methods effectively. While the concept of validity has been debated in the field, researchers continue to refine frameworks to assess rigor in mixed methods studies. Various strategies, including triangulation, member checking, and systematic data integration, contribute to strengthening validity in qualitative research, while quantitative research relies on statistical measures to ensure accuracy and replicability.
Each mixed methods design presents unique challenges that require tailored strategies to mitigate validity threats. Convergent designs must align parallel concepts across data sources, explanatory sequential designs need strong linkages between quantitative and qualitative phases, and exploratory sequential designs must ensure that the quantitative component builds directly on qualitative insights. Participatory-social justice designs demand clear articulation of the social justice lens and active involvement of participants to uphold the integrity of the research process.
As mixed methods research continues to evolve, scholars are developing increasingly sophisticated approaches to validating findings. The integration of qualitative and quantitative methods enhances the depth and applicability of research, providing a comprehensive understanding of complex phenomena. By addressing validity threats systematically and maintaining methodological rigor, researchers can produce credible, well-supported conclusions that contribute meaningfully to their respective fields.