Although programme evaluation is increasingly routinised across the academic health sciences, there is scant research on the factors that shape the scope and quality of evaluation work in health professions education. Our research addresses this gap by studying how the context in which evaluation is practised influences the type of evaluation that can be conducted. Focusing on the context of accreditation, we critically examine the types of paradoxical tensions that surface as evaluation leads weigh evaluation ideals, or best practices, against the contextual demands of accreditation seeking.
Our methods were qualitative and situated within a critical realist paradigm. Study participants were 29 individuals whose roles carried responsibility for, and oversight of, evaluation work. They worked across four regions, within 26 academic health science institutions. Data were collected through semi-structured interviews and analysed using framework and matrix analyses.
We identified three overarching themes: (i) absence of collective coherence about evaluation practice, (ii) disempowerment of expertise and (iii) tensions as routine practice. Examples of such tensions in evaluation work included (i) resourcing accreditation versus resourcing a robust evaluation strategy (performing paradox), (ii) evaluation designs that secure accreditation versus designs that spur renewal and transformation (performing–learning paradox) and (iii) public dissemination of evaluation findings versus restricted or selective access (publicising paradox). Sub-themes and illustrative data are presented.
Our study demonstrates how the high-stakes context of accreditation seeking surfaces tensions that can compromise the quality and credibility of evaluation practices. To mitigate these risks, those who commission or execute evaluation work must be able to identify and reconcile these tensions. We propose strategies that may help optimise the quality of evaluation work alongside accreditation-seeking efforts. Critically, our research highlights the limitations of continually positioning evaluation purely as a method rather than as a socio-technical practice that is highly vulnerable to contextual influences.