Pamela R. Buckley, Katie Massey Combs, Karen M. Drewelow, Brittany L. Hubler, Marion Amanda Lain
{"title":"观察保真度测量的有效性证据,为循证干预措施的推广提供依据","authors":"Pamela R. Buckley, Katie Massey Combs, Karen M. Drewelow, Brittany L. Hubler, Marion Amanda Lain","doi":"10.1177/0193841x241248864","DOIUrl":null,"url":null,"abstract":"As evidence-based interventions are scaled, fidelity of implementation, and thus effectiveness, often wanes. Validated fidelity measures can improve researchers’ ability to attribute outcomes to the intervention and help practitioners feel more confident in implementing the intervention as intended. We aim to provide a model for the validation of fidelity observation protocols to guide future research studying evidence-based interventions scaled-up under real-world conditions. We describe a process to build evidence of validity for items within the Session Review Form, an observational tool measuring fidelity to interactive drug prevention programs such as the Botvin LifeSkills Training program. Following Kane’s (2006) assumptions framework requiring that validity evidence be built across four areas (scoring, generalizability, extrapolation, and decision), confirmatory factor analysis supported the hypothesized two-factor structure measuring quality of delivery (seven items assessing how well the material is implemented) and participant responsiveness (three items evaluating how well the intervention is received), and measurement invariance tests suggested the structure held across grade level and schools serving different student populations. These findings provide some evidence supporting the extrapolation assumption, though additional research is warranted since a more complete overall depiction of the validity argument is needed to evaluate fidelity measures.","PeriodicalId":47533,"journal":{"name":"Evaluation Review","volume":"11 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Validity Evidence for an Observational Fidelity Measure to Inform Scale-Up of Evidence-Based Interventions\",\"authors\":\"Pamela R. Buckley, Katie Massey Combs, Karen M. Drewelow, Brittany L. Hubler, Marion Amanda Lain\",\"doi\":\"10.1177/0193841x241248864\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As evidence-based interventions are scaled, fidelity of implementation, and thus effectiveness, often wanes. Validated fidelity measures can improve researchers’ ability to attribute outcomes to the intervention and help practitioners feel more confident in implementing the intervention as intended. We aim to provide a model for the validation of fidelity observation protocols to guide future research studying evidence-based interventions scaled-up under real-world conditions. We describe a process to build evidence of validity for items within the Session Review Form, an observational tool measuring fidelity to interactive drug prevention programs such as the Botvin LifeSkills Training program. 
Following Kane’s (2006) assumptions framework requiring that validity evidence be built across four areas (scoring, generalizability, extrapolation, and decision), confirmatory factor analysis supported the hypothesized two-factor structure measuring quality of delivery (seven items assessing how well the material is implemented) and participant responsiveness (three items evaluating how well the intervention is received), and measurement invariance tests suggested the structure held across grade level and schools serving different student populations. These findings provide some evidence supporting the extrapolation assumption, though additional research is warranted since a more complete overall depiction of the validity argument is needed to evaluate fidelity measures.\",\"PeriodicalId\":47533,\"journal\":{\"name\":\"Evaluation Review\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evaluation Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/0193841x241248864\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evaluation Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/0193841x241248864","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Validity Evidence for an Observational Fidelity Measure to Inform Scale-Up of Evidence-Based Interventions
As evidence-based interventions are scaled, fidelity of implementation, and thus effectiveness, often wanes. Validated fidelity measures can improve researchers’ ability to attribute outcomes to the intervention and help practitioners feel more confident in implementing the intervention as intended. We aim to provide a model for the validation of fidelity observation protocols to guide future research on evidence-based interventions scaled up under real-world conditions. We describe a process to build evidence of validity for items within the Session Review Form, an observational tool measuring fidelity to interactive drug prevention programs such as the Botvin LifeSkills Training program. Following Kane’s (2006) assumptions framework, which requires that validity evidence be built across four areas (scoring, generalizability, extrapolation, and decision), confirmatory factor analysis supported the hypothesized two-factor structure measuring quality of delivery (seven items assessing how well the material is implemented) and participant responsiveness (three items evaluating how well the intervention is received). Measurement invariance tests suggested that this structure held across grade levels and across schools serving different student populations. These findings provide some evidence supporting the extrapolation assumption, though additional research is warranted because a more complete overall depiction of the validity argument is needed to evaluate fidelity measures.
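To make the analytic approach in the abstract concrete, the sketch below shows how a two-factor CFA like the one described (seven quality-of-delivery items, three participant-responsiveness items) could be specified, here using Python's semopy package. This is an illustration under assumed names only: the item labels q1-q7 and r1-r3, the grouping column, and the data file are hypothetical, and the invariance step is simplified to a per-group configural fit rather than the full constrained multi-group comparison the authors report.

```python
# A minimal sketch, assuming the semopy package; NOT the authors' analysis code.
# Item names (q1-q7, r1-r3), the grouping column "grade", and the CSV file name
# are hypothetical placeholders for the Session Review Form data.
import pandas as pd
from semopy import Model, calc_stats

# Hypothesized two-factor structure: quality of delivery (7 items) and
# participant responsiveness (3 items), with the factors allowed to covary.
MODEL_DESC = """
quality =~ q1 + q2 + q3 + q4 + q5 + q6 + q7
responsiveness =~ r1 + r2 + r3
quality ~~ responsiveness
"""

ITEM_COLS = [f"q{i}" for i in range(1, 8)] + [f"r{i}" for i in range(1, 4)]


def fit_cfa(df: pd.DataFrame) -> pd.DataFrame:
    """Fit the CFA on the item columns and return global fit statistics
    (chi-square, CFI, RMSEA, etc.)."""
    model = Model(MODEL_DESC)
    model.fit(df[ITEM_COLS])
    return calc_stats(model)


def configural_check(df: pd.DataFrame, group_col: str = "grade") -> dict:
    """Crude configural check: fit the same structure separately in each group.
    Formal invariance testing would additionally constrain loadings and
    intercepts to be equal across groups, which this sketch omits."""
    return {group: fit_cfa(sub) for group, sub in df.groupby(group_col)}


if __name__ == "__main__":
    data = pd.read_csv("session_review_form.csv")  # hypothetical file name
    print(fit_cfa(data))           # overall fit of the two-factor model
    print(configural_check(data))  # per-group fit across grade levels
```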
Journal introduction:
Evaluation Review is the forum for researchers, planners, and policy makers engaged in the development, implementation, and utilization of studies aimed at the betterment of the human condition. The Editors invite submission of papers reporting the findings of evaluation studies in such fields as child development, health, education, income security, manpower, mental health, criminal justice, and the physical and social environments. In addition, Evaluation Review will contain articles on methodological developments, discussions of the state of the art, and commentaries on issues related to the application of research results. Special features will include periodic review essays, "research briefs", and "craft reports".