A Protocol to Assess Contextual Factors During Program Impact Evaluation: A Case Study of a STEM Gender Equity Intervention in Higher Education

IF 1.1 · CAS Tier 3 (Sociology) · JCR Q2, SOCIAL SCIENCES, INTERDISCIPLINARY
Suzanne Nobrega, Kasper Edwards, Mazen El Ghaziri, Lauren Giacobbe, Serena Rice, Laura Punnett
{"title":"在项目影响评估中评估情境因素的协议:高等教育中STEM性别平等干预的案例研究","authors":"Suzanne Nobrega, Kasper Edwards, Mazen El Ghaziri, Lauren Giacobbe, Serena Rice, Laura Punnett","doi":"10.1177/10982140231152281","DOIUrl":null,"url":null,"abstract":"<p><p>Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.</p>","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2023-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633285/pdf/","citationCount":"0","resultStr":"{\"title\":\"A Protocol to Assess Contextual Factors During Program Impact Evaluation: A Case Study of a STEM Gender Equity Intervention in Higher Education.\",\"authors\":\"Suzanne Nobrega, Kasper Edwards, Mazen El Ghaziri, Lauren Giacobbe, Serena Rice, Laura Punnett\",\"doi\":\"10.1177/10982140231152281\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. 
The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.</p>\",\"PeriodicalId\":51449,\"journal\":{\"name\":\"American Journal of Evaluation\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633285/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American Journal of Evaluation\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/10982140231152281\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Evaluation","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/10982140231152281","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.

Source journal
American Journal of Evaluation
CiteScore: 4.40
Self-citation rate: 11.80%
Annual articles: 39
Journal introduction: The American Journal of Evaluation (AJE) publishes original papers about the methods, theory, practice, and findings of evaluation. The general goal of AJE is to present the best work in and about evaluation, in order to improve the knowledge base and practice of its readers. Because the field of evaluation is diverse, with different intellectual traditions, approaches to practice, and domains of application, the papers published in AJE will reflect this diversity. Nevertheless, preference is given to papers that are likely to be of interest to a wide range of evaluators and that are written to be accessible to most readers.