{"title":"A Human Rights-Based Evaluation Approach for Inclusive Education","authors":"C. Johnstone, A. Hayes, Elisheva Cohen, Hayley Niad, George Laryea-Adjei, K. Letshabo, Adrian Shikwe, A. Agu","doi":"10.1177/10982140231153810","DOIUrl":"https://doi.org/10.1177/10982140231153810","url":null,"abstract":"This article reports on ways in which United Nations human rights treaties can be used as a normative framework for evaluating program outcomes. In this article, we conceptualize a human rights-based approach to program evaluation and locate this approach within the broader evaluation literature. The article describes how a rights-based framework can be used as an aspirational set of indicators for program evaluations to promote activities that align with internationally agreed-upon human rights norms. We then describe a case study of the evaluation through which this method was developed, including its sampling design, methodology, and findings. The United Nations International Children’s Fund (UNICEF) inclusive education evaluation described highlighted the need for conceptual clarity around what inclusive education is, and the importance of contextualized innovation toward meeting the educational rights of children with disabilities. Human rights perspectives and evaluation designs can help create such clarity, but should also be used with care.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44337859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book Review: Leading Change Through Evaluation: Improvement Science in Action by Kristen L. Rohanna","authors":"Valerie Marshall","doi":"10.1177/10982140231153376","DOIUrl":"https://doi.org/10.1177/10982140231153376","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49292859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Protocol to Assess Contextual Factors During Program Impact Evaluation: A Case Study of a STEM Gender Equity Intervention in Higher Education.","authors":"Suzanne Nobrega, Kasper Edwards, Mazen El Ghaziri, Lauren Giacobbe, Serena Rice, Laura Punnett","doi":"10.1177/10982140231152281","DOIUrl":"10.1177/10982140231152281","url":null,"abstract":"<p>Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.</p>","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.1,"publicationDate":"2023-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633285/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44206689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Co-Editors: Honoring the Past to Inform Current and Future Evaluation","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140231169134","DOIUrl":"https://doi.org/10.1177/10982140231169134","url":null,"abstract":"Volume","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"172 - 174"},"PeriodicalIF":1.7,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42584731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"“A Lot of It Really Does Come Down to Values”: An Empirical Study of the Values Advanced by Seasoned Evaluators","authors":"Rebecca M. Teasdale, Jennifer R. McNeilly, Maria Isabel Ramírez Garzón, J. Novak, Jennifer C. Greene","doi":"10.1177/10982140231153805","DOIUrl":"https://doi.org/10.1177/10982140231153805","url":null,"abstract":"This study challenges persistent misrepresentations of evaluation as a value-neutral inquiry process by presenting an empirical study that deepens understanding of evaluators’ values and how they “show up” in evaluation practice. Through semistructured interviews and inductive analysis, we examined the values advanced by a sample of eight experienced evaluators. We surfaced and examined 12 values, which we organized into five clusters, that shaped the constitutive elements of the studies these evaluators conducted and guided how the evaluators positioned their work. Our findings provide empirical evidence about the role of values in evaluation practice and can support evaluators in reflecting on their own values and enacting their professional and ethical responsibilities to identify and articulate their values in the context of evaluation practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"453 - 473"},"PeriodicalIF":1.7,"publicationDate":"2023-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46490295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Ontology and Evaluation—A Comment on “Framing Evaluation in Reality: An Introduction to Ontologically Integrative Evaluation”","authors":"R. Picciotto","doi":"10.1177/10982140221134779","DOIUrl":"https://doi.org/10.1177/10982140221134779","url":null,"abstract":"According to Jennifer Billman, western evaluation bias against indigenous thinking is due to ontological incompetence. If so, the solution she offers (a highly abstract list of criteria) is inadequate since it fails to address let alone resolve a wide range of philosophical dilemmas at the intersection of logic and ontology. Furthermore, it fails to “frame evaluation in reality” since it ignores the patent fact that, in the market society, positivist evaluators dominate. They are value free, embrace a “clockwork” conception of the natural and social world, and do not question decision makers' goals. By contrast, constructivist evaluators recognize that social facts differ from natural facts since they are socially constructed and clustered within institutions that define roles, norms and expectations. It follows that constructivist evaluation holds the key to the problem identified by Billman since it resists capture by vested interests, gives pride of place to the relational context and embraces the validity of indigenous thinking.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"109 - 113"},"PeriodicalIF":1.7,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41432469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What to Expect When You Don’t Know What You are Expecting: Vigilance and the Monitoring and Evaluation of an Uncertain World","authors":"R. Goble, Edward R. Carr, Jon Anderson","doi":"10.1177/10982140221079639","DOIUrl":"https://doi.org/10.1177/10982140221079639","url":null,"abstract":"Complexity and uncertainty are long-standing challenges for global development projects. Coping with both requires flexibility and adaptation, the ability to identify unexpected circumstances, seize opportunities, and respond to threats. Vigilance is critical; it resides within the domains of monitoring, evaluation, and learning. In practice, maintaining vigilance is difficult, partly because effective vigilance has a dual nature. Normal, Type 1 vigilance, is anchored in knowing what to look for. It demands focus and attention to designated indicators. Type 2 vigilance looks for what project preparations failed to anticipate. It demands defocusing and openness; it sits outside contemporary design of monitoring and evaluation as it must question the assumptions in project design and implementation. We consider the role of both types of vigilance in global development and difficulties in maintaining both simultaneously. We identify pathways for improving the practice of vigilance and suggest practical steps in a template for pilot efforts.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"74 - 89"},"PeriodicalIF":1.7,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48669010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Section Editor's Note: Using Power Insights to Better Plan Experiments","authors":"Laura R. Peck","doi":"10.1177/10982140231154695","DOIUrl":"https://doi.org/10.1177/10982140231154695","url":null,"abstract":"How many people need to be in my evaluation in order to be able to detect a policy- or program-relevant impact? If the program being evaluated is assigned to participants at an aggregate “cluster” or group level—such as classrooms filled with students—how many of those groups do I need? How many participants within each group? What if I am interested in subgroup effects; how many people or groups do I need then? Answers to these questions are essential for smart planning of experimental evaluations and are the motivation for this Experimental Methodology Section. Before I summarize the contributions of this Section's three articles, let me first define some key concepts and explain what I see to be the main issues for this piece of experimental evaluation work. To begin, statistical “power” refers to an evaluation's ability to detect an effect that is statistically significant; and minimum detectable effects (MDEs) are the smallest estimated effect that a given design can detect as statistically significant. Ultimately, the effect size is what a given evaluation is designed to estimate, and the evaluator will have to determine (1) what sample design and size is needed to detect that effect, or (2) what MDE is feasible, given budget and sample design and size realities. Several interrelated factors influence a study's MDE, including (as drawn partly from Peck, 2020, Appendix Box A.1) the choices and realities of statistical significance threshold, statistical power, variance of the impact estimate, the level and variability of the outcome measure, and the clustered nature of the data, as elaborated next. Statistical significance threshold. The statistical significance level is the probability of identifying a false positive result (also referred to as Type I error). The MDE becomes larger as the statistical significance level decreases. All else equal, an impact must be larger to be detected with a statistical significance threshold of 1% than with a statistical significance threshold of 10%. Substantial debate in statistics and related fields focuses on “the p-value” and its value to establishing evidence (e.g., Wasserstein & Lazar, 2016). Statistical power. The statistical power is equal to the probability of correctly rejecting the null hypothesis (or, one minus the probability of a false negative result, or Type II error). In other words, power relates to the analyst's ability to detect an impact that is statistically significant, should it exist. Statistical power is typically set to 80%, although other values may be reasonable too. Missing the detection of a favorable impact (Type II error) has lower up-front cost implications for the study, relative to falsely claiming that a favorable impact exists (Type I error). That said, an insufficiently powered study might lead to not generating new information (or, worse, to incorrect null findings), an ill-funded investment.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"114 - 117"},"PeriodicalIF":1.7,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44262632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Co-Editors: There's Always Room for Improvement: Building Better Practices and Methods for a Brighter Future","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140231154683","DOIUrl":"https://doi.org/10.1177/10982140231154683","url":null,"abstract":"Since becoming evaluators, we have observed how the field of evaluation has grown and changed. Major areas of development we have witnessed include increased attention to evaluation capacity-building initiatives, diversity, equity, and inclusion efforts, as well as demands for more adaptive evaluative strategies and techniques for improving the quality of evaluation planning and resulting evidence. Many of these areas of development in evaluation practice are in response to increased national and global complexity and uncertainty. Although the field has evolved in response to these challenges, we recognize that there is always room for improvement. We anticipate ongoing complexity and uncertainty as contemporary political, social, economic, and environmental shifts take place in our world. As such, we desire to push the field toward a more inclusive, adaptive, restorative, and effective evaluation praxis. This desire led us to assemble evaluation scholarship for this first issue of volume 44 in the form of five articles, a commentary, and a section on experimental methodology, including three articles. Separately, the articles in this issue extend the field of evaluation's development in the areas of evaluation capacity building (ECB), responsive and equity-oriented efforts, vigilant evaluation practice, and effective methodology. Collectively, the articles address the growing complexity of our world, providing insights and techniques to build better practices and methods for a brighter future. In the first article, Gregory Phillips II, Dylan Felt, Esrea Perez-Bill, Megan M. Ruprecht, Erik Elías Glenn, Peter Lindeman, and Robin Lin Miller propose an evaluation orientation that is responsive to the LGBTQ+ community's interests and needs. They abbreviate into LGBTQ+ individuals who identify as lesbian, gay, bisexual, transgender, queer, intersex, and Two-Spirit, inclusively along with other sexual and gender minorities; and they consider the intersectionality of these identity traits with those who are “also Black, Indigenous, and People of Color (BIPOC), those who are disabled, and those who are working-class, poor, and otherwise economically disadvantaged, among","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"4 - 6"},"PeriodicalIF":1.7,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45835158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feminist Evaluation Using Feminist Participatory Action Research: Guiding Principles and Practices","authors":"Kaisha Crupi, N. Godden","doi":"10.1177/10982140221148433","DOIUrl":"https://doi.org/10.1177/10982140221148433","url":null,"abstract":"There is a lack of instructional literature on how to conduct a feminist evaluation to highlight and transform systemic issues in gendered and intersecting power relations. Feminist Participatory Action Research (FPAR) enables a process for conducting community-driven, -led and -owned feminist evaluations that drive social justice actions. By undertaking a critical review of existing literature, this article presents guiding principles and practices in how to conduct a feminist evaluation using FPAR. These principles and practices provide a framework for those who are seeking an evidence base for transformative social justice action in communities, particularly those who are working with complexity in systems-change interventions with multiple stakeholders.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47870586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}