{"title":"From the Co-Editors: Building Evaluative Capacity to Examine Issues of Power, Address Sensitive Topics, and Generate Actionable Data","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140221134238","DOIUrl":"https://doi.org/10.1177/10982140221134238","url":null,"abstract":"The fourth issue of volume 43 is our fi rst issue as Co-Editors-in-Chief of the American Journal of Evaluation. We have arrived at this point on our journey as Editors of AJE re fl ecting on our capacity as evaluators. While we are seasoned evaluators with decades of experience between us, we fi nd it is necessary to reexamine our role and capacity as evaluators and ask ourselves re fl ective questions such as What authority do we have as evaluators to address issues of power and privilege in the context of an evaluation? How do we determine if our evaluation approaches address vulnerable communities and sensitive topics respectfully? What analytic capacity do we have to produce valid and actionable evidence? And, what is within our capacity, as evaluators, to generate positive change for individuals, communities, and society? The articles we have assembled for this issues provide informed thinking on these and related topics based on the evaluation literature and other fi elds of study. Together, the discourse provided in the seven articles and three method notes in this issue will undoubtedly open up possibilities to re fl ect on and enhance your evaluative capacity as it has ours. The lead article in this issue, Critical Evaluation Capital (CEC): A New Tool for Applying Critical Race Theory to the Evaluand by Alice E. Ginsberg, centers issues of power in evaluation practice by presenting a tool to support critical evaluation approaches that challenge the notion of objectivity, consider evaluation a value-laden enterprise, and position the role of the evaluator as an agent for change. Informed by the lens of critical race theory and community cultural wealth, Ginsberg ’ s tool enhances the capacity of evaluators to pay attention to different types of power within the context of an evaluand. Speci fi cally, the CEC tool converts issues of power into several overlapping categories of “ capital. ” Each category is de fi ned and provides thought-provoking questions useful to explore our authority as evaluators and the role of power and privilege in an evaluation context. To conclude the article, Ginsberg retroactively applies the CEC tool to an evaluation. By","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48654353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring NSF-Funded Evaluators’ and Principal Investigators’ Definitions and Measurement of Diversity, Equity, and Inclusion","authors":"A. Boyce, Tiffany L. S. Tovey, Onyinyechukwu Onwuka, J. R. Moller, Tyler Clark, Aundrea Smith","doi":"10.1177/10982140221108662","DOIUrl":"https://doi.org/10.1177/10982140221108662","url":null,"abstract":"More evaluators have anchored their work in equity-focused, culturally responsive, and social justice ideals. Although we have a sense of approaches that guide evaluators as to how they should attend to culture, diversity, equity, and inclusion (DEI), we have not yet established an empirical understanding of how evaluators measure DEI. In this article, we report an examination of how evaluators and principal investigators (PIs) funded by the National Science Foundation's Advanced Technological Education (ATE) program define and measure DEI within their projects. Evaluators gathered the most evidence related to diversity and less evidence related to equity and inclusion. On average, PIs’ projects engaged in activities designed to increase DEI, with the highest focus on diversity. We believe there continues to be room for improvement and implore the movement of engagement with these important topics from the margins to the center of our field's education, theory, and practice.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45738847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparison of Fidelity Implementation Frameworks Used in the Field of Early Intervention","authors":"Colombe Lemire, M. Rousseau, C. Dionne","doi":"10.1177/10982140211008978","DOIUrl":"https://doi.org/10.1177/10982140211008978","url":null,"abstract":"Implementation fidelity is the degree of compliance with which the core elements of program or intervention practices are used as intended. The scientific literature reveals gaps in defining and assessing implementation fidelity in early intervention: lack of common definitions and conceptual framework as well as their lack of application. Through a critical review of the scientific literature, this article aims to identify information that can be used to develop a common language and guidelines for assessing implementation fidelity. An analysis of 46 theoretical and empirical papers about early intervention implementation, published between 1998 and 2018, identified four conceptual frameworks, in addition to that of Dane and Schneider. Following analysis of the conceptual frameworks, a four-component conceptualization of implementation fidelity (adherence, dosage, quality and participant responsiveness) is proposed.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47289679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Just Give Me an Example! Exploring Strategies for Building Public Understanding of Evaluation","authors":"Sarah Mason","doi":"10.1177/10982140211061018","DOIUrl":"https://doi.org/10.1177/10982140211061018","url":null,"abstract":"Evaluators often lament that the general public does not understand what we do. Yet, there is limited empirical research on what the general public does know—and think—about program evaluation. This article seeks to expand our understanding in this domain by capturing views about evaluation from a demographically representative sample of the U.S population. This article also explores different strategies for describing program evaluation to the general public. Using an experimental design, it builds on previous research by Mason and Hunt, testing a set of hypotheses about how to enhance communication about evaluation. Findings suggest that public understanding of evaluation is indeed low, although two specific communication strategies—using well-known examples of social programs and including a why statement that describes the purpose of evaluation—can strengthen understanding among members of the public.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46576725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book Review: Evaluation in Today's World: Respecting Diversity, Improving Quality, and Promoting Usability","authors":"R. Woodland","doi":"10.1177/10982140221134246","DOIUrl":"https://doi.org/10.1177/10982140221134246","url":null,"abstract":"For those of us who teach program evaluation, it can be an exciting prospect to enter the summer with a new textbook to consider for inclusion in our fall courses. I had the good fortune to review Evaluation in Today’s World: Respecting Diversity, Improving Quality, and Promoting Usability by Veronica Thomas and Patricia Campbell. It is an accessible, comprehensive, and provocative text that is appropriate for inclusion in a number of courses that are typically taught in a program evaluation certificate sequence or other graduate curricula. The book is organized into 16 chapters, each of which includes learning goals and are replete with helpful visuals, case studies, suggested text reflection and discussion activities, and commentaries from evaluation scholars. The first half of the book explores the context and foundations of social justice, cultural competence, and program evaluation, while the second half of the book presents specifics for how to conduct socially just evaluation. For helpful reference, the book also includes the American Evaluation Association’s Guiding Principles (AEA, 2018) and the Joint Committee on Standards for Educational Evaluation Program Evaluation Standards (Yarbrough et al., 2010), as well as a Glossary of all the bolded terms included in the chapters. In the book, the reader encounters what one would expect to see in standard textbooks in evaluation, including the historical evolution of the field, influential scholars, and an overview of types of evaluation. However, what makes this text particularly compelling is that typical evaluation topics are explicated through the lens of social justice. Indeed, the book’s title matches its intent. Evaluation in today’s world means thinking and doing evaluation in what is unquestionably a racialized society where grave inequities exist and undemocratic relationships persist among people. The authors situate social justice at the heart of evaluation and assert that “evaluators have an ethical obligation to eliminate, or at least mitigate, racial (and other) biases” in our work (p. 42). They acknowledge that evaluators cannot “solve the racism problem,” but entreat us to “at least elevate this harsh reality in the discourse on the eradication of social problems that derive from a national legacy of structural racism, exploitation, and bigotry,” and warn, “evaluations that ignore these factors obscure the impact of social forces on social problems” (p. 218).","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41316592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thinking Outside the Self-Report: Using Evaluation Plans to Assess Evaluation Capacity Building","authors":"L. Wingate, Kelly N. Robertson, Michael FitzGerald, Lana J. Rucks, Takara Tsuzaki, C. Clasen, J. Schwob","doi":"10.1177/10982140211062884","DOIUrl":"https://doi.org/10.1177/10982140211062884","url":null,"abstract":"In this study, we investigated the impact of the evaluation capacity building (ECB) efforts of an organization by examining the evaluation plans included in funding proposals over a 14-year period. Specifically, we sought to determine the degree to which and how evaluation plans in proposals to one National Science Foundation (NSF) program changed over time and the extent to which the organization dedicated to ECB in that program may have influenced those changes. Independent raters used rubrics to assess the presence of six essential evaluation plan elements. Statistically significant correlations indicate that proposal evaluation plans improved over time, with noticeable differences before and after ECB efforts were integrated into the program. The study adds to the limited literature on using artifacts of evaluation practice rather than self-reports to assess ECB impact.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44000010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Is and What Should Be Needs Assessment Scales: Factors Affecting the Trustworthiness of Results","authors":"J. Altschuld, H. Hung, Yi-Fang Lee","doi":"10.1177/10982140211017663","DOIUrl":"https://doi.org/10.1177/10982140211017663","url":null,"abstract":"Surveys are frequently employed in needs assessment to collect information about gaps (the needs) in what is and what should be conditions. Double-scale Likert-type instruments are routinely used for this purpose. Although in accord with the discrepancy definition of need, the quality of such measures is being questioned to the point of suggesting that the results are not to be trusted. Eight factors supporting that proposition are described with explanations of how they operate. Literature-based examples are provided for improving surveys with double scales especially as they relate to attenuating the effects of the factors. Lastly, lessons learned are offered with a call for more research into this issue in assessing needs.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42584516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Value-Added: A Feasible Method for Providing Evidence on the Effectiveness of Multiple Programs Implemented Simultaneously in Schools","authors":"R. Shand, Stephen M. Leach, Fiona M. Hollands, Florence Chang, Yilin Pan, B. Yan, D. Dossett, Samreen Nayyer-Qureshi, Yixin Wang, Laura Head","doi":"10.1177/10982140211071017","DOIUrl":"https://doi.org/10.1177/10982140211071017","url":null,"abstract":"We assessed whether an adaptation of value-added analysis (VAA) can provide evidence on the relative effectiveness of interventions implemented in a large school district. We analyzed two datasets, one documenting interventions received by underperforming students, and one documenting interventions received by students in schools benefiting from discretionary funds to invest in specific programs. Results from the former dataset identified several interventions that appear to be more or less effective than the average intervention. Results from the second dataset were counterintuitive. We conclude that, under specific conditions, program VAA can provide evidence to help guide district decision-makers to identify outlier interventions and inform decisions about scaling up or disinvesting in such interventions, with the caveat that if those conditions are not met, the results could be misleading.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44274992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Tentative Promise of Social Return on Investment","authors":"Kim M. Siegal","doi":"10.1177/10982140211072420","DOIUrl":"https://doi.org/10.1177/10982140211072420","url":null,"abstract":"Social return on investment (SROI), an evaluation method that compares monetized social value generated to costs invested, is in ascendance. Conceptually akin to cost–benefit analysis, it shares some of its challenges; however, these are heightened due to the expressed promise of using SROI to compare programs and inform philanthropic and public investment decisions. In this paper, I describe the landscape of SROI studies to date, including a review of a representative sample of SROI evaluations, which have been vetted by Social Value International. I also draw on the experience of an organization that has used SROI in earnest as a decision-making tool to provide an assessment of both the methods that underpin it and the ways in which it is applied. I conclude by offering some recommendations to consider to get the most value from this evaluation method while avoiding some potential pitfalls.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49604772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From the Interim Co-Editors: Thinking Inclusively and Strategically to Address the Complexity of Our World","authors":"J. Hall, Laura R. Peck","doi":"10.1177/10982140221111272","DOIUrl":"https://doi.org/10.1177/10982140221111272","url":null,"abstract":"We are excited to present the third issue of volume 43 of the American Journal of Evaluation ( AJE ). This is the fi rst issue that we have stewarded as Interim Co-Editors-in-Chief. This issue contains six articles and a Method Note. This issue also includes a section on economic evaluation with a note from the Section Editor, Brooks Bowden. While each article is distinct with its own content and methodological focus, as a collective, these articles give practical guidance on how evaluation practice can be more inclusive and strategically modi fi ed to address the complexity and social issues in our world. It is our aim to re fl ect as much of the diversity of the evaluation fi eld as possible in each issue; and we believe this issue offers something for most evaluation scholars and practitioners. The fi rst article in this issue is authored by Melvin M. Mark, former Editor of AJE. In his article, Mark argues for the necessity of planning for change as program modi fi cations will inevitably occur. Recognizing not all program changes can be predetermined, he suggests that evaluators can work with stakeholders to make informed decisions about possible adaptions. Building on these, and related arguments, he reviews various forms of program modi fi cations and then offers a range of options for how evaluators can plan for such modi fi cations; or, a priori planning for potential adap-tions . Mark outlines the general steps for a priori planning, providing concrete examples of how evaluators can incorporate these steps into their practice. The practical questions included in this piece will prove helpful for evaluators, along with stakeholders, to generate ideas for possible program adaptations.Inthesecond article, Jennifer J. Esala, Liz Sweitzer, Craig Higson-Smith, and Kirsten L. Anderson discuss human rights issues in the context of advocacy evaluation in the Global South. These authors highlight a number of urgent issues not adequately covered in the literature on advocacy evaluation in the Global South. Evaluators and others interested in advocacy evaluation in Global South contexts will fi nd this piece particularly informative because it provides a literature review focused on how work","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46425650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}