An Evaluation Framework of a Transdisciplinary Collaborative Center for Health Equity Research
Roni M. Ellington, Clara B. Barajas, A. Drahota, C. Meghea, Heatherlun S. Uphold, Jamil B. Scott, E. Lewis, C. Furr-Holden
American Journal of Evaluation, 43(1), 357–377. Published online January 11, 2022. DOI: 10.1177/1098214021991923

Abstract: Over the last few decades, the number of large, federally funded transdisciplinary programs and initiatives has grown. Scholars have identified a need for frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess their operations, community and academic partner collaboration, scientific and community research dissemination, and cost-effectiveness. Research on methodologies and frameworks for evaluating such initiatives, however, remains limited. This study presents a framework for evaluating the Flint Center for Health Equity Solutions (FCHES), a Transdisciplinary Collaborative Center (TCC) for health disparities research funded by the National Institute on Minority Health and Health Disparities (NIMHD). The report summarizes the FCHES evaluation framework and evaluation questions, along with findings from the Year-2 evaluation of the Center and lessons learned.
{"title":"The Counterfactual Definition of a Program Effect","authors":"C. S. Reichardt","doi":"10.1177/1098214020975485","DOIUrl":"https://doi.org/10.1177/1098214020975485","url":null,"abstract":"Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between what happened after the program was implemented and what would have happened if the program had not been implemented, but everything else had been the same. Such a definition is often said to be linked to the use of quantitative methods. But the definition can be used just as effectively with qualitative methods. To demonstrate its broad applicability in both qualitative and quantitative research, I show how the counterfactual definition undergirds seven common approaches to assessing effects. It is not clear how any alternative to the counterfactual definition is as generally applicable as the counterfactual definition.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"158 - 174"},"PeriodicalIF":1.7,"publicationDate":"2022-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43178433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sample Selection in Randomized Trials With Multiple Target Populations","authors":"Elizabeth Tipton","doi":"10.1177/1098214020927787","DOIUrl":"https://doi.org/10.1177/1098214020927787","url":null,"abstract":"Practitioners and policymakers often want estimates of the effect of an intervention for their local community, e.g., region, state, county. In the ideal, these multiple population average treatment effect (ATE) estimates will be considered in the design of a single randomized trial. Methods for sample selection for generalizing the sample ATE to date, however, focus only on the case of a single target population. In this paper, I provide a framework for sample selection in the multiple population case, including three compromise allocations. I situate the methods in an example and conclude with a discussion of the implications for the design of randomized evaluations more generally.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"70 - 89"},"PeriodicalIF":1.7,"publicationDate":"2022-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44663900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pathways and Structures: Evaluating Systems Changes in an NSF INCLUDES Alliance
C. Howley, Johnavae Campbell, Kimberly S. Cowley, Kimberly Cook
American Journal of Evaluation, 43(1), 632–646. Published online January 5, 2022. DOI: 10.1177/10982140211041606

Abstract: In this article, we reflect on our experience applying a framework for evaluating systems change to an evaluation of a statewide West Virginia alliance funded by the National Science Foundation (NSF) to improve the early persistence of rural, first-generation, and other underrepresented minority science, technology, engineering, and mathematics (STEM) students in their programs of study. We begin with a description of the project and then discuss the two pillars around which we built our evaluation. Next, we present the challenge we confronted, despite the utility of those two pillars, in identifying and analyzing systems change, as well as the literature we consulted as we considered how to address this difficulty. Finally, we describe the framework we applied and examine how it helped us and where we still faced quandaries. Ultimately, this reflection serves two key purposes: (1) to consider a few of the challenges of measuring changes in systems, and (2) to discuss our experience applying one framework to address these issues.
{"title":"Extending Evaluation Capacity Building Theory to Improvement Science Networks","authors":"Kristen Rohanna","doi":"10.1177/1098214020963189","DOIUrl":"https://doi.org/10.1177/1098214020963189","url":null,"abstract":"Evaluation practices are continuing to evolve, particularly in those areas related to formative, participatory, and improvement approaches. Improvement science is one of the evaluative practices. Its strength is that it seeks to embrace stakeholders’ and frontline workers’ knowledge and experience, who are often tasked with leading improvement activities in their organizations. However, very little guidance exists on how to develop crucial improvement capacity. Evaluation capacity building literature has the potential to fill this gap. This multiple methods case study follows a networked improvement community’s first year in a public education setting as network leaders sought to build capacity by incorporating Preskill and Boyle’s multidisciplinary model as its guiding framework. The purpose of this study was to better understand how to build improvement science capacity, along with what facilitates implementation and beneficial learnings. This article ends by reconceptualizing and extending Preskill and Boyle’s model to improvement science networks.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"46 - 65"},"PeriodicalIF":1.7,"publicationDate":"2021-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44111882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conceptualizing and Engaging in Reflective Practice: Experienced Evaluators’ Perspectives","authors":"Tiffany L. S. Tovey, Gary J. Skolits","doi":"10.1177/1098214020983926","DOIUrl":"https://doi.org/10.1177/1098214020983926","url":null,"abstract":"The purpose of this study was to determine professional evaluators’ perceptions of reflective practice (RP) and the extent and manner in which they engage in RP behaviors. Nineteen evaluators with 10 or more years of experience in the evaluation field were interviewed to explore our understanding and practice of RP in evaluation. Findings suggest that RP is a process of self and contextual awareness, involving thinking and questioning, and individual and group meaning-making, focused on facilitating growth in the form of learning and improvement. The roles of individual and collaborative reflection as well as reflection in- and on-action are also discussed. Findings support a call for the further refinement of our understanding of RP in evaluation practice. Evaluators seeking to be better reflective practitioners should be competent in skills such as facilitation and interpersonal skills, as well as budget needed time for RP in evaluation accordingly.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"5 - 25"},"PeriodicalIF":1.7,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47861524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acknowledgment of Reviewers","authors":"J. Acree","doi":"10.1177/10982140211058700","DOIUrl":"https://doi.org/10.1177/10982140211058700","url":null,"abstract":"The following individuals gave generously of their time and expertise to serve as reviewers for one or more manuscripts between August 1, 2019, and July 31, 2020. The editorial team of the American Journal of Evaluation thanks these colleagues for their service to the journal, the field of evaluation, and to the authors whose work has benefited from their feedback. Please note that in some cases, reviewers’ institutional affiliations may have changed.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"42 1","pages":"606 - 611"},"PeriodicalIF":1.7,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43381476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Use of Evaluability Assessments in Improving Future Evaluations: A Scoping Review of 10 Years of Literature (2008–2018)","authors":"Steven Lam, K. Skinner","doi":"10.1177/1098214020936769","DOIUrl":"https://doi.org/10.1177/1098214020936769","url":null,"abstract":"Since the beginning of the 21st century, evaluability assessments have experienced a resurgence of interest. However, little is known about how evaluability assessments have been used to improve future evaluations. In this article, we identify characteristics, challenges, and opportunities of evaluability assessments based on a scoping review of case studies published since 2008 (n = 59). We find that evaluability assessments are increasingly used for program development and evaluation planning. Several challenges are identified: politics of evaluability; ambiguity between evaluability and evaluation, and limited considerations of gender equity and human rights. To ensure relevance, evaluability approaches must evolve in alignment with the fast-changing environment. Recommended efforts to revitalize evaluability assessment practice include the following: engaging stakeholders; clarifying what evaluability assessments entail; assessing program understandings, plausibility, and practicality; and considering cross-cutting themes. This review provides an evidence base of practical applications of evaluability assessments to support future evaluability studies and, by extension, future evaluations.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"42 1","pages":"523 - 540"},"PeriodicalIF":1.7,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47106366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Method for Scoring Dose of Multicomponent Interventions: A Building Block for Future Evaluations
S. Hewawitharana, Janice Kao, C. Rider, Evan Talmage, S. Costello, K. Webb, Wendi Gosliner, G. Woodward-Lopez
American Journal of Evaluation, 43(1), 193–213. Published online October 21, 2021. DOI: 10.1177/1098214020962223

Abstract: Schools are a critical setting for improving child nutrition and food security and preventing obesity in the United States. The U.S. Department of Agriculture mandates that the Supplemental Nutrition Assistance Program–Education, known as CalFresh Healthy Living (CFHL) in California, implement obesity prevention efforts that utilize multicomponent policy, systems, and environmental change interventions supplemented with direct and indirect education. However, evaluation of these complex interventions has proven challenging due to a lack of established evaluation methods, particularly for comprehensively measuring the dose of multicomponent interventions. This article proposes and demonstrates a method for scoring the dose of California Department of Public Health-funded CFHL school interventions received by children attending public schools, using administrative data collected by CFHL in California.
{"title":"There’s So Much to Do and Not Enough Time to Do It! A Case for Sentiment Analysis to Derive Meaning From Open Text Using Student Reflections of Engineering Activities","authors":"Abhik Roy, Karen E. Rambo‐Hernandez","doi":"10.1177/1098214020962576","DOIUrl":"https://doi.org/10.1177/1098214020962576","url":null,"abstract":"Evaluators often find themselves in situations where resources to conduct thorough evaluations are limited. In this paper, we present a familiar instance where there is an overwhelming amount of open text to be analyzed under the constraints of time and personnel. In instances when timely feedback is important, the data are plentiful, and answers to the study questions carry lower consequences, we build a case for using a machine learning, in particular a sentiment analysis. We begin by explaining the rationale for the use of sentiment analysis and provide an introduction to this method. Next, we provide an example of a sentiment analysis leveraging data collected from a program evaluation of an engineering education intervention, specifically to text extracted from student reflections of course activities. Finally, limitations of sentiment analysis and related techniques are discussed as well as areas for future research.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"42 1","pages":"559 - 576"},"PeriodicalIF":1.7,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45871179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}