{"title":"What have I done for the cause today? Evaluators, evaluation, and CREA","authors":"L. Neubauer, A. Boyce, Nicole R. Bowman","doi":"10.1002/ev.20580","DOIUrl":"https://doi.org/10.1002/ev.20580","url":null,"abstract":"This article considers the volume's intentions, process, and gaps while inviting attention to the impact and influence of CREA and Dr. Hood on evaluators and evaluation. Three of the co‐editors author this conclusion by revisiting the key questions and drawing connections between the positions and statements advocated by the contributing authors. The final editor offers final thoughts in the epilogue. CRE is a community, grassroots, and international movement; readers are invited to consider how integrity, courage, and action exist in their everyday evaluation practice.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":" 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139789694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What have I done for the cause today? Evaluators, evaluation, and CREA","authors":"L. Neubauer, A. Boyce, Nicole R. Bowman","doi":"10.1002/ev.20580","DOIUrl":"https://doi.org/10.1002/ev.20580","url":null,"abstract":"This article considers the volume's intentions, process, and gaps while inviting attention to the impact and influence of CREA and Dr. Hood on evaluators and evaluation. Three of the co‐editors author this conclusion by revisiting the key questions and drawing connections between the positions and statements advocated by the contributing authors. The final editor offers final thoughts in the epilogue. CRE is a community, grassroots, and international movement; readers are invited to consider how integrity, courage, and action exist in their everyday evaluation practice.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"52 3-4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139849298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconceptualizing evaluation and assessment from a culturally responsive standpoint – An Irish perspective","authors":"J. O’Hara, G. McNamara, Martin Brown, Shivaun O’Brien, Denise Burns, Sarah Gardezi","doi":"10.1002/ev.20569","DOIUrl":"https://doi.org/10.1002/ev.20569","url":null,"abstract":"This article explores the impact that Professor Stafford Hood had on the development of culturally responsive evaluation and assessment (CRE/A) in Ireland. Starting with a brief outline of the demographic and cultural changes that have happened in Ireland since the mid‐1990s, the article discusses the initial encounters with Professor Hood and his introduction of the theories, practice and praxis of CRE/A to a group of Irish scholars. This engagement was formalized by the establishment of the CREA‐Dublin, hosted in Dublin City University. The article examines how CREA‐Dublin has used the culturally responsive lens to critique evaluation, assessment, and quality assurance practices within Ireland and across the European Union (EU). Outlining the impact of several major EU funded projects as well as locally initiated research, the article concludes by highlighting the centrality of Professor Hood as a scholar and an individual to the transformation of research and practice in the fields of evaluation and assessment on the island of Ireland and beyond.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":" 81","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139789174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Disrupting evaluation? Emerging technologies and their implications for the evaluation industry","authors":"Steffen Bohni Nielsen","doi":"10.1002/ev.20558","DOIUrl":"https://doi.org/10.1002/ev.20558","url":null,"abstract":"Abstract This article surveyed different emerging technologies (ET), in particular artificial intelligence, and their burgeoning application in the evaluation industry. Evidence suggests that evaluators have been relatively slow in adopting ET in their practice. However, more recent data suggest that ET adoption is increasing. This article then analyzed if, and how, ET affect the evaluation industry and evaluation practice. The article finds that program evaluation is one of several competing forms of knowledge production informing decision‐making, particularly in the government and not‐for‐profit sectors. Therefore, evaluation faces a number of challenges stemming from ET. In this article, it is argued that evaluators must, albeit critically, embrace ET. Most likely, ET will complement evaluation practice and, in some instances, displace human tasks.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135195254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A special delivery by a fork: Where does artificial intelligence come from?","authors":"Izzy Thornton","doi":"10.1002/ev.20560","DOIUrl":"https://doi.org/10.1002/ev.20560","url":null,"abstract":"Abstract In this article, I discuss the use of artificial intelligence (AI) in evaluation and its relevance to the evolution of the field. I begin with a background on how AI models are developed, including how machine learning makes sense of data and how the algorithms it develops go on to power AI models. I go on to explain how this foundational understanding of machine learning and natural language processing informs where AI might and might not be effectively used. A critical concern is that AI models are only as strong as the data on which they are trained, and evaluators should consider important limitations when using AI, including its relevance to structural inequality. In considering the relationship between AI and evaluation, evaluators must consider both AI's use as an evaluative tool and its role as a new subject of evaluation. As AI becomes more and more relevant to a wider array of fields and disciplines, evaluators will need to develop strategies for how good the AI is (or is not), and what good the AI might (or might not) do.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136261480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editors’ notes","authors":"Sarah Mason, Bianca Montrosse‐Moorhead","doi":"10.1002/ev.20563","DOIUrl":"https://doi.org/10.1002/ev.20563","url":null,"abstract":"New Directions for EvaluationVolume 2023, Issue 178-179 p. 7-10 EDITORIAL Editors’ notes Sarah Mason, Sarah Mason University of Mississippi, Oxford, Mississippi, USASearch for more papers by this authorBianca Montrosse-Moorhead, Corresponding Author Bianca Montrosse-Moorhead [email protected] orcid.org/0000-0001-8566-0347 University of Connecticut, Storrs, Connecticut, USA Correspondence Bianca Montrosse-Moorhead, University of Connecticut, Storrs, Connecticut, USA. Email: [email protected]Search for more papers by this author Sarah Mason, Sarah Mason University of Mississippi, Oxford, Mississippi, USASearch for more papers by this authorBianca Montrosse-Moorhead, Corresponding Author Bianca Montrosse-Moorhead [email protected] orcid.org/0000-0001-8566-0347 University of Connecticut, Storrs, Connecticut, USA Correspondence Bianca Montrosse-Moorhead, University of Connecticut, Storrs, Connecticut, USA. Email: [email protected]Search for more papers by this author First published: 03 November 2023 https://doi.org/10.1002/ev.20563Read the full textAboutPDF ToolsRequest permissionExport citationAdd to favoritesTrack citation ShareShare Give accessShare full text accessShare full-text accessPlease review our Terms and Conditions of Use and check box below to share full-text version of article.I have read and accept the Wiley Online Library Terms and Conditions of UseShareable LinkUse the link below to share a full-text version of this article with your friends and colleagues. Learn more.Copy URL Share a linkShare onEmailFacebookTwitterLinkedInRedditWechat REFERENCES Leeuw, F. L. (2020). Program evaluation B: Evaluation, big data, and artificial intelligence: Two sides of one coin. In E. Vigoda-Gadot & D. R. Vashdi (Eds.), Handbook of research methods in public administration, management and policy (pp. 277–297). Elgar. Mason, S. (2023). Finding a safe zone in the highlands: Exploring evaluator competencies in the world of AI. New Directions for Evaluation, 2023(178–179), 11–22. OpenAI. (2022). ChatGPT (November 2022 version) [Large language model]. Retrieved from http://chat.openai.com/chat Teasdale, R. M. (2021). Evaluative criteria: An integrated model of domains and sources. American Journal of Evaluation, 42(3), 354–376. https://doi.org/10.1177/1098214020955226 Teasdale, R. M. (2022). Representing the values of program participants: Endogenous evaluative criteria. Evaluation and Program Planning, 94, 102123. https://doi.org/10.1016/j.evalprogplan.2022.102123 Teasdale, R., Strasser, M., Moore, C., & Graham, K. (2023). Evaluative criteria in practice: Findings from an analysis of evaluations published in evaluation and program planning. Evaluation and Program Planning, 97, 102226. https://doi.org/10.1016/j.evalprogplan.2023.102226 Thornton, I. (2023). A special delivery by a fork: Where does Artificial Intelligence come from? New Directions for Evaluation, 2023(178–179), 23–32. 
Volume2023, Issue178-179Special Issue: Evaluation and Artificial Intelli","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135195068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vision for an equitable AI world: The role of evaluation and evaluators to incite change","authors":"Aileen M. Reid","doi":"10.1002/ev.20559","DOIUrl":"https://doi.org/10.1002/ev.20559","url":null,"abstract":"Abstract The advent of generative AI such as ChatGPT has propelled the field of evaluation into conversations about the use of AI in the field and the ethics of knowledge generation. While there are many benefits of AI, as with any new technology there can be collateral damage. The discourse about AI and evaluation provides another opportunity to center equity in our work as evaluators by asking, how can evaluation contribute to the public good in an AI world? This article highlights contextual concerns with AI from an ecosystem perspective, placing emphasis on structural and racial/ethnic inequities, bias, and prejudice. The author issues a clarion call for the field of evaluation to act collectively to incite change by being proactive, embracing our professional responsibility and critical voice, and employing evidence‐based practice. Evaluators are encouraged to exercise our social and political responsibility through courageous leadership and advocacy to attend to the values of stakeholders and advance an equitable AI world.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135219666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence and the future of evaluation education: Possibilities and prototypes","authors":"Zach Tilton, John M. LaVelle, Tian Ford, Maria Montenegro","doi":"10.1002/ev.20564","DOIUrl":"https://doi.org/10.1002/ev.20564","url":null,"abstract":"Abstract Advancements in Artificial Intelligence (AI) signal a paradigmatic shift with the potential for transforming many various aspects of society, including evaluation education, with implications for subsequent evaluation practice. This article explores the potential implications of AI for evaluator and evaluation education. Specifically, the article discusses key issues in evaluation education including equitable language access to evaluation education, navigating program, social science, and evaluation theory, understanding evaluation theorists and their philosophies, and case studies and simulations. The paper then considers how chatbots might address these issues, and documents efforts to prototype chatbots for three use cases in evaluation education, including a guidance counselor, teaching assistant, and mentor chatbot for young and emerging evaluations or anyone who wants to use it. The paper concludes with ruminations on additional research and activities on evaluation education topics such as how to best integrate evaluation literacy training into existing programs, making strategic linkages for practitioners, and evaluation educators.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135219667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence and validity","authors":"Tarek Azzam","doi":"10.1002/ev.20565","DOIUrl":"https://doi.org/10.1002/ev.20565","url":null,"abstract":"Abstract This article explores the interaction between artificial intelligence (AI) and validity and identifies areas where AI can help build validity arguments, and where AI might not be ready to contribute to our work in establishing validity. The validity of claims made in an evaluation is critical to the field, since it highlights the strengths and limitations of findings and can contribute to the utilization of the evaluation. Within this article, validity will be discussed within two broad categories: quantitative validity and qualitative trustworthiness. Within these categories, there are multiple types of validity, including internal validity, measurement validity, establishing trustworthiness, and credibility, to name a few. Each validity type will be discussed within the context of AI, examining if and how AI can be leveraged (or not) to help establish a specific validity type, or where it might not be possible for AI (in its current form) to contribute to the development of a validity argument. Multiple examples will be provided throughout the article to highlight the concepts introduced.","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135195069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hacking by the prompt: Innovative ways to utilize ChatGPT for evaluators","authors":"Silva Ferretti","doi":"10.1002/ev.20557","DOIUrl":"https://doi.org/10.1002/ev.20557","url":null,"abstract":"Abstract “Hacking by the prompt”—writing simple yet creative conversational instructions in ChatGPT's message window—revealed many valuable additions to the evaluator's toolbox for all stages of the evaluation process. This includes the production of terms of reference and proposals for the dissemination of final reports. ChatGPT does not come with an instruction book, so evaluators must experiment creatively to understand its potential. The surprising performance of ChatGPT leads to the question: will it eventually substitute for evaluators? By describing ChatGPT through four personality characteristics (pedantic, “I know it all,” meek, and “speech virtuoso”), this article provides case examples of the potential and pitfall of ChatGPT in transforming evaluation practice. Anthropomorphizing ChatGPT is debatable, but the result is clear: tongue‐in‐cheek personality characteristics helped hack ChatGPT more creatively while remaining aware of its challenges. This article combines practical ideas with deeper reflection on evaluation. It concludes that ChatGPT can substitute for evaluators when evaluations mostly focus on paperwork and conventional approaches “by the book” (an unfortunate trend in the sector). ChatGPT cannot substitute engagement with reality and critical thinking. Will ChatGPT then be a stimulus to rediscover the humanity and the reality we lost in processes?","PeriodicalId":35250,"journal":{"name":"New Directions for Evaluation","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135195223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}