{"title":"Decolonizing Science: Undoing the Colonial and Racist Hegemony of Western Science","authors":"M. Held","doi":"10.56645/jmde.v19i44.785","DOIUrl":"https://doi.org/10.56645/jmde.v19i44.785","url":null,"abstract":"Decolonization is the complicated and unsettling undoing of colonization. In a similarly simplified definition, science is a structured way of pursuing knowledge. To decolonize science thus means to undo the past and present racist and colonial hegemony of Western science over other, equally legitimate, ways of knowing. This paper discusses the paradigmatic prerequisites and consequences of decolonizing Western science. Only if Western science is toppled from its pedestal and understood as cultural can it engage with other sciences on an equal footing. Such equal collaboration, resulting in the co-creation of new knowledge based on the scientific method and Indigenous scientific inquiry, is what decolonizing science is all about. What it looks like in practice is highly variable; there is no one-size-fits-all approach because Indigenous knowledge is rooted in the local, in the land. Decolonizing science is therefore much more a path than a destination. This path, however, will also pave the way to a new multiparadigmatic space. A quick look into the history and philosophy of science reveals that new paradigms have always emerged after a few trailblazers began engaging in a new way of doing science.","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44569185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concluding Thoughts and Reflections on the Special Issue on Program Evaluation Standards","authors":"Arthur Hernandez","doi":"10.56645/jmde.v19i43.855","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.855","url":null,"abstract":"This paper addresses the relevance of the current edition of the Program Evaluation Standards published by the Joint Committee on Standards for Educational Evaluation, including concerns related to the nature of the standards, their current applicability to practice, and their comprehensiveness and completeness. The presented conclusions are that the standards are applicable, relevant, complete, and comprehensive.","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47657821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Best Tradition\": CREATE, JCSEE and the Program Evaluation Standards","authors":"Corrie Klinger, D. Klinger","doi":"10.56645/jmde.v19i43.833","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.833","url":null,"abstract":"Background: Evaluation “is a task in the best tradition of the most abstract theoretical science as well as the most practical applied science” (Scriven, 1968, p. 9). The Program Evaluation Standards of the Joint Committee on Standards for Educational Evaluation (JCSEE) operationalize the theoretical aspects of evaluation and, when used, facilitate sound evaluation methods in applied settings. Between the publications of the first and second editions of The Program Evaluation Standards, the Center for Research on Educational Accountability and Teacher Evaluation (CREATE) was established at Western Michigan University and funded from 1990 to 1995 with $5.2 million in federal monies from the United States Department of Education, Office of Educational Research and Improvement (OERI). CREATE was established for the betterment of evaluation within the educational context (Stufflebeam, 1991; Stufflebeam & Shinkfield, 1994). CREATE’s mandate and subsequent mission furthered the work of the Program Evaluation Standards and the JCSEE by using the standards in applied settings. Keeping to Scriven’s notion of evaluation as the best tradition, the collaborative work between CREATE and the JCSEE is a well-established tradition that furthers the development of the theoretical aspects of evaluation and the application of the evaluation standards.\nPurpose: Examine CREATE’s impact on the Program Evaluation Standards’ theoretical development and applied use.\nSetting: Not applicable.\nIntervention: Not applicable.\nResearch Design: Not applicable.\nData Collection and Analysis: Systematic review of the theoretical development and applied use of the Program Evaluation Standards in the books, journal articles, monographs, special papers, meeting minutes, conference programs, and presentations associated with CREATE.\nFindings: CREATE has contributed to the operationalization of the theoretical aspects of evaluation with the Program Evaluation Standards and facilitated their use in applied settings. CREATE has also furthered the work of the Personnel Evaluation Standards and the Classroom Assessment Standards (formerly the Student Evaluation Standards). Leading scholars from CREATE and the JCSEE have contributed to the standards since the 1990s. Members of CREATE have published a notable range of books, journal articles, monographs, special papers, and conference presentations related to the Program Evaluation Standards. Organizational capacity and the shared goals of the JCSEE and CREATE guided the practical application and theoretical development of the Program Evaluation Standards.\nKeywords: Program Evaluation Standards; Consortium for Research on Educational Assessment and Teaching Effectiveness; Joint Committee on Standards for Educational Evaluation","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":"45 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41314667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Evaluation Standards in Evaluation Scholarship and Practice","authors":"Brad R. Watts, R. Castillo, John Akwetey, Dung Pham","doi":"10.56645/jmde.v19i43.825","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.825","url":null,"abstract":"Background: The Program Evaluation Standards developed and approved by the Joint Committee on Standards for Educational Evaluation have served as a resource to the broader evaluation field for over four decades. However, little evidence has been collected regarding the extent to which the standards have influenced the field through scholarship or professional practice.\nPurpose: This study seeks to estimate the prevalence of the Program Evaluation Standards in evaluation scholarship and professional practice.\nSetting: Not applicable.\nIntervention: Not applicable.\nResearch Design: The study combines a systematic review of evaluation literature with a survey of American Evaluation Association (AEA) and Canadian Evaluation Society (CES) members.\nData Collection and Analysis: A systematic review of articles published in 14 evaluation-specific journals from 2010 to 2020 was conducted to identify and typify articles citing the standards. Additionally, AEA and CES members were surveyed, with a focus on knowledge and use of the standards. Descriptive analyses are presented to quantify the prevalence of the standards in evaluation scholarship and practice, respectively.\nFindings: The systematic review revealed that 4.48% of the 4,460 articles published in the 14 journals from 2010 to 2020 contained some use of the standards. Survey results show that 53.14% of AEA members and 67.12% of CES members are familiar with the standards and that, among those with knowledge of the standards, most AEA (67.67%) and CES (71.74%) members use them at least “occasionally” in their professional work, education, and scholarship activities.\nKeywords: program evaluation standards; Joint Committee on Standards for Educational Evaluation; American Evaluation Association; Canadian Evaluation Society; systematic review; research on evaluation","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45372768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What do we know about how the Program Evaluation Standards are used in public health?","authors":"Goldie MacDonald, Kim Castelin, Naje' George, Asmith Joseph","doi":"10.56645/jmde.v19i43.847","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.847","url":null,"abstract":"Background: Released by the Centers for Disease Control and Prevention (CDC) in 1999, Framework for Program Evaluation in Public Health prominently features the program evaluation standards. The program evaluation standards (PES) include 30 statements in five domains: utility, feasibility, propriety, accuracy, and evaluation accountability. Despite decades of attention to the PES among framework users and others, how public health professionals apply these standards in their work is not well understood.\nPurpose: The study sought to identify notable commonalities in how the PES are used in public health.\nSetting: Application of the PES in evaluative work in public health and allied fields.\nIntervention: Not applicable.\nResearch Design: The study included a search of subscription and nonsubscription sources to identify documents with explicit content concerning use of the standards in evaluative work in public health. Documents identified were screened using predetermined criteria to include or exclude each item in the study. Items included were reviewed and coded using codes developed before examining all documents. For each code, reviewers discussed data from all documents to identify commonalities and variations in application of the standards.\nFindings: The literature search returned 405 documents to be screened (179 from subscription and 226 from nonsubscription sources). Thirty-eight items were included in the study based on initial screening (11 from subscription and 27 from nonsubscription sources). The study revealed that authors discussed the standards as a regular component of evaluation work, but precisely how the standards were used was not always explained in detail. Also, authors did not always discuss standards statements but sometimes focused solely on general domains (e.g., feasibility or accuracy). When authors discussed specific statements, they were more descriptive in how they applied the PES than authors of articles that focused on general domains. Overall, authors placed far greater emphasis on Accuracy and Utility standards than on Propriety, Evaluation Accountability, or Feasibility. In many cases, authors used the PES in combination with other resources (e.g., checklists, guidelines, or other standards). Although program evaluation is crucial to public health practice, the mechanics of how professionals consider, integrate, or use evaluation standards are not fully understood.\nKeywords: program evaluation; program evaluation standards; public health","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41575741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introduction to the Special Issue on the Program Evaluation Standards","authors":"Brad R. Watts, M. Steinberg","doi":"10.56645/jmde.v19i43.849","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.849","url":null,"abstract":"This one-page paper introduces the special issue devoted to the Program Evaluation Standards, providing a brief summary of the purpose of the special issue and an overview of the individual papers.","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42782148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Evaluation Standards for Utility Facilitate Stakeholder Internalization of Evaluative Thinking in the West Virginia Clinical Translational Science Institute.","authors":"Reagan Curtis, Abhik Roy, Nikki Lewis, Evana Nusrat Dooty, Taylor Mikalik","doi":"10.56645/jmde.v19i43.831","DOIUrl":"10.56645/jmde.v19i43.831","url":null,"abstract":"<p><strong>Background: </strong>The program evaluation standards (PES) can be considered established criteria for high-quality evaluations. We emphasize the PES Utility standards and evaluation capacity building as we strive for meaningful application of our work in the real world.</p><p><strong>Purpose: </strong>We focused our methodology on understanding how stakeholders discussed utility and how their perceptions of our evaluation work aligned with the Utility domain of the program evaluation standards.</p><p><strong>Setting: </strong>The West Virginia Clinical Translational Science Institute (WVCTSI), a statewide multi-institutional entity for which we have conducted tracking and evaluation since 2012.</p><p><strong>Intervention: </strong>Sustained collaborative engagement of evaluation stakeholders with the goal of increasing their utilization of evaluation products and evaluative thinking.</p><p><strong>Research design: </strong>Case study.</p><p><strong>Data collection and analysis: </strong>We interviewed five key stakeholders. We used themes developed from coding of interview data to inform document analyses. We used interview and document analyses to develop additional themes and illustrative examples, as well as to develop and describe a five-level evaluation uptake scale.</p><p><strong>Findings: </strong>We describe shifts in initiation, use, and internalization of evaluative thinking by non-evaluation personnel. These shifts prompted our development and application of an evaluation uptake scale to capture increased evaluation capacity among stakeholders over time. We discuss how a focus on the PES Utility standards and evaluation capacity building facilitated these shifts, and their implications for maximizing the utility of evaluation activity in large, complex programmatic evaluations.</p>","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":"49-65"},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10936652/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46193609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Dissemination Research Approaches to Understand the Awareness, Adoption, and Use of The Program Evaluation Standards","authors":"Julie Q. Morrison, Kay M. Cunningham","doi":"10.56645/jmde.v19i43.835","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.835","url":null,"abstract":"Background: The adoption and use of effective, legally defensible, and ethically sound practices relies on the successful dissemination of evidence-based practices and professional standards. The field of program evaluation has standards, competencies, and principles, yet little is known about how these are utilized by education-focused program evaluators.\nPurpose: The purpose of this study is to examine the dissemination and use of the program evaluation standards established by the Joint Committee on Standards for Educational Evaluation, relative to the dissemination and use of the American Evaluation Association’s (AEA’s) guiding principles and AEA’s evaluator competencies.\nSetting: The SIGnetwork, a network of evaluators of State Personnel Development Grants (SPDGs) funded by the U.S. Department of Education, Office for Special Education Programs (OSEP).\nIntervention: Not applicable.\nResearch Design: Descriptive research.\nData Collection and Analysis: Data collection involved administering an online survey to members designated as evaluators in the SIGnetwork directory. Descriptive statistics were used to summarize the data collected via the online survey.\nFindings: Using the formative audience research approach to understanding dissemination, the results of the study support previous findings that awareness of the standards was inconsistent among a sample of AEA members. Respondents self-reported low to moderate levels of familiarity with The Program Evaluation Standards and the other two guidance documents: Guiding Principles for Evaluators and AEA Evaluator Competencies. Using the audience segmentation research approach to understanding dissemination, the results of this study indicate that participants who were AEA members were more likely than those who were not members of AEA to report being familiar with the standards and to have earned an advanced degree related to their role as an evaluator.\nKeywords: Joint Committee on Standards for Educational Evaluation; American Evaluation Association; program evaluation standards","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47137304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aspirations, Challenges, and Influence of the Program Evaluation Standards 3rd Edition","authors":"Paula E. Egelson","doi":"10.56645/jmde.v19i43.839","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.839","url":null,"abstract":"Background: The revisions to The Program Evaluation Standards, third edition (2011), were substantial, and the authors of this revision were the individuals best positioned to answer questions about the PES.\nPurpose: To better understand the historical roots and intent of The Program Evaluation Standards, to analyze impact, and to look toward the future, it was critical to examine the aspirations, challenges, and impact associated with the PES via interviews with the authors.\nSetting: Not applicable.\nIntervention: Not applicable.\nResearch Design: This qualitative study included three basic interview questions, with supporting follow-up questions for each of the basic questions.\nData Collection and Analysis: The results of this study were drawn from the authors’ responses to the interview questions, which were coded for emergent patterns and themes.\nFindings: The interviews found that the authors aspired to provide relevant, up-to-date, and useful standards to guide evaluators, stakeholders, and students. They were able to successfully resolve the challenges associated with the third-edition revisions, and resolving these challenges ultimately made the edition stronger. Finally, the authors integrated the standards into their professional work, which positively influenced them, students, and other stakeholders.\nKeywords: program evaluation standards; interviews; aspirations; impact; challenges","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43187818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How should Program Evaluation Standards inform the use of cost-benefit analysis in evaluation?","authors":"J. King","doi":"10.56645/jmde.v19i43.829","DOIUrl":"https://doi.org/10.56645/jmde.v19i43.829","url":null,"abstract":"Background: Cost-benefit analysis (CBA), like any other evaluation method, should be used in ways that uphold program evaluation standards and should be subjected to metaevaluation. In contrast to the broad remit of program evaluation standards, guidelines for economic evaluation focus mainly on technical aspects of evaluation quality, aimed at ensuring precision, accuracy, and reliability. Can CBA be conducted in adherence both to program evaluation standards and to its own methodological principles, or are there areas where expectations conflict?\nPurpose: Assess the potential for CBA to be conducted in keeping with the Program Evaluation Standards (PES) of the Joint Committee on Standards for Educational Evaluation.\nSetting: The analysis applies to any setting in which CBA is being considered as an evaluation method.\nIntervention: Not applicable.\nResearch Design: Methodological principles underpinning CBA were systematically assessed against the PES to determine the extent to which CBA can be conducted in a manner aligned with these standards. CBA was rated according to whether it can follow each standard in principle, not the extent to which economists follow a given standard in practice.\nData Collection and Analysis: This assessment was undertaken from a theoretical perspective, through analysis of relevant literature. The ratings are evaluative; they represent the judgments of the author, made on the basis of explicit definitions.\nFindings: Some ethical principles espoused in the PES are also required in CBA. On the other hand, some of the PES are not explicit requirements in CBA, though they could be applied by evaluators or economists when conducting a CBA. However, some PES logically cannot be met by CBA used as a stand-alone method; all PES can theoretically be met when an evaluation combines CBA with other methods. To use CBA in adherence to the PES, evaluators and economists must take an explicit interest in the effects of their analysis on people’s lives. This has significant implications for the way CBA should be used, including the nature and extent of stakeholder involvement, the potential use of CBA in conjunction with other methods, and decisions about when not to use CBA. As with any evaluation method, deliberation is necessary over whether, when, and how to use CBA.\nKeywords: cost-benefit analysis; program evaluation standards; metaevaluation","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41349929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}