{"title":"Site Visit Standards Revisited: A Framework for Implementation","authors":"Rachael R. Kenney, L. Haverhals, Krysttel C Stryczek, Kelty B. Fehling, Sherry L Ball","doi":"10.1177/10982140221079266","DOIUrl":"https://doi.org/10.1177/10982140221079266","url":null,"abstract":"Site visits are common in evaluation plans but there is a dearth of guidance about how to conduct them. This paper revisits site visit standards published by Michael Patton in 2017 and proposes a framework for evaluative site visits. We retrospectively examined documents from a series of site visits for examples of Patton's standards. Through this process, we identified additional standards and organized them into four categories and fourteen standards that can guide evaluation site visits: team competencies and knowledge (interpersonal competence, cultural humility, evaluation competence, methodological competence, subject matter knowledge, site specific knowledge), planning and coordination (project design, resources, data management), engagement (team engagement, sponsor engagement, site engagement), and confounding factors (neutrality, credibility). In the paper, we provide definitions and examples from the case of meeting, and missing, the standards. We encourage others to apply the framework in their contexts and continue the discussion around evaluative site visits.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"253 - 269"},"PeriodicalIF":1.7,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44489033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building a Welcoming Evaluation Community in Remembrance of George Julnes","authors":"Guili Zhang","doi":"10.1177/10982140221079189","DOIUrl":"https://doi.org/10.1177/10982140221079189","url":null,"abstract":"The passing of George Julnes, Editor of the American Journal of Evaluation (AJE), brought deep sorrow to the evaluation community. We lost a dedicated colleague and even better friend. George was a welcoming face of the American Evaluation Association (AEA) and an exemplary leader. He supported AEA membership and leadership, contributed to an internationally inclusive AEA, maintained a strong AJE editorial team, and adapted AJE to meet the new reporting standards of the American Psychological Association. Through his dedication and efforts, George helped shape AEA's professional future. His torches are being picked up by many others who have been inspired by his vision and dedication to building a warm, welcoming, professional evaluation community.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"306 - 308"},"PeriodicalIF":1.7,"publicationDate":"2022-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45822347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Riding Shotgun Down Evaluations’ Highways: A Tribute to the Legacy of George Julnes","authors":"S. Donaldson","doi":"10.1177/10982140221077938","DOIUrl":"https://doi.org/10.1177/10982140221077938","url":null,"abstract":"George’s untimely passing this year was an earthquake in my world. Why George? He is the last colleague I would ever imagine leaving us this early. I was so fortunate to be working with him closer than ever this year as one of his Associate Editors for the American Journal Evaluation (AJE). He seemed so happy and healthy, and it is unbelievable I will only hear his voice in my head moving forward as I finish up the AJE editing we were working on together. However, the memories of our evaluation adventures and the many insights he generously shared with me about the field of evaluation will live with me until it is my time, and his legacy will inform evaluation theory and practice forever. George and I had numerous discussions and shared many meaningful evaluation adventures over the years. As I reflect, one of the themes that emerges is he mostly seemed to prefer the drivers’ seat, while I was happy to ride shotgun and support him as we navigated some of evaluations’ most challenging highways. Space limits prevent me from outlining and describing the plethora of major contributions George made to advancing evaluation theory and practice across his prolific career. I was thrilled when his impressive body of written work was honored by the American Evaluation Association (AEA) with the Paul F. Lazarsfeld Evaluation Theory Award in 2015. Instead, I will provide a few brief reflections related to working closely with George on topics such as “What counts as credible and actionable evidence in evaluation practice?” in our adventures when we served on the AEA Board together; and on riding shotgun with him as he was serving as Editor-in-Chief for AJE during a global pandemic. So what counts as credible and actionable evidence, Professor Julnes? It depends on the context you are working in, Professor Donaldson. George was not a fan of the raging debates in the field over the years about the superiority of paradigms, approaches, and methods for evaluating high stake evaluation questions. He seemed to believe most evaluation approaches and methods had their place in a large evaluation tent. A better question in his view was how do we choose methods in relation to the contexts we face in practice. He and his close friend and colleague, Debra Rog, emphasized that:","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"298 - 300"},"PeriodicalIF":1.7,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43606313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framing Evaluation in Reality: An Introduction to Ontologically Integrative Evaluation","authors":"J. Billman","doi":"10.1177/10982140221075244","DOIUrl":"https://doi.org/10.1177/10982140221075244","url":null,"abstract":"For over 30 years, calls have been issued for the western evaluation field to address implicit bias in its theory and practice. Although many in the field encourage evaluators to be culturally competent, ontological competence remains unaddressed. Grounded in an institutionalized distrust of non-western perspectives of reality and knowledge frameworks, this neglect threatens the validity, reliability, and usefulness of western designed evaluations conducted in non-western settings. To address this, I introduce Ontologically Integrative Evaluation (OIE), a new framework built upon ontological competence and six foundational ontological concepts: ontological fluidity, authenticity, validity, synthesis, justice, and vocation. Grounding evaluation in three ontological considerations—what there is, what is real, and what is fundamental—OIE systematically guides evaluators through a deep exploration of their own and others’ ontological assumptions. By demonstrating the futility of evaluations grounded in a limited ontological worldview, OIE bridges the current divide between western and non-western evaluative thinking.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"90 - 108"},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48071796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Findings From an Empirical Exploration of Evaluators’ Values","authors":"J. LaVelle, Clayton L. Stephenson, S. Donaldson, Justin D Hackett","doi":"10.1177/10982140211046537","DOIUrl":"https://doi.org/10.1177/10982140211046537","url":null,"abstract":"Psychological theory suggests that evaluators’ individual values and traits play a fundamental role in evaluation practice, though few empirical studies have explored those constructs in evaluators. This paper describes an empirical study on evaluators’ individual, work, and political values, as well as their personality traits to predict evaluation practice and methodological orientation. The results suggest evaluators value benevolence, achievement, and universalism; they lean socially liberal but are slightly more conservative on fiscal issues; and they tend to be conscientious, agreeable, and open to new experiences. In the workplace, evaluators value competence and opportunities for growth, as well as status and independence. These constructs did not statistically predict evaluation practice, though some workplace values and individual values predicted quantitative methodological orientation. We conclude by discussing strengths, limitations, and next steps for this line of research.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42936989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing the Elephant in the Room: Exploring the Impostor Phenomenon in Evaluation","authors":"J. LaVelle, Natalie D. Jones, S. Donaldson","doi":"10.1177/10982140221075243","DOIUrl":"https://doi.org/10.1177/10982140221075243","url":null,"abstract":"The impostor phenomenon is a psychological construct referring to a range of negative emotions associated with a person's perception of their own \"fraudulent competence\" in a field or of their lack of skills necessary to be successful in that field. Anecdotal evidence suggests that many practicing evaluators have experienced impostor feelings, but lack a framework in which to understand their experiences and the forums in which to discuss them. This paper summarizes the literature on the impostor phenomenon, applies it to the field of evaluation, and describes the results of an empirical, quantitatively focused study which included open-ended qualitative questions exploring impostorism in 323 practicing evaluators. The results suggest that impostor phenomenon in evaluators consists of three constructs: Discount, Luck, and Fake. Qualitative data analysis suggests differential coping strategies for men and women. Thematic analysis guided the development of a set of proposed solutions to help lessen the phenomenon's detrimental effects for evaluators.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48146586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Citizens and Evaluation: A Review of Evaluation Models","authors":"Pirmin Bundi, V. Pattyn","doi":"10.1177/10982140211047219","DOIUrl":"https://doi.org/10.1177/10982140211047219","url":null,"abstract":"Evaluations are considered of key importance for a well-functioning democracy. Against this background, it is vital to assess whether and how evaluation models approach the role of citizens. This paper is the first in presenting a review of citizen involvement in the main evaluation models which are commonly distinguished in the field. We present the results of both a document analysis and an international survey with experts who had a prominent role in developing the models. This overview has not only a theoretical relevance, but can also be helpful for evaluation practitioners or scholars looking for opportunities for citizen involvement. The paper contributes to the evaluation literature in the first place, but also aims to fine-tune available insights on the relationship between evidence informed policy making and citizens.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"1 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2022-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42397721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embedding a Proof-of-Concept Test in an At-Scale National Policy Experiment: Greater Policy Learning But at What Cost to Statistical Power? The Social Security Administration’s Benefit Offset National Demonstration (BOND)","authors":"S. Bell, D. Stapleton, M. Wood, Daniel Gubits","doi":"10.1177/10982140211006786","DOIUrl":"https://doi.org/10.1177/10982140211006786","url":null,"abstract":"A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower “proof-of-concept” evidence of whether the policy can work for motivated individuals. Both forms of learning carry value, yet evaluations rarely combine the two designs. The U.S. Social Security Administration conducted an exception, the Benefit Offset National Demonstration (BOND). This article uses BOND to examine the statistical power implications and potential gains in policy learning—relative to costs—from combining volunteer and population-representative experiments. It finds that minimum detectable effects of volunteer experiments rise little when one adds a population-representative experiment, but those of a population-representative experiment double or quadruple with the addition of a volunteer experiment.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"44 1","pages":"118 - 132"},"PeriodicalIF":1.7,"publicationDate":"2022-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46489430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book Review: The Institutionalisation of Evaluation in Europe","authors":"Marlène Laeubli Loud","doi":"10.1177/10982140211067394","DOIUrl":"https://doi.org/10.1177/10982140211067394","url":null,"abstract":"The global development of evaluation has steadily increased over the past 50 years. Evaluation has progressively been acknowledged as an important aid for policy making and policy review in the public and private sectors as well as in humanitarian and nongovernmental organizations (NGOs). Evaluation has mainly been used to measure “success,” render policies accountable to the public, and draw out important lessons, although its scope of purpose and use has become increasingly diverse over the last few decades. But what, if any, are the rules, norms, and regulations in place to support the evaluation process? And how readily can these be adapted to meet new demands, incorporate innovations, and be employed in new activity domains? In short, how well is evaluation now institutionalized? Such are the questions the editors of The Institutionalisation of Evaluation in Europe set out to answer. This book is the first in a four-volume series aimed at analyzing how well evaluation has become established in different continents around the world. The other three volumes will cover America, Africa, and Asia. While there have been previous publications on the same theme (Furubo et al., 2002; Jacob et al., 2015), this series plans a more comprehensive coverage for each of the four continents and, ultimately, systematic comparisons across countries and continents. The editors developed an analytical framework to structure data collection by the country-specific authors of each chapter to enable both country-to-country comparisons and the synthesis of findings across the chapters. The framework defines “institutionalization” as being comprised of three subsystems: (a) the political system of “institutional structures and processes”; (b) the social system, summarized as “societal dissemination and acceptance of evaluation in civil society”; and (c) the system of professionalization, referring specifically to the professionalization of evaluations (p. 15). This volume analyzes the situation in 16 European countries and the European Union. Country profiles are presented according to the geographical region: Northern, Western, Southern, and Central Eastern Europe. Data collection and analysis principally relied on evaluation specialists from each of the selected countries. The editors recruited chapter authors based on their overall familiarity with the evaluation status quo in the selected countries. To respond to the questions identified in the editors’ framework, authors drew on their personal experience, professional networks, and extensive documentary review. Book Review","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"148 - 150"},"PeriodicalIF":1.7,"publicationDate":"2022-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42556153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Rights Advocacy Evaluation in the Global South: A Critical Review of the Literature","authors":"Jennifer J. Esala, Liz Sweitzer, C. Higson-Smith, Kirsten L. Anderson","doi":"10.1177/10982140211007937","DOIUrl":"https://doi.org/10.1177/10982140211007937","url":null,"abstract":"Advocacy evaluation has emerged in the past 20 years as a specialized area of evaluation practice. We offer a review of existing peer-reviewed literature and draw attention to the scarcity of scholarly work on human rights advocacy evaluation in the Global South. The lack of published material in this area is concerning, given the urgent need for human rights advocacy in the Global South and the difficulties of conducting advocacy in contexts in which fundamental human rights are often poorly protected. Based on the review of the literature and our professional experiences in human rights advocacy evaluation in the Global South, we identify themes in the literature that are especially salient in the Global South and warrant more attention. We also offer critical reflections on content areas not addressed in the existing literature and conclude with suggestions as to how activists, evaluators, and other stakeholders can contribute to the development of a field of practice that is responsive to the global challenge of advocacy evaluation.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":"43 1","pages":"335 - 356"},"PeriodicalIF":1.7,"publicationDate":"2022-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47771691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}