{"title":"A Tribute to George Julnes from a Devoted Mentee","authors":"J. Randolph","doi":"10.1177/10982140221079190","DOIUrl":"https://doi.org/10.1177/10982140221079190","url":null,"abstract":"In this tribute, I describe my wonderful experience having George Julnes as a long-time evaluation mentor and I pass on some of the sage wisdom that he passed on to me.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44685947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"George Julnes: Scholar of Evaluation and of Life","authors":"M. Mark","doi":"10.1177/10982140221078753","DOIUrl":"https://doi.org/10.1177/10982140221078753","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42366335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Letter From the Interim Editor","authors":"Rachael B. Lawrence","doi":"10.1177/10982140221075078","DOIUrl":"https://doi.org/10.1177/10982140221075078","url":null,"abstract":"","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48411838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Section Editor’s Note: Insights into the Generalizability of Findings from Experimental Evaluations","authors":"Laura R. Peck","doi":"10.1177/10982140221075092","DOIUrl":"https://doi.org/10.1177/10982140221075092","url":null,"abstract":"As noted in my Editor’s Note to the Experimental Methodology Section of the American Journal of Evaluation’s (2020) Volume 40, Issue 4, experimental evaluations—where research units, such as people, schools, classrooms, and neighborhoods are randomly assigned to a program or to a control group—are often criticized for having limited external validity. In evaluation parlance, external validity refers to the ability to generalize results to other people, places, contexts, or times beyond those on which the evaluation focused. Evaluations—whether using an experimental design or not—are commonly conducted in a single site or a selected set of sites, either because that site is of particular interest or for convenience. Those special circumstances can mean that those sites—or the people within them—are not representative of a broader population of interest. In turn, the evaluation results may be useful only for assessing those people and places and not for predicting how a similar intervention might generate similar results for other people in other places. The good news, however, is that research and design innovations over the past several years have focused on how to overcome this criticism, making experimental evaluations’ results more useful for informing policy and program decisions (e.g., Bell & Stuart, 2016; Tipton & Olsen, 2018). Efforts for improving the external validity of experiments fall into two camps: design and analysis. Improving external validity through design means explicitly engaging a sample that is representative of a clearly identified target population. Although doing so is not common, particularly at the national level, some experiments have been successful at engaging a representative set of sites. The U.S. Department of Labor’s National Job Corps Study (e.g., Schochet, Burghardt & McConnell, 2006), the U.S. Department of Health and Human Services’ Head Start Impact Study (Puma et al., 2010), and the U.S. Social Security Administration’s Benefit Offset National Evaluation (Gubits et al., 2018) are three major evaluations that successfully recruited a nationally representative sample so that the evaluation results would be nationally generalizable. A simple, random selection of sites is the most straightforward way to ensure this representativeness and the generalizability of an evaluation’s results. In practice, however, that can be anything but simple. Even if an evaluation team randomly samples a site to participate, that site still needs to agree to participate; and if it does not, then the sample is no longer random.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44824541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Site Visit Standards Revisited: A Framework for Implementation","authors":"Rachael R. Kenney, L. Haverhals, Krysttel C Stryczek, Kelty B. Fehling, Sherry L Ball","doi":"10.1177/10982140221079266","DOIUrl":"https://doi.org/10.1177/10982140221079266","url":null,"abstract":"Site visits are common in evaluation plans but there is a dearth of guidance about how to conduct them. This paper revisits site visit standards published by Michael Patton in 2017 and proposes a framework for evaluative site visits. We retrospectively examined documents from a series of site visits for examples of Patton's standards. Through this process, we identified additional standards and organized them into four categories and fourteen standards that can guide evaluation site visits: team competencies and knowledge (interpersonal competence, cultural humility, evaluation competence, methodological competence, subject matter knowledge, site specific knowledge), planning and coordination (project design, resources, data management), engagement (team engagement, sponsor engagement, site engagement), and confounding factors (neutrality, credibility). In the paper, we provide definitions and examples from the case of meeting, and missing, the standards. We encourage others to apply the framework in their contexts and continue the discussion around evaluative site visits.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44489033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building a Welcoming Evaluation Community in Remembrance of George Julnes","authors":"Guili Zhang","doi":"10.1177/10982140221079189","DOIUrl":"https://doi.org/10.1177/10982140221079189","url":null,"abstract":"The passing of George Julnes, Editor of the American Journal of Evaluation (AJE), brought deep sorrow to the evaluation community. We lost a dedicated colleague and even better friend. George was a welcoming face of the American Evaluation Association (AEA) and an exemplary leader. He supported AEA membership and leadership, contributed to an internationally inclusive AEA, maintained a strong AJE editorial team, and adapted AJE to meet the new reporting standards of the American Psychological Association. Through his dedication and efforts, George helped shape AEA's professional future. His torches are being picked up by many others who have been inspired by his vision and dedication to building a warm, welcoming, professional evaluation community.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45822347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Riding Shotgun Down Evaluations’ Highways: A Tribute to the Legacy of George Julnes","authors":"S. Donaldson","doi":"10.1177/10982140221077938","DOIUrl":"https://doi.org/10.1177/10982140221077938","url":null,"abstract":"George’s untimely passing this year was an earthquake in my world. Why George? He is the last colleague I would ever imagine leaving us this early. I was so fortunate to be working with him closer than ever this year as one of his Associate Editors for the American Journal Evaluation (AJE). He seemed so happy and healthy, and it is unbelievable I will only hear his voice in my head moving forward as I finish up the AJE editing we were working on together. However, the memories of our evaluation adventures and the many insights he generously shared with me about the field of evaluation will live with me until it is my time, and his legacy will inform evaluation theory and practice forever. George and I had numerous discussions and shared many meaningful evaluation adventures over the years. As I reflect, one of the themes that emerges is he mostly seemed to prefer the drivers’ seat, while I was happy to ride shotgun and support him as we navigated some of evaluations’ most challenging highways. Space limits prevent me from outlining and describing the plethora of major contributions George made to advancing evaluation theory and practice across his prolific career. I was thrilled when his impressive body of written work was honored by the American Evaluation Association (AEA) with the Paul F. Lazarsfeld Evaluation Theory Award in 2015. Instead, I will provide a few brief reflections related to working closely with George on topics such as “What counts as credible and actionable evidence in evaluation practice?” in our adventures when we served on the AEA Board together; and on riding shotgun with him as he was serving as Editor-in-Chief for AJE during a global pandemic. So what counts as credible and actionable evidence, Professor Julnes? It depends on the context you are working in, Professor Donaldson. George was not a fan of the raging debates in the field over the years about the superiority of paradigms, approaches, and methods for evaluating high stake evaluation questions. He seemed to believe most evaluation approaches and methods had their place in a large evaluation tent. A better question in his view was how do we choose methods in relation to the contexts we face in practice. He and his close friend and colleague, Debra Rog, emphasized that:","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43606313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framing Evaluation in Reality: An Introduction to Ontologically Integrative Evaluation","authors":"J. Billman","doi":"10.1177/10982140221075244","DOIUrl":"https://doi.org/10.1177/10982140221075244","url":null,"abstract":"For over 30 years, calls have been issued for the western evaluation field to address implicit bias in its theory and practice. Although many in the field encourage evaluators to be culturally competent, ontological competence remains unaddressed. Grounded in an institutionalized distrust of non-western perspectives of reality and knowledge frameworks, this neglect threatens the validity, reliability, and usefulness of western designed evaluations conducted in non-western settings. To address this, I introduce Ontologically Integrative Evaluation (OIE), a new framework built upon ontological competence and six foundational ontological concepts: ontological fluidity, authenticity, validity, synthesis, justice, and vocation. Grounding evaluation in three ontological considerations—what there is, what is real, and what is fundamental—OIE systematically guides evaluators through a deep exploration of their own and others’ ontological assumptions. By demonstrating the futility of evaluations grounded in a limited ontological worldview, OIE bridges the current divide between western and non-western evaluative thinking.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48071796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Findings From an Empirical Exploration of Evaluators’ Values","authors":"J. LaVelle, Clayton L. Stephenson, S. Donaldson, Justin D Hackett","doi":"10.1177/10982140211046537","DOIUrl":"https://doi.org/10.1177/10982140211046537","url":null,"abstract":"Psychological theory suggests that evaluators’ individual values and traits play a fundamental role in evaluation practice, though few empirical studies have explored those constructs in evaluators. This paper describes an empirical study on evaluators’ individual, work, and political values, as well as their personality traits to predict evaluation practice and methodological orientation. The results suggest evaluators value benevolence, achievement, and universalism; they lean socially liberal but are slightly more conservative on fiscal issues; and they tend to be conscientious, agreeable, and open to new experiences. In the workplace, evaluators value competence and opportunities for growth, as well as status and independence. These constructs did not statistically predict evaluation practice, though some workplace values and individual values predicted quantitative methodological orientation. We conclude by discussing strengths, limitations, and next steps for this line of research.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42936989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing the Elephant in the Room: Exploring the Impostor Phenomenon in Evaluation","authors":"J. LaVelle, Natalie D. Jones, S. Donaldson","doi":"10.1177/10982140221075243","DOIUrl":"https://doi.org/10.1177/10982140221075243","url":null,"abstract":"The impostor phenomenon is a psychological construct referring to a range of negative emotions associated with a person's perception of their own \"fraudulent competence\" in a field or of their lack of skills necessary to be successful in that field. Anecdotal evidence suggests that many practicing evaluators have experienced impostor feelings, but lack a framework in which to understand their experiences and the forums in which to discuss them. This paper summarizes the literature on the impostor phenomenon, applies it to the field of evaluation, and describes the results of an empirical, quantitatively focused study which included open-ended qualitative questions exploring impostorism in 323 practicing evaluators. The results suggest that impostor phenomenon in evaluators consists of three constructs: Discount, Luck, and Fake. Qualitative data analysis suggests differential coping strategies for men and women. Thematic analysis guided the development of a set of proposed solutions to help lessen the phenomenon's detrimental effects for evaluators.","PeriodicalId":51449,"journal":{"name":"American Journal of Evaluation","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48146586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}