{"title":"Cultural Humility: A Collaborative Approach to Recruiting Patients with Deliberate Self-Harm into a Multi-Hospital Randomized Controlled Trial","authors":"S. Brownhill, G. Stevens, T. E. Hammond, Richard Baldacchino, Rose Maposa, Bonyface Makoni, Aviva Sheb'a, Jagadeesh Andepalli, A. Kotak, Olav D'Souza, Azadeh Atashnama, Anish Thayil, Alison L Jones","doi":"10.56645/jmde.v17i39.665","DOIUrl":"https://doi.org/10.56645/jmde.v17i39.665","url":null,"abstract":"Objectives: The ‘SMS SOS’ Deliberate Self-Harm (DSH) Aftercare Study was conducted in Western Sydney, Australia (October 2017 to December 2020) across three large public hospitals. During this randomized controlled trial (RCT), it was observed that knowledge exchange between key stakeholders and their ‘cultural’ perspectives (for example, Mental Health Clinicians, Lived Experience Mental Health Consultants—Patient Representatives, Administrative Officers, and Researchers) was essential to effective recruitment of patients experiencing DSH. Knowledge exchange within and between cultural groups was maximised and assessed using a communication matrix. This process, transferable to other trials engaging multiple ‘cultures’, aimed to promote the early identification of wider-team strengths as well as active management of emergent issues that would otherwise impede patient recruitment, and to maximise funding and human resources. \u0000Methods: A descriptive study was conducted with a convenience sample of team members who represented different cultures in the study. Qualitative data were elicited from a ‘know and tell’ matrix. Through an iterative process, themes were generated that encapsulated what team members needed to know from and tell to their colleagues concerning the study. \u0000Results: Factors that impacted participation in the study included clinician workload, the level of motivation/ commitment/confidence of clinicians to recruit patients, clinician-patient engagement, perception and expectations of study involvement, inter-cultural communication, and clinician training and support. The findings of this multidisciplinary consultation informed a composite model of knowledge exchange and the development of educational briefing/ orientation modules that make explicit team members’ roles and responsibilities to foster group member participation and enhance patient recruitment. \u0000Conclusions: It is incumbent upon multidisciplinary team members of large-scale studies to adopt a similar ‘knowledge exchange’ strategy early in the planning and design stage. Adoption of such a strategy has the potential to mitigate risk of delay in project timelines, improve project outcomes, and ensure the efficient use of research funding, particularly in newly established research teams within clinical settings and with members newer to formal research collaborations. 
\u0000Keywords: cultural humility; deliberate self-harm; engagement; participant recruitment; participatory research; randomized controlled trial","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41742875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of Geographic Information Systems by American Evaluation Association Members in their Professional Practice","authors":"Aaron W. Kates, Chris L. S. Coryn","doi":"10.56645/jmde.v17i38.683","DOIUrl":"https://doi.org/10.56645/jmde.v17i38.683","url":null,"abstract":"Background: As geographic information systems (GIS) technology continues to develop and expand in its capacity and applications, it is becoming increasingly useful to many disciplines. Even so, little has been written about the place of GIS technology in evaluation practice, and there is a paucity of information as to the extent to and applications for which evaluation practitioners use such technology. \u0000Purpose: In this investigation, the prevalence and common applications of GIS technology in professional evaluation practice are examined. Particularly, the study was designed to estimate what proportion of American Evaluation Association (AEA) members who self-identify as evaluation practitioners use GIS in their practice, if at all, and, if so, to what extent. For those who use GIS in their evaluation practice, the specific GIS software packages and applications used also are explored. \u0000Setting: Not applicable. \u0000Intervention: Not applicable. \u0000Research Design: A simple random sample of American Evaluation Association (AEA) members were surveyed, with an emphasis on evaluation practitioners. \u0000Findings: Less than less than half (41.04% ±6.09%) of AEA members who consider themselves evaluation practitioners have ever used GIS in their evaluation practice and less than one-third (31.47% ±5.75%) have received some form of training in GIS methods. Data visualization is, by far, the most frequent application of GIS in evaluation practice. \u0000Keywords: American Evaluation Association; geographic information systems; technology in evaluation; evaluation practice; research on evaluation","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46380871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dialogic and Generative Reflection: An Application of the Critical Appreciative Process in Program Evaluation","authors":"Ye He, T. L. Smith, Malitsitso Moteane","doi":"10.56645/jmde.v17i38.663","DOIUrl":"https://doi.org/10.56645/jmde.v17i38.663","url":null,"abstract":"Background: Even though the positive potential of reflective practice is widely acknowledged across professional fields, it has been recognized that reflective practice may be carried out primarily as an individual-based exercise, and at the technical or descriptive level without generative impact. Dialogic reflective processes involving both evaluators and program directors are far from being systematically implemented or examined. \u0000Purpose: The purpose of this article is to share our experiences engaging in dialogic and generative reflections as the project director and program evaluators of a K-12 teacher education program using the critical appreciative process. Building upon the reflective practice traditions in both disciplinary areas, we introduce the use of the critical appreciative process as a promising model to guide dialogic and generative reflection to support the co-design and improvement of the program and accompanying evaluation efforts. \u0000Setting: The project director and evaluators are engaged in a grant-funded teacher preparation project designed to prepare teachers for K-12 English learners and dual language learners. The project builds upon partnerships between the university teacher preparation program and two local school districts. The evaluation plan was designed based on culturally responsive, collaborative, and use-focused evaluation approaches and theory. In 2020, the project team faced critical decisions in the context of the COVID-19 pandemic. \u0000Intervention: Not applicable. \u0000Research Design: We applied self-study methodology to guide data collection and analysis in this study. The primary data source included individual written reflections and group critical friend dialogues guided by the critical appreciative process. Both the reflections and meeting notes were analyzed to identify convergent and divergent perspectives shared throughout the critical appreciative process and to highlight implications for both the evaluation and the program moving forward. \u0000Findings: Convergent and divergent perspectives from both the project director and the evaluators were shared based on the 4-D critical appreciative process: Discover, Dream, Design, and Deliver. Based on this shared experience, we illustrate how the dialogic reflective process entails reflexivity and requires pausing; how reflective practice in program evaluation situates our dialogues as learning-oriented rather than a mere accountability discussion; and how reflective action can create a dialogic and generative virtuous cycle. \u0000Keywords: reflective practice; evaluation; critical appreciative process","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42574070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation in Our New Normal Environment: Navigating the Challenges with Data Collection","authors":"Nadini Persaud, R. Dagher","doi":"10.56645/jmde.v17i38.673","DOIUrl":"https://doi.org/10.56645/jmde.v17i38.673","url":null,"abstract":"Background: Data collection is a critical component of all evaluations. However, it often presents a number of challenges under the best of circumstances. For instance, the evaluation budget and time frame both have implications for the quality and type of data that is collected. Additionally, adherence to high quality international ethical best practices is necessary when collecting data for any purpose, methodological rigor is important for ensuring the credibility of the evaluation, improving access to important documents and stakeholders, as well as decreasing excessive evaluation anxiety on the part of critical stakeholders, when possible, is vital. These challenges have now been considerably exacerbated by the COVID-19 global health pandemic which has changed our world in fundamental ways. In what is now considered as our new normal environment, evaluators will need to make profound changes to the manner in which they plan and undertake data collection. \u0000Objectives: This paper examines the many and varied challenges that will be encountered with data collection in our new normal environment. This new normal has had an impact on evaluation practices in all countries, developed and developing, and has significantly amplified existing challenges in countries with limited evaluation culture, budgets, technological coverage, access, and connectivity. It makes an important contribution to the literature since data collection has historically and traditionally been conducted using primarily face-to-face field work and through the freedom of movement of people to undertake this task. \u0000Setting: Not applicable. \u0000Intervention: Not applicable. \u0000Research Design: Desk review was utilized for the preparation of this paper. \u0000Findings: Evaluators need to be extremely flexible, innovative, and amendable to different approaches to data collections as our new normal environment will likely be with us for a while. This pandemic has thrown everyone a very painful curveball and introduced significant new work-related challenges for a myriad of work types and work environments. Innovation and the willingness to learn new methods have become an important necessity to help with learning, accountability, transparency. The COVID-19 pandemic has highlighted the plight of the most vulnerable and evidence-based data is the only means to assist this group. Evaluators must rise to the challenge, devise new ways to collect data that is credible and useful, and continue to promote the importance and benefits of the field of evaluation. As such, evaluators have an important role to play in the global economic recovery efforts. 
\u0000Keywords: budgets, challenges; COVID-19; data collection; evaluation; evaluators; new normal environment; time frame","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42131406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Logic Foundations: Putting the Logic Back into Program Logic","authors":"Andrew J. Hawkins","doi":"10.56645/jmde.v16i37.657","DOIUrl":"https://doi.org/10.56645/jmde.v16i37.657","url":null,"abstract":"Background: Program logic is one of the most used tools by the public policy evaluator. There is, however, little explanation in the evaluation literature about the logical foundations of program logic or discussion of how it may be determined if a program is logical. This paper was born on a long journey that started with program logic and ended with the logic of evaluation. Consistent throughout was the idea that the discipline of program evaluation is a pragmatic one, concerned with applied social science and effective action in complex, adaptive systems. It gradually became the central claim of this paper that evidence-based policy requires sound reasoning more urgently than further development and testing of scientific theory. This was difficult to reconcile with the observation that much evaluation was conducted within a scientific paradigm, concerned with the development and testing of various types of theory. \u0000Purpose: This paper demonstrates the benefits of considering the core essence of a program to be a proposition about the value of a course of action. This contrasts with a research-based paradigm in which programs are considered to be a type of theory, and in which experimental and theory-driven evaluations are conducted. Experimental approaches focus on internal validity of knowledge claims about programs and on discovering stable cause and effect relationships—or, colloquially, ‘what works?’. Theory-driven approaches tend to focus on external validity and in the case of the realist approach, the search for transfactual causal mechanisms—extending the ‘what works’ mantra to include ‘for whom and in what circumstances’. On both approaches, evaluation aspires to be a scientific pursuit for obtaining knowledge of general laws of phenomena, or in the case of realists, replicable context-mechanism-outcome configurations. This paper presents and seeks to justify an approach rooted in logic, and that supports anyone to engage in a reasonable and democratic deliberation about the value of a course of action. \u0000It is consistent with systems thinking, complexity and the associated limits to certainty for determining the value of a proposed, or actual, course of action in the social world. It suggests that evaluation should learn from the past and have an eye toward the future, but that it would be most beneficial if concerned with evaluating in the present, in addressing the question ‘is this a good idea here and now? \u0000Setting: Not applicable. \u0000Intervention: Not applicable \u0000Research design: Not applicable. \u0000Findings: In seeking foundations of program logic, this paper exposes roots that extend far deeper than the post-enlightenment, positivist and post-positivist social science search for stable cause and effect relationships. These roots lie in the 4th century BCE with Aristotle’s ‘enthymeme’. 
The exploration leads to conclusions about the need for a greater focus on logic and reasoning in the design and evaluation of programs and interventi","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45954159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Elements to Enhance the Successful Start and Completion of Program and Policy Evaluations: The Injury & Violence Prevention (IVP) Program & Policy Evaluation Institute","authors":"J. Porter, L. Brennan, Mighty Fine, Ina Robinson","doi":"10.56645/jmde.v16i37.659","DOIUrl":"https://doi.org/10.56645/jmde.v16i37.659","url":null,"abstract":"Background: Public health practitioners, including injury and violence prevention (IVP) professionals, are responsible for implementing evaluations, but often lack formal evaluation training. Impacts of many practitioner-focused evaluation trainings—particularly their ability to help participants successfully start and complete evaluations—are unknown. \u0000Objectives: We assessed the impact of the Injury and Violence Prevention (IVP) Program & Policy Evaluation Institute (“Evaluation Institute”), a team-based, multidisciplinary, and practitioner-focused evaluation training designed to teach state IVP practitioners and their cross-sector partners how to evaluate program and policy interventions. \u0000Design: Semi-structured interviews were conducted with members of 13 evaluation teams across eight states at least one year after training participation (24 participants in total). Document reviews were conducted to triangulate, supplement, and contextualize reported improvements to policies, programs, and practices. \u0000Intervention: Teams of practitioners applied for and participated in the Evaluation Institute, a five-month evaluation training initiative that included a set of online training modules, an in-person workshop, and technical support from evaluation consultants. \u0000Main Outcome Measure(s): The successful start and/or completion of a program or policy evaluation focused on an IVP intervention. \u0000Results: Of the 13 teams studied, a total of 12 teams (92%) reported starting or completing an evaluation. Four teams (31%) reported fully completing their evaluations; eight teams (61%) reported partially completing their evaluations. Teams identified common facilitators and barriers that impacted their ability to start and complete their evaluations. Nearly half of the 13 teams (46%) – whether or not they completed their evaluation – reported at least one common improvement made to a program or policy as a result of engaging in an evaluative process. \u0000Conclusion: Practitioner-focused evaluation trainings are essential to build critical evaluation skills among public health professionals and their multidisciplinary partners. The process of evaluating an intervention—even if the evaluation is not completed—has substantial value and can drive improvements to public health interventions. The Evaluation Institute can serve as a model for training public health practitioners and their partners to successfully plan, start, complete, and utilize evaluations to improve programs and policies. 
\u0000Keywords: Evaluation; injury; multidisciplinary partnerships; practitioner-focused evaluation training; professional development; program and policy evaluation; public health; technical assistance; violence","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42565196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Cognitive Interviewing to Test Youth Survey and Interview Items in Evaluation: A Case Example","authors":"Elisa H LaPietra, Jennifer Brown Urban, M. Linver","doi":"10.56645/jmde.v16i37.651","DOIUrl":"https://doi.org/10.56645/jmde.v16i37.651","url":null,"abstract":"Background: Cognitive interviewing is a pretesting tool used by evaluators to increase item and response option validity. Cognitive interviewing techniques are used to assess the cognitive processes utilized by participants to respond to items. This approach is particularly appropriate for testing items with children and adolescents who have more limited cognitive capacities than adults, vary in their cognitive development, and have a unique perspective on their life experiences and context. \u0000Purpose: This paper presents a case example of cognitive interviewing with youth as part of a national program evaluation, and aims to expand the use of cognitive interviewing as a pretesting tool for both quantitative and qualitative items in evaluation studies involving youth. \u0000Setting: Youth participants were located in four regions of the United States: Northeast, Central, Southern, and Western. Interviewers were located at Montclair State University. \u0000Intervention: Not applicable. \u0000Research design: A cognitive interview measure was designed to include a subset of survey items, interview questions, and verbal probes, to evaluate if these items and questions would be understood as intended by both younger and older youth participants. An iterative design was used with cognitive interviewing testing rounds, analysis, and revisions. \u0000Data Collection and Analysis: The cognitive interview was administered by phone to 10 male youth, five from the 10-13-year-old age range and five from the 15-17-year-old age range. Interviews were audio-recorded, transcribed, reviewed, and coded. Survey items and interview questions were revised based on feedback from the participants and consensus agreement among the evaluation team. Item revisions were included in further testing rounds with new participants. \u0000Findings: As a result of using cognitive interviewing to pretest survey and interview items with youth, response errors were identified. Participants did not understand some of the items and response options as intended, indicating problems with validity. These findings support the use of cognitive interviewing for testing and modifying survey items adapted for use with youth, as well as qualitative interview items. Additionally, the perspective of the youth participants was valuable for informing decisions to modify items and helping the evaluators learn the participants’ program culture and experiences. Based on the findings and limitations of the study, we give practice recommendations for future studies using cognitive interviewing with a youth sample. 
\u0000Keywords: cognitive interviewing; item validity; response error; verbal probes; pre-testing surveys; qualitative evaluation; interviewing children and adolescents; survey development","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43226317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Lessons for Evaluating Complexity Across the Nexus: A Meta-Evaluation of Environmental Projects","authors":"W. Sheate, C. Twigger-Ross, Liza Papadopoulou, R. Sadauskis, O. White, P. Orr, R. Eales","doi":"10.56645/jmde.v16i37.641","DOIUrl":"https://doi.org/10.56645/jmde.v16i37.641","url":null,"abstract":"Background: A major gap in environmental policy making is learning lessons from past interventions and in integrating the lessons from evaluations that have been undertaken. Institutional memory of such evaluations often resides externally to government, in evaluation practitioner contractors who undertake commissioned evaluations on behalf of government departments. \u0000Purpose: The aims were to learn the lessons from past policy evaluations, understand the barriers and enablers to successful evaluations, to explore the value of different types of approaches and methods used for evaluating complexity, and how evaluations were used in practice. \u0000Setting: A meta-evaluation of 23 environmental evaluations undertaken by Collingwood Environmental Planning Ltd (CEP), London, UK was undertaken by CEP staff under the auspices of CECAN (the Centre for Evaluation of Complexity Across the Nexus – a UK Research Councils funded centre, coordinated by the University of Surrey, UK). The research covered water, environment and climate change nexus issues, including evaluations of flood risk, biodiversity, landscape, land use, climate change, catchment management, community resilience, bioenergy, and European Union (EU) Directives. \u0000Intervention: Not applicable. \u0000Research design: A multiple embedded case study design was adopted, selecting 23 CEP evaluation cases from across a 10-year period (2006-2016). Four overarching research questions were posed by the meta-evaluation and formed the basis for more specific evaluation questions, answered on the basis of documented project final reports and supplemented by interviews with CEP project managers. Thematic analysis was used to draw out common themes from across the case categories. \u0000Findings: Policy context invariably framed the complex evaluations; as environmental policy has been spread beyond the responsibility of government to encompass multiple stakeholders, so policy around nexus issues was often found to be in a state of constant flux. Furthermore, an explicit theory of change was only often first elaborated as part of the evaluation process, long after the policy intervention had already been initiated. A better understanding of the policy context, its state of flux or stability as well as clarity of policy intervention’s objectives (and theory of change) could help significantly in designing policy evaluations that can deliver real value for policy makers. Evaluations have other valuable uses aside from immediate instrumental use in revising policy and can be tailored to maximise those values where such potential impact is recognised. We suggest a series of questions that practitioners and commissioners could usefully ask themselves when starting out on a new complex policy evaluation. 
\u0000Keywords: evaluation; complexity; policy use; natural environment","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45519055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Journey “Back Over the Line”: Critical Pedagogies of Curriculum Evaluation","authors":"Allyson Tintiangco-Cubales, P. E. Halagao, JoanMay Timtiman Cordova","doi":"10.56645/jmde.v16i37.655","DOIUrl":"https://doi.org/10.56645/jmde.v16i37.655","url":null,"abstract":"Background: We re-trace our liberatory journey in developing a Critical Framework of Review to evaluate K-12 Filipina/x/o American curricula. Our framework is rooted in our positionality and epistemology as Filipina educational scholars engaged in confronting oppression that impacts our community. It responds to the need for evaluation methods grounded in culturally responsive and critical pedagogies. \u0000Purpose: The purpose is to provide a critical and cultural method of evaluation to assess curriculum and pedagogy of, by, and about our communities. \u0000Setting: The research takes place in the Filipinx/a/o American community in the United States. The authors are from three academic institutions in California, Hawai‘i and the Philippines. \u0000Intervention: Our Critical Framework of Review attempts to counter the predominance of Eurocentric, male, objective, and uncritical models of curricula evaluation. \u0000Research design: This research deconstructs how we developed and applied our framework, which was used to evaluate thirty-three Filipina/x/o American K-12 curricula in critical content, critical instruction, and critical impact, by asking 20 questions that reflected critical and cultural theories and pedagogies. \u0000Data collection and analysis: We asked: Who and what informed our evaluation framework? How was it developed? How do we use it? How could our framework be further applied? We referenced diverse scholars and used critical race, feminist, indigenous, and deolonizing pedagogies as guidelines to establish our evaluation framework and standards. \u0000Findings: The framework is an example of standards-based and responsive-based evaluation with a checklist of indicators to evaluate curricula for culture, race, positionality, and social justice. Although created for Filipina/x/o, the framework can be used to evaluate curriculum for other marginalized groups. \u0000Keywords: critical pedagogy; critical evaluation; framework of review; curriculum; curriculum evaluation","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45982252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation Policy and Organizational Evaluation Capacity Building: Application of an Ecological Framework across Cultural Contexts","authors":"Hind Al Hudib, J. Cousins","doi":"10.56645/jmde.v16i36.623","DOIUrl":"https://doi.org/10.56645/jmde.v16i36.623","url":null,"abstract":"Background: Research on the role and effects of evaluation policy is limited. Some research on the policy’s role in enhancing organizational evaluation capacity (EC) is beginning to accrue but to date it has been limited largely to global Western evaluation contexts. \u0000Purpose: We employed an ecological conceptual framework arising from our own empirical research to explore the interface between evaluation policy and EC in non-western contexts. We asked—To what extent does this framework resonate across these contexts? In the selected non-Western context, what are the salient variables moderating the relationship between policy and EC in the selected contexts? Are there differences across countries? \u0000Setting: The present research is focused on perceptions about evaluation culture and experiences in two countries situated in the Middle East and North Africa (MENA) region, namely Turkey and Jordan. \u0000Intervention: Not applicable. \u0000Research design: We conducted focus groups within the respective countries with a combined total of 18 participants associated with country-level voluntary organizations for professional evaluation (VOPE). Participants worked in government, non-governmental aid agencies, universities and private sector organizations. \u0000Data collection and analysis: We introduced the focus group participants to our ecological framework and then guided the conversation using semi-structured questions. Data were audio-recorded, transcribed and subsequently thematically analyzed using NVivo. \u0000Findings: The ecological framework was found to resonate well but the findings were weighted heavily toward macro-level contextual variables. Even though important contextual and cultural differences between Turkey and Jordan were evident, leadership emerged as a significant meso-level moderating variable in both settings. The discussion of the results included implications for ongoing research. \u0000Keywords: Evaluation capacity building; evaluation policy; program evaluation; cultural context","PeriodicalId":91909,"journal":{"name":"Journal of multidisciplinary evaluation","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44936639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}