Developmental Evaluation in Theory versus Practice: Lessons from Three Developmental Evaluation Pilots

Heather Esper, Y. Fatehi, Rebecca Baylor

Journal of Multidisciplinary Evaluation, published 2021-04-26. DOI: 10.56645/jmde.v17i40.685
Background. Developmental Evaluation (DE) practitioners turn to DE theory to make design and implementation decisions. However, because DE is method agnostic (Patton, 2016), practitioners can find it difficult to translate theory into implementation; DE is instead a principle-based approach.
Purpose. This article presents an empirical examination of how DE theory was (or was not) applied during three DE pilots. Our analysis aims to better understand how DE theory is used in practice to expand the evidence base and strengthen future DE implementation.
Setting. A consortium of three organizations implemented three DE pilots through the United States Agency for International Development (USAID) from November 2016 to September 2019. The authors—who participated in the consortium—did not implement the DEs themselves but instead conducted a meta-evaluation across the DE pilots.
Data Collection and Analysis. This article focuses on the results of an ex post facto analysis of three DE pilots based on the entire DE implementation experience. For each DE studied, we used mixed methods to collect data on the effectiveness of the DE approach, to identify adaptations to strengthen DE implementation in the USAID context, and to measure its value to stakeholders. Data included more than 100 hours of interviews, 465 pages of qualitative data, and 30 surveys completed by DE participants.
Findings. We find that the ability to apply the DE principles in practice is influenced, in no particular order, by DE participant buy-in to the DE, the Developmental Evaluator’s aptitude, the support and resources available to the Developmental Evaluator, and the number of DE participants. We also find that buy-in can change over time; it should be closely monitored throughout a DE to inform whether the DE should be paused or ended prematurely.
Keywords: Developmental Evaluation; developmental evaluator skills; buy-in; DE practice; DE funder; meta-evaluation