{"title":"Keep Me Posted: A Plea for Practical Evaluation","authors":"G. Balch, S. Sutton","doi":"10.4324/9781315805795-5","DOIUrl":null,"url":null,"abstract":"The root purpose of evaluating is to see what, if anything, can be done better than what is being done or was done. It is inherently practical. This chapter contends that, despite the very practical intent of evaluation efforts in social marketing, the evaluations designed and conducted are often not useful. At times, they stand in the way of evaluation efforts that would be useful. At times, summative evaluations—with the randomized controlled experiment as the gold standard—impede the development and management of social marketing programs. As a result, program results suffer from inappropriate evaluation-related actions or from the opportunity cost of missed program improvements. Social marketers should apply the kind of practical marketing research perspective and procedures that commercial marketers apply to their programs. Evaluations of social marketing programs are most useful if they are integrated into programs in an interactive, iterative, ongoing system. Successful evaluation provides program direction as relevant, accurate, timely, and cost-effective \"feedforward\" and feedback on program objectives, target audiences, processes, and results. It not only guides program improvements, but also communicates program value to outside authorities. It is decision-driven research for consumer-based programs. Meaningful evaluation research requires evaluators to become key program team members who raise and answer questions that will improve the program demonstrably (Balch & Sutton, 1995).
This often contrasts with the practices of outside evaluators whose primary commitment is not to program success, but to passing verdicts on programs or to publishing in academic journals.","PeriodicalId":85532,"journal":{"name":"Social marketing update","volume":"37 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2018-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social marketing update","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4324/9781315805795-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}