{"title":"感知信息有效性研究中的测量和设计异质性:研究的呼唤。","authors":"Seth M Noar, Joshua Barker, Marco Yzer","doi":"10.1093/joc/jqy047","DOIUrl":null,"url":null,"abstract":"Ratings of perceived message effectiveness (PME) are commonly used during message testing and selection, operating under the assumption that messages scoring higher on PME are more likely to affect actual message effectiveness (AME)—for instance, intentions and behaviors. Such a practice has clear utility, particularly when selecting from a large pool of messages. Recently, O’Keefe (2018) argued against the validity of PME as a basis for message selection. He conducted a meta-analysis of mean ratings of PME and AME, testing how often two messages that differ on PME similarly differ on AME, as tested in separate samples. Comparing 151 message pairs derived from 35 studies, he found that use of PME would only result in choosing a more effective message 58% of the time, which is little better than chance. On that basis, O’Keefe concluded that “message designers might dispense with questions about expected or perceived persuasiveness (PME), and instead pretest messages for actual effectiveness” (p. 135). We do not believe that the meta-analysis supports this conclusion, given the measurement and design issues in the set of studies O’Keefe analyzed.","PeriodicalId":48410,"journal":{"name":"Journal of Communication","volume":null,"pages":null},"PeriodicalIF":6.1000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1093/joc/jqy047","citationCount":"18","resultStr":"{\"title\":\"Measurement and Design Heterogeneity in Perceived Message Effectiveness Studies: A Call for Research.\",\"authors\":\"Seth M Noar, Joshua Barker, Marco Yzer\",\"doi\":\"10.1093/joc/jqy047\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Ratings of perceived message effectiveness (PME) are commonly used during message testing and selection, operating under the assumption that messages scoring higher on PME are more likely to affect actual message effectiveness (AME)—for instance, intentions and behaviors. Such a practice has clear utility, particularly when selecting from a large pool of messages. Recently, O’Keefe (2018) argued against the validity of PME as a basis for message selection. He conducted a meta-analysis of mean ratings of PME and AME, testing how often two messages that differ on PME similarly differ on AME, as tested in separate samples. Comparing 151 message pairs derived from 35 studies, he found that use of PME would only result in choosing a more effective message 58% of the time, which is little better than chance. On that basis, O’Keefe concluded that “message designers might dispense with questions about expected or perceived persuasiveness (PME), and instead pretest messages for actual effectiveness” (p. 135). 
We do not believe that the meta-analysis supports this conclusion, given the measurement and design issues in the set of studies O’Keefe analyzed.\",\"PeriodicalId\":48410,\"journal\":{\"name\":\"Journal of Communication\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.1000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1093/joc/jqy047\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Communication\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1093/joc/jqy047\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2018/9/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communication","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1093/joc/jqy047","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2018/9/6 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Ratings of perceived message effectiveness (PME) are commonly used during message testing and selection, under the assumption that messages scoring higher on PME are more likely to be higher in actual message effectiveness (AME), that is, effects on outcomes such as intentions and behaviors. Such a practice has clear utility, particularly when selecting from a large pool of messages. Recently, O’Keefe (2018) argued against the validity of PME as a basis for message selection. He conducted a meta-analysis of mean ratings of PME and AME, testing how often two messages that differ on PME also differ in the same direction on AME, as measured in separate samples. Comparing 151 message pairs derived from 35 studies, he found that using PME would result in choosing the more effective message only 58% of the time, little better than chance. On that basis, O’Keefe concluded that “message designers might dispense with questions about expected or perceived persuasiveness (PME), and instead pretest messages for actual effectiveness” (p. 135). We do not believe that the meta-analysis supports this conclusion, given the measurement and design issues in the set of studies O’Keefe analyzed.
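To make the 58% figure concrete, the Python sketch below illustrates the kind of pairwise concordance computation the abstract describes: a message pair counts as a success when the message with the higher mean PME rating is also the more effective one on the AME outcome, measured in a separate sample. This is a minimal, hypothetical illustration, not O’Keefe’s (2018) actual analysis code; the function name and the example ratings are invented for demonstration.

# Hypothetical sketch of the pairwise PME/AME concordance logic; not
# O'Keefe's (2018) analysis code. Ties are treated as non-concordant
# in this simplified version.

def concordance_rate(message_pairs):
    """Return the fraction of pairs in which PME and AME rank the two
    messages the same way. Each pair is ((pme_a, ame_a), (pme_b, ame_b)),
    i.e., mean PME and AME values for messages A and B."""
    concordant = 0
    for (pme_a, ame_a), (pme_b, ame_b) in message_pairs:
        if (pme_a - pme_b) * (ame_a - ame_b) > 0:  # same ordering on both
            concordant += 1
    return concordant / len(message_pairs)

# Hypothetical ratings for three message pairs: two concordant, one not.
example_pairs = [
    ((4.2, 0.35), (3.8, 0.20)),  # A higher on PME and on AME -> concordant
    ((3.1, 0.10), (3.9, 0.25)),  # A lower on both            -> concordant
    ((4.5, 0.15), (4.0, 0.30)),  # A higher on PME, lower AME -> discordant
]
print(f"Concordance: {concordance_rate(example_pairs):.0%}")  # prints 67%

Under this criterion, a rate near 50% would mean PME-based selection performs no better than a coin flip; that is the benchmark against which the reported 58% across 151 message pairs is judged.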
Journal Introduction:
The Journal of Communication, the flagship journal of the International Communication Association, is a vital publication for communication specialists and policymakers alike. Focusing on communication research, practice, policy, and theory, it delivers the latest and most significant findings in communication studies. The journal also includes an extensive book review section and symposia of selected studies on current issues. JoC publishes top-quality scholarship on all aspects of communication, with a particular interest in research that transcends disciplinary and sub-field boundaries.