Authors: Grégoire Burel, Mohammadali Tavakoli, Harith Alani
DOI: 10.1002/aaai.12180
Journal: AI Magazine, vol. 45, no. 2, pp. 227–245 (Q3, Computer Science, Artificial Intelligence; Impact Factor 2.5)
Published: 2024-06-04 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12180
Exploring the impact of automated correction of misinformation in social media
Correcting misinformation is a complex task, influenced by various psychological, social, and technical factors. Most research methods for identifying effective correction approaches rely on crowdsourcing, questionnaires, lab-based simulations, or hypothetical scenarios. However, how well these methods and findings translate to real-world settings, where individuals willingly and freely disseminate misinformation, remains largely unexplored. Consequently, we lack a comprehensive understanding of how individuals who share misinformation in natural online environments would respond to corrective interventions. In this study, we explore the effectiveness of corrective messaging on 3898 users who shared misinformation on Twitter/X over two years. We designed and deployed a bot to automatically identify individuals who share misinformation and subsequently alert them to related fact-checks in various message formats. Our analysis shows that only a small minority of users react positively to the corrective messages, with most users either ignoring them or reacting negatively. Nevertheless, we also found that more active users were proportionally more likely to react positively to corrections, and we observed that different message tones made particular user groups more likely to react to the bot.
Journal introduction:
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, conference, symposium, or workshop reports, and timely columns on topics of interest to AI scientists.