An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making

J. Harner, Lydia Ray, Florence Wakoko-Studstill
{"title":"An Investigation of the Effectiveness of Facebook and Twitter Algorithm and Policies on Misinformation and User Decision Making","authors":"J. Harner, Lydia Ray, Florence Wakoko-Studstill","doi":"10.54808/jsci.20.05.118","DOIUrl":null,"url":null,"abstract":"Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may captivate many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might have otherwise not seen. Filter bubbles are created when a social media website feeds user interactions into an algorithm that then exposes the user to more content similar to that which they have previously interacted. By continually exposing users to like-minded content, this can create what is called a feedback loop where the more the user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content as seen with QAnon rhetoric, leading to the January 6, 2021 attack on the U.S. Capitol, and the unprecedented propaganda surrounding COVID-19 vaccinations. This paper hypothesizes that the secrecy around content algorithms and their ability to perpetuate filter bubbles creates an environment where dangerous false information is pervasive and not easily mitigated with the existing algorithms designed to provide false information warning messages. In our research, we focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false information warning messages which sometimes include fact-checked research to provide a counter viewpoint to the information presented. Controversially, social media sites do not remove false information outright, in most cases, but instead promote these false information warning messages as a solution to extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data is used to influence their behavior on the sites and the information that is fed to them via algorithmic recommendations. Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false information warning message changed their opinion \"Always\" or \"Frequently\" with 77% reporting the warning messages changed their opinion only \"Sometimes\" or \"Never\" suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were likely to accept false information as factual and less likely to be vaccinated against COVID-19. 
Conversely, our research indicates a possible correlation between having seen a false information warning message and COVID-19 vaccination status.","PeriodicalId":30249,"journal":{"name":"Journal of Systemics Cybernetics and Informatics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systemics Cybernetics and Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54808/jsci.20.05.118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Prominent social media sites such as Facebook and Twitter use content and filter algorithms that play a significant role in creating filter bubbles that may captivate many users. These bubbles can be defined as content that reinforces existing beliefs and exposes users to content they might otherwise not have seen. Filter bubbles are created when a social media website feeds user interactions into an algorithm that then exposes the user to more content similar to that with which they have previously interacted. Continually exposing users to like-minded content can create what is called a feedback loop: the more a user interacts with certain types of content, the more they are algorithmically bombarded with similar viewpoints. This can expose users to dangerous or extremist content, as seen with the QAnon rhetoric that led to the January 6, 2021 attack on the U.S. Capitol and with the unprecedented propaganda surrounding COVID-19 vaccinations. This paper hypothesizes that the secrecy around content algorithms and their ability to perpetuate filter bubbles create an environment where dangerous false information is pervasive and not easily mitigated by the existing algorithms designed to provide false information warning messages. In our research, we focused on disinformation regarding the COVID-19 pandemic. Both Facebook and Twitter provide various forms of false information warning messages, which sometimes include fact-checked research to provide a counter viewpoint to the information presented. Controversially, social media sites in most cases do not remove false information outright but instead promote these false information warning messages as a solution to extremist or false content. The results of a survey administered by the authors indicate that users would spend less time on Facebook or Twitter once they understood how their data is used to influence their behavior on the sites and the information fed to them via algorithmic recommendations. Further analysis revealed that only 23% of respondents who had seen a Facebook or Twitter false information warning message changed their opinion "Always" or "Frequently," with 77% reporting that the warning messages changed their opinion only "Sometimes" or "Never," suggesting the messages may not be effective. Similarly, users who did not conduct independent research to verify information were likely to accept false information as factual and less likely to be vaccinated against COVID-19. Conversely, our research indicates a possible correlation between having seen a false information warning message and COVID-19 vaccination status.
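
To make the feedback-loop mechanism described above concrete, the following is a minimal sketch, in Python, of an engagement-driven recommender. It is not Facebook's or Twitter's actual ranking system: the topic list, the starting weights, and the engagement probability are all hypothetical values chosen only to illustrate how repeated interaction with one kind of content can come to dominate a user's feed.

    # Minimal, hypothetical sketch of an engagement-driven feedback loop.
    # Not a real platform algorithm; all topics and numbers are illustrative.
    import random

    TOPICS = ["health", "politics", "sports", "conspiracy"]

    def recommend(interest_weights, n=5):
        """Sample feed topics in proportion to the user's inferred interests."""
        topics = list(interest_weights)
        weights = [interest_weights[t] for t in topics]
        return random.choices(topics, weights=weights, k=n)

    def simulate(rounds=10, bias_topic="conspiracy", engagement_prob=0.9):
        # Start with a uniform interest profile.
        interests = {t: 1.0 for t in TOPICS}
        for r in range(rounds):
            feed = recommend(interests)
            for topic in feed:
                # Every engagement is fed back into the profile, which skews
                # the next round of recommendations toward the same topic.
                if topic == bias_topic and random.random() < engagement_prob:
                    interests[topic] += 1.0
            share = interests[bias_topic] / sum(interests.values())
            print(f"round {r + 1}: weight on '{bias_topic}' = {share:.2f}")

    if __name__ == "__main__":
        random.seed(0)
        simulate()

Running the sketch shows the biased topic's share of the feed weight growing round after round, which is the reinforcing loop the abstract describes.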
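The association noted in the final sentence of the abstract could be examined with a standard test of independence on a 2x2 contingency table. Below is a hedged sketch of such a test using scipy's chi2_contingency; the counts are placeholder values invented for illustration, not the survey data analyzed in this paper, and the paper's actual statistical method may differ.

    # Hypothetical sketch: testing whether seeing a false information warning
    # message is associated with COVID-19 vaccination status. The counts are
    # made-up placeholders, NOT the paper's survey data.
    from scipy.stats import chi2_contingency

    #                      vaccinated  unvaccinated
    contingency_table = [[60, 20],    # respondents who saw a warning message
                         [30, 40]]    # respondents who did not

    chi2, p_value, dof, expected = chi2_contingency(contingency_table)
    print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}, dof = {dof}")
    # A small p-value would indicate the two variables are unlikely to be
    # independent, i.e. evidence of the kind of association the abstract notes.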