What's in the black box? How algorithmic knowledge promotes corrective and restrictive actions to counter misinformation in the USA, the UK, South Korea and Mexico

Impact Factor 5.9 · JCR Q1 (Business) · CAS Zone 3 (Management Science)
Myojung Chung
{"title":"What's in the black box? How algorithmic knowledge promotes corrective and restrictive actions to counter misinformation in the USA, the UK, South Korea and Mexico","authors":"Myojung Chung","doi":"10.1108/intr-07-2022-0578","DOIUrl":null,"url":null,"abstract":"PurposeWhile there has been a growing call for insights on algorithms given their impact on what people encounter on social media, it remains unknown how enhanced algorithmic knowledge serves as a countermeasure to problematic information flow. To fill this gap, this study aims to investigate how algorithmic knowledge predicts people's attitudes and behaviors regarding misinformation through the lens of the third-person effect.Design/methodology/approachFour national surveys in the USA (N = 1,415), the UK (N = 1,435), South Korea (N = 1,798) and Mexico (N = 784) were conducted between April and September 2021. The survey questionnaire measured algorithmic knowledge, perceived influence of misinformation on self and others, intention to take corrective actions, support for government regulation and content moderation. Collected data were analyzed using multigroup SEM.FindingsResults indicate that algorithmic knowledge was associated with presumed influence of misinformation on self and others to different degrees. Presumed media influence on self was a strong predictor of intention to take actions to correct misinformation, while presumed media influence on others was a strong predictor of support for government-led platform regulation and platform-led content moderation. There were nuanced but noteworthy differences in the link between presumed media influence and behavioral responses across the four countries studied.Originality/valueThese findings are relevant for grasping the role of algorithmic knowledge in countering rampant misinformation on social media, as well as for expanding US-centered extant literature by elucidating the distinctive views regarding social media algorithms and misinformation in four countries.","PeriodicalId":54925,"journal":{"name":"Internet Research","volume":" ","pages":""},"PeriodicalIF":5.9000,"publicationDate":"2023-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Research","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/intr-07-2022-0578","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
引用次数: 0

Abstract

Purpose – While there has been a growing call for insights into algorithms given their impact on what people encounter on social media, it remains unknown how enhanced algorithmic knowledge serves as a countermeasure to problematic information flow. To fill this gap, this study investigates how algorithmic knowledge predicts people's attitudes and behaviors regarding misinformation through the lens of the third-person effect.

Design/methodology/approach – Four national surveys were conducted in the USA (N = 1,415), the UK (N = 1,435), South Korea (N = 1,798) and Mexico (N = 784) between April and September 2021. The questionnaire measured algorithmic knowledge, perceived influence of misinformation on self and others, intention to take corrective actions, support for government regulation and support for content moderation. The collected data were analyzed using multigroup SEM.

Findings – Algorithmic knowledge was associated, to different degrees, with the presumed influence of misinformation on self and on others. Presumed influence on self was a strong predictor of the intention to take actions to correct misinformation, while presumed influence on others was a strong predictor of support for government-led platform regulation and platform-led content moderation. The link between presumed media influence and behavioral responses showed nuanced but noteworthy differences across the four countries studied.

Originality/value – These findings are relevant for understanding the role of algorithmic knowledge in countering rampant misinformation on social media, and they expand the US-centered extant literature by elucidating distinctive views on social media algorithms and misinformation in four countries.
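The abstract notes that the data were analyzed with multigroup structural equation modeling (SEM). As an illustration only, the sketch below fits a path model implied by the third-person-effect framework separately for each country subsample using the Python package semopy. The variable and column names (alg_knowledge, pi_self, pi_others, country, and so on) are hypothetical placeholders, not the authors' actual measures or code, and fitting each group separately is an unconstrained simplification of a full multigroup SEM.

```python
# Illustrative sketch only: an unconstrained per-country fit of a
# third-person-effect path model, approximating a multigroup SEM.
# All column names are hypothetical placeholders.
import pandas as pd
import semopy

MODEL_DESC = """
pi_self ~ alg_knowledge
pi_others ~ alg_knowledge
corrective_intent ~ pi_self + pi_others
gov_regulation_support ~ pi_self + pi_others
content_moderation_support ~ pi_self + pi_others
"""

def fit_by_country(df: pd.DataFrame) -> dict:
    """Fit the same path model in each country subsample and
    return the parameter estimates keyed by country."""
    results = {}
    for country, group in df.groupby("country"):
        model = semopy.Model(MODEL_DESC)
        model.fit(group)
        results[country] = model.inspect()  # estimates for this subsample
    return results
```

A full multigroup analysis would additionally test measurement and structural invariance across the four countries before comparing path coefficients between groups.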
Source journal
Internet Research (Engineering & Technology – Telecommunications)
CiteScore: 11.20
Self-citation rate: 10.20%
Articles published: 85
Review time: >12 weeks
Journal description: This wide-ranging interdisciplinary journal looks at the social, ethical, economic and political implications of the internet. Recent issues have focused on online and mobile gaming, the sharing economy, and the dark side of social media.