Perceived legitimacy of layperson and expert content moderators.

Impact Factor: 2.2 · Q2 · Multidisciplinary Sciences
PNAS Nexus · Pub Date: 2025-05-20 · eCollection Date: 2025-05-01 · DOI: 10.1093/pnasnexus/pgaf111
Cameron Martel, Adam J Berinsky, David G Rand, Amy X Zhang, Paul Resnick
Citations: 0

Abstract

Content moderation is a critical aspect of platform governance on social media and is of particular relevance to addressing the belief in and spread of misinformation. However, current content moderation practices have been criticized as unjust. This raises an important question: who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment (n = 3,000) in which US participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied in whether they were described as consisting of experts (e.g. domain experts), laypeople (e.g. social media users), or nonjuries (e.g. a computer algorithm). We also randomized features of jury composition (size and necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions: nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Maximally legitimate layperson juries were comparable in legitimacy to expert panels. Republicans perceived experts as less legitimate than Democrats did, but still as more legitimate than baseline layperson juries. Conversely, larger lay juries with news knowledge qualifications that engaged in discussion were perceived as more legitimate across the political spectrum. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
