Automated Propaganda: Labeling AI-Generated Political Content Should Not be Required by Law

IF 0.9 · CAS Zone 2 (Philosophy) · JCR Q4 (Ethics)
Bartlomiej Chomanski, Lode Lauwaert
Journal of Applied Philosophy, Vol. 42, No. 3, pp. 994–1015. Published 2025-02-24. DOI: 10.1111/japp.70002 (https://onlinelibrary.wiley.com/doi/10.1111/japp.70002)
Citations: 0

Abstract

A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring that artificially generated and artificially disseminated content be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. We first aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to succeed in improving the quality of political discussion on social media. This is because information that an account spreading or creating political information is a bot or a language model is itself politically relevant information, and people reason very poorly about such information. Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns (automated or not) – appears overstated.
