Moderating Synthetic Content: the Challenge of Generative AI.

Sarah A Fisher, Jeffrey W Howard, Beatriz Kira
Philosophy and Technology (Q1, Arts and Humanities) · Published: 2024-01-01 · Epub: 2024-11-13 · DOI: 10.1007/s13347-024-00818-9
Philosophy and Technology, vol. 37, no. 4, article 133 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561028/pdf/
Citations: 0

Abstract

Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes. Malicious uses of the new technologies can only be expected to proliferate over time. In the face of this threat, social media platforms must surely act. But how? While it is tempting to think they need new sui generis policies targeting synthetic content, we argue that the challenge posed by generative AI should be met through the enforcement of general platform rules. We demonstrate that the threat posed to individuals and society by AI-generated content is no different in kind from that of ordinary harmful content: a threat which is already well recognised. Generative AI massively increases the problem but, ultimately, it requires the same approach. Therefore, platforms do best to double down on improving and enforcing their existing rules, regardless of whether the content they are dealing with was produced by humans or machines.

Source journal: Philosophy and Technology (Arts and Humanities, Philosophy)
CiteScore: 10.40 · Self-citation rate: 0.00% · Articles per year: 98