{"title":"自动宣传:法律不应要求对人工智能生成的政治内容进行标注","authors":"Bartlomiej Chomanski, Lode Lauwaert","doi":"10.1111/japp.70002","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring that artificially generated and artificially disseminated content be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. We first aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to succeed in improving the quality of political discussion on social media. This is because information that an account spreading or creating political information is a bot or a language model is itself politically relevant information, and people reason very poorly about such information. Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns (automated or not) – appears overstated.</p>\n </div>","PeriodicalId":47057,"journal":{"name":"Journal of Applied Philosophy","volume":"42 3","pages":"994-1015"},"PeriodicalIF":0.9000,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Propaganda: Labeling AI-Generated Political Content Should Not be Required by Law\",\"authors\":\"Bartlomiej Chomanski, Lode Lauwaert\",\"doi\":\"10.1111/japp.70002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring that artificially generated and artificially disseminated content be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. We first aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to succeed in improving the quality of political discussion on social media. This is because information that an account spreading or creating political information is a bot or a language model is itself politically relevant information, and people reason very poorly about such information. 
Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns (automated or not) – appears overstated.</p>\\n </div>\",\"PeriodicalId\":47057,\"journal\":{\"name\":\"Journal of Applied Philosophy\",\"volume\":\"42 3\",\"pages\":\"994-1015\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2025-02-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Philosophy\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/japp.70002\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Philosophy","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/japp.70002","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ETHICS","Score":null,"Total":0}
Automated Propaganda: Labeling AI-Generated Political Content Should Not be Required by Law
A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring artificially generated and artificially disseminated content to be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. First, we aim to show that legally requiring the disclosure of the automated nature of bot accounts and AI-generated content is unlikely to improve the quality of political discussion on social media. This is because the fact that an account creating or spreading political content is a bot or a language model is itself politically relevant information, and people reason very poorly about such information. Second, we aim to show that the main motivation for these laws – the threat of coordinated disinformation campaigns (automated or not) – appears overstated.