Beyond the individual: governing AI's societal harm

Nathalie A. Smuha
{"title":"Beyond the individual: governing AI's societal harm","authors":"Nathalie A. Smuha","doi":"10.14763/2021.3.1574","DOIUrl":null,"url":null,"abstract":"In this paper, I distinguish three types of harm that can arise in the context of artificial intelligence (AI): individual harm, collective harm and societal harm. Societal harm is often overlooked, yet not reducible to the two former types of harm. Moreover, mechanisms to tackle individual and collective harm raised by AI are not always suitable to counter societal harm. As a result, policymakers’ gap analysis of the current legal framework for AI not only risks being incomplete, but proposals for new legislation to bridge these gaps may also inadequately protect societal interests that are adversely impacted by AI. By conceptualising AI’s societal harm, I argue that a shift in perspective is needed beyond the individual, towards a regulatory approach of AI that addresses its effects on society at large. Drawing on a legal domain specifically aimed at protecting a societal interest—environmental law—I identify three ‘societal’ mechanisms that EU policymakers should consider in the context of AI. These concern (1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm. Finally, I consider to what extent the European Commission’s new proposal for an AI regulation takes these mechanisms into consideration, before offering concluding remarks. Issue 3 This paper is part of Governing “European values” inside data flows, a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk, Stefania Milan.","PeriodicalId":219999,"journal":{"name":"Internet Policy Rev.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Policy Rev.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14763/2021.3.1574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 36

Abstract

In this paper, I distinguish three types of harm that can arise in the context of artificial intelligence (AI): individual harm, collective harm and societal harm. Societal harm is often overlooked, yet not reducible to the two former types of harm. Moreover, mechanisms to tackle individual and collective harm raised by AI are not always suitable to counter societal harm. As a result, policymakers’ gap analysis of the current legal framework for AI not only risks being incomplete, but proposals for new legislation to bridge these gaps may also inadequately protect societal interests that are adversely impacted by AI. By conceptualising AI’s societal harm, I argue that a shift in perspective is needed beyond the individual, towards a regulatory approach of AI that addresses its effects on society at large. Drawing on a legal domain specifically aimed at protecting a societal interest—environmental law—I identify three ‘societal’ mechanisms that EU policymakers should consider in the context of AI. These concern (1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm. Finally, I consider to what extent the European Commission’s new proposal for an AI regulation takes these mechanisms into consideration, before offering concluding remarks.

This paper is part of Governing “European values” inside data flows (Issue 3), a special issue of Internet Policy Review guest-edited by Kristina Irion, Mira Burri, Ans Kolk and Stefania Milan.