Questioning the EU proposal for an Artificial Intelligence Act: The need for prohibitions and a stricter approach to biometric surveillance

IF 1.3 · Q2 · Information Science & Library Science
Irena Barkane
DOI: 10.3233/IP-211524
Journal: Information Polity (Journal Article)
Publication date: 2022-01-01
Citations: 0

Abstract

Artificial Intelligence (AI)-based surveillance technologies such as facial recognition, emotion recognition and other biometric technologies have been rapidly introduced by both public and private entities around the world, raising major concerns about their impact on fundamental rights, the rule of law and democracy. This article questions the effectiveness of the European Commission's Proposal for a Regulation on Artificial Intelligence, known as the AI Act, in addressing the threats and risks that AI biometric surveillance systems pose to fundamental rights. It argues that, in order to meaningfully address these risks, the proposed classification of such systems should be reconsidered. Although the draft AI Act acknowledges that some AI practices should be prohibited, its multiple exceptions and loopholes should be closed, and new prohibitions, in particular on emotion recognition and biometric categorisation systems, should be added to counter AI surveillance practices that violate fundamental rights. The AI Act should also introduce stronger legal requirements, such as third-party conformity assessment, fundamental rights impact assessment and transparency obligations, and should enhance existing EU data protection law as well as the rights and remedies available to individuals and groups, thus not missing the unique opportunity to adopt the first legal framework that truly promotes trustworthy AI.
Source journal: Information Polity (Information Science & Library Science)
CiteScore: 3.30 · Self-citation rate: 10.00% · Articles per year: 42