On the path to the future: mapping the notion of transparency in the EU regulatory framework for AI

JCR Quartile: Q1 (Social Sciences)
Ida Varošanec
{"title":"On the path to the future: mapping the notion of transparency in the EU regulatory framework for AI","authors":"Ida Varošanec","doi":"10.1080/13600869.2022.2060471","DOIUrl":null,"url":null,"abstract":"ABSTRACT Transparency is the currency of trust. It offers clarity and certainty. This is essential when dealing with intelligent systems which are increasingly making impactful decisions. Such decisions need to be sufficiently explained. With the goal of establishing ‘trustworthy AI’, the European Commission has recently published a legislative proposal for AI. However, there are important gaps in this framework which have not yet been addressed. This article identifies these gaps through a systematic overview of transparency considerations therein. Since transparency is an important means to improve procedural rights, this article argues that the AI Act should contain clear transparency obligations to avoid asymmetries and enable the explainability of automated decisions to those affected by them. The transparency framework in the proposed AI Act leaves open a risk of abuse by companies because their interests do not encompass considerations of AI systems’ ultimate impact on individuals. However, the dangers of keeping transparency as a value without a legal force justify further reflection when regulating AI systems in a way that aims to safeguard opposing interests. To this end, this article proposes inclusive co-regulation instead of self-regulation so that impacted individuals as well as innovators will be empowered to use and trust AI systems.","PeriodicalId":53660,"journal":{"name":"International Review of Law, Computers and Technology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Review of Law, Computers and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/13600869.2022.2060471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
引用次数: 1

Abstract

Transparency is the currency of trust. It offers clarity and certainty. This is essential when dealing with intelligent systems that increasingly make impactful decisions. Such decisions need to be sufficiently explained. With the goal of establishing ‘trustworthy AI’, the European Commission has recently published a legislative proposal for AI. However, there are important gaps in this framework which have not yet been addressed. This article identifies these gaps through a systematic overview of the transparency considerations therein. Since transparency is an important means of improving procedural rights, this article argues that the AI Act should contain clear transparency obligations to avoid asymmetries and enable the explainability of automated decisions to those affected by them. The transparency framework in the proposed AI Act leaves open a risk of abuse by companies because their interests do not encompass considerations of AI systems’ ultimate impact on individuals. However, the dangers of keeping transparency as a value without legal force justify further reflection when regulating AI systems in a way that aims to safeguard opposing interests. To this end, this article proposes inclusive co-regulation instead of self-regulation so that impacted individuals as well as innovators are empowered to use and trust AI systems.
Journal metrics
CiteScore: 3.70
Self-citation rate: 0.00%
Articles published per year: 25