Creating non-discriminatory Artificial Intelligence systems: balancing the tensions between code granularity and the general nature of legal rules

Alba Soriano Arnanz
{"title":"Creating non-discriminatory Artificial Intelligence systems: balancing the tensions between code granularity and the general nature of legal rules","authors":"Alba Soriano Arnanz","doi":"10.7238/idp.v0i38.403794","DOIUrl":null,"url":null,"abstract":"Over the past decade, concern has grown regarding the risks generated by the use of artificial intelligence systems. One of the main problems associated with the use of these systems is the harm they have been proven to cause to the fundamental right to equality and non-discrimination. In this context, it is vital that we examine existing and proposed regulatory instruments that aim to address this particular issue, especially taking into consideration the difficulties of applying the abstract nature that typically characterises legal instruments and, in particular, the equality and non-discrimination legal framework, to the specific instructions that are needed when coding an artificial intelligence instrument that aims to be non-discriminatory. This paper focuses on examining how article 10 of the new EU Artificial Intelligence Act proposal may be the starting point for a new form of regulation that adapts to the needs of algorithmic systems.","PeriodicalId":235695,"journal":{"name":"IDP. Revista de Internet, Derecho y Política","volume":"94 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IDP. Revista de Internet, Derecho y Política","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7238/idp.v0i38.403794","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Over the past decade, concern has grown regarding the risks generated by the use of artificial intelligence systems. One of the main problems associated with the use of these systems is the harm they have been proven to cause to the fundamental right to equality and non-discrimination. In this context, it is vital that we examine existing and proposed regulatory instruments that aim to address this particular issue, especially in light of the difficulty of reconciling the abstract nature that typically characterises legal instruments, and in particular the equality and non-discrimination legal framework, with the specific instructions that are needed when coding an artificial intelligence system that aims to be non-discriminatory. This paper focuses on examining how article 10 of the new EU Artificial Intelligence Act proposal may be the starting point for a new form of regulation that adapts to the needs of algorithmic systems.