AI Challenges and the Inadequacy of Human Rights Protections

JCR quartile: Q2 (Social Sciences)
Hin-Yan Liu
Criminal Justice Ethics, published 2021-01-02 (Journal Article)
DOI: 10.1080/0731129X.2021.1903709
Citations: 2

Abstract

My aim in this article is to set out some counter-intuitive claims about the challenges posed by artificial intelligence (AI) applications to the protection and enjoyment of human rights and to be your guide through my unorthodox ideas. While there are familiar human rights issues raised by AI and its applications, these are perhaps the easiest of the challenges because they are already recognized by the human rights regime as problems. Instead, the more pernicious challenges are those that have yet to be identified or articulated, because they arise from new affordances rather than directly through AI modeled as a technology. I suggest that we need to actively explore the potential problem space on this basis. I suggest that we need to adopt models and metaphors that systematically exclude the possibility of applying the human rights regime to AI applications. This orientation will present us with the difficult, intractable problems that most urgently require responses. There are convincing ways of understanding AI that lock out the very possibility for human rights responses, and this should be grounds for serious concern. I suggest that responses need to exploit both sets of insights I present in this paper: first, that proactive and systematic searches of the potential problem space need to be continuously conducted to find the problems that require responses; and second, that the monopoly that the human rights regime holds with regard to addressing harm and suffering needs to be broken so that we can deploy a greater range of barriers against failures to recognize and remedy AI-induced wrongs.
Source journal: Criminal Justice Ethics (Social Sciences – Law)
CiteScore: 1.10
Self-citation rate: 0.00%
Articles per year: 11