Artificial intelligence, human vulnerability and multi-level resilience

Impact Factor: 3.3 · CAS Tier 3 (Sociology) · JCR Q1 (Law)
Sue Anne Teo
Journal: Computer Law & Security Review, Volume 57, Article 106134
DOI: 10.1016/j.clsr.2025.106134
Publication date: 2025-04-24 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2212473X25000070
Citations: 0

Abstract

Artificial intelligence (AI) is increasingly being deployed across various sectors of society. While bringing progress and promise to scientific discovery, public administration, healthcare, transportation and human well-being generally, artificial intelligence can also exacerbate existing forms of human vulnerability and introduce new vulnerabilities through the interplay of AI inferences, predictions and generated content. This underpins the anxiety of policymakers in managing potential harms and vulnerabilities, and the harried landscape of governance and regulatory modalities, including the European Union's effort to be the first in the world to comprehensively regulate AI.
This article examines the adequacy of existing theories of human vulnerability in countering the challenges posed by artificial intelligence, including how vulnerability is theorised and addressed within human rights law and within existing legislative efforts such as the EU AI Act. Vulnerability is an element that informs the contours of the groups and populations that are protected, for example under non-discrimination law and privacy law. A critical evaluation notes that while human vulnerability is taken into account in governing and regulating AI systems, the vulnerability lens that informs legal responses is particularistic, static and identifiable. In other words, the law demands that vulnerabilities be known in advance in order for meaningful parameters of protection to be designed around them. The individual, as the subject of legal protection, is also expected to be able to identify the harms suffered and thereby seek accountability.
However, AI can displace this straightforward framing and the legal certainty that implicitly underpins how vulnerabilities are dealt with under the law. Through the data-driven inferential insights of predictive AI systems and the content generation enabled by general-purpose AI models, novel, dynamic, unforeseeable and emergent forms of vulnerability can arise that cannot be adequately encompassed within existing legal responses. Instead, this requires an expansion not only of the types of legal responses offered but also of vulnerability theory itself, and of the measures of resilience that should be taken to address the exacerbation of existing vulnerabilities as well as emergent ones.
The article offers a re-theorisation of human vulnerability in the age of AI, informed by the universalist idea of vulnerability theorised by Martha Fineman. A new conceptual framework is offered, through an expanded understanding that sketches out the human condition in this age as one of 'algorithmic vulnerability.' It finds support for this new condition through a vector of convergence drawn from the growing vocabularies of harm, the regulatory direction and scholarship on emerging vulnerabilities. The article proposes the framework of multi-level resilience to account for existing and emerging vulnerabilities. It offers a typology, examining how resilience towards vulnerabilities can be operationalised at the level of the individual, through technological design, and within regulatory initiatives and other measures that promote societal resilience. The article also addresses objections to this new framing, namely that it seemingly results in a problem with no agency, potentially negating fault ascription and blame. Further, it addresses whether the re-conception itself falls into the trap of technological determinism and, finally, how the universalist notion of vulnerability can seemingly negate human autonomy, a key feature of human dignity.
Source journal metrics: CiteScore 5.60 · Self-citation rate 10.30% · Articles per year: 81 · Review time: 67 days
Journal description: CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It is looking for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.