Title: Artificial intelligence, human vulnerability and multi-level resilience
Author: Sue Anne Teo
Journal: Computer Law & Security Review, Volume 57, Article 106134 (Q1, LAW)
Published: 2025-04-24
DOI: 10.1016/j.clsr.2025.106134
URL: https://www.sciencedirect.com/science/article/pii/S2212473X25000070
Citations: 0
Abstract
Artificial intelligence (AI) is increasingly being deployed across various sectors of society. While bringing progress and promise to scientific discovery, public administration, healthcare, transportation and human well-being generally, artificial intelligence can also exacerbate existing forms of human vulnerability and introduce new vulnerabilities through the interplay of AI inferences, predictions and generated content. This underpins the anxiety of policymakers in managing potential harms and vulnerabilities, and the hurried landscape of governance and regulatory modalities, including the European Union’s effort to be the first in the world to comprehensively regulate AI.
This article examines the adequacy of existing theories of human vulnerability in countering the challenges posed by artificial intelligence, including how vulnerability is theorised and addressed within human rights law and within existing legislative efforts such as the EU AI Act. Vulnerability is an element that informs the contours of the groups and populations that are protected, for example under non-discrimination law and privacy law. A critical evaluation notes that while human vulnerability is taken into account in governing and regulating AI systems, the vulnerability lens that informs legal responses is particularistic, static and identifiable. In other words, the law demands that vulnerabilities be known in advance in order for meaningful parameters of protection to be designed around them. The individual, as the subject of legal protection, is also expected to be able to identify the harms suffered and thereby seek accountability.
However, AI can displace this straightforward framing and the legal certainty that implicitly underpins how vulnerabilities are dealt with under the law. Through the data-driven inferential insights of predictive AI systems and the content generation enabled by general-purpose AI models, novel forms of dynamic, unforeseeable and emergent vulnerability can arise that cannot be adequately encompassed within existing legal responses. Instead, what is required is an expansion not only of the types of legal responses offered but also of vulnerability theory itself, and of the measures of resilience that should be taken to address both the exacerbation of existing vulnerabilities and the emergence of new ones.
The article offers a re-theorisation of human vulnerability in the age of AI, one informed by the universalist idea of vulnerability theorised by Martha Fineman. A new conceptual framework is offered through an expanded understanding that sketches out the human condition in this age as one of ‘algorithmic vulnerability.’ It finds support for this new condition in a convergence of the growing vocabularies of harm, the direction of regulation, and scholarship on emerging vulnerabilities. The article proposes a framework of multi-level resilience to account for existing and emerging vulnerabilities. It offers a typology, examining how resilience towards vulnerabilities can be operationalised at the level of the individual, through technological design, and within regulatory initiatives and other measures that promote societal resilience. The article also addresses objections to this new framing, namely that it seemingly results in a problem with no agency, potentially negating fault ascription and blame. Further, it addresses whether the re-conception itself falls into the trap of technological determinism and, finally, how the universalist notion of vulnerability can seemingly negate human autonomy, a key feature of human dignity.
Journal Introduction:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in Europe and the Pacific Rim. It looks for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.