Non-discrimination law, the GDPR, the AI Act and the (now withdrawn) AI Liability Directive proposal offering gateways to pre-trial knowledge of algorithmic discrimination
{"title":"Non-discrimination law, the GDPR, the AI act and the - now withdrawn - AI liability directive proposal offering gateways to pre-trial knowledge of algorithmic discrimination","authors":"Ljupcho Grozdanovski","doi":"10.1007/s43681-025-00754-0","DOIUrl":null,"url":null,"abstract":"<p>This article focuses on the evidence necessary to support claims of discrimination arising from AI-assisted recruitment. It addresses two main issues. First, given that discrimination may be subtly expressed by (possibly opaque) AI systems, this article examines the EU legal frameworks designed to facilitate access to explanations and evidence capable of revealing discriminatory bias in automated recruitment processes. Those provisions include the Equality Directives, the GDPR, the AI Act (AIA), and the now-withdrawn AI Liability Directive (AILD) proposal. In analysing those provisions, particular attention is paid to the types of information that may be sought: the logic behind an AI’s output, the reasons a human decision-maker relied on that output, and the AI system’s compliance with the AIA. Second, the article determines which among the various applicable provisions should be treated as <i>lex specialis</i>, that is, the specific rule that should be preferentially applied to obtain pre-trial knowledge of algorithmic discrimination. In this context, special emphasis is placed on Articles 22 GDPR and 86 AIA, both of which recognize a right to an explanation and are potentially applicable to automated recruitment systems, since those can be classified as both high-risk under Annex III of the AIA and involving personal data processing, under the GDPR. From the standpoint of a litigant’s ability to satisfy the procedural requirements of both provisions, the article argues that Article 86 AIA may offer a more accessible pathway than Article 22 GDPR, both in terms of the scope of information provided and the conditions required for access. Nonetheless, neither provision guarantees automatic disclosure; access remains conditional and often subject to stringent procedural requirements. This selective, rather than automatic approach to transparency raises important questions about its implications for fundamental rights, particularly the right to access justice and effective remedies.</p>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"5039 - 5062"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00754-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This article focuses on the evidence necessary to support claims of discrimination arising from AI-assisted recruitment. It addresses two main issues. First, given that discrimination may be subtly expressed by (possibly opaque) AI systems, the article examines the EU legal frameworks designed to facilitate access to explanations and evidence capable of revealing discriminatory bias in automated recruitment processes. These frameworks include the Equality Directives, the GDPR, the AI Act (AIA), and the now-withdrawn AI Liability Directive (AILD) proposal. In analysing their provisions, particular attention is paid to the types of information that may be sought: the logic behind an AI system's output, the reasons a human decision-maker relied on that output, and the AI system's compliance with the AIA. Second, the article determines which among the various applicable provisions should be treated as lex specialis, that is, the specific rule that should be applied preferentially to obtain pre-trial knowledge of algorithmic discrimination. In this context, special emphasis is placed on Article 22 GDPR and Article 86 AIA, both of which recognize a right to an explanation and are potentially applicable to automated recruitment systems, since such systems can be classified as high-risk under Annex III of the AIA and involve personal data processing under the GDPR. From the standpoint of a litigant's ability to satisfy the procedural requirements of both provisions, the article argues that Article 86 AIA may offer a more accessible pathway than Article 22 GDPR, both in the scope of information provided and in the conditions required for access. Nonetheless, neither provision guarantees automatic disclosure; access remains conditional and often subject to stringent procedural requirements. This selective, rather than automatic, approach to transparency raises important questions about its implications for fundamental rights, particularly the rights to access justice and to effective remedies.