Title: AI liability in Europe: How does it complement risk regulation and deal with the problem of human oversight?
Author: Beatriz Botero Arcila
Journal: Computer Law & Security Review, vol. 54, Article 106012 (Q1, LAW; Impact Factor 3.3)
DOI: 10.1016/j.clsr.2024.106012
Published: 2024-06-29
URL: https://www.sciencedirect.com/science/article/pii/S0267364924000797
Citations: 0
Abstract
Who should compensate you if you get hit by a car in “autopilot” mode: the safety driver or the car manufacturer? What if you find out you were unfairly discriminated against by an AI decision-making tool that was being supervised by an HR professional? Should the developer compensate you, the company that procured the software, or the (employer of the) HR professional who was “supervising” the system's output?
These questions do not have easy answers. In the European Union and elsewhere around the world, AI governance is turning towards risk regulation. Risk regulation alone is, however, rarely optimal. The situations above all involve liability for harms that are caused by or with an AI system. While risk regulations like the AI Act govern some aspects of these human-machine interactions, they offer those impacted by AI systems no rights and few avenues to seek redress. From a corrective justice perspective, risk regulation must also be complemented by liability law because, when harms do occur, harmed individuals should be compensated. From a risk-prevention perspective, risk regulation may still fall short of creating optimal incentives for all parties to take precautions.
Because risk regulation is not enough, scholars and regulators around the world have stressed that AI regulations should be complemented by liability rules to address AI harms when they occur. Using a law and economics framework, this Article examines how the recently proposed AI liability regime in the EU – a revision of the Product Liability Directive and an AI Liability Directive – effectively complements the AI Act, and how it addresses the particularities of AI-human interactions.
Journal description:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security, and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It seeks papers within this subject area that display good-quality legal analysis and new lines of legal thought or policy development, going beyond mere description of the subject area, however accurate that may be.