Maria Lillà Montagnani , Marie-Claire Najjar , Antonio Davola
Computer Law & Security Review, Vol. 53, Article 105984 (published 2024-05-31)
DOI: 10.1016/j.clsr.2024.105984
Available at: https://www.sciencedirect.com/science/article/pii/S0267364924000517
The EU Regulatory approach(es) to AI liability, and its Application to the financial services market
The continued progress of Artificial Intelligence (AI) can benefit many aspects of society and fields of the economy, yet it poses significant risks both to those who offer such technologies and to those who use them. These risks are heightened by the unpredictability of developments in AI technology (such as the increasing autonomy of self-learning systems), which makes it even more difficult to build a comprehensive legal framework that accounts for all potential legal and ethical issues arising from the use of AI. Enforcement authorities therefore face growing difficulties in checking compliance with applicable legislation and in assessing liability, owing to the specific features of AI: complexity, opacity, autonomy, unpredictability, openness, data-drivenness, and vulnerability. These problems are particularly significant in areas such as financial markets, where the consequences of malfunctioning AI systems are likely to have a major impact both on the protection of individuals and on overall market stability. This scenario challenges policymaking in an increasingly digital and global context, in which it is difficult for regulators to predict and address the impact of AI systems on the economy and society, and to ensure that such systems are human-centric, ethical, explainable, sustainable, and respectful of fundamental rights and values. The European Union has been devoting increasing attention to closing the gap between the existing legal framework and AI. Some of the legislative proposals under consideration call for preventive legislation and introduce obligations on different actors (such as the AI Act), while others have a compensatory scope and seek to build a liability framework (such as the proposed AI Liability Directive and the revised Product Liability Directive). At the same time, cross-sectoral regulations must coexist with sector-specific initiatives and the rules they establish.
The present paper starts by assessing the fit of the existing European liability regime(s) with the constantly evolving AI landscape, identifying the normative foundations on which a liability regime for such technology should be built. It then addresses the proposed additions and revisions to the legislation, focusing on how they seek to govern AI systems, with particular attention to their implications for highly regulated complex systems such as financial markets. Finally, it considers potential additional measures that could continue to strike a balance between the interests of all parties, namely by seeking to reduce the inherent risks that accompany the use of AI and to leverage its major benefits for society and the economy.
About the journal:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security, and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It seeks papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.