Computer Law & Security Review — Latest Articles

Artificial intelligence, human vulnerability and multi-level resilience
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106134 | Pub Date: 2025-04-24 | DOI: 10.1016/j.clsr.2025.106134
Sue Anne Teo

Abstract: Artificial intelligence (AI) is increasingly being deployed across various sectors of society. While bringing progress and promise to scientific discovery, public administration, healthcare, transportation and human well-being generally, AI can also exacerbate existing forms of human vulnerability and introduce new vulnerabilities through the interplay of AI inferences, predictions and generated content. This underpins the anxiety of policymakers in managing potential harms and vulnerabilities, and the harried landscape of governance and regulatory modalities, including the European Union's effort to be the first in the world to comprehensively regulate AI.

This article examines the adequacy of existing theories of human vulnerability in countering the challenges posed by artificial intelligence, including how vulnerability is theorised and addressed within human rights law and within existing legislative efforts such as the EU AI Act. Vulnerability is an element that informs the contours of the groups and populations that are protected, for example under non-discrimination law and privacy law. A critical evaluation notes that while human vulnerability is taken into account in governing and regulating AI systems, the vulnerability lens that informs legal responses is particularistic, static and identifiable. In other words, the law demands that vulnerabilities be known in advance in order for meaningful parameters of protection to be designed around them. The individual, as the subject of legal protection, is also expected to be able to identify the harms suffered and thereby seek accountability.

However, AI can displace this straightforward framing and the legal certainty that implicitly underpins how vulnerabilities are dealt with under the law. Through the data-driven inferential insights of predictive AI systems and the content generation enabled by general-purpose AI models, novel, dynamic, unforeseeable and emergent forms of vulnerability can arise that cannot be adequately encompassed within existing legal responses. Addressing them requires an expansion not only of the types of legal responses offered but also of vulnerability theory itself and of the measures of resilience that should be taken against the exacerbation of existing vulnerabilities as well as the emergence of new ones.

The article offers a re-theorisation of human vulnerability in the age of AI, informed by the universalist idea of vulnerability theorised by Martha Fineman. A new conceptual framework is offered through an expanded understanding that sketches out the human condition in this age as one of 'algorithmic vulnerability.' It finds support for this new condition in a vector of convergence drawn from the growing vocabularies of harm, the regulatory direction, and scholarship on emerging vulnerabilities. The article proposes the framework of multi-…

Citations: 0

Data portability strategies in the EU: Moving beyond individual rights
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106135 | Pub Date: 2025-04-23 | DOI: 10.1016/j.clsr.2025.106135
Yongle Chao, Meihe Xu, Aurelia Tamò-Larrieux, Konrad Kollnig

Abstract: Data-driven innovation promises benefits for citizens, businesses, and organizations. To release the economic and social value of data, however, these actors need access to data. To provide such access, EU policymakers have introduced the concept of data portability. Data portability has traditionally been considered an individual right to enhance data subjects' control over their personal data under the GDPR. Today, however, the concept has been further developed in the Data Act (DA) and the Digital Markets Act (DMA) to complement and enhance the GDPR right to data portability. Yet the DA and DMA have different regulatory objectives from the GDPR. We argue in this paper that the concept of data portability has evolved beyond its original scope of protecting individual rights and is in the midst of a paradigm shift towards better data access and flow for multiple stakeholders. However, this paradigm shift has rarely been explored and has not yet been achieved in practice, as the academic and practical understanding of data portability is still focused on the individual level. To fill this gap, we analyze the evolution of data portability as an important novel policy instrument in (newer) EU legislation and reflect on the shortcomings of the current understanding and implementation approach by means of use cases. We make the argument that data portability should be understood as a foundation for unlocking the collective value of data. We contend that data interoperability is both a technical issue and a political concern, and argue that sectoral and modular data interoperability standards are an opportunity for facilitating the effective implementation of data portability. Last, we call for improving data literacy among stakeholders, a possible path for closing the gap between regulation and effective enforcement by promoting an understanding of data portability.

Citations: 0

AI-driven civil litigation: Navigating the right to a fair trial
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57 | Pub Date: 2025-04-19 | DOI: 10.1016/j.clsr.2025.106136
Seyhan Selçuk, Nesibe Kurt Konca, Serkan Kaya

Abstract: The integration of artificial intelligence (AI) into legal proceedings has gained significant traction in recent years, particularly following the Covid-19 pandemic. As part of the broader movement toward the digitalization of legal systems, AI is seen as a tool to improve access to justice, enhance efficiency, and adopt a human-centered approach. However, the rapid advancement of AI necessitates careful consideration of fundamental human rights, especially the right to a fair trial as enshrined in Article 6 of the European Convention on Human Rights (ECHR). Recently, the European Union's Artificial Intelligence Act classified AI systems used in the judiciary as high-risk, requiring impact assessments on fundamental rights, including the right to a fair trial. This paper explores the impact of AI-driven judicial tools on the right to a fair trial, focusing on key components such as the right to be heard, judicial independence, impartiality, and the principle of publicity, while examining the risks and opportunities posed by AI in civil litigation, including challenges such as algorithmic discrimination, digital exclusion, and the potential erosion of human judges' cognitive abilities.

Citations: 0

Mapping the empirical literature of the GDPR's (In-)effectiveness: A systematic review
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106129 | Pub Date: 2025-04-19 | DOI: 10.1016/j.clsr.2025.106129
Wenlong Li, Zihao Li, Wenkai Li, Yueming Zhang, Aolan Li

Abstract: In the realm of data protection, a striking disconnect prevails between the traditional domains of doctrinal, legal, theoretical, and policy-based inquiry and a burgeoning body of empirical evidence. Much of the scholarly and regulatory discourse remains entrenched in abstract legal principles or normative frameworks, leaving the empirical landscape uncharted or minimally engaged. Since the birth of EU data protection law, a modest body of empirical evidence has been generated but remains widely scattered and unexamined. Such evidence offers vital insights into the effectiveness of data protection measures but languishes on the periphery, inadequately integrated into the broader conversation. To make a meaningful connection, we conduct a comprehensive review and synthesis of empirical research spanning nearly three decades (1995 to March 2022), advocating for a more robust integration of empirical evidence into the evaluation and review of the GDPR while laying a methodological foundation for coordinated research. By categorising the evidence into four distinct groups (Awareness and Trust, Operational Performance, Ripple Effect, and Normative Clarity), we provide a structured analysis and highlight the variety and nuances of the empirical evidence produced about the GDPR. Our discussion offers critical reflections on the current orientations and designs of evaluation work, challenging some popular but misguided orientations that significantly influence public debate and even the direction of empirical and doctrinal research. This synthesis also sheds light on several understated aspects surfaced by our systematic review, including the complex structure of the GDPR and the internal contradictions between its components, the GDPR's interaction with other normative values and legal frameworks, and the unintended consequences imposed by the GDPR on values not explicitly recognised as regulatory objectives (such as innovation). We further propose methodological improvements in how empirical evidence can be generated and utilised, stressing the need for more guided, coordinated and rigorous empirical research. By re-aligning empirical focus towards these ends and establishing strategic coordination at the community level, we seek to inform and underpin evaluative work that aligns empirical inquiries with policy and doctrinal needs while truly reflecting the complexities and challenges of safeguarding personal data in the digital age.

Citations: 0

Authorship in Human-AI collaborative creation: A creative control theory perspective
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106139 | Pub Date: 2025-04-18 | DOI: 10.1016/j.clsr.2025.106139
Wei Liu, Weijie Huang

Abstract: The emergence of human-AI collaborative creation (HAIC) models has provided a good opportunity to uncover the principles of authorship identification. To clarify whether humans exert control over AI-generated content (AIGC) and whether such control is sufficient to confer authorship, we propose the theory of creative control from a law and aesthetics perspective. According to this theory, a human can claim authorship when they are guided by artistic imagery thinking and manifest individual creativity throughout the entire creation process from conception to execution. In the HAIC model, the unpredictable nature of the AI black box does not impede the recognition of users' control, as users possess the capability for artistic imagery thinking to direct the entire creation process. If their contribution meets the originality standard, they qualify as the author of the AIGC. Current prevailing views that evaluate AIGC's originality on the basis of either the final form of expression or the users' prompts in the initial stage overlook the dynamic nature of the creative process.

Citations: 0

Scoring the European citizen in the AI era
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106130 | Pub Date: 2025-04-14 | DOI: 10.1016/j.clsr.2025.106130
Nathan Genicot

Abstract: Social scoring is one of the AI practices banned by the AI Act. This ban is explicitly inspired by China, which in 2014 announced its intention to set up a large-scale government project, the Social Credit System, aiming to rate every Chinese citizen according to their good behaviour, using digital technologies and AI. But in Europe, individuals are also scored by public and private bodies in a variety of contexts, such as assessing creditworthiness, monitoring employee productivity, detecting social fraud or terrorist risks, and so on. However, the AI Act does not intend to prohibit these types of scoring, as they would qualify as "high-risk AI systems", which are authorised while subject to various requirements. One might therefore think that the ban on social scoring will have no practical effect on the scoring practices already in use in Europe, and that it is merely a vague safeguard in case an authoritarian power is tempted to set up such a system on European territory. Contrary to this view, this article argues that the ban has been drafted in a way that is flexible and therefore likely to make it a useful tool, similar and complementary to Article 22 of the General Data Protection Regulation, to protect individuals against certain forms of disproportionate use of AI-based scoring.

Citations: 0

A Solid use case to empower and protect data subjects: Responsibilities under GDPR for governance of personal data stores
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106133 | Pub Date: 2025-04-13 | DOI: 10.1016/j.clsr.2025.106133
Michiel Fierens, Harshvardhan J. Pandit, Aurelia Tamo-Larrieux, Kimberly Garcia

Abstract: Decentralised data governance has emerged as an alternative model in response to the challenges of managing data and privacy in conventional centralised models. 'Personal Data Stores' (PDS) are at the forefront of this movement and give individuals forms of control over the storage and management of their data, with the goal of empowering them. In this article, we argue that PDS, while being important technological innovations, are challenging to implement in the current regulatory landscape, as the interpretation of responsibilities under the GDPR is woefully inadequate for decentralised systems. This represents a challenge to the decentralisation movement and makes it difficult to empower and protect individuals under the GDPR (data subjects) using PDS. A thorough understanding of the technological and legal situation, and therefore an interdisciplinary approach, is essential to make policymakers aware of the efforts that still need to be made to realise the decentralisation paradigm's goal. We therefore build upon research investigating GDPR compliance in decentralised data storage and management, but do so through an interdisciplinary lens applied to an emerging application, Solid, which, as the leading PDS implementation, provides technical specifications for implementing personal data stores. Taking this interdisciplinary approach, we consider how the legal definitions of the GDPR and the implications of established case law interact with Solid's technical specifications and its possible implementations. We conclude with recommendations on the division of responsibilities for policymakers, authorities, market participants and technical developers to simultaneously protect and empower those involved in the use of PDS, particularly through Solid. Furthermore, we discuss the role of decentralised systems such as Solid and the currently unclear regulatory landscape surrounding them in the context of implementing the Data Governance Act (DGA). The implications for further AI development and within data spaces are also considered.

Citations: 0

The elephant in the room: A global mechanism for E-Sport disputes
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106128 | Pub Date: 2025-04-10 | DOI: 10.1016/j.clsr.2025.106128
Serkan Kaya, Eda Şahin-Şengül, Aybüke Keskin

Abstract: The e-sports industry has seen exponential growth, leading to increased disputes among players, teams, and organisers. Traditional dispute resolution methods, such as litigation, often fall short due to their time-consuming nature, the parties' lack of technical expertise, and the international scope of e-sports disputes. This article highlights the potential of Blockchain Dispute Resolution (BDR) mechanisms to address these challenges. BDR offers several advantages for e-sports dispute resolution, ensuring transparency by recording all transactions and decisions on a public ledger that can be accessed by all parties involved. This reduces the risk of biased decisions and enhances trust among stakeholders. Additionally, smart contracts can automate the enforcement of agreements, reducing the need for intermediaries and speeding up the resolution process. The article also underscores the importance of developing standardised rules and protocols for blockchain-based dispute resolution in e-sports, as these provide a structured approach to the recognition and enforcement of decisions made through blockchain mechanisms. The article therefore argues that the integration of blockchain technology in e-sports not only offers potential solutions for dispute resolution but also opens new avenues for monetisation and fan engagement, exciting the industry and its fans with the possibilities it brings for a more interactive and engaging future.

Citations: 0

Public data authorized operation and the rise of data finance in China: origins, risks, and prospects
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 57, Article 106132 | Pub Date: 2025-04-10 | DOI: 10.1016/j.clsr.2025.106132
Jingxian Chen (Lecturer)

Abstract: This article explores the introduction of public data authorized operation (PDAO) in China and its role in the emergence of data finance, a new revenue model for local governments facing fiscal pressure due to declining land finance. It argues that the shift toward data finance is driven by local governments' need for alternative fiscal resources, enabled by policies promoting the conditional and paid use of public data. The article examines the risks associated with a revenue-oriented approach to PDAO, such as the erosion of free public data openness, the formation of administrative monopolies, increased costs for data utilization, and the fragmentation of data regulations across regions. The article offers insights into the future of data finance and PDAO in China. It suggests that data finance should not be driven solely by short-term revenue goals but should rather be considered a strategic tool aimed at enhancing the country's digital infrastructure and fostering long-term innovation. A comprehensive fiscal framework, including clear pricing standards, balanced revenue allocation mechanisms, and robust fiscal oversight, should be established to ensure that funds generated from PDAO are managed legally, transparently, and efficiently.

Citations: 0

Future themes in regulating artificial intelligence in investment management
IF 3.3 | Zone 3 | Sociology
Computer Law & Security Review, Vol. 56, Article 106111 | Pub Date: 2025-03-06 | DOI: 10.1016/j.clsr.2025.106111
Wojtek Buczynski, Felix Steffek, Mateja Jamnik, Fabio Cuzzolin, Barbara Sahakian

Abstract: We are witnessing the emergence of the "first generation" of AI and AI-adjacent soft and hard laws, such as the EU AI Act or South Korea's Basic Act on AI. In parallel, existing industry regulations, such as GDPR, MIFID II or SM&CR, are being "retrofitted" and reinterpreted from the perspective of AI. In this paper we identify and analyze ten novel, "second generation" themes which are likely to become regulatory considerations in the near future: non-personal data, managerial accountability, robo-advisory, generative AI, privacy enhancing techniques (PETs), profiling, emergent behaviours, smart contracts, ESG and algorithm management. The themes have been identified on the basis of ongoing developments in AI, existing regulations and industry discussions. Prior to making any new regulatory recommendations, we explore whether novel issues can be solved by existing regulations. The contribution of this paper is a comprehensive picture of emerging regulatory considerations for AI in investment management, as well as broader financial services, and the ways they might be addressed by regulations, whether future or existing ones.

Citations: 0