{"title":"Personal data controllers and device producers: Mind the gap","authors":"Efstratios Koulierakis","doi":"10.1016/j.clsr.2025.106172","DOIUrl":"10.1016/j.clsr.2025.106172","url":null,"abstract":"<div><div>It seemed well established that producing a smart device could not, by itself, render someone a personal data controller in the absence of subsequent influence over the processing operations (the influence thesis). In contrast, legal scholars have introduced a new interpretation of European data protection law that seeks to apply the General Data Protection Regulation (GDPR) to the processing operations of smart devices even if no entity influences the processing remotely after the release of the product. This approach classifies producers as personal data controllers for device-based processing (the producer-controller thesis). The proponents of the producer-controller thesis highlight the increasing importance of smart devices that store data locally and the need for protecting consumers’ rights in that context. However, as this paper claims, the GDPR is not the proper legal instrument for addressing the safety standards of smart products that process data locally. These considerations relate to legislative texts that prescribe product requirements, such as the AI Act and the Cyber Resilience Act. On those grounds, the present work criticises the producer-controller thesis. As this paper concludes, expanding the concept of ‘controller’ to encompass producers of smart devices does not enhance the protection of data subjects and does not fit within the current data protection framework of the European Union.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106172"},"PeriodicalIF":3.3,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative review of Symbolism and Connectionism in AI&Law: Tracing evolution and exploring integration","authors":"Bin Wei","doi":"10.1016/j.clsr.2025.106166","DOIUrl":"10.1016/j.clsr.2025.106166","url":null,"abstract":"<div><div>AI&Law explores computational methods for automating legal reasoning and prediction, evolving in parallel with AI research and developing along two primary paths: symbolic and connectionist approaches. Symbolic AI&Law centers on the formal representation of legal concepts and on performing reasoning based on statutes and case law. These methods have led to the development of rule-based and case-based reasoning systems, successfully implemented in legal expert systems. The primary advantage of symbolic approaches is their inherent explainability, although they face limitations due to the knowledge acquisition bottleneck. Connectionist AI&Law encourages legal professionals to adopt inductive inference and use “bottom-up” learning models to extract hidden features from large datasets. This paradigm incorporates machine learning and natural language processing (NLP) techniques to address legal information extraction, retrieval, text classification, summarization, and legal prediction tasks. The advent of large language models (LLMs) has further expanded the capabilities of connectionist models, enabling more sophisticated legal text analysis and greater predictive accuracy, though issues of model transparency and hallucination remain active areas of research. Symbolic and connectionist approaches can complement each other: symbolic models can enhance the transparency and explainability of connectionist systems, while connectionist techniques can optimize the scalability and efficiency of symbolic reasoning processes. These two paradigms exhibit strong potential for integration, particularly in domains such as explainable dialogue systems, neuro-symbolic systems, legal knowledge embedding, and legal argumentation mining.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106166"},"PeriodicalIF":3.3,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The algorithmic muse and the public domain: Why copyright’s legal philosophy precludes protection for generative AI outputs","authors":"Ezieddin Elmahjub","doi":"10.1016/j.clsr.2025.106170","DOIUrl":"10.1016/j.clsr.2025.106170","url":null,"abstract":"<div><div>Generative AI (GenAI) outputs are not copyrightable. This article argues why. We bypass conventional doctrinal analysis that focuses on black letter law notions of originality and authorship to re-evaluate copyright's foundational philosophy. GenAI fundamentally severs the direct human creative link to expressive form. Traditional theories (utilitarian incentive, labor desert, and personality) fail to provide a coherent justification for protection. The public domain constitutes the default baseline for intellectual creations. Those seeking copyright coverage for GenAI outputs bear the burden of proof. Granting copyright to raw GenAI outputs would not only be philosophically unsound but would also trigger an unprecedented enclosure of the digital commons, creating a legal quagmire and stifling future innovation. The paper advocates for a clear distinction: human creative contributions to AI-generated works may warrant protection, but the raw algorithmic output should remain in the public domain.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106170"},"PeriodicalIF":3.3,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144679028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systematic literature review on dark patterns for the legal community: definitional clarity – and a legal classification based on the Unfair Commercial Practices Directive","authors":"Cecilia Isola , Fabrizio Esposito","doi":"10.1016/j.clsr.2025.106169","DOIUrl":"10.1016/j.clsr.2025.106169","url":null,"abstract":"<div><div>This article offers a clear definition of dark patterns and a comprehensive classification thereof using the framework provided by Directive 2005/29 on unfair commercial practices. The analysis builds on a systematic literature review that analyses how dark patterns are defined and the types of dark patterns discussed in 116 articles, conference papers and regulatory documents. Accordingly, 'dark pattern' can be defined as 'the design of a digital choice environment that is capable of distorting user behaviour'. We point out that the following elements should not be included in the definition of dark pattern: intentionality of the designer and exploitation of heuristics or cognitive bias. We identify 42 types of dark patterns. All of them can be classified as: misleading omission; misleading action; harassment; undue influence; coercion. This classification is based on legal categories and helps bridge the gap between research and legal practice, thereby increasing the expected social impact of research on dark patterns.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106169"},"PeriodicalIF":3.3,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144670443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the use of LLMs in the Italian legal domain: A survey on recent applications","authors":"Marco Siino","doi":"10.1016/j.clsr.2025.106164","DOIUrl":"10.1016/j.clsr.2025.106164","url":null,"abstract":"<div><div>This article delves into recent applications of Transformers (also known as <em>Large Language Models</em> or <em>LLMs</em>) in the context of the Italian legal language. The impressive speed at which the literature in this domain has recently grown (i.e., in 2022 and 2023) is evidenced by the number of related works collected in this study. The focus of this work is on exploring how LLMs are being utilized within the framework of Italian law. In detail, we first introduce the tasks that have been addressed in the Italian legal domain, referencing the worldwide literature to motivate them and to identify the most relevant works. After introducing the tasks, we report and discuss all existing applications to these tasks, specifically in the Italian legal domain. Through this work, we intend to deliver the state of the art in LLM applications in the Italian legal domain to researchers as well as practising attorneys.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106164"},"PeriodicalIF":3.3,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144588726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unpacking copyright infringement issues in the GenAI development lifecycle and a peek into the future","authors":"Cheng L. SAW, Bryan Zhi Yang TAN","doi":"10.1016/j.clsr.2025.106163","DOIUrl":"10.1016/j.clsr.2025.106163","url":null,"abstract":"<div><div>Generative AI (“GAI”) refers to deep learning models that ingest input data and “learn” to produce output that mimics such data when duly prompted. This feature, however, has given rise to numerous claims of infringement by the owners of copyright in the training material. Relevantly, three questions have emerged for the law of copyright: (1) whether <em>prima facie</em> acts of infringement are disclosed at each stage of the GAI development lifecycle; (2) whether such acts fall within the scope of the text and data mining (“TDM”) exceptions; and (3) whether (and, if so, how successfully) the fair use exception may be invoked by GAI developers as a defence to infringement claims. This paper critically examines these questions in turn and considers, in particular, their interplay with the so-called “memorisation” phenomenon. It is argued that although infringing acts might occur in the process of downloading in-copyright training material and training the GAI model in question, TDM and fair use exceptions (where available) may yet exonerate developers from copyright liability under the right conditions.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106163"},"PeriodicalIF":3.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deepfake detection in generative AI: A legal framework proposal to protect human rights","authors":"Felipe Romero-Moreno","doi":"10.1016/j.clsr.2025.106162","DOIUrl":"10.1016/j.clsr.2025.106162","url":null,"abstract":"<div><div>Deepfakes, exploited for financial fraud, political misinformation, non-consensual imagery, and targeted harassment, represent a rapidly evolving threat to global information integrity, demanding immediate and coordinated intervention. This research undertakes technical and comparative legal analyses of deepfake detection methods. It examines key mitigation strategies, including AI-powered detection, provenance tracking, and watermarking, highlighting the pivotal role of the Coalition for Content Provenance and Authenticity (C2PA) in establishing media authentication standards. The study investigates deepfakes' complex intersections with the admissibility of legal evidence, non-discrimination, data protection, freedom of expression, and copyright, questioning whether existing legal frameworks adequately balance advances in detection technologies with the protection of individual rights. As national strategies become increasingly vital amid geopolitical realities and fragmented global governance, the research advocates for a unified international approach grounded in UN Resolution 78/265 on safe, secure, and trustworthy AI. It calls for a collaborative framework that prioritizes interoperable technical standards and harmonized regulations. The paper critiques legal frameworks in the EU, US, UK, and China, jurisdictions selected for their global digital influence and divergent regulatory philosophies, and recommends developing robust, accessible, adaptable, and internationally interoperable tools to address evidentiary reliability, privacy, freedom of expression, copyright, and algorithmic bias. Specifically, it proposes enhanced technical standards; regulatory frameworks that support the adoption of explainable AI (XAI) and C2PA; and strengthened cross-sector collaboration to foster a trustworthy deepfake ecosystem.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106162"},"PeriodicalIF":3.3,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144338750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personal data propertisation in China: A difficult road under the 20 Key Measures on Data","authors":"Qifan Yang","doi":"10.1016/j.clsr.2025.106153","DOIUrl":"10.1016/j.clsr.2025.106153","url":null,"abstract":"<div><div>The Opinions on Building Basic Systems for Data to Better Exploit the Value of Data Factors (the 20 Key Measures on Data) in China has significantly influenced the discourse around propertising personal data, leading to an approach to personal data protection distinct from those of the EU and the US. The ownership-usufruct system and the conditional personal data property system have emerged as the two representative property systems in China. In the ownership-usufruct system, the ownership of personal data belongs to the original subject, and the data processors (the data controllers in the GDPR) obtain their usufructuary right through “obtaining consent + consideration”. In the conditional personal data property system, the data processors originally acquire the data property right based on legitimate data collection behaviour. The data property right is limited by pre-existing rights, the proportionality principle, and the fair use principle. Rather than idealising the propertisation of personal data, this paper offers a nuanced critique of its limitations, including conceptual ambiguities, the failure of the consent mechanism, and unbalanced digital market structures. These challenges reveal that the propertisation of personal data is a socio-technical issue that requires both legal frameworks and technical infrastructures.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106153"},"PeriodicalIF":3.3,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144322509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint and several liability between Europol and a Member State for damages from unlawful disclosure of personal data (comment on European Court of Justice, 5 March 2024, C‑755/21 P)","authors":"Andrea Parziale","doi":"10.1016/j.clsr.2025.106161","DOIUrl":"10.1016/j.clsr.2025.106161","url":null,"abstract":"<div><div>This case note examines a judgment by the Court of Justice on Europol's civil liability for unlawful disclosure of personal data during cross-border cooperation with Member State authorities. The Court overturned the General Court's decision, establishing that joint and several liability between Europol and Member States can arise under Article 50 of Regulation 2016/794 (Europol Regulation), informed by Recital 57. While this ruling facilitates compensation for injured parties when the exact source of data disclosure cannot be identified, the Court awarded only €2000 in damages to the appellant, a modest sum that may undermine Article 50's effectiveness as a data protection mechanism. The case note analyzes both the joint liability determination and the damages quantification, arguing that while the recognition of joint liability strengthens data subject protection in principle, the symbolic damages awarded significantly limit its practical impact as an accountability tool for ensuring responsible data handling in cross-border criminal investigations.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"58 ","pages":"Article 106161"},"PeriodicalIF":3.3,"publicationDate":"2025-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144240726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asia–Pacific developments","authors":"Gabriela Kennedy (Partner) , Joanna Wong (Associate) , Arun Babu (Partner) , Gayathri Poti (Associate) , Avindra Yuliansyah Taher (Partner) , Kiyoko Nakaoka (Attorney-at-Law) , Jillian Chia (Partner) , Beatrice Yew (Senior Associate) , Karen Ngan (Partner) , Lam Chung Nian (Partner) , Huey Lee (Associate) , Quang Minh Vu (Associate)","doi":"10.1016/j.clsr.2025.106151","DOIUrl":"10.1016/j.clsr.2025.106151","url":null,"abstract":"<div><div>This column provides a country by country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications' industries in key jurisdictions across the Asia Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"57 ","pages":"Article 106151"},"PeriodicalIF":3.3,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144189436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}