{"title":"Asia-Pacific Developments","authors":"Gabriela Kennedy , Joanna Wong , Justin Lai , James North , Philip Catania , Michael do Rozario , Jack Matthews , Arun Babu , Gayathri Poti , Ishita Vats , Kiyoko Nakaoka , Lam Chung Nian , Emma Choe","doi":"10.1016/j.clsr.2025.106116","DOIUrl":"10.1016/j.clsr.2025.106116","url":null,"abstract":"<div><div>This column provides a country by country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications' industries in key jurisdictions across the Asia Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106116"},"PeriodicalIF":3.3,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"My AI, my code, my secret – Trade secrecy, informational transparency and meaningful litigant participation under the European Union's AI Liability Directive Proposal","authors":"Ljupcho Grozdanovski","doi":"10.1016/j.clsr.2025.106117","DOIUrl":"10.1016/j.clsr.2025.106117","url":null,"abstract":"<div><div>In European Union (EU) law, the AI Liability Directive (AILD) proposal included a right for victims of harm caused by high-risk AI systems to request the disclosure of relevant evidence. That right is, however, limited by the protection of trade secrets. During legal proceedings, business confidentiality can indeed restrict the victims’ access to evidence, potentially precluding them from fully understanding the disputed facts and effectively making their views known before a court. This article examines whether the AILD provided sufficient procedural mechanisms to ensure that litigants can effectively participate in judicial proceedings, even when critical evidence is withheld from them, due to legitimate trade secret protections. Our analysis draws on the evidentiary challenges highlighted in emerging global AI liability cases and selected CJEU case law, which provide guidance on how a balance can be struck between legitimate confidentiality and a workable level of informational transparency, necessary for an informed and fair resolution of future AI liability disputes.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106117"},"PeriodicalIF":3.3,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143453793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using sensitive data to de-bias AI systems: Article 10(5) of the EU AI act","authors":"Marvin van Bekkum","doi":"10.1016/j.clsr.2025.106115","DOIUrl":"10.1016/j.clsr.2025.106115","url":null,"abstract":"<div><div>In June 2024, the EU AI Act came into force. The AI Act includes obligations for the provider of an AI system. Article 10 of the AI Act includes a new obligation for providers to evaluate whether their training, validation and testing datasets meet certain quality criteria, including an appropriate examination of biases in the datasets and correction measures. With the obligation comes a new provision in Article 10(5) AI Act, allowing providers to collect sensitive data to fulfil the obligation. Article 10(5) AI Act aims to prevent discrimination. In this paper, I investigate the scope and implications of Article 10(5) AI Act. The paper primarily concerns European Union law, but may be relevant in other parts of the world, as policymakers aim to regulate biases in AI systems.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106115"},"PeriodicalIF":3.3,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual justice, or justice virtually: Navigating the challenges in China’s adoption of virtual criminal justice","authors":"Han Qin , Li Chen","doi":"10.1016/j.clsr.2025.106112","DOIUrl":"10.1016/j.clsr.2025.106112","url":null,"abstract":"<div><div>Positioned within China’s <em>Trial Informatization</em> framework, the availability of virtual litigation has played a crucial role in enhancing access to justice. In the criminal justice system, the implementation of virtual litigation has transformed various areas, including pre-trial interviews, simplified criminal procedures, witness testimony, commutation hearings, and the reception of petitions. However, these technological advancements pose challenges to the authority, legitimacy, engagement and public deterrence aspects of criminal trials. To address these challenges, virtual litigation should be reframed as a tool to effect incremental change and be limited in application to cases where in-person hearings and other court processes are unfeasible. Further, more stringent rules need to be imposed on the finding of an implicit acceptance by accused persons to a remote trial process so as to preserve their autonomy. Courts should bear responsibility for third-party interfaces utlised as part of the criminal justice process, such as video conferencing platforms or digital document repositories. Finally, on the other side of the bench, defense counsel should have an equal say as the prosecution in determining whether a trial is conducted remotely.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106112"},"PeriodicalIF":3.3,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adverse human rights impacts of dissemination of nonconsensual sexual deepfakes in the framework of European Convention on Human Rights: A victim-centered perspective","authors":"Can Yavuz","doi":"10.1016/j.clsr.2025.106108","DOIUrl":"10.1016/j.clsr.2025.106108","url":null,"abstract":"<div><div>Generative artificial intelligence systems have advanced significantly over the past decade and can now generate synthetic but highly realistic audio, photo, and video, commonly referred to as deepfake. Image-based sexual abuse was the first widespread (mis)use of deepfake technology and continues to be the most common form of its misuse. However, further (empirical) research is needed to examine this phenomenon's adverse human rights implications. This paper analyses the potential adverse human rights impacts of the dissemination of nonconsensual sexual deepfakes in the framework of the European Convention on Human Rights and argues that the dissemination of such deepfakes can hinder the rights protected by the Convention. These include the right to respect for private and family life, as nonconsensual sexual deepfakes can undermine data protection, harm one's image and reputation, and compromise psychological integrity and personal autonomy. Additionally, such deepfakes can threaten freedom of expression by creating a silencing effect on public watchdogs, politicians, and private individuals. Finally, nonconsensual sexual deepfakes can impair the economic and moral rights of pornography performers by abusing their work and bodies to abuse others without authorization and compensation. These findings highlight that the Council of Europe member states must fulfil their obligations to provide effective protection against this technology-facilitated, gender-based, and sexual violence.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106108"},"PeriodicalIF":3.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To err is human: Managing the risks of contracting AI systems","authors":"Maarten Herbosch","doi":"10.1016/j.clsr.2025.106110","DOIUrl":"10.1016/j.clsr.2025.106110","url":null,"abstract":"<div><div>Artificial intelligence (AI) increasingly influences contract law. Applications like virtual home assistants can form contracts on behalf of users, while other AI tools can assist parties in deciding whether to contract. The advent of Generative AI has further accelerated and broadened the proliferation of such applications. However, AI systems are inherently imperfect, sometimes leading to unexpected or undesirable contracts, raising concerns about the legal protection of AI deployers.</div><div>Some authors have suggested that autonomous AI deployment cannot lead to a legally binding contract in the absence of a human “intent”. Others have argued that the system deployer is completely unprotected in cases of undesirable AI output. They argue that that deployment implies that the deployer should bear the risk of any mistake.</div><div>This article challenges these views by leveraging existing contract formation and mistake frameworks. Traditional analysis demonstrates that AI deployment can produce valid contracts. It also suggests that deployers may invoke the unilateral mistake doctrine, drawing parallels to clerical errors in human contracts. While AI outputs are probabilistic and unpredictable, similar characteristics apply to human decision-making. The potential benefits of AI development justify affording AI deployers protections analogous to those provided in traditional scenarios.</div><div>To enhance protection, deployers should use high-performing systems with safeguards such as oversight mechanisms and registration tools. As industry standards evolve, these safeguards will become more defined. The analysis concludes that current contract law frameworks are flexible enough to accommodate AI systems, negating the need for a complete overhaul.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106110"},"PeriodicalIF":3.3,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative AI, copyright and the AI Act","authors":"João Pedro Quintais","doi":"10.1016/j.clsr.2025.106107","DOIUrl":"10.1016/j.clsr.2025.106107","url":null,"abstract":"<div><div>This paper provides a critical analysis of the Artificial Intelligence (AI) Act's implications for the European Union (EU) copyright acquis, aiming to clarify the complex relationship between AI regulation and copyright law while identifying areas of legal ambiguity and gaps that may influence future policymaking. The discussion begins with an overview of fundamental copyright concerns related to generative AI, focusing on issues that arise during the input, model, and output stages, and how these concerns intersect with the text and data mining (TDM) exceptions under the Copyright in the Digital Single Market Directive (CDSMD).</div><div>The paper then explores the AI Act's structure and key definitions relevant to copyright law. The core analysis addresses the AI Act's impact on copyright, including the role of TDM in AI model training, the copyright obligations imposed by the Act, requirements for respecting copyright law—particularly TDM opt-outs—and the extraterritorial implications of these provisions. It also examines transparency obligations, compliance mechanisms, and the enforcement framework. The paper further critiques the current regime's inadequacies, particularly concerning the fair remuneration of creators, and evaluates potential improvements such as collective licensing and bargaining. It also assesses legislative reform proposals, such as statutory licensing and AI output levies, and concludes with reflections on future directions for integrating AI governance with copyright protection.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106107"},"PeriodicalIF":3.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigative genetic genealogy in Europe: Why the “manifestly made public by the data subject” legal basis should be avoided","authors":"Taner Kuru","doi":"10.1016/j.clsr.2025.106106","DOIUrl":"10.1016/j.clsr.2025.106106","url":null,"abstract":"<div><div>Investigative genetic genealogy has emerged as an effective investigation tool in the last few years, gaining popularity, especially after the arrest of the Golden State Killer. Since then, hundreds of cases have been reported to be solved thanks to this novel and promising technique. Unsurprisingly, this success also led law enforcement authorities in the EU to experiment with it. However, there is an ambiguity on which legal basis in the EU data protection framework should be used to access the personal data of genetic genealogy database users for investigative purposes, which may put the legality and legitimacy of investigative genetic genealogy in Europe at stake. Accordingly, this article examines whether the “manifestly made public by the data subject” legal basis enshrined in Article 10(c) of the Law Enforcement Directive could be used for such purposes. Based on its analysis, the article argues that this legal basis cannot be used for such purposes, given that the personal data in question are not “manifestly made” “public”, and they are not disclosed “by the data subject” in all cases. Therefore, the article concludes by suggesting a way forward to ensure the lawfulness of this investigation method in the EU data protection framework.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106106"},"PeriodicalIF":3.3,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data rule hanging over platform competition: How does the GDPR affect social media market concentration?","authors":"Qifan Yang , Yituan Liu","doi":"10.1016/j.clsr.2024.106102","DOIUrl":"10.1016/j.clsr.2024.106102","url":null,"abstract":"<div><div>Personal Data protection has become a cornerstone for policy in the digital sphere, significantly influencing the market behaviours of leading social media companies. This paper empirically studies the impact of the European Union’s General Data Protection Regulation (GDPR) on the social media market concentration in the EU, employing both the synthetic control method and the generalised difference-in-differences method. The findings reveal that the GDPR significantly reduced social media market concentration from 2015 to 2020, with a stronger impact on large companies. However, in the long term, the impact of the GDPR on EU social media market concentration is gradually fading, which has been very weak after 2020. Furthermore, the impact strength of the GDPR on the social media market concentration can be changed by Internet market scales and high technology levels. These insights contribute to a deeper understanding of how data protection policies shape the market dynamics of social media companies.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"56 ","pages":"Article 106102"},"PeriodicalIF":3.3,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143138751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}