{"title":"Smartphone decryption via forced fingerprinting and the right against self-incrimination: The German federal court (BGH) addresses the problem","authors":"Javier Escobar Veas","doi":"10.1016/j.clsr.2025.106215","DOIUrl":"10.1016/j.clsr.2025.106215","url":null,"abstract":"<div><div>Due to their storage capacity and evidentiary potential, smartphones are often seen as key sources of evidence in criminal investigations. In March 2025, the German Federal Court addressed the question of whether compelled smartphone decryption through forced fingerprinting violates the right against self-incrimination. During a search and seizure operation, the police forced the defendant’s right index finger on the fingerprint sensors of two smartphones to unlock them. The Court ruled that no violation had occurred, as the police did not require the defendant’s active cooperation. This note examines the BGH's reasoning and situates it within the comparative debate. The note argues that the decision is relevant because it affirms the “active cooperation” approach and avoids the problematic distinction between testimonial and physical evidence. Compared with the approaches of the European Court of Human Rights and the United States Supreme Court, the BGH's framework offers greater clarity and predictability, as well as broader protection.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106215"},"PeriodicalIF":3.2,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The digital prior restraint: Applying human rights safeguards to upload filters in the EU","authors":"Emmanuel Vargas Penagos","doi":"10.1016/j.clsr.2025.106219","DOIUrl":"10.1016/j.clsr.2025.106219","url":null,"abstract":"<div><div>This article examines the human rights standards relevant to the use of upload filters for content moderation within EU secondary legislation. Upload filters, which automatically screen user-generated content before publication, are a type of prior restraint, which raises critical concerns on freedom of expression. EU secondary legislation establishes rules for both mandatory and voluntary use of these technologies, which must be read in light of human rights protections. This article analyses the characteristics of both mandatory and voluntary upload filters as prior restraints, the relevant EU legal provisions governing their use, and the safeguards required to prevent disproportionate restrictions on speech. Additionally, it explores the procedural and institutional safeguards under EU law, viewed through the lens of the CJEU and ECtHR case law on prior restraints and the rights to a fair trial and to an effective remedy.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106219"},"PeriodicalIF":3.2,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Antitrust in artificial intelligence infrastructure – between regulation and innovation in the EU, the US, and China","authors":"Kena Zheng","doi":"10.1016/j.clsr.2025.106211","DOIUrl":"10.1016/j.clsr.2025.106211","url":null,"abstract":"<div><div>The enormous amount of data and the substantial computational resources are crucial inputs of artificial intelligence (AI) infrastructure, enabling the development and training of AI models. Incumbent firms in adjacent technology markets hold significant advantages in AI development, due to their established large user bases and substantial financial resources. These advantages facilitate the accumulation of enormous amounts of data, and the establishment of computational infrastructure necessary for sufficient data processing and high-performance computing. By controlling data and computational resources, incumbents raise entry barriers, leverage advantages to favour their own AI services, and drive significant vertical integration across the AI supply chain, thereby entrenching their market dominance and shielding themselves from competition. This article examines regulatory responses to these antitrust risks in the European Union (EU), the United States (US), and China, given their leadership in digital regulation and AI development. It demonstrates that the EU’s Digital Markets Act, and China’s Interim Measures for the Management of Generative Artificial Intelligence Services introduce broadly framed yet applicable rules to address challenges related to data and computational resources in AI markets. Conversely, the US lacks both AI regulations and digital-specific competition laws, instead adopting innovation-centric policies aimed at ensuring its AI dominance globally. Given the strategic importance of AI development, all three jurisdictions have adopted a cautious approach in investigating potential abusive practices.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106211"},"PeriodicalIF":3.2,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The ‘DPIA+’: Aligning data protection with UK equality law","authors":"Miranda Mourby","doi":"10.1016/j.clsr.2025.106212","DOIUrl":"10.1016/j.clsr.2025.106212","url":null,"abstract":"<div><div>In recent years, data protection scholarship has moved beyond the assumption that the General Data Protection Regulation (‘GDPR’) is solely concerned with individual rights. Tools such as the Human Rights Impact Assessment (‘HRIA’) and the Fundamental Rights Impact Assessment ('FRIA') have been promoted to apply the GDPR more expansively, capturing broader societal harms that may flow from personal data processing. These tools can widen the scope of the GDPR’s Data Protection Impact Assessment (‘DPIA’) through aligned consideration with human rights law. They have been outlined at an international level, but require adaptation to national contexts in practice.</div><div>This article advances the discussion in three ways. First, it develops a jurisdiction-anchored expansion of the DPIA (‘DPIA+’) by integrating the UK Public Sector Equality Duty in s.149 Equality Act 2010. Second, it highlights equality law as both overlapping with, and distinct from, human rights law. In the UK, equality law imports a proactive duty to investigate risks of discrimination, while also providing an evaluative template in the form of an Equality Impact Assessment. Finally, it considers the distinctive value of an equality-inflected DPIA+ in life-and-death contexts, such as the Covid-19 pandemic.</div><div>The open-ended term ‘DPIA+’ acknowledges that various legal frameworks may supplement a DPIA in each national context. The central argument, however, is that equality and human rights law should be considered together when augmenting a DPIA, as both can help identify and address risks of discrimination in personal data processing.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106212"},"PeriodicalIF":3.2,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The future of the movie industry in the wake of generative AI: A perspective under EU and UK copyright law","authors":"Eleonora Rosati","doi":"10.1016/j.clsr.2025.106207","DOIUrl":"10.1016/j.clsr.2025.106207","url":null,"abstract":"<div><div>Like all sectors, the movie industry has been both affected by and exploring potential uses of generative Artificial Intelligence ('<strong>AI</strong>'). On the one hand, movie studios have detected and begun to add warnings against unlicensed third-party uses of their content, including for AI training,<span><span><sup>1</sup></span></span> and have taken enforcement initiatives through court action. On the other hand, the use of AI within and by the industry itself has been growing. Regarding the latter, some have emphasised the opportunities presented by the implementation of AI, including by advancing claims that AI tools can offer a `purer' form of expression. Others have instead warned against the potential displacement of industry workers, including workers employed in technical roles and younger and emerging actors.</div><div>Against the background illustrated above, this study maps and critically evaluates relevant issues facing the development, deployment, and use of AI models from a movie industry perspective. The legal analysis is conducted having regard to EU and UK copyright law and is divided into three parts:<ul><li><span>•</span><span><div><strong>Input/AI training</strong>: By considering relevant legal restrictions applicable to the training of AI models on protected audiovisual content, the border between lawful unlicensed uses and restricted uses is drawn;</div></span></li><li><span>•</span><span><div><strong>Protectability of AI-generated outputs</strong>: Turning to the output generation phase, the protectability of such outputs is considered next, by focusing in particular on the requirements of authorship and originality under EU and UK copyright law;</div></span></li><li><span>•</span><span><div><strong>Legal risks and potential liability stemming from the use of third-party AI models for output generation</strong>: Still having regard to the output generation phase, relevant legal issues that might arise having regard to the use of AI models that `regurgitate' third-party training data at output generation are considered, alongside the question of style protection under copyright.</div></span></li></ul></div><div>The main conclusions are as follows:<ul><li><span>•</span><span><div><strong>Input/AI training</strong>: Insofar as model training on third-party protected content is concerned, there are no exceptions under EU/UK law that fully cover the entirety of these processes. 
As a result, lacking legislative reform, the establishment of a licensing framework appears unavoidable for such activities to be deemed lawful;</div></span></li><li><span>•</span><span><div><strong>Protectability of AI-generated outputs</strong>: The deployment of AI across various phases of the creative process does not render the resulting content unprotectable, provided that human involvement and control remain significant throughout, with the result that AI is relied upon as a tool that aids – rather than replaces – the creativity o","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106207"},"PeriodicalIF":3.2,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A semantic approach to understanding GDPR fines: From text to compliance insights","authors":"Albina Orlando, Mario Santoro","doi":"10.1016/j.clsr.2025.106187","DOIUrl":"10.1016/j.clsr.2025.106187","url":null,"abstract":"<div><div>This study introduces an explainable Artificial Intelligence (XAI) framework that couples legal-domain NLP with Structural Topic Modeling (STM) and WordNet semantic graphs to rigorously analyze over 1,900 GDPR enforcement decision summaries from a public dataset. Our methodology focuses on demonstrating the pipeline’s validity respect to manual analyses by inspecting the results of four well-know research questions: (1) cross-country fine distribution disparities (automated metadata extraction); (2) the violation severity–fine amount relationship (keyness and semantic analysis); (3) structural text patterns (network analysis and STM); and (4) prevalent enforcement triggers (topic prevalence modeling) The pipeline’s validity is underscored by its ability to replicate key findings from previous manual analyses while enabling a more nuanced exploration of GDPR enforcement trends. Our results confirm significant disparities in enforcement across EU member states and reveal that monetary penalties do not consistently correlate with violation severity. Specifically, serious infringements, particularly those involving video surveillance, frequently result in low-value fines, especially when committed by individuals or smaller entities. This highlights that a substantial proportion of severe violations are attributed to smaller actors. Methodologically, the framework’s ability to quickly replicate such well-known patterns, alongside its transparency and reproducibility, establishes its potential as a scalable tool for transparent and explainable GDPR enforcement analytics.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106187"},"PeriodicalIF":3.2,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging the Great Wall: China’s Evolving Cross-Border Data Flow Policies and Implications for Global Data Governance","authors":"Sheng Zhang , Henry Gao","doi":"10.1016/j.clsr.2025.106208","DOIUrl":"10.1016/j.clsr.2025.106208","url":null,"abstract":"<div><div>Despite the rapid expansion of the digital economy, the global regulatory framework for data flows remains fragmented, with countries adopting divergent approaches shaped by their own regulatory priorities. As a key player in the Internet economy, China’s approach to cross-border data flows (CBDF) not only defines its domestic digital landscape but also influences emerging global norms. This paper takes a comprehensive view of the evolution of China’s CBDF regime, examining its development through both domestic and international lenses. Domestically, China’s regulation of CBDF has evolved from a security-first approach to one that seeks to balance security with economic development. This paper examines the economic, political, and international drivers behind this shift. This paper also compares the approaches of China and the United States to CBDF, in light of the recent tightening of US restrictions, from both technical and geopolitical perspectives. At the technical level, recent policy trends in both countries reveal notable similarities. At the geopolitical level, however, the divergence between the two frameworks is not only significant but continues to widen. The paper concludes by examining the broader implications for global data governance and offering recommendations to bridge digital divides and promote a more inclusive international framework.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106208"},"PeriodicalIF":3.2,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The EU Cyber Resilience Act: Hybrid governance, compliance, and cybersecurity regulation in the digital ecosystem","authors":"Fabian Teichmann , Bruno S. Sergi","doi":"10.1016/j.clsr.2025.106209","DOIUrl":"10.1016/j.clsr.2025.106209","url":null,"abstract":"<div><div>This article advances a governance-theoretical account of the EU Cyber Resilience Act (CRA) as a form of hybrid regulation that combines command-and-control duties with risk-based calibration, co-regulation through European harmonized standards, and enforced self-regulation by firms. The central research question is: how does the CRA’s hybrid design reallocate regulatory functions between public authorities and private actors along the digital-product lifecycle, and with what compliance and enforcement consequences? Methodologically, the paper doctrinally analyses the CRA’s core provisions and situates them in the New Legislative Framework (NLF) for product regulation, the legal regime for standards under Regulation (EU) No 1025/2012 and Court of Justice of the European Union (CJEU) case law, and adjacent EU instruments (NIS2; Cybersecurity Act). It further offers a concise comparative sidebar on the United States and the United Kingdom to contrast policy trajectories. The contribution is threefold: (i) it clarifies the legal status and governance role of harmonized standards within CRA conformity assessment; (ii) it analytically distinguishes external obligations from firm-internal “meta-regulation”; and (iii) it maps institutional interfaces with NIS2 and the Cybersecurity Act, highlighting pathways for dynamic escalation (including mandatory certification). The analysis yields implications for corporate compliance design, market surveillance, and future rule updates via delegated acts.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106209"},"PeriodicalIF":3.2,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145118738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented accountability: Data access in the metaverse","authors":"Giancarlo Frosio , Faith Obafemi","doi":"10.1016/j.clsr.2025.106196","DOIUrl":"10.1016/j.clsr.2025.106196","url":null,"abstract":"<div><div>This article examines regulated data access (RDA) in the metaverse—an interconnected and immersive digital ecosystem comprising virtual, augmented, and hyper-physical realities. We organise the argument across taxonomy (Section 2), Digital Services Act (DSA)-anchored doctrine (Section 3), implementation challenges (Section 4), platform practices (Section 5), and a global blueprint (Section 6). Building on the European Union’s DSA, particularly Article 40, the analysis evaluates whether metaverse platforms qualify as Very Large Online Platforms or Very Large Online Search Engines and thus fall within the DSA’s data access rules. Drawing comparative insights from the UK’s Online Safety Act and the United States’ proposed Platform Accountability and Transparency Act, the article highlights differing global approaches to data sharing and the significant governance gaps that persist.</div><div>This article categorizes metaverse-native data, including spatial, biometric, and eye-tracking data, into personal and non-personal types, stressing the heightened complexity of governing immersive, multidimensional information flows. While existing legal frameworks offer a starting point, the metaverse’s novel data practices demand targeted adaptations to address challenges like decentralised governance, user consent in real-time environments, and the integration of privacy-enhancing technologies. Through an examination of data access regimes across selected metaverse platforms, the article identifies a lack of uniform, transparent processes for external researchers.</div><div>In this context, the article highlights RDA's broader public-interest function, facilitating external scrutiny of platform activities and ensuring service providers are held accountable. The absence of consistent RDA frameworks obstructs systemic risk research, undermining both risk assessment and mitigation efforts while leaving user rights vulnerable to opaque platform governance. To address these gaps, the article advances a set of policy recommendations aimed at strengthening RDA in the metaverse—adapting regulatory strategies to its evolving, decentralised architecture. By tailoring regulatory strategies to the metaverse’s dynamic nature, policymakers can foster accountability, innovation, and trust—both domestically (in jurisdictions like the UK, where data access provisions remain underdeveloped) and internationally. The analysis extends beyond mere applications to metaverse platforms, providing insights that can be applied to the online platform ecosystem in its entirety. 
Ultimately, this article charts a path toward harmonized, future-ready data governance frameworks—one that integrates RDA as a core regulatory mechanism for ‘augmented accountability’, essential for safeguarding user rights and enabling independent risk assessment in the metaverse.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106196"},"PeriodicalIF":3.2,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145106269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}