{"title":"Process Mining for legal Courts: Visualising, analysing and comparing Italian divorce proceedings","authors":"Vittoria Caponecchia, Bernardo D’Agostino, Sima Sarv Ahrabi, Giovanni Comandè, Daniele Licari, Andrea Vandin","doi":"10.1016/j.clsr.2025.106210","DOIUrl":"10.1016/j.clsr.2025.106210","url":null,"abstract":"<div><div>Process Mining (PM) is a family of data-driven techniques that use data to study the underlying process that generated it, i.e., the data-generating process. Although initially tailored to the engineering and industrial domains, PM is also becoming popular in more human-centric domains such as law and healthcare. We present a PM methodology based on the <strong>fuzzy miner technique</strong>, aimed at analysing and optimising the complex processes underlying decision making by legal Courts. We focus specifically on the domain of civil proceedings, in particular divorces. In PM terms, we treat a legal proceeding as a process instance, and the internal phases through which a legal proceeding transits as activities. The studied process is therefore the internal process followed by a Court, possibly varying over the years, to handle specific types of proceedings. Leveraging PM techniques, this article compares consensual divorce proceedings within a Court across time, and across Courts. As a case study, we consider two Courts in Northern Italy. Our PM analysis identifies key performance indicators and uncovers hidden process efficiencies and inefficiencies. The findings highlight the ability of PM to reveal critical process patterns, enabling organisations to make data-driven decisions and implement targeted process improvements.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106210"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145424762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiences, challenges, and improvements in the construction of data property rights in China","authors":"Shaokun Huang , Le Cheng","doi":"10.1016/j.clsr.2025.106228","DOIUrl":"10.1016/j.clsr.2025.106228","url":null,"abstract":"<div><div>In recent years, China has been actively advancing the clarification of data ownership rights, establishing the National Data Administration and over 50 data exchanges. In China, data property rights are defined as proprietary rights enjoyed by right holders over specific data, including the rights to hold, use, and operate data. However, the construction of China’s data property rights system faces several challenges, such as the ambiguity of the subject matter of data property rights, excessive exclusivity of such rights, insufficient protection of individual data rights, and inadequate data sharing. To address these issues, this paper argues that it is necessary to move beyond the traditional property rights theories rooted in civil law. Rather than emphasizing exclusive control or rights enforceable against the world, the legal framework should be grounded in the relationships among different participants in data-related activities and aim to promote data sharing and co-utilization. A comprehensive and structural data property regime should be established that balances rights and obligations. In designing such a regime, it is essential to distinguish the proprietary rights and obligations of different actors—such as data originators, data processors, and data users—clarify the interrelations among various rights, and develop the specific contents of the system across the stages of data resourcification, data productization, and data capitalization.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106228"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145473817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithms for group recognition? Ensuring lawful and rights-based use of new technologies in group refugee recognition","authors":"Meltem Ineli-Ciger , Nikolas Feith Tan","doi":"10.1016/j.clsr.2025.106222","DOIUrl":"10.1016/j.clsr.2025.106222","url":null,"abstract":"<div><div>This article explores the potential role of new technologies, including Artificial Intelligence (AI), in group-based refugee recognition procedures. While the use of new technologies in individual refugee status determination has attracted significant scholarly interest, their application in the context of group recognition remains largely underexamined. This article argues that group recognition procedures grounded in pre-defined, objective eligibility criteria, rather than assessments of individual credibility or well-founded fear, offer a more structured and legally consistent framework for technological integration. Building on this insight, the article proposes a model for <em>Dynamic Autonomy Group Recognition</em>. In this model, AI tools support the identification of individuals who fall within a recognised group by verifying identity, matching applicants against legally defined group criteria and flagging potential exclusion concerns. Crucially, however, all negative or exclusion decisions remain subject to mandatory human review. The article analyses both the opportunities and risks of this approach and argues that, if carefully designed and properly regulated, <em>Dynamic Autonomy Group Recognition</em> may offer a lawful, principled, and operationally effective means of managing the protection obligations of states, particularly in large-scale displacement.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106222"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achieving regulatory alignment for E2E autonomous driving in China: A framework for tort liability and data governance","authors":"Chuyi Wei , Jingchen Zhao , Li Sun","doi":"10.1016/j.clsr.2025.106192","DOIUrl":"10.1016/j.clsr.2025.106192","url":null,"abstract":"<div><div>China’s advancement in End-to-End Autonomous Driving (E2E AD) presents profound legal and regulatory challenges due to its “black box” nature and data dependency, rendering traditional frameworks inadequate. This paper argues for a tiered liability system, shifting responsibility to manufacturers with increasing vehicle autonomy. Additionally, it proposes an adaptive, multi-tiered, risk-stratified data governance model. Underpinning these proposals, robust transparency and explainability (XAI) are crucial for ensuring accountability and achieving effective regulatory alignment. These proposed frameworks offer critical insights for China and provide a practical and theoretical basis for other nations navigating AI governance in autonomous mobility.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106192"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144997702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cybersecurity in the Internet of Things: Trends and challenges in a nascent field","authors":"Pratham Ajmera","doi":"10.1016/j.clsr.2025.106204","DOIUrl":"10.1016/j.clsr.2025.106204","url":null,"abstract":"<div><div>The European cybersecurity regulation framework, not unlike European regulatory initiatives in general, has often been criticized as fragmented and divided among industry sectors. However, the past few years have seen legislative initiatives aimed at harmonizing cybersecurity across the EU, the most recent being the newly adopted Cyber-Resilience Act. The Act attempts to harmonize cybersecurity from the product side, establishing minimum requirements that must be met before digital products are brought into the Union market. It marks the initial foray of the EU’s framework for product regulation (i.e., the New Legislative Framework or NLF) into the realm of cybersecurity regulation. Consistent with the NLF, the Cyber-Resilience Act provides for high-level cybersecurity requirements for all digital products, with conformity demonstrable through multiple avenues including international/industrial standards adopted by European Standardization Organizations. However, unlike conventional product regulation, the Cyber-Resilience Act attempts to fulfil its objectives as part of an overarching framework of multiple harmonization instruments geared towards enhancing cybersecurity in the European Union. This article examines the Cyber-Resilience Act and its interplay with other harmonizing instruments in the EU cybersecurity regulatory regime, and raises critical challenges and questions arising from the trends identified in that interplay.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106204"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145106348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asia–Pacific developments","authors":"Gabriela Kennedy","doi":"10.1016/j.clsr.2025.106218","DOIUrl":"10.1016/j.clsr.2025.106218","url":null,"abstract":"<div><div>This column provides a country-by-country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications industries in key jurisdictions across the Asia Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106218"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The legal framework and legal gaps for AI-generated child sexual abuse material","authors":"Desara Dushi , Nertil Berdufi , Anastasia Karagianni","doi":"10.1016/j.clsr.2025.106205","DOIUrl":"10.1016/j.clsr.2025.106205","url":null,"abstract":"<div><div>Generative AI has only gained public prominence in the past two years, yet instances of AI-generated child sexual abuse material (CSAM) videos have already been observed. It can be foreseen that in the next five years, these videos and images will become more realistic and widespread. In the United States, the FBI is already handling its first cases involving AI-generated CSAM. This paper employs a comprehensive legal analysis of existing EU laws, including the AI Act, the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the proposed Child Sexual Abuse Regulation (CSAR), and the Child Sexual Abuse Directive to address the critical question of whether generative AI can be effectively policed to prevent the creation of deepfakes involving children. While EU legislation is promising, it remains limited, in particular regarding the regulation of training data used by generative AI technologies. To comprehensively address AI-generated CSAM, proactive, effective regulation and a holistic approach are required, ensuring that child protection against online CSAM is integrated into the guidelines, codes of conduct, and technical standards that bring these legal instruments to life.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106205"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Antitrust in artificial intelligence infrastructure – between regulation and innovation in the EU, the US, and China","authors":"Kena Zheng","doi":"10.1016/j.clsr.2025.106211","DOIUrl":"10.1016/j.clsr.2025.106211","url":null,"abstract":"<div><div>Enormous amounts of data and substantial computational resources are crucial inputs to artificial intelligence (AI) infrastructure, enabling the development and training of AI models. Incumbent firms in adjacent technology markets hold significant advantages in AI development, due to their established large user bases and substantial financial resources. These advantages facilitate the accumulation of enormous amounts of data and the establishment of the computational infrastructure necessary for sufficient data processing and high-performance computing. By controlling data and computational resources, incumbents raise entry barriers, leverage their advantages to favour their own AI services, and drive significant vertical integration across the AI supply chain, thereby entrenching their market dominance and shielding themselves from competition. This article examines regulatory responses to these antitrust risks in the European Union (EU), the United States (US), and China, given their leadership in digital regulation and AI development. It demonstrates that the EU’s Digital Markets Act and China’s Interim Measures for the Management of Generative Artificial Intelligence Services introduce broadly framed yet applicable rules to address challenges related to data and computational resources in AI markets. Conversely, the US lacks both AI regulations and digital-specific competition laws, instead adopting innovation-centric policies aimed at ensuring its global AI dominance. Given the strategic importance of AI development, all three jurisdictions have adopted a cautious approach in investigating potential abusive practices.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106211"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping the scholarship of the regulation of dark patterns: A systematic review of concepts, regulatory paradigms, and solutions from law and HCI perspectives","authors":"Weiwei Yi , Zihao Li","doi":"10.1016/j.clsr.2025.106225","DOIUrl":"10.1016/j.clsr.2025.106225","url":null,"abstract":"<div><div>In recent years, dark patterns, which are interface designs that manipulate user decisions, have raised growing regulatory concern. Yet scholarship on their governance remains fragmented, particularly in how the concept is defined, the harms are understood, and legal responses are framed. This paper offers a systematic review of 65 studies from Law and Human–Computer Interaction, following PRISMA guidelines. It identifies five root problems and layered harms, critiques sectoral regulations for their theoretical and enforcement limits, and synthesises proposed solutions, from doctrinal refinements and accountability measures to technical design interventions. Building on these findings, the paper argues that regulatory progress is hindered by the elusive nature of dark patterns, the difficulty of pinpointing actionable harms, and the expanding scope of the concept. It concludes by advocating a paradigmatic shift towards a proactive framework centred on ‘diligent design’, and outlines directions for collaborative, transdisciplinary research.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106225"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145473808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The future of the movie industry in the wake of generative AI: A perspective under EU and UK copyright law","authors":"Eleonora Rosati","doi":"10.1016/j.clsr.2025.106207","DOIUrl":"10.1016/j.clsr.2025.106207","url":null,"abstract":"<div><div>Like all sectors, the movie industry has been both affected by and exploring potential uses of generative Artificial Intelligence ('<strong>AI</strong>'). On the one hand, movie studios have detected and begun to add warnings against unlicensed third-party uses of their content, including for AI training,<span><span><sup>1</sup></span></span> and have taken enforcement initiatives through court action. On the other hand, the use of AI within and by the industry itself has been growing. Regarding the latter, some have emphasised the opportunities presented by the implementation of AI, including by advancing claims that AI tools can offer a ‘purer’ form of expression. Others have instead warned against the potential displacement of industry workers, including workers employed in technical roles and younger and emerging actors.</div><div>Against the background illustrated above, this study maps and critically evaluates relevant issues facing the development, deployment, and use of AI models from a movie industry perspective. The legal analysis is conducted having regard to EU and UK copyright law and is divided into three parts:<ul><li><span>•</span><span><div><strong>Input/AI training</strong>: By considering relevant legal restrictions applicable to the training of AI models on protected audiovisual content, the border between lawful unlicensed uses and restricted uses is drawn;</div></span></li><li><span>•</span><span><div><strong>Protectability of AI-generated outputs</strong>: Turning to the output generation phase, the protectability of such outputs is considered next, by focusing in particular on the requirements of authorship and originality under EU and UK copyright law;</div></span></li><li><span>•</span><span><div><strong>Legal risks and potential liability stemming from the use of third-party AI models for output generation</strong>: Still regarding the output generation phase, relevant legal issues that might arise from the use of AI models that ‘regurgitate’ third-party training data at output generation are considered, alongside the question of style protection under copyright.</div></span></li></ul></div><div>The main conclusions are as follows:<ul><li><span>•</span><span><div><strong>Input/AI training</strong>: Insofar as model training on third-party protected content is concerned, there are no exceptions under EU/UK law that fully cover the entirety of these processes. As a result, lacking legislative reform, the establishment of a licensing framework appears unavoidable for such activities to be deemed lawful;</div></span></li><li><span>•</span><span><div><strong>Protectability of AI-generated outputs</strong>: The deployment of AI across various phases of the creative process does not render the resulting content unprotectable, provided that human involvement and control remain significant throughout, with the result that AI is relied upon as a tool that aids – rather than replaces – the creativity o","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"59 ","pages":"Article 106207"},"PeriodicalIF":3.2,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}