{"title":"For whom is privacy policy written? A new understanding of privacy policies","authors":"Xiaodong Ding , Hao Huang","doi":"10.1016/j.clsr.2024.106072","DOIUrl":"10.1016/j.clsr.2024.106072","url":null,"abstract":"<div><div>This article examines two types of privacy policies required by the GDPR and the PIPL. It argues that even if privacy policies fail to effectively assist data subjects in giving informed consent but still facilitate private and public enforcement, this does not mean that privacy policies should exclusively serve one category of their readers. The article argues that, considering the scope and meaning of the transparency value protected by data privacy laws, the role of privacy policies must be repositioned to reduce the costs of obtaining and understanding information for all readers of privacy policies.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106072"},"PeriodicalIF":3.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act","authors":"Irina Carnat","doi":"10.1016/j.clsr.2024.106067","DOIUrl":"10.1016/j.clsr.2024.106067","url":null,"abstract":"<div><div>The rapid advancements in natural language processing, particularly the development of generative large language models (LLMs), have renewed interest in using artificial intelligence (AI) for judicial decision-making. While these technological breakthroughs present new possibilities for legal automation, they also raise concerns about over-reliance and automation bias. Drawing insights from the COMPAS case, this paper examines the implications of deploying generative LLMs in the judicial domain. It identifies the persistent factors that contributed to an accountability gap when AI systems were previously used for judicial decision-making. To address these risks, the paper analyses the relevant provisions of the EU Artificial Intelligence Act, outlining a comprehensive accountability framework based on the regulation's risk-based approach. The paper concludes that the successful integration of generative LLMs in judicial decision-making requires a holistic approach addressing cognitive biases. By emphasising shared responsibility and the imperative of AI literacy across the AI value chain, the regulatory framework can help mitigate the risks of automation bias and preserve the rule of law.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106067"},"PeriodicalIF":3.3,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges","authors":"Francesca Palmiotto","doi":"10.1016/j.clsr.2024.106065","DOIUrl":"10.1016/j.clsr.2024.106065","url":null,"abstract":"<div><div>In response to the increasing digitalization of asylum procedures, this paper examines the legal challenges surrounding the use of automated tools in refugee status determination (RSD). Focusing on the European Union (EU) context, where interoperable databases and advanced technologies are employed to streamline asylum processes, the paper asks how EU fundamental rights can address the challenges that automation raises. Through a comprehensive analysis of EU law and several real-life cases, the paper focuses on the relationship between procedural fairness and the use of automated tools to provide evidence in RSD. The paper illustrates what standards apply to automated systems based on a legal doctrinal analysis of EU primary and secondary law and emerging case law from national courts and the CJEU. The article contends that the rights to privacy and data protection enhance procedural fairness in asylum procedures and shows how they can be leveraged for increased protection of asylum seekers and refugees. Moreover, the paper claims that asylum authorities carry a new pivotal responsibility as the medium between the technologies, asylum seekers and their rights.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106065"},"PeriodicalIF":3.3,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142444763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asia-Pacific developments","authors":"Gabriela Kennedy","doi":"10.1016/j.clsr.2024.106058","DOIUrl":"10.1016/j.clsr.2024.106058","url":null,"abstract":"<div><div>This column provides a country-by-country analysis of the latest legal developments, cases and issues relevant to the IT, media and telecommunications industries in key jurisdictions across the Asia-Pacific region. The articles appearing in this column are intended to serve as ‘alerts’ and are not submitted as detailed analyses of cases or legal developments.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106058"},"PeriodicalIF":3.3,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Infrastructural Brussels Effect: The translation of EU Law into the UK's digital borders","authors":"Gavin Sullivan , Dimitri Van Den Meerssche","doi":"10.1016/j.clsr.2024.106057","DOIUrl":"10.1016/j.clsr.2024.106057","url":null,"abstract":"<div><div>This article gives an account of the legal standards and safeguards that guide and constrain the current design of the UK's digital borders. Based on an empirical engagement with the development of <em>Cerberus</em> – an advanced risk-based analytics platform aimed at the detection of previously ‘unknown’ threats – the article presents a dual argument. On the one hand, it provides an account of the continuing salience and extraterritorial reach of EU law in setting standards for the collection, retention, processing and sharing of Passenger Name Records (PNR) data in the UK. This PNR data is a constitutive component of the digital border. Through the EU-UK Trade and Cooperation Agreement (TCA), the UK is now bound to comply with the rather stringent legal safeguards developed by the CJEU (in Opinion 1/15) on the retention and automated processing of PNR data. Our analysis shows the different channels through which EU law obtains this extraterritorial reach, how compliance can be monitored and enforced, and, crucially, how it has influenced and constrained the technical design of the UK's digital borders – a salient and unexplored phenomenon that we describe as an <em>Infrastructural Brussels Effect.</em> Yet, on the other hand, the article empirically shows that this is not merely a process of norm diffusion and extraterritoriality. Once legal standards become infrastructurally embedded in Cerberus, we witness normative translations and sociotechnical shifts with important legal and political consequences. Legal standards on ‘reasonable suspicion’ and the ‘objective evidence’ of ‘risk’, we argue, are given specific meaning through a logic of relational inference and algorithmic pattern detection (leading to forms of ‘concern by association’). By studying the entanglements between legal norms and material infrastructures – an approach we describe as infra-legalities – these normative effects become visible and contestable, providing a productive site for the sociolegal study of law and algorithmic governance.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106057"},"PeriodicalIF":3.3,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping interpretations of the law in online content moderation in Germany","authors":"Ben Wagner , Matthias C. Kettemann , Anna Sophia Tiedeke , Felicitas Rachinger , Marie-Therese Sekwenz","doi":"10.1016/j.clsr.2024.106054","DOIUrl":"10.1016/j.clsr.2024.106054","url":null,"abstract":"<div><div>Content moderation is a vital function that online platforms must perform, according to the law, to create suitable online environments for their users. By the law, we mean national or European laws that require the removal of content by online platforms, such as EU Regulation 2021/784, which addresses the dissemination of terrorist content online. Content moderation required by these national or European laws, summarised here as ‘the law’, is different from the moderation of pieces of content that is not directly required by law but instead is conducted voluntarily by the platforms. New regulatory requests create an additional layer of complexity of legal grounds for the moderation of content and are relevant to platforms’ daily decisions. The decisions made are either grounded in reasons stemming from different sources of law, such as international or national provisions, or can be based on contractual grounds, such as the platform's Terms of Service and Community Standards. However, how to empirically measure these essential aspects of content moderation remains unclear. Therefore, we ask the following research question: How do online platforms interpret the law when they moderate online content?</div><div>To understand this complex interplay and empirically test the quality of a platform's content moderation claims, this article develops a methodology that facilitates empirical evidence of the individual decisions taken per piece of content while highlighting the subjective element of content classification by human moderators. We then apply this methodology to a single empirical case, an anonymous medium-sized German platform that provided us access to their content moderation decisions. With more knowledge of how platforms interpret the law, we can better understand the complex nature of content moderation, its regulation and compliance practices, and to what degree legal moderation might differ from moderation due to contractual reasons in dimensions such as the need for context, information, and time.</div><div>Our results show considerable divergence between the platform's interpretation of the law and ours. We believe that a significant number of platform legal interpretations are incorrect due to divergent interpretations of the law and that platforms are removing legal content that they falsely believe to be illegal (‘overblocking’) while simultaneously not moderating illegal content (‘underblocking’). In conclusion, we provide recommendations for content moderation system design that takes (legal) human content moderation into account and creates new methodological ways to test its quality and effect on speech in online platforms.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106054"},"PeriodicalIF":3.3,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A European right to end-to-end encryption?","authors":"Jessica Shurson","doi":"10.1016/j.clsr.2024.106063","DOIUrl":"10.1016/j.clsr.2024.106063","url":null,"abstract":"<div><div>In <em>Podchasov v Russia</em>, the European Court of Human Rights unanimously held that a Russian statutory obligation on ‘internet communications organisers’ to provide information to state authorities that allowed for the decryption of encrypted communications was a disproportionate interference with Article 8 because the available technical means of decryption risked weakening the security of communications for all users of the service. This is significant as authorities in the UK and EU may seek to implement similar statutory obligations on communications service providers.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106063"},"PeriodicalIF":3.3,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Right to Procedural Accuracy: A Governance Model for Digital Evidence in Criminal Proceedings","authors":"Radina (Adi) Stoykova","doi":"10.1016/j.clsr.2024.106040","DOIUrl":"10.1016/j.clsr.2024.106040","url":null,"abstract":"<div><div>This paper motivates and studies the feasibility of a new digital right to procedural accuracy (RPA) for digital evidence processing in criminal investigations. The need to guarantee a new principle of procedural accuracy under Art. 6 of the European Convention on Human Rights (ECHR) is based on the concern that digital forensic science and AI technology have a significant impact on individuals’ rights in criminal proceedings, which are neither coherently nor comprehensively addressed. The personal and material scope of the RPA is examined and includes: <em>(i)</em> protection against unreliable digital evidence processing; <em>(ii)</em> the right to access the chain of evidence, explanation, and forensic assistance; and <em>(iii)</em> the right to participate in the determinative stages of the digital evidence processing. Limitations of the RPA are also discussed.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106040"},"PeriodicalIF":3.3,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who is vulnerable to deceptive design patterns? A transdisciplinary perspective on the multi-dimensional nature of digital vulnerability","authors":"Arianna Rossi , Rachele Carli , Marietjie W. Botes , Angelica Fernandez , Anastasia Sergeeva , Lorena Sánchez Chamorro","doi":"10.1016/j.clsr.2024.106031","DOIUrl":"10.1016/j.clsr.2024.106031","url":null,"abstract":"<div><div>In the last few years, there have been growing concerns about the far-reaching influence that digital architectures may exert on individuals and societies. A specific type of digital manipulation is often engineered into the interfaces of digital services through the use of so-called dark patterns, which cause manifold harms against which nobody seems to be immune. However, many areas of law rely on a traditional class-based view according to which certain groups are inherently more vulnerable than others, such as children. Although the undue influence exerted by dark patterns on online decisions can befall anybody, empirical studies show that there are actually certain factors that aggravate the vulnerability of some people by making them more likely to incur certain manipulation risks engineered in digital services and less resilient to the related harms. But digital vulnerability does not overlap with traditionally protected groups and depends on multifaceted factors. This article contributes to the ongoing discussions on these topics by offering (i) a multidisciplinary mapping of the micro, meso, and macro factors of vulnerability to dark patterns; (ii) a subsequent critical reflection on the feasibility of the risk assessment proposed in three selected EU legal frameworks: the General Data Protection Regulation, the Digital Services Act, and the Artificial Intelligence Act; and (iii) multidisciplinary suggestions to increase resilience towards manipulative designs online.</div></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"55 ","pages":"Article 106031"},"PeriodicalIF":3.3,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142420547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Sociology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}