Computer Law & Security Review — Latest Articles

Digital transformation in Russia: Turning from a service model to ensuring technological sovereignty
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106075. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106075
Ekaterina Martynova, Andrey Shcherbovich
Abstract: The paper outlines core aspects of the digital transformation process in Russia since the early 2000s, as well as recent legislative initiatives and practices at the federal level. It considers the digitalization of public services, efforts towards 'sovereignization' of the Russian segment of the Internet, and the current focus on cybersecurity and the development of artificial intelligence. The paper highlights a tendency to prioritise the protection of state interests and national security, together with control over citizens' online activities, in contrast to the initial understanding of digital transformation as a human-oriented process aimed at increasing the accessibility and convenience of public services. This change in the goals and methods of digital transformation can be read as one manifestation of a broader political, social and cultural process of separation, primarily from the West, that Russian society is currently undergoing, amid a growing official narrative of external and internal threats said to require greater independence and increased vigilance, including in the digital domain.
Citations: 0
European National News
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106062. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106062
Nick Pantlin
Abstract: This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening "on the ground" at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
© 2024 Herbert Smith Freehills LLP. Published by Elsevier Ltd. All rights reserved.
Citations: 0
Editorial: Toward a BRICS stack? Leveraging digital transformation to construct digital sovereignty in the BRICS countries
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106064. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106064
Luca Belli, Larissa Galdino de Magalhães Santos
Citations: 0
Bayesian deep learning: An enhanced AI framework for legal reasoning alignment
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106073. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106073
Chuyue Zhang, Yuchen Meng
Abstract: The integration of artificial intelligence into the field of law has penetrated the underlying logic of legal operations. Currently, legal AI systems face difficulties in representing legal knowledge, exhibit insufficient legal reasoning capabilities, have poor explainability, and are inefficient in handling causal inference and uncertainty. In legal practice, various legal reasoning methods (deductive, inductive, abductive, etc.) are often intertwined and used in combination, yet the reasoning modes employed by current legal AI systems are inadequate. Identifying AI models that are more suitable for legal reasoning is crucial for advancing the development of legal AI systems.
In contrast to the current high-profile large language models, we believe that Bayesian reasoning is highly compatible with legal reasoning: it can perform abductive reasoning, excels at causal inference, and admits the "defeasibility" of reasoning conclusions, which is consistent with legal professionals' cognitive development from the a priori to the a posteriori. AI models based on Bayesian methods can therefore become the main technological support for legal AI systems. Bayesian neural networks have advantages in uncertainty modeling, avoiding overfitting, and explainability. Legal AI systems based on Bayesian deep learning frameworks can combine the advantages of deep learning and probabilistic graphical models, facilitating the exchange and supplementation of information between perception tasks and reasoning tasks. In this paper, we take perpetrator prediction systems and legal judgment prediction systems as examples to discuss the construction and basic operation modes of the Bayesian deep learning framework. Bayesian deep learning can enhance reasoning ability, improve the explainability of models, and make the reasoning process more transparent and visualizable. Furthermore, the Bayesian deep learning framework is well suited to human-machine collaborative tasks, enabling the complementary strengths of humans and machines.
Citations: 0
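The abstract's core claim, that Bayesian reasoning yields "defeasible" conclusions which are revised as new evidence arrives, can be illustrated with a minimal sequential-updating sketch. This is not the authors' Bayesian deep learning framework: the hypothesis, the evidence items and all probabilities below are hypothetical, and the evidence items are assumed conditionally independent given the hypothesis.

```python
# Minimal sketch (not the paper's framework): sequential Bayesian updating of a
# binary legal hypothesis H as evidence arrives. All numbers are hypothetical and
# evidence items are assumed conditionally independent given H.

def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H) and P(E | not H) via Bayes' theorem."""
    numerator = p_e_given_h * prior_h
    marginal = numerator + p_e_given_not_h * (1.0 - prior_h)
    return numerator / marginal

posterior = 0.10  # hypothetical a priori belief in H before any case-specific evidence

# Each tuple: (description, P(evidence | H), P(evidence | not H)) -- hypothetical values.
evidence_stream = [
    ("witness places the suspect at the scene", 0.80, 0.30),
    ("forensic report links the suspect to the act", 0.70, 0.10),
    ("alibi later corroborated by CCTV footage", 0.05, 0.60),  # contrary evidence
]

for description, p_e_h, p_e_not_h in evidence_stream:
    posterior = bayes_update(posterior, p_e_h, p_e_not_h)
    print(f"{description}: P(H | evidence so far) = {posterior:.3f}")
```

The third item drives the posterior back down, which is the defeasibility the authors contrast with one-shot model outputs: the conclusion is revisable rather than fixed once reached.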
Generative AI in EU law: Liability, privacy, intellectual property, and cybersecurity
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106066. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106066
Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi
Abstract: The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and shortcomings in the EU legislative framework and proposes recommendations to ensure the safe and compliant deployment of generative models.
Citations: 0
Bias and discrimination in ML-based systems of administrative decision-making and support
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106070. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106070
Trang Anh MAC
Abstract: In 2020, heavy criticism was directed at the alleged wilful and gross negligence of four social workers who, back in 2013, failed to notice and report the risks to an eight-year-old boy's life from violent abuse by his mother and her boyfriend, ultimately leading to his death. The 2020 documentary The Trials of Gabriel Fernandez discussed the Allegheny Family Screening Tool (AFST), implemented by Allegheny County, US since 2016 to predict involvement with the social services system. Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, together with members of the Children's Data Network and Emily Putnam-Hornstein, built the screening tool, which integrates and analyses enormous amounts of data, housed in the DHS Data Warehouse, about persons allegedly associated with harm to children. They saw it as a possible answer to the failure of overwhelmed manual administrative systems. However, like other applications of AI in the public sector, algorithmic decision-making and support systems have been denounced for data and algorithmic bias. The debate has continued for several years without resolution. This research therefore surveys the problems of bias and discrimination in AI-based administrative decision-making and support systems. It first defines bias and discrimination and the blurred boundary between the two from a legal perspective, then examines the causes of bias at each stage of AI system development, mainly the result of biased data sources and past human decisions, social and political contexts, and developers' ethics. The same chapter presents the non-discrimination legal framework, including its application to and convergence with administrative law on automated decision-making and support systems, as well as the role of ethics and personal data protection regulation. The next chapter outlines new proposals for potential solutions from both legal and technical perspectives: for the former, the focus is on fairness definitions and other options currently available to developers, such as toolkits, benchmark datasets and debiased data; for the latter, strategies and new proposals for governing datasets and the development and implementation of AI systems in the near future.
Citations: 0
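The abstract's reference to "fairness definitions and other options currently available to developers" can be made concrete with a minimal sketch of two widely used group-fairness metrics. The data below are synthetic random draws and the functions are generic illustrations, not taken from the article or from any particular fairness toolkit.

```python
# Minimal sketch of two common group-fairness definitions, computed on
# synthetic toy data (not the article's data or any specific toolkit's API).
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # hypothetical protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)  # hypothetical true outcomes
y_pred = rng.integers(0, 2, size=1000)  # hypothetical model decisions

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-decision rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return abs(tpr(1) - tpr(0))

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```

Values near zero indicate similar treatment of the two groups under each definition; an actual audit would compute these on real decision data rather than random draws.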
European National News
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106039. Pub Date: 2024-11-01. DOI: 10.1016/j.clsr.2024.106039
Nick Pantlin
Abstract: This article tracks developments at the national level in key European countries in the area of IT and communications and provides a concise alerting service of important national developments. It is co-ordinated by Herbert Smith Freehills LLP and contributed to by firms across Europe. Part of its purpose is to complement the Journal's feature articles and briefing notes by keeping readers abreast of what is currently happening "on the ground" at a national level in implementing EU-level legislation and international conventions and treaties. Where an item of European National News is of particular significance, CLSR may also cover it in more detail in the current or a subsequent edition.
© 2024 Herbert Smith Freehills LLP. Published by Elsevier Ltd. All rights reserved.
Citations: 0
For whom is privacy policy written? A new understanding of privacy policies
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106072. Pub Date: 2024-10-28. DOI: 10.1016/j.clsr.2024.106072
Xiaodong Ding, Hao Huang
Abstract: This article examines two types of privacy policies required by the GDPR and the PIPL. It argues that even if privacy policies fail to effectively assist data subjects in giving informed consent but still facilitate private and public enforcement, this does not mean that privacy policies should exclusively serve one category of their readers. The article argues that, considering the scope and meaning of the transparency value protected by data privacy laws, the role of privacy policies must be repositioned to reduce the costs of obtaining and understanding information for all readers of privacy policies.
Citations: 0
Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106067. Pub Date: 2024-10-28. DOI: 10.1016/j.clsr.2024.106067
Irina Carnat
Abstract: The rapid advancements in natural language processing, particularly the development of generative large language models (LLMs), have renewed interest in using artificial intelligence (AI) for judicial decision-making. While these technological breakthroughs present new possibilities for legal automation, they also raise concerns about over-reliance and automation bias. Drawing insights from the COMPAS case, this paper examines the implications of deploying generative LLMs in the judicial domain. It identifies the persistent factors that contributed to an accountability gap when AI systems were previously used for judicial decision-making. To address these risks, the paper analyses the relevant provisions of the EU Artificial Intelligence Act, outlining a comprehensive accountability framework based on the regulation's risk-based approach. The paper concludes that the successful integration of generative LLMs in judicial decision-making requires a holistic approach addressing cognitive biases. By emphasising shared responsibility and the imperative of AI literacy across the AI value chain, the regulatory framework can help mitigate the risks of automation bias and preserve the rule of law.
Citations: 0
Procedural fairness in automated asylum procedures: Fundamental rights for fundamental challenges
IF 3.3 · CAS Zone 3 (Sociology)
Computer Law & Security Review, Volume 55, Article 106065. Pub Date: 2024-10-17. DOI: 10.1016/j.clsr.2024.106065
Francesca Palmiotto
Abstract: In response to the increasing digitalization of asylum procedures, this paper examines the legal challenges surrounding the use of automated tools in refugee status determination (RSD). Focusing on the European Union (EU) context, where interoperable databases and advanced technologies are employed to streamline asylum processes, the paper asks how EU fundamental rights can address the challenges that automation raises. Through a comprehensive analysis of EU law and several real-life cases, the paper focuses on the relationship between procedural fairness and the use of automated tools to provide evidence in RSD. It illustrates what standards apply to automated systems based on a legal doctrinal analysis of EU primary and secondary law and emerging case law from national courts and the CJEU. The article contends that the rights to privacy and data protection enhance procedural fairness in asylum procedures and shows how they can be leveraged for increased protection of asylum seekers and refugees. Moreover, the paper claims that asylum authorities carry a new pivotal responsibility as the medium between the technologies, asylum seekers and their rights.
Citations: 0