Computer Law & Security Review: Latest Articles

Principles for the responsible application of Generative AI
IF 3.3 · CAS Tier 3 (Sociology)
Computer Law & Security Review Pub Date : 2025-05-06 DOI: 10.1016/j.clsr.2025.106131
Roger Clarke
The quest for Artificial Intelligence (AI) has comprised successive waves of excessive enthusiasm followed by long, dispirited lulls. Most recently, during the first 3–4 years of public access to Generative Artificial Intelligence (GenAI), many authors have bought into the bullish atmosphere, replaying consultancies' predictions about gold mines of process efficiency and innovation. A more balanced approach to the technology is needed. Instances of apparently positive results need calm analysis, firstly to distinguish mirages from genuine contributions; secondly, to identify ways to effectively exploit the new capabilities; and thirdly, to formulate guidance for the avoidance and mitigation of negative consequences.

This article's first contribution is to ground the evaluation of GenAI's pathway, applications, impacts, implications and risks in a sufficiently deep appreciation of the technology's nature and key features. A wide range of sources is drawn on, in order to present descriptions of the processes involved in text-based GenAI. From those processes, 20 key characteristics are abstracted that together give rise to the promise and the threats GenAI embodies.

The effects of GenAI derive not from the technological features alone, but also from the patterns within which it is put to use. By mapping usage patterns across to domains of application, the phenomenon's impacts and implications can be more reliably delineated. The analysis provides a platform whereby the article's final contribution can be made. Previously formulated principles for the responsible application of AI of all kinds are applied in the particular context of GenAI.

Volume 57, Article 106131.
Citations: 0
Comparative analysis of trademark protection in the metaverse and registration of virtual goods and NFTs
Computer Law & Security Review Pub Date : 2025-05-05 DOI: 10.1016/j.clsr.2025.106137
WooJung Jon, Sung-Pil Park
This study presents a comparative analysis of trademark protection in the metaverse and the registration of virtual goods and non-fungible tokens (NFTs) across three distinct legal systems: those of the United States, the United Kingdom, and South Korea. Drawing on recent case law and evolving administrative guidelines, this study examines how traditional trademark doctrines—such as the likelihood-of-confusion standard in the U.S. under the Lanham Act, the source-identifying function under the UK Trade Marks Act 1994, and proactive legislative reforms implemented by the Korean Intellectual Property Office—are being adapted to address the challenges posed by digital and virtual environments. Specifically, this study analyzes landmark cases such as Hermès International v. Rothschild and Yuga Labs, Inc. v. Ripps, which illustrate the extension of trademark protection to NFTs and other digital assets, as well as the interplay between trademark rights and freedom of expression. It also evaluates recent updates to international classification frameworks—including the 2024 Nice Classification and the Madrid Protocol—and discusses their implications for ensuring uniformity and effective enforcement of trademarks in a borderless digital market. The findings reveal that while each jurisdiction applies its own legal traditions to metaverse trademark disputes, all share a common policy objective: to prevent consumer confusion and safeguard brand integrity in an increasingly digital economy. Ultimately, the study advocates for proactive registration of trademarks as virtual goods and NFTs to streamline enforcement and enhance legal certainty, thereby fostering innovation and facilitating global trade in virtual environments.

Volume 57, Article 106137.
Citations: 0
From theory to practice: Data minimisation and technical review of verifiable credentials under the GDPR
Computer Law & Security Review Pub Date : 2025-05-05 DOI: 10.1016/j.clsr.2025.106138
Qifan Yang, Cristian Lepore, Jessica Eynard, Romain Laborde
Data minimisation is a fundamental principle of personal data processing under the European Union's General Data Protection Regulation (GDPR). Article 5(1) of the GDPR defines three core elements of data minimisation: adequacy, relevance, and necessity in relation to the purposes. Adequacy concerns the relationship between personal data and the purposes of processing, which minimises data collection to an adequate level in relation to the purposes. Relevance requires objective, logical, and sufficiently close links between personal data and the objective pursued, and the controller should demonstrate this relevance in the context of necessity. Necessity in relation to the purposes limits personal data processing to a specific accuracy level of the purposes, considering appropriateness, effectiveness, and intrusiveness. Our legal analyses provide a framework linking each legal element to specific technical requirements. In the context of Verifiable Credentials, Selective Disclosure and Zero-Knowledge Proofs contribute to the technical requirements of data minimisation. Our evaluation of credential types reveals that SD-JWT, JSON-LD BBS+, AnonCreds, and mDOC support Selective Disclosure, and that JSON-LD with BBS+ signatures and AnonCreds enable Zero-Knowledge Proofs. These findings show JSON-based credentials have significant potential to enhance data minimisation in the future.

Volume 57, Article 106138.
Citations: 0
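The Selective Disclosure mechanism the abstract attributes to SD-JWT can be sketched briefly: each claim is paired with a random salt, only the salted hash of each claim is embedded in the signed credential, and the holder later reveals individual claims by handing over the matching salt-claim pairs. The following is a minimal illustration of that salted-hash idea only, not a conforming SD-JWT implementation (it omits JWT signing and key binding, and all claim names and values are hypothetical).

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JOSE formats."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value) -> tuple[str, str]:
    """Create an SD-JWT-style disclosure and its digest.

    The holder keeps the disclosure; only the digest goes into
    the signed credential, so undisclosed claims stay hidden.
    """
    salt = b64url(secrets.token_bytes(16))
    disclosure = b64url(json.dumps([salt, claim, value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

def verify_disclosure(disclosure: str, sd_digests: set[str]) -> bool:
    """A verifier recomputes the digest and checks membership in the
    credential's digest list, without seeing any other claim."""
    return b64url(hashlib.sha256(disclosure.encode()).digest()) in sd_digests

# Issuer side: hash every claim; the signed payload carries only digests.
claims = {"name": "Alice", "birthdate": "1990-01-01", "nationality": "FR"}
pairs = {k: make_disclosure(k, v) for k, v in claims.items()}
sd_digests = {digest for _, digest in pairs.values()}

# Holder side: disclose only "name"; birthdate and nationality stay hidden.
chosen = pairs["name"][0]
assert verify_disclosure(chosen, sd_digests)
```

The random salt is what makes this data-minimising: without it, a verifier could brute-force low-entropy claims (such as a birthdate) directly from the digests.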
Generative AI and the future of marketing: A consumer protection perspective
Computer Law & Security Review Pub Date : 2025-05-02 DOI: 10.1016/j.clsr.2025.106141
Bram Duivenvoorde
Generative AI has the potential to be the biggest disruption in marketing since the emergence of digital commerce in the early 2000s. This article focuses on three ways in which generative AI is expected to change marketing. First, generative AI enables companies to automatically create advertising copy and images, potentially leading to significant cost reductions. Second, generative AI offers possibilities to improve and automate personalised marketing, potentially enabling companies to send the right persuasive message at the right time to each potential customer. Third, generative AI potentially offers possibilities to market products to consumers via generative AI chatbots. These developments offer potential advantages but also bear risks for consumers. For example, deepfakes in advertising can mislead consumers, AI-generated personalised marketing can exploit consumer vulnerabilities, and B2C chatbots can deceive consumers by providing biased advice. This article shows that EU law does in principle provide protection to consumers in relation to AI-generated marketing, but is also likely to fall short of effectively protecting consumers against the identified risks in several ways.

Volume 57, Article 106141.
Citations: 0
Artificial intelligence, human vulnerability and multi-level resilience
Computer Law & Security Review Pub Date : 2025-04-24 DOI: 10.1016/j.clsr.2025.106134
Sue Anne Teo
Artificial intelligence (AI) is increasingly being deployed across various sectors in society. While bringing progress and promise to scientific discovery, public administration, healthcare, transportation and human well-being generally, artificial intelligence can also exacerbate existing forms of human vulnerability and can introduce new vulnerabilities through the interplay of AI inferences, predictions and generated content. This underpins the anxiety of policymakers in terms of managing potential harms and vulnerabilities, and the harried landscape of governance and regulatory modalities, including the European Union's effort to be the first in the world to comprehensively regulate AI.

This article examines the adequacy of existing theories of human vulnerability in countering the challenges posed by artificial intelligence, including how vulnerability is theorised and addressed within human rights law and within existing legislative efforts such as the EU AI Act. Vulnerability informs the contours of the groups and populations that are protected, for example under non-discrimination law and privacy law. A critical evaluation notes that while human vulnerability is taken into account in governing and regulating AI systems, the vulnerability lens that informs legal responses is particularistic, static and identifiable. In other words, the law demands that vulnerabilities be known in advance in order for meaningful parameters of protection to be designed around them. The individual, as the subject of legal protection, is also expected to be able to identify the harms suffered and thereby seek accountability.

However, AI can displace this straightforward framing and the legal certainty that implicitly underpins how vulnerabilities are dealt with under the law. Through the data-driven inferential insights of predictive AI systems and the content generation enabled by general-purpose AI models, novel dynamic, unforeseeable and emergent forms of vulnerability can arise that cannot be adequately encompassed within existing legal responses. Instead, this requires an expansion not only of the types of legal responses offered but also of vulnerability theory itself, and of the measures of resilience that should be taken to address the exacerbation of existing vulnerabilities as well as emergent ones.

The article offers a re-theorisation of human vulnerability in the age of AI as one informed by the universalist idea of vulnerability theorised by Martha Fineman. A new conceptual framework is offered, through an expanded understanding that sketches out the human condition in this age as one of 'algorithmic vulnerability.' It finds support for this new condition in a vector of convergence from the growing vocabularies of harm, the regulatory direction, and scholarship on emerging vulnerabilities. The article proposes the framework of multi-level resilience.

Volume 57, Article 106134.
Citations: 0
Data portability strategies in the EU: Moving beyond individual rights
Computer Law & Security Review Pub Date : 2025-04-23 DOI: 10.1016/j.clsr.2025.106135
Yongle Chao, Meihe Xu, Aurelia Tamò-Larrieux, Konrad Kollnig
Data-driven innovation promises benefits for citizens, businesses, and organizations. To release the economic and social value of data, however, these actors need access to data. To provide such access, EU policymakers have introduced the concept of data portability. Data portability has traditionally been considered an individual right to enhance data subjects' control over their personal data under the GDPR. Today, however, the concept has been further developed in the Data Act (DA) and the Digital Markets Act (DMA) to complement and enhance the GDPR right to data portability. Yet the DA and DMA have different regulatory objectives from the GDPR. We argue in this paper that the concept of data portability has evolved beyond its original scope of protecting individual rights, amid a paradigm shift towards better data access and flow for multiple stakeholders. However, this paradigm shift has rarely been explored and has not yet been achieved in practice, as the academic and practical understanding of data portability is still focused on the individual level. To fill this gap, we analyze the evolution of data portability as an important novel policy instrument in (newer) EU legislation, and reflect on the shortcomings of the current understanding and implementation approach by means of use cases. We make the argument to understand the concept of data portability as a foundation for unlocking the collective value of data. We contend that data interoperability is both a technical issue and a political concern, and argue that sectoral and modular data interoperability standards are an opportunity for facilitating the effective implementation of data portability. Last, we call for improving data literacy among stakeholders, a possible path for closing the gap between regulation and effective enforcement by promoting an understanding of data portability.

Volume 57, Article 106135.
Citations: 0
AI-driven civil litigation: Navigating the right to a fair trial
Computer Law & Security Review Pub Date : 2025-04-19 DOI: 10.1016/j.clsr.2025.106136
Seyhan Selçuk, Nesibe Kurt Konca, Serkan Kaya
The integration of artificial intelligence (AI) into legal proceedings has gained significant traction in recent years, particularly following the Covid-19 pandemic. As part of the broader movement toward the digitalization of legal systems, AI is seen as a tool to improve access to justice, enhance efficiency, and adopt a human-centered approach. However, the rapid advancement of AI necessitates careful consideration of fundamental human rights, especially the right to a fair trial as enshrined in Article 6 of the European Convention on Human Rights (ECHR). Recently, the European Union's Artificial Intelligence Act classified AI systems used in the judiciary as high-risk, requiring impact assessments on fundamental rights, including the right to a fair trial. This paper explores the impact of AI-driven judicial tools on the right to a fair trial, focusing on key components such as the right to be heard, judicial independence, impartiality, and the principle of publicity, while examining the risks and opportunities posed by AI in civil litigation, including challenges such as algorithmic discrimination, digital exclusion, and the potential erosion of human judges' cognitive abilities.

Volume 57.
Citations: 0
Mapping the empirical literature of the GDPR's (In-)effectiveness: A systematic review
Computer Law & Security Review Pub Date : 2025-04-19 DOI: 10.1016/j.clsr.2025.106129
Wenlong Li, Zihao Li, Wenkai Li, Yueming Zhang, Aolan Li
In the realm of data protection, a striking disconnect prevails between the traditional domains of doctrinal, legal, theoretical, and policy-based inquiry and a burgeoning body of empirical evidence. Much of the scholarly and regulatory discourse remains entrenched in abstract legal principles or normative frameworks, leaving the empirical landscape uncharted or minimally engaged. Since the birth of EU data protection law, a modest body of empirical evidence has been generated but remains widely scattered and unexamined. Such evidence offers vital insights into the effectiveness of data protection measures but languishes on the periphery, inadequately integrated into the broader conversation. To make a meaningful connection, we conduct a comprehensive review and synthesis of empirical research spanning nearly three decades (1995–March 2022), advocating for a more robust integration of empirical evidence into the evaluation and review of the GDPR while laying a methodological foundation for coordinated research. By categorising evidence into four distinct groups – Awareness and Trust, Operational Performance, Ripple Effect, and Normative Clarity – we provide a structured analysis and highlight the variety and nuances of the empirical evidence produced about the GDPR. Our discussion offers critical reflections on the current orientations and designs of evaluation work, challenging some popular but misguided orientations that significantly influence public debate and even the direction of empirical and doctrinal research. This synthesis also sheds light on several understated aspects surfaced by our systematic review, including the complex structure of the GDPR and the internal contradictions between its components, the GDPR's interaction with other normative values and legal frameworks, and the unintended consequences imposed by the GDPR on values not explicitly recognised as regulatory objectives (such as innovation). We further propose a methodological improvement in how empirical evidence can be generated and utilised, stressing the need for more guided, coordinated and rigorous empirical research. By re-aligning empirical focus towards these ends and establishing strategic coordination at the community level, we seek to inform and underpin evaluative work that aligns empirical inquiries with policy and doctrinal needs, while truly reflecting the complexities and challenges of safeguarding personal data in the digital age.

Volume 57, Article 106129.
Citations: 0
Authorship in Human-AI collaborative creation: A creative control theory perspective
Computer Law & Security Review Pub Date : 2025-04-18 DOI: 10.1016/j.clsr.2025.106139
Wei Liu, Weijie Huang
The emergence of human-AI collaborative creation (HAIC) models has provided a good opportunity to uncover the principles of authorship identification. To clarify whether humans exert control over AI-generated content (AIGC) and whether such control is sufficient to confer authorship, we propose the theory of creative control from a law and aesthetics perspective. According to this theory, a human can claim authorship when they are guided by artistic imagery thinking and manifest individual creativity throughout the entire creation process from conception to execution. In the HAIC model, the unpredictable nature of the AI black box does not impede the recognition of users' control, as users possess the capability for artistic imagery thinking to direct the entire creation process. If their contribution meets the originality standard, they qualify as the author of the AIGC. Current prevailing views that evaluate AIGC's originality on the basis of either the final form of expression or the users' prompts in the initial stage overlook the dynamic nature of the creative process.

Volume 57, Article 106139.
Citations: 0
Scoring the European citizen in the AI era
Computer Law & Security Review Pub Date : 2025-04-14 DOI: 10.1016/j.clsr.2025.106130
Nathan Genicot
Social scoring is one of the AI practices banned by the AI Act. This ban is explicitly inspired by China, which in 2014 announced its intention to set up a large-scale government project – the Social Credit System – aiming to rate every Chinese citizen according to their good behaviour, using digital technologies and AI. But in Europe, individuals are also scored by public and private bodies in a variety of contexts, such as assessing creditworthiness, monitoring employee productivity, detecting social fraud or terrorist risks, and so on. However, the AI Act does not intend to prohibit these types of scoring, as they would qualify as "high-risk AI systems", which are authorised while subject to various requirements. One might therefore think that the ban on social scoring will have no practical effect on the scoring practices already in use in Europe, and that it is merely a vague safeguard in case an authoritarian power is tempted to set up such a system on European territory. Contrary to this view, this article argues that the ban has been drafted in a way that is flexible and therefore likely to make it a useful tool, similar and complementary to Article 22 of the General Data Protection Regulation, to protect individuals against certain forms of disproportionate use of AI-based scoring.

Volume 57, Article 106130.
Citations: 0