Computer Law & Security Review — Latest Articles

Incorporating AI incident reporting into telecommunications law and policy: Insights from India
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2026-01-27 | DOI: 10.1016/j.clsr.2026.106263
Avinash Agarwal, Manisha J. Nene

Abstract: The integration of artificial intelligence (AI) into telecommunications infrastructure introduces novel risks, such as algorithmic bias and unpredictable system behavior, that fall outside the scope of traditional cybersecurity and data protection frameworks. This paper introduces a precise definition and a detailed typology of telecommunications AI incidents, establishing them as a distinct category of risk that extends beyond conventional cybersecurity and data protection breaches, and argues for their recognition as a distinct regulatory concern. Using India as a case study for jurisdictions that lack a horizontal AI law, the paper analyzes the country's key digital regulations. The analysis reveals that India's existing legal instruments, including the Telecommunications Act, 2023, the CERT-In Rules, and the Digital Personal Data Protection Act, 2023, focus on cybersecurity and data breaches, creating a significant regulatory gap for AI-specific operational incidents such as performance degradation and algorithmic bias. The paper also examines structural barriers to disclosure and the limitations of existing AI incident repositories. Based on these findings, it proposes targeted policy recommendations centered on integrating AI incident reporting into India's existing telecom governance. Key proposals include mandating reporting for high-risk AI failures, designating an existing government body as a nodal agency to manage incident data, and developing standardized reporting frameworks. These recommendations aim to enhance regulatory clarity and strengthen long-term resilience, offering a pragmatic and replicable blueprint for other nations seeking to govern AI risks within their existing sectoral frameworks.

Citations: 0
Volunteering for the platforms – How social media terms of service may violate the fair remuneration principle of authors and performers
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106246
Ludovico Bossi

Abstract: The terms of service of major social media platforms (i.e., YouTube, TikTok, Facebook, Instagram, LinkedIn, X) impose on users a royalty-free license covering uploaded "content" protected by intellectual property rights ("IPRs"). Consequently, while social media service providers' revenues are significant, users who are also authors and performers in most cases receive no direct remuneration. Most recently, the benefits of training artificial intelligence ("AI") tools on material published on social media have further intensified this imbalance.
This bargain has not gone completely unnoticed, but scholars have often questioned the workability of any legislative or judicial intervention aimed at restoring balance. This article argues that online social media service providers have an obligation under EU law to share with authors and performers the revenues derived from the exploitation of works and performances published on their platforms.
To this end, the article discusses the compatibility of free licenses with the fair remuneration principle of authors and performers. It interprets the so-called "Linux clause" of Recital 82 of Directive (EU) 2019/790 ("CDSMD") and proposes a distinction between "free licenses for the benefit of any user" ("open licenses") and those for the benefit of specific licensees ("gratuitous licenses"). Abuses by the general public cannot occur in the case of open licenses; by contrast, specific licensees in a stronger bargaining position could unfairly impose gratuitous licenses on authors and performers. This inquiry runs in parallel with recent litigation in Belgium on the matter (the "Streamz" case).

Citations: 0
Leveraging textual content, citational aspects and dissenting opinions through a multi-view contrastive learning methodology for legal precedent analysis
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2026-01-02 | DOI: 10.1016/j.clsr.2025.106257
Graziella De Martino, Piero Marra, Annunziata D'Aversa, Lorenzo Pulito, Antonio Pellicani, Gianvito Pio, Michelangelo Ceci

Abstract: Artificial intelligence is transforming the digital justice field by introducing technologies to automate document review, predict case outcomes, and perform legal research tasks. While offering significant benefits, these systems appear to prioritize decision-making patterns that are simply repeated over time, neglecting the importance of dynamic evolution and potentially risking a stagnation of case law.
To mitigate this risk, this paper proposes ContraLEX, a methodology based on a multi-view contrastive learning framework for comparing legal judgments, taking those of the European Court of Human Rights (ECtHR) as an emblematic case study. Methodologically, the goal is to capture, through a contrastive learning approach, the positive influence on similarity of both textual content and citations of precedents, and the negative influence of dissenting opinions. The authors argue that this methodology can enhance legal analysis by creating a representation of case law that helps prevent the stagnation of legal precedents and promotes their evolution over time. A case study on ECtHR data empirically demonstrates that the proposed pipeline is promising for supporting legal precedent analysis.

Citations: 0
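The ContraLEX abstract above only sketches the idea of combining views and a contrastive objective; the paper's actual architecture and loss are not given here. As a rough illustration only, a triplet-style contrastive objective over fused text and citation embeddings might look like the following (the function names, the weighted-average fusion, and the margin value are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def combine_views(text_emb, cite_emb, w_text=0.5, w_cite=0.5):
    """Fuse per-view case embeddings (rows = cases) into one representation
    via a simple weighted average, then L2-normalise each row."""
    v = w_text * text_emb + w_cite * cite_emb
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def contrastive_loss(anchor, positive, negative, margin=0.5):
    """Triplet-style contrastive loss on unit vectors: a cited precedent
    (positive) should be more similar to the anchor case than a
    dissent-linked case (negative), by at least `margin`."""
    sim_pos = float(anchor @ positive)  # cosine similarity for unit vectors
    sim_neg = float(anchor @ negative)
    return max(0.0, margin - (sim_pos - sim_neg))
```

A training loop would minimise this loss over (anchor, cited precedent, dissent-linked case) triples, so that citation links pull representations together while dissents push them apart.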
Mind the gap: Securing algorithmic explainability for credit decisions beyond the UK GDPR
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-03 | DOI: 10.1016/j.clsr.2025.106247
Holli Sargeant

Abstract: The recent amendments to the United Kingdom's GDPR under the Data (Use and Access) Act 2025 mark a significant divergence from the European Union's approach to automated decision-making, substantively weakening the 'right to explanation' for automated decisions. This paper provides a critical legal analysis of the new regime, arguing that it dismantles crucial protections for individuals. The principal finding is that the legislation creates significant legal lacunae by introducing an ambiguous 'no meaningful human involvement' standard and restricting key safeguards to decisions involving 'special category data'. These changes allow firms to shield opaque models from scrutiny, increasing the risk of algorithmic discrimination, particularly in high-stakes sectors such as consumer credit.
Drawing on a comparative review of the United States' technology-neutral adverse action notice requirement, the paper concludes that data protection law is no longer a sufficient safeguard against algorithmic harm in the United Kingdom. It proposes a new right to an explanation for any adverse credit decision, anchored not in data protection law but in consumer protection law, and enforced by a specialist regulator, the Financial Conduct Authority. Such a framework would close the new accountability gaps and create market incentives for developing transparent, explainable-by-design systems, better aligning technological innovation with consumer protection.

Citations: 0
Privacy as institutional design: A legal-technological analysis of CBDC governance and compliance
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.clsr.2025.106258
Ammar Zafar

Abstract: Central bank digital currencies (CBDCs) reconfigure the relationship between public money, institutional authority, and informational power. While privacy in CBDC systems is often seen as a technical issue of cryptography and compliance, this paper argues that it is primarily an institutional design challenge: who may access transactional data, under what legal authority, and subject to which constraints. Using Sweden's e-Krona pilot and the emerging digital euro framework as comparative references, the analysis demonstrates how identical privacy-enhancing technologies (PETs) can produce different outcomes depending on how central banks, intermediaries, and supervisory bodies allocate visibility, responsibility, and access. The paper also highlights the limitations of pilot environments, which cannot replicate the behavioural diversity, fraud incentives, or governance frictions typical of live monetary systems. Furthermore, it examines how cross-border legal fragmentation hampers the feasibility of privacy-preserving interoperability, even when technical standards seem compatible. The findings suggest that lasting privacy in CBDCs cannot rely on PETs alone; it requires institutional restraint, legally defined access rights, and governance structures capable of maintaining credible limits on informational power.

Citations: 0
Approaching the AI Act... with AI: LLMs and knowledge graphs to extract and analyse obligations
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.clsr.2025.106230
Federico Galli, Thiago Raulino Dal Pont, Galileo Sartor, Giuseppe Contissa

Abstract: The EU Artificial Intelligence Act (AIA) exemplifies the growing complexity of digital regulation in the domain of computer technologies. Characterised by abstract terminology, multi-layered provisions, and intersecting regulatory requirements, the AIA poses significant challenges for the identification and interpretation of legal obligations, making compliance a demanding and potentially error-prone endeavour for legal professionals and organisations alike.
Recent advances in artificial intelligence (AI), particularly in natural language processing (NLP) and large language models (LLMs), offer promising support for addressing these challenges. By automating the extraction and structuring of legal rules, AI-based tools can assist regulatory compliance activities and provide more systematic insights into complex legislative frameworks.
This paper presents an experiment combining NLP techniques and LLMs to automate the extraction and structuring of legal obligations from the AIA. The approach is based on a modular workflow comprising four main stages: identification of obligations, filtering of deontic statements, analysis of deontic content, and construction of searchable knowledge graphs. The experiment employed the LLaMA 3.3 70B model, supported by more traditional NLP tools.
Five experts (four Ph.D. students and one post-doc in legal informatics and philosophy) evaluated the system's performance on a subset of cases. The results indicate a precision of 93% in the obligation-filtering phase and over 99% accuracy in classifying obligation types, addressees, and predicates. A quantitative analysis of the extracted obligations revealed a predominance of prescriptive obligations (603 of 729 in total), of which 136 are imposed on the European Commission, while 88 consist of informative duties. These results are in line with current discussions of the AI Act's regulatory approach.
These findings underscore the potential of LLM-based tools to enhance regulatory compliance and analysis. Future research will focus on extending the system to additional EU regulations and integrating formal ontologies to enable more advanced representations of legal obligations.

Citations: 0
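The four-stage workflow described in the abstract above relies on an LLM (LLaMA 3.3 70B) whose prompts and outputs are not reproduced here. As a toy illustration only of the pipeline's shape, here is a keyword-based sketch of the filtering, analysis, and graph-construction stages (the regexes, the "X shall Y" parsing, and the dict-based graph are simplifying assumptions, not the authors' system):

```python
import re

# Stage 2 (toy): sentences containing a deontic marker are kept as candidate obligations.
DEONTIC_MARKERS = re.compile(r"\b(shall not|shall|must|may not)\b", re.IGNORECASE)

def filter_deontic(sentences):
    """Keep only sentences that look like deontic statements."""
    return [s for s in sentences if DEONTIC_MARKERS.search(s)]

def extract_obligation(sentence):
    """Stage 3 (toy): split an 'X shall (not) Y' sentence into addressee,
    modality, and predicate. Returns None if the pattern does not match."""
    m = re.search(r"^(.*?)\s+shall\s+(not\s+)?(.*)$", sentence, re.IGNORECASE)
    if not m:
        return None
    return {
        "addressee": m.group(1).strip().rstrip(","),
        "modality": "prohibition" if m.group(2) else "obligation",
        "predicate": m.group(3).rstrip(". "),
    }

def build_graph(obligations):
    """Stage 4 (toy): a minimal 'knowledge graph' as a mapping from
    addressee to a list of (modality, predicate) edges."""
    graph = {}
    for ob in obligations:
        graph.setdefault(ob["addressee"], []).append((ob["modality"], ob["predicate"]))
    return graph
```

In the paper, the filtering and deontic analysis are performed by the LLM rather than by regular expressions; this sketch only shows how the stages compose into a queryable structure.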
Legal response to facial recognition technologies in China: still seeking the balance
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-08 | DOI: 10.1016/j.clsr.2025.106250
Yang Feng, Yuanyuan Cheng, Xingyu Yan

Abstract: China leads globally in the large-scale deployment of facial recognition technologies (FRTs). As the country's data protection legislation intensifies, the wide use of FRTs is raising increasing concerns about their legitimacy. To examine the legal response to FRTs in China, we analyse the legislative framework through a normative lens, evaluate the relevant administrative enforcement decisions with a mixed-methods approach combining quantitative descriptive statistics and qualitative case study, and examine the judicial stance on FRT regulation through a case study. We find that despite some promising legislative developments, the current legal framework provides inadequate protection of facial information, with an ineffective separate-consent rule, a conspicuous lack of control over FRT use in the public sector, and weak enforcement of existing facial information protection laws. Additionally, the courts appear reluctant to address the abuse of FRTs, likely out of concern about hindering the development of the FRT industry. We recommend a comprehensive approach to facial information protection, encompassing complementary legislative, administrative, and judicial measures.

Citations: 0
Anonymising personal data under the data legislative acquis established by the Data Governance Act
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2026-01-29 | DOI: 10.1016/j.clsr.2026.106261
Emanuela Podda, Daniela Spajic, Pierangela Samarati

Abstract: The re-identification risk test established under Recital 26 of the General Data Protection Regulation (GDPR) constitutes a milestone in assessing the effectiveness of personal data anonymisation, and its interpretation and implementation have been widely discussed by scholars and practitioners. This article illustrates the challenges to the plausibility of the anonymisation risk test, especially in light of the most recent European jurisprudence on data anonymisation and the recent Digital Omnibus proposal. Although this regulatory proposal aims to repeal the Data Governance Act (DGA), it transposes the DGA's data governance model into the Data Act.
With the aim of fostering the regulatory and scholarly debate on this new proposal, our work puts forward a new perspective on data anonymisation within the framework of the DGA, considering its potential to reduce the legal uncertainty surrounding the application of anonymisation through data intermediaries. Specifically, we investigate how Data Intermediation Service Providers (DISPs) could support data holders in anonymising data, affecting accountability when providing access to data and sharing data. Although the involvement of DISPs may raise additional questions about their responsibilities, this article outlines the pertinent rules established by the DGA in order to analyse such potential responsibilities and to elaborate on their possible contractual consequences with respect to data anonymisation.

Citations: 0
From the law of everything to a system that works: why recalibrating personal data enables, rather than undermines, digital protection (A response to Professor Nadezhda Purtova)
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-26 | DOI: 10.1016/j.clsr.2025.106256
M.R. Leiser

(No abstract available.)

Citations: 0
Exploring gender equality in the metaverse
IF 3.2 | CAS Zone 3, Sociology
Computer Law & Security Review | Pub Date: 2026-04-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.clsr.2025.106254
Christina Pasvanti Gkioka, Eduard Fosch-Villaronga

Abstract: Gender-based discrimination in the Metaverse often takes the form of harassment or unwanted sexual behavior directed at avatars. Such harm is frequently underestimated because people assume a clear divide between users and their digital selves, overlooking how strongly individuals identify with their avatars. Mediated embodiment theory shows, nonetheless, that users experience their avatars as extensions of themselves, making virtual discrimination a real-world concern affecting dignity, mental health, and well-being. As digital spaces replicate and sometimes amplify existing gender inequalities, this study examines the extent to which gender equality is safeguarded in the Metaverse. It focuses on both legal and platform-based safeguards, assessing how the European Union's Digital Services Act (DSA) can address gender-based risks in virtual environments. The analysis clarifies how the DSA's obligations for hosting services and online platforms may apply to Metaverse providers, while acknowledging that most do not yet meet the threshold for designation as Very Large Online Platforms (VLOPs). The DSA provides a valuable starting point for promoting accountability and transparency but leaves important gaps in enforcement and coverage. At the platform level, policies, moderation tools, and safety features vary widely, underscoring the need for context-specific governance measures and legal recognition of avatar-mediated harm. Strengthening these safeguards is essential to ensure that the Metaverse evolves into a safer and more inclusive space, free from gender-based discrimination.

Citations: 0