Computer Law & Security Review: Latest Articles (Impact Factor 2.9, CAS Tier 3, Sociology)

Introduction for computer law and security review: special issue “knowledge management for law”
Computer Law & Security Review, Pub Date: 2024-02-13, DOI: 10.1016/j.clsr.2024.105949
Emilio Sulis, Luigi Di Caro, Rohan Nanda
{"title":"Introduction for computer law and security review: special issue “knowledge management for law”","authors":"Emilio Sulis , Luigi Di Caro , Rohan Nanda","doi":"10.1016/j.clsr.2024.105949","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105949","url":null,"abstract":"","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139727065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Consumer neuro devices within EU product safety law: Are we prepared for big tech ante portas?
Computer Law & Security Review, Pub Date: 2024-02-10, DOI: 10.1016/j.clsr.2024.105945
Elisabeth Steindl
{"title":"Consumer neuro devices within EU product safety law: Are we prepared for big tech ante portas?","authors":"Elisabeth Steindl","doi":"10.1016/j.clsr.2024.105945","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105945","url":null,"abstract":"<div><p>Previously confined to the distinct medical market, neurotechnologies are expanding rapidly into the consumer market, driven by technological advancements and substantial investments. While offering promising benefits, concerns have emerged regarding the suitability of existing legal frameworks to adequately address the risks they present. Against the background of an ongoing global debate on new policies or new ‘neurorights’ regulating neurotechnology, this paper delves into the regulation of consumer Brain-Computer Interfaces (BCIs) in the European Union (EU), focusing on the pertinent product safety legislation.</p><p>The analysis will primarily examine the sector-specific product safety law for medical devices, the Medical Devices Regulation (MDR). It will meticulously delineate which consumer BCIs fall within its scope and are obliged to comply with the requirements outlined. The tech-based approach of Annex XVI MDR, coupled with recent amendments, show that the EU has adopted a forward-thinking rationale towards regulating health-related risks associated with consumer BCIs within existing EU medical devices legislation, while abstaining from over-regulating aspects therein that are beyond its core objectives.</p><p>Supplementary, the paper will discuss developments in EU horizontal product safety law, regulating all consumer BCIs that are not subject to sector-specific product safety legislation. In their recently adopted General Product Safety Regulation (GPSR), the EU has introduced several provisions addressing digital products. Inter alia, these changes will enhance the horizontal regulation of consumer BCIs.</p><p>Overall, within the context of product safety law, the recent adaptations affirm notable efforts by the EU to refine the legal framework that governs consumer BCIs, striking a delicate balance between effective technology regulation and not impeding innovation.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000128/pdfft?md5=7661ea829ab4840c2ce795423ce982d8&pid=1-s2.0-S0267364924000128-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139718879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Substantive fairness in the GDPR: Fairness Elements for Article 5.1a GDPR
Computer Law & Security Review, Pub Date: 2024-02-10, DOI: 10.1016/j.clsr.2024.105942
Andreas Häuselmann, Bart Custers
{"title":"Substantive fairness in the GDPR: Fairness Elements for Article 5.1a GDPR","authors":"Andreas Häuselmann,&nbsp;Bart Custers","doi":"10.1016/j.clsr.2024.105942","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105942","url":null,"abstract":"<div><p>According to the fairness principle in Article 5.1a of the EU General Data Protection Regulation (GDPR), data controllers must process personal data fairly. However, the GDPR fails to explain what is fairness and how it should be achieved. In fact, the GDPR focuses mostly on procedural fairness: if personal data are processed in compliance with the GDPR, for instance, by ensuring lawfulness and transparency, such processing is assumed to be fair. Because some forms of data processing can still be unfair, even if all the GDPR's procedural rules are complied with, we argue that substantive fairness is also an essential part of the GDPR's fairness principle and necessary to achieve the GDPR's goal of offering effective protection to data subjects. Substantive fairness is not mentioned in the GDPR and no guidance on substantive fairness is provided. In this paper, we provide elements of substantive fairness derived from EU consumer law, competition law, non-discrimination law, and data protection law that can help interpret the substantive part of the GDPR's fairness principle. Three elements derived from consumer protection law are good faith, no detrimental effects, and autonomy (e.g., no misleading or aggressive practices). We derive the element of abuse of dominant position (and power inequalities) from competition law. From other areas of law, we derive non-discrimination, vulnerabilities, and accuracy as elements relevant to interpreting substantive fairness. Although this may not be a complete list, cumulatively these elements may help interpret Article 5.1a GDPR and help achieve fairness in data protection law.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000098/pdfft?md5=9b2b92d2ee98f14fb7799d00af51f207&pid=1-s2.0-S0267364924000098-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139718878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Affection as a service: Ghostbots and the changing nature of mourning
Computer Law & Security Review, Pub Date: 2024-02-09, DOI: 10.1016/j.clsr.2024.105943
Mauricio Figueroa-Torres
{"title":"Affection as a service: Ghostbots and the changing nature of mourning","authors":"Mauricio Figueroa-Torres","doi":"10.1016/j.clsr.2024.105943","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105943","url":null,"abstract":"<div><p>This article elucidates the rise of ghostbots, artificial conversational agents that emulate the deceased, as marketable commodities. The study explains the role of ghostbots in changing how mourning is experienced. It highlights how ghostbots alter the relationship between the bereaved and the departed, transforming it into one of a customer-object within legal discourse. By critically examining the nexus between commodification and the law, this study underscores how ghostbots signify a different and intriguing form of commodification in the interaction between the living and the deceased, within the dynamics of the Digital Afterlife Industry. By furnishing this scrutiny, the article contributes to comprehending the commodification inherent in ghostbots and concludes by delineating specific foundational or seminal points for subsequent academic discussion to aide a more holistic deliberation on the use, commercialisation, or regulation of these systems, and other affection-as-a-service products.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000104/pdfft?md5=2c1547e315f7ded62b771b4c6859191e&pid=1-s2.0-S0267364924000104-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139714785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
How to design data access for researchers: A legal and software development perspective
Computer Law & Security Review, Pub Date: 2024-02-06, DOI: 10.1016/j.clsr.2024.105946
M.Z. van Drunen, A. Noroozian
{"title":"How to design data access for researchers: A legal and software development perspective","authors":"M.Z. van Drunen,&nbsp;A. Noroozian","doi":"10.1016/j.clsr.2024.105946","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105946","url":null,"abstract":"<div><p>Public scrutiny of platforms has been limited by a lack of transparency. In response, EU law increasingly requires platforms to provide data to researchers. The Digital Services Act and the proposed Regulation on the Transparency and Targeting of Political Advertising in particular require platforms to provide access to data through ad libraries and in response to data access requests. However, these obligations leave platforms considerable discretion to determine how access to data is provided. As the history of platforms’ self-regulated data access projects shows, the technical choices involved in designing data access significantly affect how researchers can use the provided data to scrutinise platforms. Ignoring the way data access is designed therefore creates a danger that platforms’ ability to limit research into their services simply shifts from controlling what data is available to researchers, to how data access is provided. This article explores how the Digital Services Act and proposed Political Advertising Regulation should be used to control the operationalisation of data access obligations that enable researchers to scrutinise platforms. It argues the operationalisation of data access regimes should not only be seen as a legal problem, but also as a software design problem. To that end it explores how software development principles may inform the operationalisation of data access obligations. The article closes by exploring the legal mechanisms available in the Digital Services Act and proposed Political Advertising Regulation to exercise control over the design of data access regimes, and makes five recommendations for ways in which these mechanisms should be used to enable research into platforms.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S026736492400013X/pdfft?md5=4f8827d19e930942944bf92c085de7da&pid=1-s2.0-S026736492400013X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139694673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
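A brief illustrative sketch may help make concrete the abstract's claim that data access is also a software design problem. The Python snippet below is not taken from the paper and does not describe any real platform API; the class, fields, page size and rate limit are hypothetical assumptions, used only to show how such design parameters shape what researchers can actually study.

# Hypothetical sketch (not from the paper, not a real platform API): the design
# parameters below quietly decide how much scrutiny an "ad library" access regime supports.
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class AdRecord:
    ad_id: str
    advertiser: str
    impressions: str               # design choice: exact counts vs. coarse ranges such as "10k-50k"
    targeting_criteria: List[str]  # design choice: full criteria vs. broad categories

class AdLibrary:
    """Toy ad library: page_size and rate_limit_per_day are design decisions, not legal ones."""

    def __init__(self, records: List[AdRecord], page_size: int = 100, rate_limit_per_day: int = 1000):
        self.records = records
        self.page_size = page_size
        self.rate_limit_per_day = rate_limit_per_day
        self._calls_today = 0

    def search(self, advertiser: str) -> Iterator[List[AdRecord]]:
        """Paginated search; every page returned counts against the daily rate limit."""
        matches = [r for r in self.records if r.advertiser == advertiser]
        for start in range(0, len(matches), self.page_size):
            if self._calls_today >= self.rate_limit_per_day:
                raise RuntimeError("rate limit reached; large-scale studies become infeasible")
            self._calls_today += 1
            yield matches[start:start + self.page_size]

Under these assumptions, nothing about what data is formally available changes when page_size or rate_limit_per_day is reduced, yet systematic research can become practically impossible; this is the design dimension over which, as the abstract argues, the legal mechanisms of the Digital Services Act should be used to exercise control.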
Is the regulation of connected and automated vehicles (CAVs) a wicked problem and why does it matter?
Computer Law & Security Review, Pub Date: 2024-02-03, DOI: 10.1016/j.clsr.2024.105944
Amy Dunphy
{"title":"Is the regulation of connected and automated vehicles (CAVs) a wicked problem and why does it matter?","authors":"Amy Dunphy","doi":"10.1016/j.clsr.2024.105944","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105944","url":null,"abstract":"<div><p>The anticipated public deployment of highly connected and automated vehicles (‘CAVs’) has the potential to introduce a range of complex regulatory challenges because of the novel and expansive way that data is generated, used, collected and shared by CAVs. Regulators within Australia and internationally are facing the complex task of developing rules and regulations to meet these challenges against the backdrop of continuing uncertainty about the ultimate form of CAVs and the timeframe for their introduction. This paper undertakes a novel examination of whether the regulation of high level CAVs and their associated data will constitute a ‘wicked problem’. The wicked problem framework provides a valuable lens through which to examine difficult issues that are faced by regulators and, in turn, to aid in developing regulatory responses and to navigate such issues. A new four quadrant framework is developed and applied. It draws on and expands the seminal work on wicked problems by Rittel and Webber, and Alford and Head. The framework is used to critically reflect on whether CAVs are a ‘wicked problem’, and, if so, what might be the potential consequences for policy and regulatory development involving the data environment. This paper considers whether evaluating the ‘wickedness’ of a problem is a useful exercise for regulators, and the potential impact on developing novel approaches to regulatory responses.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000116/pdfft?md5=5aa65af607c54254006825d18dd5a56d&pid=1-s2.0-S0267364924000116-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139674866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transborder flow of personal data (TDF) in Africa: Stocktaking the ills and gains of a divergently regulated business mechanism
Computer Law & Security Review, Pub Date: 2024-01-31, DOI: 10.1016/j.clsr.2024.105940
Olumide Babalola
{"title":"Transborder flow of personal data (TDF) in Africa: Stocktaking the ills and gains of a divergently regulated business mechanism","authors":"Olumide Babalola","doi":"10.1016/j.clsr.2024.105940","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105940","url":null,"abstract":"<div><p>Technology-based transactions are inseparable from the routine exchange of data. These exchanges may not pose privacy problems until the movement takes extra-territorial turns thereby facing multiple levels of cross-border regulations. In the 80 s, the frequency of transfer of personal data beyond geographical boundaries in Europe precipitated the regulation of transborder data flows (TDF) beginning with the enactment of the Organization for OECD Guidelines. In Africa, the concept of TDF is more complex than usually viewed by the stakeholders and this is partly because neither the African Union nor other regional bodies have introduced legislation on TDF. Like many concepts in data protection, TDF is bereft of a generally accepted meaning. Regardless of the uncertainty, this paper approaches TDF as the transmission of personal data from one country to another country or international entity for the purpose of processing. The paper discusses some definitions of TDF as understood under African regional and national data protection legislation. In a comparative and normative approach, the paper analyses the barriers to TDF in Africa vis a vis the European experience and then concludes with recommendations for workable TDF within and outside the continent from an African perspective beginning with the harmonization of existing regional framework.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0267364924000074/pdfft?md5=169c3fa8a8be583b07c661812bd2a721&pid=1-s2.0-S0267364924000074-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139652679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fraud by generative AI chatbots: On the thin line between deception and negligence
Computer Law & Security Review, Pub Date: 2024-01-29, DOI: 10.1016/j.clsr.2024.105941
Maarten Herbosch
{"title":"Fraud by generative AI chatbots: On the thin line between deception and negligence","authors":"Maarten Herbosch","doi":"10.1016/j.clsr.2024.105941","DOIUrl":"https://doi.org/10.1016/j.clsr.2024.105941","url":null,"abstract":"<div><p><span>The use of generative AI systems is on the rise. As a result, we are increasingly often conversing with AI </span>chatbots<span> rather than with fellow humans. This increasing use of AI systems leads to legal challenges as well, particularly when the chatbot provides incorrect information. In this article, we study whether someone who decides to contract on the basis of incorrect information provided by a generative AI chatbot might invoke the fraud regime to annul the resulting contract in various legal systems. During this analysis, it becomes clear that some of the requirements that are currently being put forward from a public law perspective, such as in the European AI Act, may also naturally arise from existing private law figures. In the same vein, this analysis highlights the interesting intradisciplinary feedback between instruments of public law and other legal domains.</span></p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139652658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy icons as a component of effective transparency and controls under the GDPR: effective data protection by design based on art. 25 GDPR
Computer Law & Security Review, Pub Date: 2024-01-28, DOI: 10.1016/j.clsr.2023.105924
Max von Grafenstein, Isabel Kiefaber, Julie Heumüller, Valentin Rupp, Paul Graßl, Otto Kolless, Zsófia Puzst
Understandable privacy information builds trust with users and therefore provides an important competitive advantage for the provider. However, designing privacy information that is both truthful and easy for users to understand is challenging. There are many complex balancing decisions to be made, not only with respect to legal but also visual and user experience design issues. This is why designing understandable privacy information requires combining at least three disciplines that have had little to do with each other in current practice: law, visual design, and user experience design research. The challenges of combining all three disciplines culminate in the design and use of Privacy Icons, which are expected to make lengthy legal texts clear and easy to understand (see Art. 12 sect. 7 of the EU General Data Protection Regulation). However, that is much easier said than done. In this paper, we summarise our key learnings from a five-year research process on how to design Privacy Icons as a component of effective transparency and user controls. We will provide 1) examples of information and control architectures for privacy policies, forms of consent (especially in the form of cookie banners), privacy dashboards and consent agents in which Privacy Icons may be embedded, 2) a non-exhaustive set of more than 150 Privacy Icons, and above all 3) a concept and process model that can be used to implement the requirements of the GDPR in terms of transparency and user controls in an effective way, according to the data protection by design approach in Art. 25 sect. 1 GDPR. The paper will show that it is a rocky road to the stars and we still haven't arrived – but at least we know how to go.
Citations: 0
Discrimination for the sake of fairness by design and its legal framework
Computer Law & Security Review, Pub Date: 2024-01-27, DOI: 10.1016/j.clsr.2023.105916
Holly Hoch, Corinna Hertweck, Michele Loi, Aurelia Tamò-Larrieux
The more algorithms are enlisted to make critical determinations about human actors, the more frequently we see these algorithms appear in sensational headlines crying foul on discrimination. There is broad consensus among computer scientists working on this issue that such discrimination can be reduced by intentionally collecting and consciously using sensitive information about demographic features like sex, gender, race, religion, etc. Companies implementing such algorithms might, however, be wary of allowing algorithms access to such data because they fear legal repercussions, as the promoted standard has been to omit protected attributes, an approach dubbed "fairness through unawareness". This paper asks whether such wariness is justified in light of EU data protection and anti-discrimination laws. In order to answer this question, we introduce a specific case and analyze how EU law might apply when an algorithm accesses sensitive information to make fairer predictions. We review whether such measures constitute discrimination, and for whom, arriving at different conclusions based on how we define the harm of discrimination and the groups we compare. Finding that several legal claims could arise regarding the use of sensitive information, we ultimately conclude that the proffered fairness measures would be considered a positive (or affirmative) action under EU law. As such, the appropriate use of sensitive information in order to increase the fairness of an algorithm is a positive action, and not per se prohibited by EU law.
Citations: 0
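The contrast between "fairness through unawareness" and consciously using a protected attribute, which this abstract builds on, can be illustrated with a small sketch. The Python snippet below is not from the paper; the data, the group labels and the equal-selection-rate rule are hypothetical assumptions, and real fairness interventions (and their legal assessment) are considerably more involved.

# Illustrative sketch only: one rule ignores the protected attribute, the other
# consciously uses it to equalise positive-decision rates across groups.
from dataclasses import dataclass

@dataclass
class Applicant:
    score: float   # output of some predictive model
    group: str     # protected attribute, e.g. "A" or "B"

def decide_unaware(applicants, threshold=0.5):
    """'Fairness through unawareness': one global threshold, the protected attribute is never consulted."""
    return [a.score >= threshold for a in applicants]

def decide_group_aware(applicants, target_rate=0.5):
    """Uses the protected attribute to set per-group thresholds so each group
    receives roughly the same share of positive decisions."""
    thresholds = {}
    for g in {a.group for a in applicants}:
        scores = sorted((a.score for a in applicants if a.group == g), reverse=True)
        k = max(1, round(target_rate * len(scores)))
        thresholds[g] = scores[k - 1]
    return [a.score >= thresholds[a.group] for a in applicants]

pool = [Applicant(0.9, "A"), Applicant(0.7, "A"), Applicant(0.4, "B"), Applicant(0.35, "B")]
print(decide_unaware(pool))      # [True, True, False, False]: group B receives no positive decisions
print(decide_group_aware(pool))  # [True, False, True, False]: equal selection rates per group

On these toy numbers, the group-aware rule is a simplified instance of the kind of measure the article examines: it deliberately treats the groups differently in order to equalise outcomes, which is precisely why the authors ask whether EU law would treat it as prohibited discrimination or as permissible positive action.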