Proceedings on Privacy Enhancing Technologies (Privacy Enhancing Technologies Symposium): Latest Articles

Analyzing the Feasibility and Generalizability of Fingerprinting Internet of Things Devices
Dilawer Ahmed, Anupam Das, Fareed Zaffar
DOI: 10.2478/popets-2022-0057. PoPETs 2022(1), pp. 578–600. Published 2022-03-03.

Abstract: In recent years, we have seen rapid growth in the use and adoption of Internet of Things (IoT) devices. However, some IoT devices are sensitive in nature, and simply knowing what devices a user owns can have security and privacy implications. Researchers have, therefore, looked at fingerprinting IoT devices and their activities from encrypted network traffic. In this paper, we analyze the feasibility of fingerprinting IoT devices and evaluate the robustness of such a fingerprinting approach across multiple independent datasets collected under different settings. We show that not only is it possible to effectively fingerprint 188 IoT devices (with over 97% accuracy), but also to do so even with multiple instances of the same make-and-model device. We also analyze the extent to which temporal, spatial, and data-collection-methodology differences impact fingerprinting accuracy. Our analysis sheds light on features that are more robust against varying conditions. Lastly, we comprehensively analyze the performance of our approach under an open-world setting and propose ways in which an adversary can enhance their odds of inferring additional information about unseen devices (e.g., similar devices manufactured by the same company).

Citations: 3
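The abstract above describes classifying devices from features of their encrypted traffic. As a minimal illustration of the general idea (not the paper's 188-device pipeline or feature set), the sketch below reduces a flow to invented summary features and assigns it to the nearest device centroid; all device names and packet sizes are hypothetical.

```python
import statistics

# Hypothetical per-device training data: packet-size sequences (bytes) from
# encrypted flows. Devices and numbers are invented for illustration.
TRAINING_FLOWS = {
    "smart_plug": [[60, 60, 120, 60], [60, 118, 60, 60]],
    "ip_camera":  [[1400, 1400, 900, 1400], [1380, 1400, 1400, 1000]],
}

def features(packet_sizes):
    """Reduce a flow to simple features: mean size, max size, packet count."""
    return (statistics.mean(packet_sizes), max(packet_sizes), len(packet_sizes))

def centroids(training):
    """Average the feature vectors of each device's training flows."""
    out = {}
    for device, flows in training.items():
        vecs = [features(f) for f in flows]
        out[device] = tuple(statistics.mean(dim) for dim in zip(*vecs))
    return out

def classify(packet_sizes, cents):
    """Assign a flow to the device whose centroid is nearest (squared Euclidean)."""
    f = features(packet_sizes)
    return min(cents, key=lambda d: sum((a - b) ** 2 for a, b in zip(f, cents[d])))

cents = centroids(TRAINING_FLOWS)
print(classify([1400, 1395, 1100, 1400], cents))  # → ip_camera
```

Real fingerprinting classifiers in this literature use richer features (inter-arrival times, direction, DNS/TLS metadata) and ensemble models, but the structure (featurize, train, match) is the same.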
Understanding Privacy-Related Advice on Stack Overflow
Mohammad Tahaei, Tianshi Li, Kami Vaniea
DOI: 10.2478/popets-2022-0038. PoPETs 2022(1), pp. 114–131. Published 2022-03-03.

Abstract: Privacy tasks can be challenging for developers, resulting in privacy frameworks and guidelines from the research community that are designed to assist developers in considering privacy features and applying privacy enhancing technologies in early stages of software development. However, how developers engage with privacy design strategies is not yet well understood. In this work, we look at the types of privacy-related advice developers give each other and how that advice maps to Hoepman's privacy design strategies. We qualitatively analyzed 119 privacy-related accepted answers on Stack Overflow from the past five years and extracted 148 pieces of advice from these answers. We find that the advice is mostly around compliance with regulations and ensuring confidentiality, with a focus on Hoepman's inform, hide, control, and minimize strategies. Other strategies (abstract, separate, enforce, and demonstrate) are rarely advised. Answers often include links to official documentation and online articles, highlighting the value of both official documentation and other informal materials such as blog posts. We make recommendations for promoting the understated strategies through tools, and detail the importance of providing better developer support to handle third-party data practices.

Citations: 20
How to prove any NP statement jointly? Efficient Distributed-prover Zero-Knowledge Protocols
Pankaj Dayama, A. Patra, Protik Paul, Nitin Singh, Dhinakaran Vinayagamurthy
DOI: 10.2478/popets-2022-0055. PoPETs 2022(1), pp. 517–556. Published 2022-03-03.

Abstract: Traditional zero-knowledge protocols have been studied and optimized for the setting where a single prover holds the complete witness and tries to convince a verifier about a predicate on the witness, without revealing any additional information to the verifier. In this work, we study the notion of distributed-prover zero knowledge (DPZK) for arbitrary predicates where the witness is shared among multiple mutually distrusting provers and they want to convince a verifier that their shares together satisfy the predicate. We make the following contributions to the notion of distributed proof generation: (i) we propose a new MPC-style security definition to capture the adversarial settings possible for different collusion models between the provers and the verifier, (ii) we discuss new efficiency parameters for distributed proof generation, such as the number of rounds of interaction and the amount of communication among the provers, and (iii) we propose a compiler that realizes distributed proof generation from zero-knowledge protocols in the Interactive Oracle Proofs (IOP) paradigm. Our compiler can be used to obtain DPZK from arbitrary IOP protocols, but the concrete efficiency overheads are substantial in general. To this end, we contribute (iv) a new zero-knowledge IOP, Graphene, which can be compiled into an efficient DPZK protocol. The (D + 1)-party DPZK protocol D-Graphene, with D provers and one verifier, admits O(N^(1/c)) proof size with a communication complexity of O(D^2 · (N^(1-2/c) + N_s)), where N is the number of gates in the arithmetic circuit representing the predicate and N_s is the number of wires that depend on inputs from two or more parties.
Significantly, only the distributed proof generation in D-Graphene requires interaction among the provers. D-Graphene compares favourably with the DPZK protocols obtained from state-of-the-art zero-knowledge protocols, even those not modelled as IOPs.

Citations: 3
Revisiting Identification Issues in GDPR 'Right Of Access' Policies: A Technical and Longitudinal Analysis
Mariano Di Martino, Isaac Meers, P. Quax, Kenneth M. Andries, W. Lamotte
DOI: 10.2478/popets-2022-0037. PoPETs 2022(1), pp. 95–113. Published 2022-03-03.

Abstract: Several data protection regulations permit individuals to request all personal information that an organization holds about them by utilizing Subject Access Requests (SARs). Prior work has observed the identification process of such requests, demonstrating weak policies that are vulnerable to potential data breaches. In this paper, we analyze and compare prior work in terms of methodologies, requested identification credentials, and threat models in the context of privacy and cybersecurity. Furthermore, we have devised a longitudinal study in which we examine the impact of responsible disclosures by re-evaluating the SAR authentication processes of 40 organizations after they had two years to improve their policies. Here, we demonstrate that 53% of the previously vulnerable organizations have not corrected their policy, and an additional 27% of previously non-vulnerable organizations have potentially weakened their policies instead of improving them, thus leaking sensitive personal information to potential adversaries. To better understand state-of-the-art SAR policies, we interviewed several Data Protection Officers, explored the reasoning behind their processes from an industry viewpoint, and gained insights about potential criminal abuse of weak SAR policies. Finally, we propose several technical modifications to SAR policies that reduce the privacy and security risks of data controllers.

Citations: 5
Who Knows I Like Jelly Beans? An Investigation Into Search Privacy
Daniel Kats, David Silva, Johann Roturier
DOI: 10.2478/popets-2022-0053. PoPETs 2022(1), pp. 426–446. Published 2022-03-03.

Abstract: Internal site search is an integral part of how users navigate modern sites, from restaurant reservations to house hunting to searching for medical solutions. Search terms on these sites may contain sensitive information such as location, medical information, or sexual preferences; when further coupled with a user's IP address or a browser's user agent string, this information can become very specific, and in some cases possibly identifying. In this paper, we measure the various ways by which search terms are sent to third parties when a user submits a search query. We developed a methodology for identifying and interacting with search components, which we implemented on top of an instrumented headless browser. We used this crawler to visit the Tranco top one million websites and analyzed search term leakage across three vectors: URL query parameters, payloads, and the Referer HTTP header. Our crawler found that 512,701 of the top 1 million sites had internal site search. We found that 81.3% of websites containing internal site search sent (or leaked, from a user's perspective) our search terms to third parties in some form. We then compared our results to the expected results based on a natural language analysis of the privacy policies of those leaking websites (where available) and found that about 87% of those privacy policies do not mention search terms explicitly. However, about 75% of these privacy policies seem to mention the sharing of some information with third parties in a generic manner. We then present a few countermeasures, including a browser extension to warn users about imminent search term leakage to third parties.
We conclude this paper by making recommendations on clarifying the privacy implications of internal site search to end users.

Citations: 1
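Two of the three leakage vectors measured above (URL query parameters and the Referer header) can be illustrated with a toy detector; the hosts, URLs, and parameter names below are invented, and payload inspection is omitted for brevity.

```python
from urllib.parse import urlparse, parse_qs, unquote_plus

def leaks_search_term(request_url, referer, search_term, first_party_host):
    """Report which vectors carry `search_term` to a third party.

    Covers two of the three vectors from the study: URL query parameters
    and the Referer header. Request payloads are not inspected here.
    """
    if urlparse(request_url).netloc.endswith(first_party_host):
        return []  # first-party request: not a leak
    vectors = []
    params = parse_qs(urlparse(request_url).query)
    if any(search_term in value for values in params.values() for value in values):
        vectors.append("query-parameter")
    if referer and search_term in unquote_plus(referer):
        vectors.append("referer-header")
    return vectors

print(leaks_search_term(
    "https://tracker.example/collect?q=jelly+beans&site=42",
    "https://shop.example/search?q=jelly+beans",
    "jelly beans",
    "shop.example",
))  # → ['query-parameter', 'referer-header']
```

A crawler-based measurement would run such checks over every outgoing request recorded by an instrumented browser, rather than over single URLs.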
Comprehensive Analysis of Privacy Leakage in Vertical Federated Learning During Prediction
Xue Jiang, Xuebing Zhou, Jens Grossklags
DOI: 10.2478/popets-2022-0045. PoPETs 2022(1), pp. 263–281. Published 2022-03-03.

Abstract: Vertical federated learning (VFL), a variant of federated learning, has recently attracted increasing attention. An active party having the true labels jointly trains a model with other parties (referred to as passive parties) in order to use more features to achieve higher model accuracy. During the prediction phase, all the parties collaboratively compute the predicted confidence scores of each target record, and the results are finally returned to the active party. However, a recent study by Luo et al. [28] pointed out that the active party can use these confidence scores to reconstruct passive-party features and cause severe privacy leakage. In this paper, we conduct a comprehensive analysis of privacy leakage in VFL frameworks during the prediction phase. Our study improves on previous work [28] in two respects. We first design a general gradient-based reconstruction attack framework that can be flexibly applied to simple logistic regression models as well as multi-layer neural networks. Moreover, besides performing the attack under the white-box setting, we make the first attempt to conduct the attack under the black-box setting. Extensive experiments on a number of real-world datasets show that our proposed attack is effective under different settings and can achieve up to a two- to three-fold reduction in attack error compared to previous work [28]. We further analyze a list of potential mitigation approaches and compare their privacy-utility performance.
Experimental results demonstrate that privacy leakage from the confidence scores is a substantial privacy risk in VFL frameworks during the prediction phase, which cannot be simply solved by crypto-based confidentiality approaches. On the other hand, processing the confidence scores with information compression and randomization approaches can provide strengthened privacy protection.

Citations: 25
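The paper's attack is gradient-based and covers neural networks; the simplest possible instance of the underlying leak can be shown in closed form. In the sketch below (all weights and features invented), a VFL logistic regression with one passive-party feature returns a confidence score, and the active party, which in this threat model knows the model weights, inverts the sigmoid and subtracts its own contribution to recover the passive feature exactly.

```python
import math

# Toy two-party VFL logistic regression (all numbers invented):
# the model computes p = sigmoid(w_a·x_a + w_b·x_b + b), where the active
# party holds x_a and, per the threat model, also knows the weights.
w_a, w_b, b = [0.5, -1.0], [2.0], 0.1
x_a = [1.2, 0.4]        # active party's own features
x_b_secret = [0.9]      # passive party's private feature

z = sum(w * x for w, x in zip(w_a, x_a)) + w_b[0] * x_b_secret[0] + b
p = 1 / (1 + math.exp(-z))  # confidence score returned at prediction time

# Invert the sigmoid (logit), subtract the active party's known contribution;
# with a single passive feature this recovers it exactly.
logit = math.log(p / (1 - p))
x_b_recovered = (logit - sum(w * x for w, x in zip(w_a, x_a)) - b) / w_b[0]
print(round(x_b_recovered, 6))  # → 0.9
```

With many passive features or deeper models the inversion is underdetermined, which is why the paper resorts to gradient-based optimization; the closed-form case above only demonstrates why releasing raw confidence scores is risky.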
Efficient Set Membership Proofs using MPC-in-the-Head
Aarushi Goel, M. Green, Mathias Hall-Andersen, Gabriel Kaptchuk
DOI: 10.2478/popets-2022-0047. PoPETs 2022(1), pp. 304–324. Published 2022-03-03.

Abstract: Set membership proofs are an invaluable part of privacy preserving systems. These proofs allow a prover to demonstrate knowledge of a witness w corresponding to a secret element x of a public set, such that they jointly satisfy a given NP relation, i.e. ℛ(w, x) = 1 and x is a member of a public set {x_1, ..., x_ℓ}. This allows the identity of the prover to remain hidden, as in ring signatures and confidential transactions in cryptocurrencies. In this work, we develop a new technique for efficiently adding logarithmic-sized set membership proofs to any MPC-in-the-head based zero-knowledge protocol (Ishai et al. [STOC'07]). We integrate our technique into an open source implementation of the state-of-the-art, post-quantum secure zero-knowledge protocol of Katz et al. [CCS'18]. We find that using our techniques to construct ring signatures results in signatures (based only on symmetric key primitives) that are between 5 and 10 times smaller than state-of-the-art techniques based on the same assumptions. We also show that our techniques can be used to efficiently construct post-quantum secure RingCT from only symmetric key primitives.

Citations: 4
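For readers unfamiliar with what "logarithmic-sized set membership proof" means concretely, the standard non-zero-knowledge building block is a Merkle proof: a commitment to the whole set plus log(ℓ) sibling hashes convinces a verifier that one leaf is in the set. The sketch below shows that plain mechanism only; it reveals which element is proven and is in no way the paper's MPC-in-the-head construction, which additionally hides the element.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to the whole set with a single hash."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root: logarithmic in the set size."""
    level, proof = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, leaf-is-left?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = H(leaf)
    for sibling, leaf_is_left in proof:
        node = H(node + sibling) if leaf_is_left else H(sibling + node)
    return node == root

members = [b"x1", b"x2", b"x3", b"x4"]
root = merkle_root(members)
print(verify(b"x3", merkle_proof(members, 2), root))  # → True
```

The paper's contribution is, roughly, to prove knowledge of such a path inside a zero-knowledge protocol so that neither the element nor its position leaks.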
Updatable Private Set Intersection
S. Badrinarayanan, Peihan Miao, Tiancheng Xie
DOI: 10.2478/popets-2022-0051. PoPETs 2022(1), pp. 378–406. Published 2022-03-03.

Abstract: Private set intersection (PSI) allows two mutually distrusting parties, each with a set as input, to learn the intersection of both their sets without revealing anything more about their respective input sets. Traditionally, PSI studies the static setting where the computation is performed only once on both parties' input sets. We initiate the study of updatable private set intersection (UPSI), which allows parties to compute the intersection of their private sets on a regular basis as those sets are updated. We consider two specific settings. In the first setting, called UPSI with addition, parties can add new elements to their old sets. We construct two protocols in this setting, one allowing both parties to learn the output and the other only allowing one party to learn the output. In the second setting, called UPSI with weak deletion, parties can additionally delete their old elements every t days. We present a protocol for this setting allowing both parties to learn the output. All our protocols are secure against semi-honest adversaries and have the guarantee that both the computational and communication complexity only grow with the set updates instead of the entire sets. Finally, we implement our UPSI with addition protocols and compare with the state-of-the-art PSI protocols. Our protocols compare favorably when the total set size is sufficiently large, the new updates are sufficiently small, or in networks with low bandwidth.

Citations: 3
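As background for the static setting that UPSI generalizes, the classic semi-honest Diffie-Hellman-style PSI can be sketched in a few lines: each party blinds the hashes of its elements with a secret exponent, the parties exchange and re-blind each other's values, and doubly blinded values match exactly on the intersection. This is a textbook sketch, not the paper's protocol; the group below (a Mersenne prime modulus, hash-to-group by reduction) is a toy stand-in for a proper elliptic-curve group.

```python
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime used as a toy multiplicative group modulus

def h2g(element: str) -> int:
    """Hash an element into the group (illustrative, not a secure hash-to-group)."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def blind(elements, key):
    """Raise each hashed element to a secret exponent: H(e)^key mod P."""
    return [pow(h2g(e), key, P) for e in elements]

def intersect(set_a, set_b):
    ka = secrets.randbelow(P - 2) + 1  # party A's secret exponent
    kb = secrets.randbelow(P - 2) + 1  # party B's secret exponent
    # Exponentiation commutes: H(x)^(ka*kb) == H(x)^(kb*ka), so doubly
    # blinded values agree exactly on common elements.
    a_double = [pow(v, kb, P) for v in blind(set_a, ka)]  # B re-blinds A's set
    b_double = {pow(v, ka, P) for v in blind(set_b, kb)}  # A re-blinds B's set
    return [e for e, d in zip(set_a, a_double) if d in b_double]

print(intersect(["ann", "bob", "carol"], ["bob", "dave", "carol"]))  # → ['bob', 'carol']
```

In the static protocol both parties redo work linear in their full sets on every run; the point of UPSI is to make each periodic run cost only as much as the update.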
Building a Privacy-Preserving Smart Camera System
Yohan Beugin, Quinn K. Burke, Blaine Hoak, Ryan Sheatsley, Eric Pauley, Gang Tan, Syed Rafiul Hussain, P. Mcdaniel
DOI: 10.2478/popets-2022-0034. PoPETs 2022(1), pp. 25–46. Published 2022-01-23.

Abstract: Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can (and in some instances do) access the video footage without the users' knowledge or consent, violating the core tenet of user privacy. In this paper, we present CaCTUs, a privacy-preserving smart Camera system Controlled Totally by Users. CaCTUs returns control to the user; the root of trust begins with the user and is maintained through a series of cryptographic protocols designed to support popular features, such as sharing, deleting, and viewing videos live. We show that the system can support live streaming with a latency of 2 s at a frame rate of 10 fps and a resolution of 480p. In so doing, we demonstrate that it is feasible to implement a performant smart-camera system that leverages the convenience of a cloud-based model while retaining the ability to control access to (private) data.

Citations: 4
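One way a user-held root of trust can support selective sharing, sketched below purely for illustration (this is our invented toy, not the CaCTUs protocol), is a key hierarchy: a root key that never leaves the user's device, with each video segment encrypted under an independent key derived from it, so that handing out one segment key exposes nothing else.

```python
import hmac
import hashlib

def segment_key(root_key: bytes, segment_index: int) -> bytes:
    """Derive an independent per-segment key from the user-held root key.

    HMAC acts as a PRF here: without root_key, one segment key reveals
    nothing about any other segment's key.
    """
    msg = b"video-segment|" + segment_index.to_bytes(8, "big")
    return hmac.new(root_key, msg, hashlib.sha256).digest()

root = b"\x01" * 32          # stays on the user's device (toy constant key)
k5 = segment_key(root, 5)    # handed to a viewer to share segment 5 only
print(len(k5), k5 != segment_key(root, 6))  # → 32 True
```

Deletion in such a design reduces to destroying keys; the actual paper builds a fuller protocol stack (pairing, delegation, revocation) on top of this kind of primitive.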
Visualizing Privacy-Utility Trade-Offs in Differentially Private Data Releases
Priyanka Nanayakkara, Johes Bater, Xi He, J. Hullman, Jennie Duggan
DOI: 10.2478/popets-2022-0058. PoPETs 2022(1), pp. 601–618. Published 2022-01-16.

Abstract: Organizations often collect private data and release aggregate statistics for the public's benefit. If no steps toward preserving privacy are taken, adversaries may use released statistics to deduce unauthorized information about the individuals described in the private dataset. Differentially private algorithms address this challenge by slightly perturbing underlying statistics with noise, thereby mathematically limiting the amount of information that may be deduced from each data release. Properly calibrating these algorithms, and in turn the disclosure risk for people described in the dataset, requires a data curator to choose a value for a privacy budget parameter, ɛ. However, there is little formal guidance for choosing ɛ, a task that requires reasoning about the probabilistic privacy-utility tradeoff. Furthermore, choosing ɛ in the context of statistical inference requires reasoning about accuracy trade-offs in the presence of both measurement error and differential privacy (DP) noise. We present Visualizing Privacy (ViP), an interactive interface that visualizes relationships between ɛ, accuracy, and disclosure risk to support setting and splitting ɛ among queries. As a user adjusts ɛ, ViP dynamically updates visualizations depicting expected accuracy and risk. ViP also has an inference setting, allowing a user to reason about the impact of DP noise on statistical inferences. Finally, we present results of a study where 16 research practitioners with little to no DP background completed a set of tasks related to setting ɛ using both ViP and a control.
We find that ViP helps participants more correctly answer questions related to judging the probability of where a DP-noised release is likely to fall and comparing between DP-noised and non-private confidence intervals.

Citations: 21
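The ɛ-accuracy relationship that ViP visualizes comes directly from the noise mechanism: for a count query with sensitivity 1, the Laplace mechanism adds noise of scale 1/ɛ, so expected absolute error shrinks proportionally as ɛ grows (at the cost of weaker privacy). The sketch below simulates this (a Laplace sample is the difference of two exponential samples; the counts and ɛ values are arbitrary illustrations).

```python
import random
import statistics

def laplace_release(true_count, epsilon, rng):
    """Release a count with Laplace(1/ε) noise: sensitivity-1, pure ε-DP."""
    scale = 1.0 / epsilon
    # Difference of two Exponential(1/scale) samples is Laplace(scale).
    return true_count + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

rng = random.Random(0)
for eps in (0.1, 1.0, 10.0):
    errors = [abs(laplace_release(1000, eps, rng) - 1000) for _ in range(2000)]
    # Expected absolute error is exactly 1/ε: about 10, 1, and 0.1 here.
    print(f"epsilon={eps:>4}: mean abs error ~ {statistics.mean(errors):.2f}")
```

An interface like ViP essentially renders this same simulation as interactive distributions, letting a curator see how splitting a budget across queries inflates each query's error.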