Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security: Latest Publications

Securing Container-based Clouds with Syscall-aware Scheduling
Michael V. Le, Salman Ahmed, Dan Williams, H. Jamjoom
{"title":"Securing Container-based Clouds with Syscall-aware Scheduling","authors":"Michael V. Le, Salman Ahmed, Dan Williams, H. Jamjoom","doi":"10.1145/3579856.3582835","DOIUrl":"https://doi.org/10.1145/3579856.3582835","url":null,"abstract":"Container-based clouds—in which containers are the basic unit of isolation—face security concerns because, unlike Virtual Machines, containers directly interface with the underlying highly privileged kernel through the wide and vulnerable system call interface. Regardless of whether a container itself requires dangerous system calls, a compromised or malicious container sharing the host (a bad neighbor) can compromise the host kernel using a vulnerable syscall, thereby compromising all other containers sharing the host. In this paper, rather than attempting to eliminate host compromise, we limit the effectiveness of attacks by bad neighbors to a subset of the cluster. To do this, we propose a new metric dubbed Extraneous System call Exposure (ExS). Scheduling containers to minimize ExS reduces the number of nodes that expose a vulnerable system call and as a result the number of affected containers in the cluster. Experimenting with 42 popular containers on SySched, our greedy scheduler implementation in Kubernetes, we demonstrate that SySched can reduce up to 46% more victim nodes and up to 48% more victim containers compared to the Kubernetes default scheduling while also reducing overall host attack surface by 20%.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133279392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BFU: Bayesian Federated Unlearning with Parameter Self-Sharing
Wen Wang, Zhiyi Tian, Chenhan Zhang, An Liu, Shui Yu
{"title":"BFU: Bayesian Federated Unlearning with Parameter Self-Sharing","authors":"Wen Wang, Zhiyi Tian, Chenhan Zhang, An Liu, Shui Yu","doi":"10.1145/3579856.3590327","DOIUrl":"https://doi.org/10.1145/3579856.3590327","url":null,"abstract":"As the right to be forgotten has been legislated worldwide, many studies attempt to design machine unlearning mechanisms to enable data erasure from a trained model. Existing machine unlearning studies focus on centralized learning, where the server can access all users’ data. However, in a popular scenario, federated learning (FL), the server cannot access users’ training data. In this paper, we investigate the problem of machine unlearning in FL. We formalize a federated unlearning problem and propose a bayesian federated unlearning (BFU) approach to implement unlearning for a trained FL model without sharing raw data with the server. Specifically, we first introduce an unlearning rate in BFU to balance the trade-off between forgetting the erased data and remembering the original global model, making it adaptive to different unlearning tasks. Then, to mitigate accuracy degradation caused by unlearning, we propose BFU with parameter self-sharing (BFU-SS). BFU-SS considers data erasure and maintaining learning accuracy as two tasks and optimizes them together during unlearning. Extensive comparisons between our methods and the state-of-art federated unlearning method demonstrate the superiority of our proposed realizations.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129554702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A New Look at Blockchain Leader Election: Simple, Efficient, Sustainable and Post-Quantum
Muhammed F. Esgin, O. Ersoy, Veronika Kuchta, J. Loss, A. Sakzad, Ron Steinfeld, Xiangwen Yang, Raymond K. Zhao
{"title":"A New Look at Blockchain Leader Election: Simple, Efficient, Sustainable and Post-Quantum","authors":"Muhammed F. Esgin, O. Ersoy, Veronika Kuchta, J. Loss, A. Sakzad, Ron Steinfeld, Xiangwen Yang, Raymond K. Zhao","doi":"10.1145/3579856.3595792","DOIUrl":"https://doi.org/10.1145/3579856.3595792","url":null,"abstract":"In this work, we study the blockchain leader election problem. The purpose of such protocols is to elect a leader who decides on the next block to be appended to the blockchain, for each block proposal round. Solutions to this problem are vital for the security of blockchain systems. We introduce an efficient blockchain leader election method with security based solely on standard assumptions for cryptographic hash functions (rather than public-key cryptographic assumptions) and that does not involve a racing condition as in Proof-of-Work based approaches. Thanks to the former feature, our solution provides the highest confidence in security, even in the post-quantum era. A particularly scalable application of our solution is in the Proof-of-Stake setting, and we investigate our solution in the Algorand blockchain system. We believe our leader election approach can be easily adapted to a range of other blockchain settings. At the core of Algorand’s leader election is a verifiable random function (VRF). Our approach is based on introducing a simpler primitive which still suffices for the blockchain leader election problem. In particular, we analyze the concrete requirements in an Algorand-like blockchain setting to accomplish leader election, which leads to the introduction of indexed VRF (iVRF). An iVRF satisfies modified uniqueness and pseudorandomness properties (versus a full-fledged VRF) that enable an efficient instantiation based on a hash function without requiring any complicated zero-knowledge proofs of correct PRF evaluation. We further extend iVRF to an authenticated iVRF with forward-security, which meets all the requirements to establish an Algorand-like consensus. Our solution is simple, flexible and incurs only a 32-byte additional overhead when combined with the current best solution to constructing a forward-secure signature (in the post-quantum setting). We implemented our (authenticated) iVRF proposal in C language on a standard computer and show that it significantly outperforms other quantum-safe VRF proposals in almost all metrics. Particularly, iVRF evaluation and verification can be executed in 0.02 ms, which is even faster than ECVRF used in Algorand.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115238140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
IGA: An Improved Genetic Algorithm to Construct Weightwise (Almost) Perfectly Balanced Boolean Functions with High Weightwise Nonlinearity
Lili Yan, Jingyi Cui, Jian Liu, Guangquan Xu, Lidong Han, Alireza Jolfaei, Xi Zheng
{"title":"IGA : An Improved Genetic Algorithm to Construct Weightwise (Almost) Perfectly Balanced Boolean Functions with High Weightwise Nonlinearity","authors":"Lili Yan, Jingyi Cui, Jian Liu, Guangquan Xu, Lidong Han, Alireza Jolfaei, Xi Zheng","doi":"10.1145/3579856.3590337","DOIUrl":"https://doi.org/10.1145/3579856.3590337","url":null,"abstract":"The Boolean functions satisfying secure properties on the restricted sets of inputs are studied recently due to their importance in the framework of the FLIP stream cipher. However, finding Boolean functions with optimal cryptographic properties is an open research problem in the cryptographic community. This paper presents an Improved Genetic Algorithm (IGA) with the directed changes that keep the weightwise balancedness of Boolean functions. A cross-protection strategy is proposed to ensure that the offspring has the same weightwise balancedness characteristics of the parents while implementing crossover. Then, a large number of weightwise (almost) perfectly balanced (W(A)PB) functions with a good nonlinearity profile are obtained based on IGA. Finally, we make comparisons between our constructions and relevant works. The comparisons show that IGA has a significant advantage for reaching the W(A)PB functions with high weightwise nonlinearity. Moreover, it is the first time to obtain the 8-variable WPB functions with the weightwise nonlinearity of 28 in the restricted sets of inputs with Hamming weight of 4, and list the statistical indicators of the weightwise nonlinearity for W(A)PB functions for input size n = 9, 10.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"38 33","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113934298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SoK: Systematizing Attack Studies in Federated Learning – From Sparseness to Completeness
Geetanjli Sharma, Pathum Chamikara Mahawaga Arachchige, Mohan Baruwal Chhetri, Yi-Ping Phoebe Chen
{"title":"SoK: Systematizing Attack Studies in Federated Learning – From Sparseness to Completeness","authors":"Geetanjli Sharma, Pathum Chamikara Mahawaga Arachchige, Mohan Baruwal Chhetri, Yi-Ping Phoebe Chen","doi":"10.1145/3579856.3590328","DOIUrl":"https://doi.org/10.1145/3579856.3590328","url":null,"abstract":"Federated Learning (FL) is a machine learning technique that enables multiple parties to collaboratively train a model using their private datasets. Given its decentralized nature, FL has inherent vulnerabilities that make it susceptible to adversarial attacks. The success of an attack on FL depends upon several (latent) factors, including the adversary’s strength, the chosen attack strategy, and the effectiveness of the defense measures in place. There is a growing body of literature on empirical attack studies on FL, but no systematic way to compare and evaluate the completeness of these studies, which raises questions about their validity. To address this problem, we introduce a causal model that captures the relationship between the different (latent) factors, and their reflexive indicators, that can impact the success of an attack on FL. The proposed model, inspired by structural equation modeling, helps systematize the existing literature on FL attack studies and provides a way to compare and contrast their completeness. We validate the model and demonstrate its utility through experimental evaluation of select attack studies. Our aim is to help researchers in the FL domain design more complete attack studies and improve the understanding of FL vulnerabilities.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"54 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114111085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Going Haywire: False Friends in Federated Learning and How to Find Them
William Aiken, Paula Branco, Guy-Vincent Jourdan
{"title":"Going Haywire: False Friends in Federated Learning and How to Find Them","authors":"William Aiken, Paula Branco, Guy-Vincent Jourdan","doi":"10.1145/3579856.3595790","DOIUrl":"https://doi.org/10.1145/3579856.3595790","url":null,"abstract":"Federated Learning (FL) promises to offer a major paradigm shift in the way deep learning models are trained at scale, yet malicious clients can surreptitiously embed backdoors into models via trivial augmentation on their own subset of the data. This is especially true in small- and medium-scale FL systems, which consist of dozens, rather than millions, of clients. In this work, we investigate a novel attack scenario for an FL architecture consisting of multiple non-i.i.d. silos of data in which each distribution has a unique backdoor attacker and where the model convergences of adversaries are not more similar than those of benign clients. We propose a new method, dubbed Haywire, as a security-in-depth approach to respond to this novel attack scenario. Our defense utilizes a combination of kPCA dimensionality reduction of fully-connected layers in the network, KMeans anomaly detection to drop anomalous clients, and server aggregation robust to outliers via the Geometric Median. Our solution prevents the contamination of the global model despite having no access to the backdoor triggers. We evaluate the performance of Haywire from model-accuracy, defense-performance, and attack-success perspectives against multiple baselines. Through an extensive set of experiments, we find that Haywire produces the best performances at preventing backdoor attacks while simultaneously not unfairly penalizing benign clients. We carried out additional in-depth experiments across multiple runs that demonstrate the reliability of Haywire.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116540697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
POSTER: Leveraging eBPF to enhance sandboxing of WebAssembly runtimes
M. Abbadini, M. Beretta, Dario Facchinetti, Gianluca Oldani, Matthew Rossi, S. Paraboschi
{"title":"POSTER: Leveraging eBPF to enhance sandboxing of WebAssembly runtimes","authors":"M. Abbadini, M. Beretta, Dario Facchinetti, Gianluca Oldani, Matthew Rossi, S. Paraboschi","doi":"10.1145/3579856.3592831","DOIUrl":"https://doi.org/10.1145/3579856.3592831","url":null,"abstract":"WebAssembly is a binary instruction format designed as a portable compilation target enabling the deployment of untrusted code in a safe and efficient manner. While it was originally designed to be run inside web browsers, modern runtimes like Wasmtime and WasmEdge can execute WebAssembly directly on various systems. In order to access system resources with a universal hostcall interface, a standardization effort named WebAssembly System Interface (WASI) is currently undergoing. With specific regard to the file system, runtimes must prevent hostcalls to access arbitrary locations, thus they introduce security checks to only permit access to a pre-defined list of directories. This approach not only suffers from poor granularity, it is also error-prone and has led to several security issues. In this work we replace the security checks in hostcall wrappers with eBPF programs, enabling the introduction of fine-grained per-module policies. Preliminary experiments confirm that our approach introduces limited overhead to existing runtimes.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121979227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks
Mengyao Ma, Yanjun Zhang, Pathum Chamikara Mahawaga Arachchige, L. Zhang, Mohan Baruwal Chhetri, Guangdong Bai
{"title":"LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks","authors":"Mengyao Ma, Yanjun Zhang, Pathum Chamikara Mahawaga Arachchige, L. Zhang, Mohan Baruwal Chhetri, Guangdong Bai","doi":"10.1145/3579856.3590334","DOIUrl":"https://doi.org/10.1145/3579856.3590334","url":null,"abstract":"Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In MIA, adversaries maliciously manipulate the local updates on selected samples and share the gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on samples locally, an adversary can distinguish whether the attacked sample is a training sample based on observation of the change of the sample’s prediction. This type of attack exacerbates traditional passive MIA, yet the defense mechanisms remain largely unexplored. In this work, we first investigate the effectiveness of the existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient in mitigating poisoning MIA, as it targets specific victim samples and has minimal impact on model performance, unlike general poisoning. Thus, we propose a new client-side defense mechanism, called LoDen, which leverages the clients’ unique ability to detect any suspicious privacy attacks. We theoretically quantify the membership information leaked to the poisoning MIA and provide a bound for this leakage in LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128065708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Boost Off/On-Manifold Adversarial Robustness for Deep Learning with Latent Representation Mixup
Mengdie Huang, Yi Xie, Xiaofeng Chen, Jin Li, Chang Dong, Zheli Liu, Willy Susilo
{"title":"Boost Off/On-Manifold Adversarial Robustness for Deep Learning with Latent Representation Mixup","authors":"Mengdie Huang, Yi Xie, Xiaofeng Chen, Jin Li, Chang Dong, Zheli Liu, Willy Susilo","doi":"10.1145/3579856.3595786","DOIUrl":"https://doi.org/10.1145/3579856.3595786","url":null,"abstract":"Deep neural networks excel at solving intuitive tasks that are hard to describe formally, such as classification, but are easily deceived by maliciously crafted samples, leading to misclassification. Recently, it has been observed that the attack-specific robustness of models obtained through adversarial training does not generalize well to novel or unseen attacks. While data augmentation through mixup in the input space has been shown to improve the generalization and robustness of models, there has been limited research progress on mixup in the latent space. Furthermore, almost no research on mixup has considered the robustness of models against emerging on-manifold adversarial attacks. In this paper, we first design a latent-space data augmentation strategy called dual-mode manifold interpolation, which allows for interpolating disentangled representations of source samples in two modes: convex mixing and binary mask mixing, to synthesize semantic samples. We then propose a resilient training framework, LatentRepresentationMixup (LarepMixup), that employs mixed examples and softlabel-based cross-entropy loss to refine the boundary. Experimental investigations on diverse datasets (CIFAR-10, SVHN, ImageNet-Mixed10) demonstrate that our approach delivers competitive performance in training models that are robust to off/on-manifold adversarial example attacks compared to leading mixup training techniques.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124516751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ZEKRA: Zero-Knowledge Control-Flow Attestation
Heini Bergsson Debes, Edlira Dushku, Thanassis Giannetsos, Ali Marandi
{"title":"ZEKRA: Zero-Knowledge Control-Flow Attestation","authors":"Heini Bergsson Debes, Edlira Dushku, Thanassis Giannetsos, Ali Marandi","doi":"10.1145/3579856.3582833","DOIUrl":"https://doi.org/10.1145/3579856.3582833","url":null,"abstract":"To detect runtime attacks against programs running on a remote computing platform, Control-Flow Attestation (CFA) lets a (trusted) verifier determine the legality of the program’s execution path, as recorded and reported by the remote platform (prover). However, besides complicating scalability due to verifier complexity, this assumption regarding the verifier’s trustworthiness renders existing CFA schemes prone to privacy breaches and implementation disclosure attacks under “honest-but-curious” adversaries. Thus, to suppress sensitive details from the verifier, we propose to have the prover outsource the verification of the attested execution path to an intermediate worker of which the verifier only learns the result. However, since a worker might be dishonest about the outcome of the verification, we propose a purely cryptographical solution of transforming the verification of the attested execution path into a verifiable computational task that can be reliably outsourced to a worker without relying on any trusted execution environment. Specifically, we propose to express a program-agnostic execution path verification task inside an arithmetic circuit whose correct execution can be verified by untrusted verifiers in zero knowledge.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122432981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0