Proceedings of the Seventeenth European Conference on Computer Systems: Latest Publications

Sharing is caring: secure and efficient shared memory support for MVEEs
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519558
Jonas Vinck, Bert Abrath, Bart Coppens, Alexios Voulimeneas, B. De Sutter, Stijn Volckaert
{"title":"Sharing is caring: secure and efficient shared memory support for MVEEs","authors":"Jonas Vinck, Bert Abrath, Bart Coppens, Alexios Voulimeneas, B. De Sutter, Stijn Volckaert","doi":"10.1145/3492321.3519558","DOIUrl":"https://doi.org/10.1145/3492321.3519558","url":null,"abstract":"Multi-Variant Execution Environments (MVEEs) are a powerful tool for protecting legacy software against memory corruption attacks. MVEEs employ software diversity to run multiple variants of the same program in lockstep, whilst providing them with the same inputs and comparing their behavior. Well-constructed variants will behave equivalently under normal operating conditions but diverge when under attack. The MVEE detects these divergences and takes action before compromised variants can damage the host system. Existing MVEEs replicate inputs at the system call boundary, and therefore do not support programs that use shared-memory IPC with other processes, since shared memory pages can be read from and written to directly without system calls. We analyzed modern applications, ranging from web servers, over media players, to browsers, and observe that they rely heavily on shared memory, in some cases for their basic functioning and in other cases for enabling more advanced functionality. It follows that modern applications cannot enjoy the security provided by MVEEs unless those MVEEs support shared-memory IPC. This paper first identifies the requirements for supporting shared-memory IPC in an MVEE. We propose a design that involves techniques to identify and instrument accesses to shared memory pages, as well as techniques to replicate I/O through shared-memory IPC. We implemented these techniques in a prototype MVEE and report our findings through an evaluation of a range of benchmark programs. Our contributions enable the use of MVEEs on a far wider range of programs than previously supported. By overcoming one of the major remaining limitations of MVEEs, our contributions can help to bolster their real-world adoption.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122667756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
DeepRest
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519564
Ka-Ho Chow, Umesh Deshpande, S. Seshadri, Ling Liu
{"title":"DeepRest","authors":"Ka-Ho Chow, Umesh Deshpande, S. Seshadri, Ling Liu","doi":"10.1145/3492321.3519564","DOIUrl":"https://doi.org/10.1145/3492321.3519564","url":null,"abstract":"Interactive microservices expose API endpoints to be invoked by users. For such applications, precisely estimating the resources required to serve specific API traffic is challenging. This is because an API request can interact with different components and consume different resources for each component. The notion of API traffic is vital to application owners since the API endpoints often reflect business logic, e.g., a customer transaction. The existing systems that simply rely on historical resource utilization are not API-aware and thus cannot estimate the resource requirement accurately. This paper presents DeepRest, a deep learning-driven resource estimation system. DeepRest formulates resource estimation as a function of API traffic and learns the causality between user interactions and resource utilization directly in a production environment. Our evaluation shows that DeepRest can estimate resource requirements with over 90% accuracy, even if the API traffic to be estimated has never been observed (e.g., 3× more users than ever or unseen traffic shape). We further apply resource estimation for application sanity checks. DeepRest identifies system anomalies by verifying whether the resource utilization is justifiable by how the application is being used. It can successfully identify two major cyber threats: ransomware and cryptojacking attacks.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"61 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120902024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Fleche
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519554
Minhui Xie, Youyou Lu, Jiazhen Lin, Qing Wang, Jian Gao, K. Ren, J. Shu
{"title":"Fleche","authors":"Minhui Xie, Youyou Lu, Jiazhen Lin, Qing Wang, Jian Gao, K. Ren, J. Shu","doi":"10.1145/3492321.3519554","DOIUrl":"https://doi.org/10.1145/3492321.3519554","url":null,"abstract":"Deep learning based models have dominated current production recommendation systems. However, the gap between CPU-side DRAM data accessing and GPU processing still impedes their inference performance. GPU-resident cache can bridge this gap, but we find that existing systems leave the benefits to cache the embedding table, a huge sparse structure, on GPU unexploited. In this paper, we present Fleche, a holistic cache scheme with detailed designs for efficient GPU-resident embedding caching. Fleche (1) uses one cache backend for all embedding tables to improve the total cache utilization, and (2) merges small kernel calls into one unitary call to reduce the overhead of kernel maintenance (e.g., kernel launching and synchronizing). Furthermore, we carefully design the cache query workflow for finer-grain parallelism. Evaluations with real-world datasets show that compared with the prior art, Fleche significantly improves the throughput of embedding layer by 2.0 -- 5.4×, and gets up to 2.4× speedup of end-to-end inference throughput.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"14 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113959771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
KASLR in the age of MicroVMs
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519578
Benjamin Holmes, J. Waterman, Dan Williams
{"title":"KASLR in the age of MicroVMs","authors":"Benjamin Holmes, J. Waterman, Dan Williams","doi":"10.1145/3492321.3519578","DOIUrl":"https://doi.org/10.1145/3492321.3519578","url":null,"abstract":"Address space layout randomization (ASLR) is a widely used component of computer security aimed at preventing code reuse and/or data-only attacks. Modern kernels utilize kernel ASLR (KASLR) and finer-grained forms, such as functional granular KASLR (FGKASLR), but do so as part of an inefficient bootstrapping process we call bootstrap self-randomization. Meanwhile, under increasing pressure to optimize their boot times, microVM architectures such as AWS Firecracker have resorted to eliminating bootstrapping steps, particularly decompression and relocation from the guest kernel boot process, leaving them without KASLR. In this paper, we present in-monitor KASLR, in which the virtual machine monitor efficiently implements KASLR for the guest kernel by skipping the expensive kernel self-relocation steps. We prototype in-monitor KASLR and FGKASLR in the open-source Firecracker virtual machine monitor demonstrating, on a microVM configured kernel, boot times 22% and 16% faster than bootstrapped KASLR and FGKASLR methods, respectively. We also show the low overhead of in-monitor KASLR, with only 4% (2 ms) increase in boot times on average compared to a kernel without KASLR. We also discuss the implications and future opportunities for in-monitor approaches.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125715923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
APT-GET: profile-guided timely software prefetching
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519583
Saba Jamilan, Tanvir Ahmed Khan, Grant Ayers, Baris Kasikci, Heiner Litz
{"title":"APT-GET: profile-guided timely software prefetching","authors":"Saba Jamilan, Tanvir Ahmed Khan, Grant Ayers, Baris Kasikci, Heiner Litz","doi":"10.1145/3492321.3519583","DOIUrl":"https://doi.org/10.1145/3492321.3519583","url":null,"abstract":"Prefetching which predicts future memory accesses and preloads them from main memory, is a widely-adopted technique to overcome the processor-memory performance gap. Unfortunately, hardware prefetchers implemented in today's processors cannot identify complex and irregular memory access patterns exhibited by modern data-driven applications and hence developers need to rely on software prefetching techniques. We investigate the challenges of enabling effective, automated software data prefetching. Our investigation reveals that the state-of-the-art compiler-based prefetching mechanism falls short in achieving high performance due to its static nature. Based on this insight, we design APT-GET, a novel profile-guided technique that ensures prefetch timeliness by leveraging dynamic execution time information. APT-GET leverages efficient hardware support such as Intel's Last Branch Record (LBR), for collecting application execution profiles with negligible overhead to characterize the execution time of loads. APT-GET then introduces a novel analytical model to find the optimal prefetch-distance and prefetch injection site based on the collected profile to enable timely prefetches. We study APT-GET in the context of 10 real-world applications and demonstrate that it achieves a speedup of up to 1.98× and of 1.30× on average. By ensuring prefetch timeliness, APT-GET improves the performance by 1.25× over the state-of-the-art software data prefetching mechanism.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127649612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
LiteReconfig
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519577
Ran Xu, Jayoung Lee, Pengcheng Wang, S. Bagchi, Yin Li, S. Chaterji
{"title":"LiteReconfig","authors":"Ran Xu, Jayoung Lee, Pengcheng Wang, S. Bagchi, Yin Li, S. Chaterji","doi":"10.1145/3492321.3519577","DOIUrl":"https://doi.org/10.1145/3492321.3519577","url":null,"abstract":"An adaptive video object detection system selects different execution paths at runtime, based on video content and available resources, so as to maximize accuracy under a target latency objective (e.g., 30 frames per second). Such a system is well suited to mobile devices with limited computing resources, and often running multiple contending applications. Existing solutions suffer from two major drawbacks. First, collecting feature values to decide on an execution branch is expensive. Second, there is a switching overhead for transitioning between branches and this overhead depends on the transition pair. LiteReconfig, an efficient and adaptive video object detection framework, addresses these challenges. LiteReconfig features a cost-benefit analyzer to decide which features to use, and which execution branch to run, at inference time. Furthermore, LiteReconfig has a content-aware accuracy prediction model, to select an execution branch tailored for frames in a video stream. We demonstrate that LiteReconfig achieves significantly improved accuracy under a set of varying latency objectives than existing systems, while maintaining up to 50 fps on an NVIDIA AGX Xavier board. Our code, with DOI, is available at https://doi.org/10.5281/zenodo.6345733.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124335730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Characterizing the performance of intel optane persistent memory: a close look at its on-DIMM buffering
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519556
Lingfeng Xiang, Xingsheng Zhao, J. Rao, Song Jiang, Hong Jiang
{"title":"Characterizing the performance of intel optane persistent memory: a close look at its on-DIMM buffering","authors":"Lingfeng Xiang, Xingsheng Zhao, J. Rao, Song Jiang, Hong Jiang","doi":"10.1145/3492321.3519556","DOIUrl":"https://doi.org/10.1145/3492321.3519556","url":null,"abstract":"We present a comprehensive and in-depth study of Intel Optane DC persistent memory (DCPMM). Our focus is on exploring the internal design of Optane's on-DIMM read-write buffering and its impacts on application-perceived performance, read and write amplifications, the overhead of different types of persists, and the tradeoffs between persistency models. While our measurements confirm the results of the existing profiling studies, we have new discoveries and offer new insights. Notably, we find that read and write are managed differently in separate on-DIMM read and write buffers. Comparable in size, the two buffers serve distinct purposes. The read buffer offers higher concurrency and effective on-DIMM prefetching, leading to high read bandwidth and superior sequential performance. However, it does not help hide media access latency. In contrast, the write buffer offers limited concurrency but is a critical stage in a pipeline that supports asynchronous write in the DDR-T protocol. Surprisingly, in addition to write coalescing, the write buffer delivers lower than read and consistent write latency regardless of the working set size, the type of write, the access pattern, or the persistency model. Furthermore, we discover that the mismatch between cacheline access granularity and the 3D-Xpoint media access granularity negatively impacts the effectiveness of CPU cache prefetching and leads to wasted persistent memory bandwidth. Our proposition is to decouple read and write in the performance analysis and optimization of persistent programs. We present three case studies based on this insight and demonstrate considerable performance improvements. We verify the results on two generations of Optane DCPMM.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126959030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Hardening binaries against more memory errors
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519580
Gregory J. Duck, Yuntong Zhang, R. Yap
{"title":"Hardening binaries against more memory errors","authors":"Gregory J. Duck, Yuntong Zhang, R. Yap","doi":"10.1145/3492321.3519580","DOIUrl":"https://doi.org/10.1145/3492321.3519580","url":null,"abstract":"Memory errors, such as buffer overflows and use-after-free, remain the root cause of many security vulnerabilities in modern software. The use of closed source software further exacerbates the problem, as source-based memory error mitigation cannot be applied. While many memory error detection tools exist, most are based on a single error detection methodology with resulting known limitations, such as incomplete memory error detection (redzones) or false error detections (low-fat pointers). In this paper we introduce RedFat, a memory error hardening tool for stripped binaries that is fast, practical and scalable. The core idea behind RedFat is to combine complementary error detection methodologies---redzones and low-fat pointers---in order to detect more memory errors that can be detected by each individual methodology alone. However, complementary error detection also inherits the limitations of each approach, such as false error detections from low-fat pointers. To mitigate this, we introduce a profile-based analysis that automatically determines the strongest memory error protection possible without negative side effects. We implement RedFat on top of a scalable binary rewriting framework, and demonstrate low overheads compared to the current state-of-the-art. We show RedFat to be language agnostic on C/C++/Fortran binaries with minimal requirements, and works with stripped binaries for both position independent/dependent code. We also show that the RedFat instrumentation can scale to very large/complex binaries, such as Google Chrome.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126836007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Verified programs can party: optimizing kernel extensions via post-verification merging
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519562
H. Kuo, Kaiyu Chen, Yicheng Lu, Daniel W. Williams, Sibin Mohan, Tianyi Xu
{"title":"Verified programs can party: optimizing kernel extensions via post-verification merging","authors":"H. Kuo, Kaiyu Chen, Yicheng Lu, Daniel W. Williams, Sibin Mohan, Tianyi Xu","doi":"10.1145/3492321.3519562","DOIUrl":"https://doi.org/10.1145/3492321.3519562","url":null,"abstract":"Operating system (OS) extensions are more popular than ever. For example, Linux BPF is marketed as a \"superpower\" that allows user programs to be downloaded into the kernel, verified to be safe and executed at kernel hook points. So, BPF extensions have high performance and are often placed at performance-critical paths for tracing and filtering. However, although BPF extension programs execute in a shared kernel environment and are already individually verified, they are often executed independently in chains. We observe that the chain pattern has large performance overhead, due to indirect jumps penalized by security mitigations (e.g., Spectre), loops, and memory accesses. In this paper, we argue for a separation of concerns. We propose to decouple the execution of BPF extensions from their verification requirements---BPF extension programs can be collectively optimized, after each BPF extension program is individually verified and loaded into the shared kernel. We present KFuse, a framework that dynamically and automatically merges chains of BPF programs by transforming indirect jumps into direct jumps, unrolling loops, and saving memory accesses, without loss of security or flexibility. KFuse can merge BPF programs that are (1) installed by multiple principals, (2) maintained to be modular and separate, (3) installed at different points of time, and (4) split into smaller, verifiable programs via BPF tail calls. KFuse demonstrates 85% performance improvement of BPF chain execution and 7% of application performance improvement over existing BPF use cases (systemd's Seccomp BPF filters). It achieves more significant benefits for longer chains.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122414708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Optimizing the interval-centric distributed computing model for temporal graph algorithms
Pub Date: 2022-03-28  DOI: 10.1145/3492321.3519588
Animesh Baranawal, Yogesh L. Simmhan
{"title":"Optimizing the interval-centric distributed computing model for temporal graph algorithms","authors":"Animesh Baranawal, Yogesh L. Simmhan","doi":"10.1145/3492321.3519588","DOIUrl":"https://doi.org/10.1145/3492321.3519588","url":null,"abstract":"Temporal graphs assign lifespans to their vertices, edges and attributes. Large temporal graphs are common for finding the shortest paths in transit networks and contact tracing for COVID-19. Graph programming abstractions like Interval-centric Computing Model (ICM) extend Google's Pregel model to intuitively compose and execute time-dependent graph algorithms in a distributed environment. However, the benefits of easier algorithmic design in ICM are offset by performance bottlenecks in its TimeWarp shuffle and messaging phases. Here, we design several optimizations to ICM to reduce these overheads. We propose local optimizations within a vertex execution by unrolling messages before TimeWarp (LU), and deferring messaging till all local computations complete (DS). We also temporally partition the interval graph into windows (WICM) to flatten the execution load. We offer a proof of equivalence between ICM and these techniques. Our detailed empirical evaluation for six real-world graphs with up to 133M vertices, 5.5B edges and 365 time-points, for six temporal traversal algorithms executing on a commodity cluster with 8 nodes, shows that LU, DS and WICM together significantly reduce the average algorithm runtime by ≈ 61% (≈ 15 mins) over ICM, and reduce message communication by ≈ 38%(≈ 3.2B) on average.","PeriodicalId":196414,"journal":{"name":"Proceedings of the Seventeenth European Conference on Computer Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131119994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4