2020 IEEE International Symposium on Workload Characterization (IISWC): Latest Publications

Characterizing the Scale-Up Performance of Microservices using TeaStore
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00014
Sriyash Caculo, K. Lahiri, Subramaniam Kalambur
Abstract: Cloud-based applications architected using microservices are becoming increasingly common. While recent work has studied how to optimize the performance of these applications at the data-center level, comparatively little is known about how these services utilize end-server compute resources. Major advances have been made in recent years in terms of the compute density offered by cloud servers, thanks to the emergence of mainstream, high-core-count CPU designs. Consequently, it has become equally important to understand the ability of microservices to "scale up" within a server and make effective use of available resources. This paper presents a study of a publicly available microservice-based application on a state-of-the-art x86 server supporting 128 logical CPUs per socket. We highlight the significant performance opportunities that exist when the scaling properties of individual services and knowledge of the underlying processor topology are properly exploited. Using such techniques, we demonstrate a throughput uplift of 22% and a latency reduction of 18% over a performance-tuned baseline of our microservices workload. In addition, we describe how such microservice-based applications are distinct from workloads commonly used for designing general-purpose server processors.
Citations: 7
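The topology-aware placement idea behind the TeaStore result can be illustrated with a small sketch. The paper does not publish its placement code; the grouping below (eight logical CPUs per core complex on a 128-CPU socket, services assigned round-robin to complexes so each stays within one cache domain) is a hypothetical illustration, and the service names are made up.

```python
def core_groups(n_cpus=128, group_size=8):
    """Split logical CPUs into contiguous groups, e.g. one per core complex."""
    return [list(range(i, i + group_size)) for i in range(0, n_cpus, group_size)]

def assign_services(services, groups):
    """Round-robin each service onto a CPU group so it stays in one cache domain."""
    return {svc: groups[i % len(groups)] for i, svc in enumerate(services)}

groups = core_groups()
placement = assign_services(["webui", "auth", "persistence", "recommender", "image"], groups)
print(placement["auth"])  # → [8, 9, 10, 11, 12, 13, 14, 15]
```

On Linux, each service process could then be pinned to its group with `os.sched_setaffinity`; the point of the paper's result is that such explicit, topology-aware pinning beats leaving placement to the scheduler.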
Scalable and Fast Lazy Persistency on GPUs
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00032
Ardhi Wiratama Baskara Yudha, K. Kimura, Huiyang Zhou, Yan Solihin
Abstract: GPU applications, including many scientific and machine learning applications, increasingly demand larger memory capacity. NVM promises higher density than DRAM and better future scaling potential. Long-running GPU applications can benefit from NVM by exploiting its persistency, allowing crash recovery of data in memory. In this paper, we propose mapping Lazy Persistency (LP) to GPUs and identify the design space of such a mapping. We then characterize LP performance on GPUs, varying the checksum type, reduction method, use of locking, and hash table design. Armed with insights into the performance bottlenecks, we propose a hash-table-less method that performs well on hundreds and thousands of threads, achieving persistency with nearly negligible (2.1%) slowdown for a variety of representative benchmarks. We also propose directive-based programming-language support to simplify the effort of adding LP to GPU applications.
Citations: 7
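Lazy Persistency's core idea — persist data regions together with a checksum so recovery can detect which regions survived a crash, instead of eagerly fencing every write — can be sketched in plain Python. The GPU and NVM specifics are beyond a few lines; `zlib.crc32` stands in for the paper's checksum choices, and a dict stands in for persistent memory.

```python
import zlib

durable = {}  # stands in for a persistent-memory region

def lazy_persist(region_id, values):
    """Write data plus its checksum; no per-write ordering fences needed."""
    data = bytes(values)
    durable[region_id] = (data, zlib.crc32(data))

def recover(region_id):
    """After a crash, trust a region only if its checksum still matches."""
    data, checksum = durable[region_id]
    return list(data) if zlib.crc32(data) == checksum else None

lazy_persist(0, [1, 2, 3])
print(recover(0))  # → [1, 2, 3]
```

A region whose data and checksum were torn by a crash fails the check and is recomputed instead of trusted — that detection, not prevention, is what makes the scheme "lazy".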
Selective Event Processing for Energy Efficient Mobile Gaming with SNIP
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00035
Prasanna Venkatesh Rengasamy, Haibo Zhang, Shulin Zhao, A. Sivasubramaniam, M. Kandemir, C. Das
Abstract: Gaming is an important class of workloads for mobile devices. Games are not only one of the biggest markets for game developers and app stores, but also among the most stressful applications for the SoC. In these workloads, much of the computation is user-driven, i.e., events captured from sensors drive the computation to be performed. Consequently, event processing constitutes the bulk of the energy drain for these applications. To address this problem, we conduct a detailed characterization of event-processing activities in several popular games and show that (i) some events are exactly repetitive in their inputs, requiring no processing at all; or (ii) a significant number of events are redundant in that, even if their inputs differ, the output matches events already processed. Memoization is an obvious choice for optimizing such behavior; however, the problem is much more challenging in this context because the computation can span functional/OS boundaries, and the input space required for tables can take gigabytes of storage. Instead, our Selecting Necessary InPuts (SNIP) software solution uses machine learning to isolate the input features that really need to be tracked, considerably shrinking the memoization tables. We show that SNIP can save up to 32% of the energy in these games without requiring any hardware modifications.
Citations: 0
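The memoize-on-selected-features idea behind SNIP can be sketched generically. SNIP itself learns which input features matter; here the selected features are simply hard-coded for illustration, and the decorator name, event fields, and handler are all hypothetical, not from the paper.

```python
def memo_on(features):
    """Memoize a handler keyed only on the selected input features."""
    def wrap(handler):
        table = {}
        def wrapped(event):
            key = tuple(event[f] for f in features)
            if key not in table:          # redundant events hit the table
                table[key] = handler(event)
            return table[key]
        wrapped.table = table
        return wrapped
    return wrap

@memo_on(["x", "y"])  # pretend a learned model picked touch coordinates only
def handle_touch(event):
    return (event["x"] // 10, event["y"] // 10)  # stand-in for expensive work

handle_touch({"x": 15, "y": 27, "timestamp": 1})
handle_touch({"x": 15, "y": 27, "timestamp": 2})  # hit: timestamp is ignored
print(len(handle_touch.table))  # → 1
```

Keying on a learned feature subset rather than the full event is what keeps the table small — the full input space, as the abstract notes, would take gigabytes.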
A Rigorous Benchmarking and Performance Analysis Methodology for Python Workloads
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00017
Arthur Crapé, L. Eeckhout
Abstract: Computer architecture and computer systems research and development is heavily driven by benchmarking and performance analysis. It is thus of paramount importance that rigorous methodologies are used to draw correct conclusions and steer research and development in the right direction. While rigorous methodologies are widely used for native and managed programming language workloads, scripting language workloads are subject to ad-hoc methodologies that lead to incorrect and misleading conclusions. In particular, we find incorrect public statements regarding different virtual machines for Python, the most popular scripting language. The incorrect conclusion is a result of using the geometric mean speedup and not making a distinction between start-up and steady-state performance. In this paper, we propose a statistically rigorous benchmarking and performance analysis methodology for Python workloads, which distinguishes start-up from steady-state performance and summarizes average performance across a set of benchmarks using the harmonic mean speedup. We find that a rigorous methodology makes a difference in practice. In particular, we find that the PyPy JIT compiler outperforms the CPython interpreter by 1.76× for steady-state while being 2% slower for start-up, which refutes the statement on the PyPy website that 'PyPy outperforms CPython by 4.4× on average' — a figure based on the geometric mean speedup without distinguishing start-up from steady-state. We use the proposed methodology to analyze Python workloads, yielding several interesting findings regarding PyPy versus CPython performance, start-up versus steady-state performance, the impact of a workload's input size, and Python workload execution characteristics at the microarchitecture level.
Citations: 2
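The paper's central methodological point — that the harmonic mean, not the geometric mean, is the right way to summarize speedups across benchmarks — is easy to demonstrate. The speedup values below are made up for illustration: one benchmark with a large win inflates the geometric mean, while the harmonic mean reflects total execution time.

```python
import math

speedups = [10.0, 1.0, 1.0, 1.0]  # one benchmark dominates

geo = math.prod(speedups) ** (1 / len(speedups))
har = len(speedups) / sum(1 / s for s in speedups)

print(round(geo, 2))  # → 1.78: suggests a large average win
print(round(har, 2))  # → 1.29: the total-time view is far more modest
```

The harmonic mean equals the speedup of the whole suite run back-to-back (equal baseline times), which is why the paper argues it is the meaningful summary statistic.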
Demystifying Power and Performance Bottlenecks in Autonomous Driving Systems
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00028
P. H. E. Becker, J. Arnau, Antonio González
Abstract: Autonomous Vehicles (AVs) have the potential to radically change the automotive industry. However, computing solutions for AVs have to meet severe performance and power constraints to guarantee a safe driving experience. Current solutions either exhibit high cost and power dissipation or fail to meet the stringent latency constraints. Therefore, the popularization of AVs requires a low-cost yet effective computing system. Understanding the sources of latency and energy consumption is key to improving autonomous driving systems. In this paper, we present a detailed characterization of Autoware, a modern self-driving car system. We analyze the performance and power of the different components and leverage hardware counters to identify the main bottlenecks. Our approach to AV characterization avoids pitfalls of previous works: profiling individual components in isolation and neglecting LiDAR-related components. We base our characterization on a rigorous methodology that considers the entire software stack. Profiling the end-to-end system accounts for interference and contention among different components that run in parallel, also including memory transfers to communicate data. We show that all these factors have a high impact on latency and cannot be measured by profiling isolated modules. Our characterization provides novel insights, including the following. First, contention among different modules drastically impacts latency and performance predictability. Second, LiDAR-related components are important contributors to the latency of the system. Finally, a modern platform with a high-end CPU and GPU cannot achieve real-time performance when considering the entire end-to-end system.
Citations: 11
Port or Shim? Stress Testing Application Performance on Intel SGX
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00021
Aisha Hasan, Ryan D. Riley, D. Ponomarev
Abstract: Intel's newer processors come equipped with Software Guard Extensions (SGX) technology, allowing developers to write sections of code that run in a protected area of memory known as an enclave. In this work, we compare the performance of two scenarios for running existing code on SGX. In one, a developer manually ports the code to SGX. In the other, a shim layer and library OS are used to run the code unmodified on SGX. Our initial results demonstrate that when running an existing benchmarking tool under SGX, in addition to being much faster for development, code running in the library OS also tends to run at the same speed or faster than code that is manually ported. After obtaining this result, we design a series of microbenchmarks to characterize exactly what types of workloads would benefit from manual porting. We find that if the application to be ported has a small sensitive working set (less than the 6MB available cache size of the CPU), infrequently needs to enter the enclave (fewer than 110,000 times per second), and spends most of its time working on data outside of the enclave, then it may indeed perform better if it is manually ported rather than run in a shim.
Citations: 4
Accelerating Number Theoretic Transformations for Bootstrappable Homomorphic Encryption on GPUs
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00033
Sangpyo Kim, Wonkyung Jung, J. Park, Jung Ho Ahn
Abstract: Homomorphic encryption (HE) draws huge attention as it provides a way of privacy-preserving computations on encrypted messages. Number Theoretic Transform (NTT), a specialized form of Discrete Fourier Transform (DFT) in the finite field of integers, is the key algorithm that enables fast computation on encrypted ciphertexts in HE. Prior works have accelerated NTT and its inverse transformation on a popular parallel processing platform, the GPU, by leveraging DFT optimization techniques. However, these GPU-based studies lack a comprehensive analysis of the primary differences between NTT and DFT, or only consider small HE parameters that have tight constraints in the number of arithmetic operations that can be performed without decryption. In this paper, we analyze the algorithmic characteristics of NTT and DFT and assess the performance of NTT when we apply the optimizations that are commonly applicable to both DFT and NTT on modern GPUs. From the analysis, we identify that NTT suffers from a severe main-memory bandwidth bottleneck on large HE parameter sets. To tackle the main-memory bandwidth issue, we propose a novel NTT-specific on-the-fly root generation scheme dubbed on-the-fly twiddling (OT). Compared to the baseline radix-2 NTT implementation, after applying all the optimizations, including OT, we achieve a 4.2× speedup on a modern GPU.
Citations: 40
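A radix-2 NTT that generates twiddle factors on the fly (rather than loading them from a precomputed table, which is what stresses memory bandwidth) can be sketched in Python. This is only the algorithmic skeleton with a toy modulus — p = 17 with primitive root 3 — not the paper's GPU implementation or its parameter sizes.

```python
def ntt(a, p, g, invert=False):
    """Iterative radix-2 NTT mod p; twiddles computed on the fly per stage."""
    n = len(a)
    a = a[:]
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        # on-the-fly root generation: derive this stage's root, not a table load
        w_len = pow(g, (p - 1) // length, p)
        if invert:
            w_len = pow(w_len, p - 2, p)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w % p
                a[k] = (u + v) % p
                a[k + length // 2] = (u - v) % p
                w = w * w_len % p  # next twiddle by one modular multiply
        length <<= 1
    if invert:
        n_inv = pow(n, p - 2, p)
        a = [x * n_inv % p for x in a]
    return a
```

A forward transform followed by the inverse recovers the input, which is the usual sanity check; the trade the paper's OT scheme makes is exactly this one — a few extra modular multiplies per butterfly in exchange for fewer main-memory reads.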
A Sparse Tensor Benchmark Suite for CPUs and GPUs
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00027
Jiajia Li, M. Lakshminarasimhan, Xiaolong Wu, Ang Li, C. Olschanowsky, K. Barker
Abstract: Tensor computations present significant performance challenges that impact a wide spectrum of applications ranging from machine learning, healthcare analytics, social network analysis, and data mining to quantum chemistry and signal processing. Efforts to improve the performance of tensor computations include exploring data layout, execution scheduling, and parallelism in common tensor kernels. This work presents a benchmark suite for arbitrary-order sparse tensor kernels using state-of-the-art tensor formats: coordinate (COO) and hierarchical coordinate (HiCOO), on CPUs and GPUs. It presents a set of reference tensor kernel implementations that are compatible with real-world tensors and with power-law tensors extended from synthetic graph generation techniques. We also propose Roofline performance models for these kernels to provide insights into computer platforms from a sparse-tensor perspective. This benchmark suite, along with the synthetic tensor generator, is publicly available.
Citations: 2
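The COO format the suite benchmarks stores one coordinate tuple plus one value per nonzero. A minimal sketch of a third-order COO tensor and one common sparse kernel — tensor-times-vector along the last mode — follows; the function name and data are illustrative and assume nothing about the suite's actual APIs.

```python
# COO: parallel lists of coordinate tuples and values, one entry per nonzero
coords = [(0, 0, 1), (0, 2, 0), (1, 1, 2)]
vals = [2.0, 3.0, 4.0]
shape = (2, 3, 3)

def ttv_mode2(coords, vals, shape, vec):
    """Tensor-times-vector along mode 2: out[i][j] = sum_k X[i,j,k] * vec[k]."""
    out = [[0.0] * shape[1] for _ in range(shape[0])]
    for (i, j, k), v in zip(coords, vals):
        out[i][j] += v * vec[k]   # one scatter per nonzero
    return out

print(ttv_mode2(coords, vals, shape, [1.0, 1.0, 1.0]))
# → [[2.0, 0.0, 3.0], [0.0, 4.0, 0.0]]
```

The per-nonzero scatter is why such kernels are memory-bound and why the suite pairs them with Roofline models; HiCOO compresses the coordinate lists block-wise to reduce that traffic.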
Pocolo: Power Optimized Colocation in Power Constrained Environments
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00010
Iyswarya Narayanan, Adithya Kumar, A. Sivasubramaniam
Abstract: There is a considerable amount of prior effort on co-locating applications on datacenter servers for boosting resource utilization. However, we note that it is equally important to take power into consideration from the co-location viewpoint. Applications can still interfere on power in stringently power-constrained infrastructures, despite no direct resource contention between the coexisting applications. This becomes particularly important with dynamic load variations, where even if the power capacity is tuned for the peak load of an application, co-locating another application with it during its off-period can lead to overshooting the power capacity. Therefore, to extract maximum returns on datacenter infrastructure investments, one needs to jointly handle power and server resources. We explore this problem in the context of a private-cloud cluster which is provisioned for a primary latency-critical application but also admits secondary best-effort applications to improve utilization during off-peak periods. Our solution, Pocolo, draws on principles from economics to reason about resource demands in power-constrained environments and answers the when/where/what questions pertaining to co-location. We implement Pocolo on a Linux cluster to demonstrate its performance and cost benefits over a number of latency-sensitive and best-effort datacenter workloads.
Citations: 1
A Study of APIs for Graph Analytics Workloads
2020 IEEE International Symposium on Workload Characterization (IISWC) Pub Date : 2020-10-01 DOI: 10.1109/IISWC50251.2020.00030
Hochan Lee, D. Wong, Loc Hoang, Roshan Dathathri, G. Gill, Vishwesh Jatala, D. Kuck, K. Pingali
Abstract: Traditionally, parallel graph analytics workloads have been implemented in systems like Pregel, GraphLab, Galois, and Ligra that support graph data structures and graph operations directly. An alternative approach is to express graph workloads in terms of sparse matrix kernels such as sparse matrix-vector and matrix-matrix multiplication. An API for these kernels has been defined by the GraphBLAS project. The SuiteSparse project has implemented this API on shared-memory platforms, and the LAGraph project is building a library of graph algorithms using this API. How does the matrix-based approach perform compared to the graph-based approach? Our experiments on a 56-core CPU show that for representative graph workloads, LAGraph/SuiteSparse solutions are 5x slower on average than Galois solutions. We argue that this performance gap arises from inherent limitations of a matrix-based API, which apply regardless of the architecture a matrix-based algorithm is run on.
Citations: 1
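The matrix-based API the paper evaluates expresses graph traversals as sparse linear algebra: a BFS level expansion, for example, is one sparse matrix-vector product over a boolean (OR-AND) semiring per level. A minimal Python sketch of that formulation — not GraphBLAS or SuiteSparse code — looks like this:

```python
def bfs_levels(adj, src):
    """BFS where each level is a boolean SpMV: next = A * frontier, minus visited."""
    n = len(adj)
    level = [-1] * n
    level[src] = 0
    frontier = {src}
    depth = 0
    while frontier:
        depth += 1
        # boolean SpMV over the OR-AND semiring, masked by unvisited vertices
        nxt = {v for u in frontier for v in adj[u] if level[v] == -1}
        for v in nxt:
            level[v] = depth
        frontier = nxt
    return level

adj = [[1, 2], [3], [3], []]  # adjacency lists: 0→{1,2}, 1→3, 2→3
print(bfs_levels(adj, 0))  # → [0, 1, 1, 2]
```

The paper's argument is that fixing the primitive at this SpMV granularity forgoes optimizations (e.g., fine-grained, vertex-level scheduling) that graph-centric systems like Galois can exploit, whatever the hardware.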