2017 IEEE International Symposium on Workload Characterization (IISWC): Latest Publications

Why do programs have heavy tails?
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-12-05 DOI: 10.1109/IISWC.2017.8167771
Hiroshi Sasaki, Fang-Hsiang Su, Teruo Tanimoto, S. Sethumadhavan
Designing and optimizing computer systems requires a deep understanding of the underlying system. Historically, many important observations that led to the development of essential hardware and software optimizations were driven by empirical studies of program behavior. In this paper we report an interesting property of dynamic program execution by viewing it as a changing (or social) network. In a program social network, two instructions are friends if there is a producer-consumer relationship between them. One prominent result is that the outdegrees of instructions follow heavy-tailed or power-law distributions, i.e., a few instructions produce values for many instructions while most instructions do so for very few. In other words, the number of instruction dependencies is highly skewed. In this paper we investigate this curious phenomenon. By analyzing a large set of workloads under different compilers, compilation options, ISAs and inputs, we find that the dependence skew is widespread, suggesting that it is fundamental. We also observe that the skew is fractal across time and space. Finally, we describe conditions under which skew emerges within programs and provide evidence suggesting that the heavy-tailed distributions are a unique program property.
Citations: 6
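The dependence skew the authors describe can be illustrated on a toy def-use trace. The graph below, the function names, and the 10% cutoff are illustrative assumptions, not the paper's methodology:

```python
from collections import Counter

def outdegree_distribution(edges):
    """Count how many consumer instructions each producer feeds.

    edges: iterable of (producer, consumer) pairs from a dynamic
    def-use trace; returns {producer: outdegree}.
    """
    return dict(Counter(producer for producer, _ in edges))

def skew_ratio(degrees, top_frac=0.1):
    """Fraction of all dependence edges contributed by the top `top_frac`
    of producers -- a crude heavy-tail indicator (near 1.0 = very skewed)."""
    vals = sorted(degrees.values(), reverse=True)
    k = max(1, int(len(vals) * top_frac))
    return sum(vals[:k]) / sum(vals)

# Toy trace: one loop-counter-like instruction feeds 90 consumers,
# while the other nine producers feed exactly one consumer each.
edges = [("i0", f"c{j}") for j in range(90)] + \
        [(f"p{j}", f"q{j}") for j in range(9)]
deg = outdegree_distribution(edges)
print(skew_ratio(deg, 0.1))  # top 10% of producers carry most of the edges
```

In a real study the edges would come from an instrumented execution trace rather than a hand-built list.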
MeNa: A memory navigator for modern hardware in a scale-out environment
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167751
Hosein Mohammadi Makrani, H. Homayoun
Scale-out infrastructure such as the cloud is built upon a large network of multi-core processors. Performance, power consumption, and capital cost of such infrastructure depend on the overall system configuration, including the number of processing cores, core frequency, memory hierarchy and capacity, number of memory channels, and memory data rate. Among these parameters, the memory subsystem is known to be one of the performance bottlenecks, contributing significantly to the overall capital and operational cost of the server. Given the rise of big data and analytics applications, this could pose an even bigger challenge to the performance of cloud applications and the cost of cloud infrastructure. Hence it is important to understand the role of the memory subsystem in cloud infrastructure, in particular for this emerging class of applications. Despite increasing interest in recent years, little work has been done on understanding memory requirement trends and developing accurate, effective models to predict the performance and cost of the memory subsystem. Currently there is no well-defined methodology for selecting a memory configuration that reduces execution time and power consumption while accounting for the capital and operational cost of the cloud. In this paper, through a comprehensive real-system empirical analysis of performance, we address these challenges by first characterizing diverse types of scale-out applications across a wide range of memory configuration parameters. The characterization helps to accurately capture applications' behavior and derive a model to predict their performance. Based on the developed predictive model, we propose MeNa, a methodology to maximize the performance/cost ratio of scale-out applications running in a cloud environment. MeNa navigates memory and processor parameters to find the system configuration that maximizes performance for a given application and a given budget. Compared to a brute-force method, MeNa achieves more than 90% accuracy in identifying the right configuration parameters to maximize the performance/cost ratio. Moreover, we show how MeNa can be effectively leveraged by server designers to find architectural insights, or by subscribers to allocate just enough budget to maximize the performance of their applications in the cloud.
Citations: 19
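A minimal sketch of the exhaustive baseline such a navigator is compared against: score every memory/processor configuration under a budget by predicted performance per unit cost. The configuration space, toy models, and numbers below are illustrative assumptions, not MeNa's actual models:

```python
def best_config(configs, perf_model, cost_model, budget):
    """Return the feasible configuration with the highest perf/cost ratio."""
    feasible = [c for c in configs if cost_model(c) <= budget]
    return max(feasible, key=lambda c: perf_model(c) / cost_model(c))

# Toy models: performance scales with channels * data rate; cost grows
# with both. Real predictive models would be fit to measured data.
perf = lambda c: c["channels"] * c["mhz"]
cost = lambda c: 20 * c["channels"] + c["mhz"] // 100

configs = [{"channels": ch, "mhz": f} for ch in (2, 4) for f in (1866, 2400)]
print(best_config(configs, perf, cost, budget=100))
```

MeNa's contribution is reaching a near-optimal choice without this brute-force enumeration, which is what the >90% accuracy figure measures.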
Memory requirements of Hadoop, Spark, and MPI based big data applications on commodity server class architectures
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167763
Hosein Mohammadi Makrani, H. Homayoun
Emerging big data frameworks require computational resources and memory subsystems that can naturally scale to manage massive amounts of diverse data. Given the large size and heterogeneity of the data, it is currently unclear whether big data frameworks such as Hadoop, Spark, and MPI will require high-performance, large-capacity memory to cope with this change, and exactly what role main memory subsystems will play, particularly in terms of energy efficiency. The primary purpose of this study is to answer these questions through empirical analysis of different memory configurations available on commodity hardware and to assess the impact of these configurations on the performance and power of these well-established frameworks. Our results reveal that while Hadoop places no major demand on high-end DRAM, Spark and MPI iterative tasks (e.g., machine learning) benefit from high-end DRAM, in particular high frequency and a large number of channels. Among the configurable parameters, our results indicate that increasing the number of DRAM channels reduces DRAM power and improves energy efficiency across all three frameworks.
Citations: 13
Moka: Model-based concurrent kernel analysis
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167777
Leiming Yu, Xun Gong, Yifan Sun, Q. Fang, Norman Rubin, D. Kaeli
GPUs continue to increase the number of compute resources with each new generation. Many data-parallel applications have been re-engineered to leverage the thousands of cores on the GPU. But not every kernel can fully utilize all the resources available. Many applications contain multiple kernels that could potentially be run concurrently. To better utilize the massive resources on the GPU, device vendors have started to support Concurrent Kernel Execution (CKE). However, the application throughput provided by CKE is subject to a number of factors, including the kernel configuration attributes, the dynamic behavior of each kernel (e.g., compute-intensive vs. memory-intensive), the kernel launch order, and inter-kernel dependencies. Minor changes in any of these factors can have a large impact on the effectiveness of CKE. In this paper, we present Moka, an empirical model for tuning concurrent kernel performance. Moka allows us to accurately predict the resulting performance and scalability of multi-kernel applications when using CKE. We consider both static and dynamic workload characteristics that impact the utility of CKE, and leverage these metrics to drive kernel scheduling decisions on NVIDIA GPUs. The underlying data transfer pattern and GPU resource contention are analyzed in detail. Our model is able to accurately predict the performance ceiling of concurrent kernel execution. We validate our model using several real-world applications that have multiple kernels that can run concurrently, and evaluate CKE performance on an NVIDIA Maxwell GPU. Our model predicts the performance of CKE applications accurately, providing estimates that differ by less than 12% from actual runtime performance. Using our estimates, we can quickly find the best CKE strategy for our applications to achieve improved application throughput. We believe we have developed a useful tool to help application programmers accelerate their applications using CKE.
Citations: 2
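Moka's model is empirical, but a simple bottleneck-style sketch illustrates why a performance ceiling exists and why complementary kernels (compute-intensive paired with memory-intensive) overlap well under CKE. The resource names, demands, and capacities below are made-up illustrative values:

```python
def serial_time(kernels, capacity):
    """Run one kernel at a time: each is limited by its most-contended resource."""
    return sum(max(k[r] / capacity[r] for r in capacity) for k in kernels)

def concurrent_ceiling(kernels, capacity):
    """Best case under CKE: aggregate demand on each shared resource
    bounds how fast the whole batch can possibly finish."""
    return max(sum(k[r] for k in kernels) / capacity[r] for r in capacity)

# Demands are in work units; capacities in units per second.
compute_bound = {"compute": 10.0, "mem_bw": 2.0}
memory_bound = {"compute": 2.0, "mem_bw": 10.0}
cap = {"compute": 1.0, "mem_bw": 1.0}

pair = [compute_bound, memory_bound]
print(serial_time(pair, cap), concurrent_ceiling(pair, cap))  # 20.0 12.0
```

Two compute-bound kernels would gain nothing from concurrency under this model, since they contend for the same resource; the mixed pair does.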
Co-locating and concurrent fine-tuning MapReduce applications on microservers for energy efficiency
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167753
Maria Malik, D. Tullsen, H. Homayoun
Datacenters provide flexibility and high performance for users and cost efficiency for operators. However, the high computational demands of big data and analytics technologies such as MapReduce, a dominant programming model and framework for big data analytics, mean that even small changes in the efficiency of execution in the data center can have a large effect on user cost and operational cost. Fine-tuning configuration parameters of MapReduce applications at the application, architecture, and system levels plays a crucial role in improving the energy efficiency of the server and reducing the operational cost. In this work, through methodical investigation of performance and power measurements, we demonstrate how the interplay among various MapReduce configurations as well as application- and architecture-level parameters creates new opportunities to co-locate MapReduce applications at the node level. We also show how concurrently fine-tuning optimization parameters for multiple scheduled MapReduce applications improves energy efficiency compared to fine-tuning parameters for each application separately. In this paper, we present Co-Located Application Optimization (COLAO), which co-schedules multiple MapReduce applications at the node level to enhance energy efficiency. Our results show that by co-locating MapReduce applications and fine-tuning configuration parameters concurrently, COLAO halves the number of nodes needed to execute MapReduce applications while improving the EDP by 2.2X on average, compared to fine-tuning applications individually and running them serially, for a broad range of studied workloads.
Citations: 9
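The paper's headline metric is the energy-delay product (EDP), which rewards saving energy and finishing sooner at the same time. A minimal worked example, with entirely made-up power and runtime numbers (not the paper's measurements):

```python
def edp(energy_joules, runtime_seconds):
    """Energy-delay product: energy * delay, lower is better."""
    return energy_joules * runtime_seconds

# Baseline: two tuned jobs run serially on one node at 100 W for 200 s total.
serial = edp(100.0 * 200.0, 200.0)

# Co-scheduled on the same node: slower than either job alone but much
# faster than the serial pair, at modestly higher power (130 W for 120 s).
colocated = edp(130.0 * 120.0, 120.0)

print(serial / colocated)  # co-location improves EDP by ~2.1x in this toy case
```

Because EDP multiplies energy by delay, the shorter makespan of co-location compounds with its energy savings, which is why node-level co-scheduling can beat individually tuned serial runs.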
HeteroSync: A benchmark suite for fine-grained synchronization on tightly coupled GPUs
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167781
Matthew D. Sinclair, Johnathan Alsop, S. Adve
Traditionally, GPUs focused on streaming, data-parallel applications with little data reuse or sharing and coarse-grained synchronization. However, the rise of general-purpose GPU (GPGPU) computing has made GPUs desirable for applications with more general sharing patterns and fine-grained synchronization, especially on recent GPUs that have a unified address space and coherent caches. Prior work has introduced microbenchmarks to measure the impact of these changes, but each paper uses its own set of microbenchmarks. In this work, we combine several of these sets together into a single suite, HeteroSync. HeteroSync includes several synchronization primitives, data sharing at different levels of the memory hierarchy, and relaxed atomics. We characterize the scalability of HeteroSync for different coherence protocols and consistency models on modern, tightly coupled CPU-GPU systems and show that certain algorithms, coherence protocols, and consistency models scale better than others.
Citations: 22
Understanding power-performance relationship of energy-efficient modern DRAM devices
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167762
Sukhan Lee, Yuhwan Ro, Y. Son, Hyunyoon Cho, N. Kim, Jung Ho Ahn
As servers are equipped with more memory modules, each with larger capacity, main-memory systems are now the second-highest energy-consuming component in big-memory servers, and their energy consumption even becomes comparable to processors in some servers. Meanwhile, it is critical for big-memory servers and their main-memory systems to offer high energy efficiency. Prior work exploited mobile LPDDR devices' advantages (lower power than DDR devices) while attempting to surmount their limitations (longer latency, lower bandwidth, or both). However, we demonstrate that such main-memory architectures (based on the latest LPDDR4 devices) are no longer effective. This is because the power consumption of present DDR4 devices has substantially decreased by adopting the strengths of mobile and graphics memory, whereas LPDDR4 has sacrificed energy efficiency and focused more on increasing data transfer rates; we also show that the power consumption of DDR4 devices can vary substantially across manufacturers. Moreover, investigating a new energy-saving feature of DDR4 devices in depth, we show that activating this feature often hurts the overall energy efficiency of servers due to its performance penalties.
Citations: 4
A framework for fast and fair evaluation of automata processing hardware
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167767
Xiaodong Yu, Kaixi Hou, Hao Wang, Wu-chun Feng
Programming Micron's Automata Processor (AP) requires expertise in both automata theory and the AP architecture, as programmers have to manually manipulate state transition elements (STEs) and their transitions with a low-level Automata Network Markup Language (ANML). When the required STEs of an application exceed the hardware capacity, multiple reconfigurations are needed. However, most previous AP-based designs limit the dataset size to fit into a single AP board and simply neglect the costly overhead of reconfiguration. This results in unfair performance comparisons between the AP and other processors. To address this issue, we propose a framework for the fast and fair evaluation of AP devices. Our framework provides a hierarchical approach that automatically generates automata for large datasets through user-defined paradigms and allows the use of cascadable macros to achieve highly optimized reconfigurations. We highlight the importance of counting the configuration time in the overall AP performance, which in turn can provide better insight into identifying essential hardware features, specifically for large-scale problem sizes. Our framework shows that the AP achieves up to a 461x overall speedup over CPU counterparts under a fair comparison.
Citations: 4
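The fairness argument reduces to simple arithmetic: when a dataset exceeds one board's STE capacity, every extra pass pays the reconfiguration cost again. A sketch of that accounting, with made-up capacities and timings (not measured AP figures):

```python
import math

def ap_total_time(num_stes, board_capacity, t_config, t_process):
    """Total runtime including reconfiguration overhead: datasets larger
    than one board force multiple configure-then-process passes."""
    passes = math.ceil(num_stes / board_capacity)
    return passes * (t_config + t_process)

# An automaton needing 120k STEs on a hypothetical 48k-STE board takes
# 3 passes, so the configuration cost is paid three times, not zero times.
print(ap_total_time(120_000, 48_000, t_config=50.0, t_process=10.0))  # 180.0
```

Evaluations that report only the processing term (here 3 x 10.0 = 30.0) understate the real runtime sixfold in this toy case, which is the distortion the framework is designed to expose.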
Characterizing diverse handheld apps for customized hardware acceleration
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167776
Prasanna Venkatesh Rengasamy, Haibo Zhang, N. Nachiappan, Shulin Zhao, A. Sivasubramaniam, M. Kandemir, C. Das
Current handhelds incorporate a variety of accelerators/IPs for improving their performance and energy efficiency. While these IPs are extremely useful for accelerating parts of a computation, the CPU still expends a significant amount of time and energy in the overall execution. Coarse-grained customized hardware for Android APIs and methods, though widely useful, is also not an option due to the high hardware costs. Instead, we propose a fine-grained sequence of instructions, called a Load-to-Store (LOST) sequence, for hardware customization. A LOST sequence starts with a load and ends with a store, including the dependent instructions in between. Unlike prior approaches to customization, a LOST sequence is defined based on a sequence of opcodes rather than a sequence of PC addresses or operands. We identify such commonly occurring LOST sequences within and across several popular apps and propose a design to integrate these customized hardware sequences as macro functional units into the CPU datapath. Detailed evaluation shows that such customized LOST sequences can provide an average of 25% CPU speedup, or 12% speedup for the entire system.
Citations: 11
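The core idea of mining recurring load-to-store opcode sequences can be sketched over a flat opcode trace. Note this is a simplification: the paper defines LOST sequences over producer-consumer dependence chains, whereas the toy scan below just walks the trace linearly, and the trace itself is invented:

```python
from collections import Counter

def find_lost_sequences(opcode_trace):
    """Collect load-to-store opcode subsequences from a linear trace.

    Each candidate starts at a 'load', accumulates subsequent opcodes,
    and is recorded when a 'store' closes it.
    """
    seqs = Counter()
    current = None
    for op in opcode_trace:
        if op == "load":
            current = ["load"]
        elif current is not None:
            current.append(op)
            if op == "store":
                seqs[tuple(current)] += 1
                current = None
    return seqs

trace = ["load", "add", "store", "load", "add", "store", "load", "mul", "store"]
# The load-add-store sequence recurs, making it a customization candidate.
print(find_lost_sequences(trace).most_common(1))
```

Keying on opcodes rather than PC addresses is what lets the same hardware macro serve matching sequences across different apps.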
Cross-layer workload characterization of meta-tracing JIT VMs
2017 IEEE International Symposium on Workload Characterization (IISWC) Pub Date: 2017-10-01 DOI: 10.1109/IISWC.2017.8167760
Berkin Ilbeyi, Carl Friedrich Bolz-Tereick, C. Batten
Dynamic programming languages are becoming increasingly popular, and this motivates the need for just-in-time (JIT) compilation to close the productivity/performance gap. Unfortunately, developing custom JIT-optimizing virtual machines (VMs) requires significant effort. Recent work has shown the promise of meta-JIT frameworks, which abstract the language definition from the VM internals. Meta-JITs can enable automatic generation of high-performance JIT-optimizing VMs from high-level language specifications. This paper provides a detailed workload characterization of meta-tracing JITs for two different dynamic programming languages: Python and Racket. We propose a new cross-layer methodology, and then we use this methodology to characterize a diverse selection of benchmarks at the application, framework, interpreter, JIT-intermediate-representation, and microarchitecture levels. Our work provides initial answers to important questions about meta-tracing JITs, including the potential performance improvement over optimized interpreters, the sources of various overheads, and the continued performance gap between JIT-compiled code and statically compiled languages.
Citations: 4
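The interpreter-versus-JIT-compiled-code split that this characterization measures can be sketched with a tiny counting interpreter. Real meta-tracing JITs record and optimize the operations along one hot trace rather than compiling loop bodies wholesale; the bytecode format, threshold, and program below are illustrative inventions:

```python
THRESHOLD = 3  # backward jumps observed before a loop is considered hot

def run_block(block, env):
    """Execute straight-line ops directly (stands in for compiled code)."""
    for op in block:
        if op[0] == "inc":
            env[op[1]] += 1
        elif op[0] == "dec":
            env[op[1]] -= 1

def interpret(program, env):
    """Bytecode interpreter that counts backward jumps per loop header
    and, past THRESHOLD, runs the remaining iterations on the fast path."""
    counts = {}
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] in ("inc", "dec"):
            run_block([op], env)
            pc += 1
        elif op[0] == "jnz":            # jump to target if var is nonzero
            var, target = op[1], op[2]
            if env[var] == 0:
                pc += 1
                continue
            counts[pc] = counts.get(pc, 0) + 1
            if counts[pc] < THRESHOLD:
                pc = target             # still warming up: keep interpreting
            else:
                body = program[target:pc]
                while env[var] != 0:    # hot: run the loop body natively
                    run_block(body, env)
                pc += 1
    return env

# Count y up while counting x down from 10; the loop turns hot mid-run.
print(interpret([("inc", "y"), ("dec", "x"), ("jnz", "x", 0)],
                {"x": 10, "y": 0}))
```

The paper's cross-layer methodology attributes time to exactly these layers: the warm-up iterations spent in the interpreter, the counting/bookkeeping overhead, and the steady-state compiled path.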