Latest publications: 2007 IEEE International Symposium on Performance Analysis of Systems & Software

DRAM-Level Prefetching for Fully-Buffered DIMM: Design, Performance and Power Saving
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363740
Jiang Lin, Hongzhong Zheng, Zhichun Zhu, Zhao Zhang, Howard David
{"title":"DRAM-Level Prefetching for Fully-Buffered DIMM: Design, Performance and Power Saving","authors":"Jiang Lin, Hongzhong Zheng, Zhichun Zhu, Zhao Zhang, Howard David","doi":"10.1109/ISPASS.2007.363740","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363740","url":null,"abstract":"We have studied DRAM-level prefetching for the fully buffered DIMM (FB-DIMM) designed for multi-core processors. FB-DIMM has a unique two-level interconnect structure, with FB-DIMM channels at the first-level connecting the memory controller and advanced memory buffers (AMBs); and DDR2 buses at the second-level connecting the AMBs with DRAM chips. We propose an AMB prefetching method that prefetches memory blocks from DRAM chips to AMBs. It utilizes the redundant bandwidth between the DRAM chips and AMBs but does not consume the crucial channel bandwidth. The proposed method fetches K memory blocks of L2 cache block sizes around the demanded block, where K is a small value ranging from two to eight. The method may also reduce the DRAM power consumption by merging some DRAM precharges and activations. Our cycle-accurate simulation shows that the average performance improvement is 16% for single-core and multi-core workloads constructed from memory-intensive SPEC2000 programs with software cache prefetching enabled; and no workload has negative speedup. We have found that the performance gain comes from the reduction of idle memory latency and the improvement of channel bandwidth utilization. We have also found that there is only a small overlap between the performance gains from the AMB prefetching and the software cache prefetching. The average of estimated power saving is 15%","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115647433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
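The abstract gives enough of the mechanism to sketch the prefetch policy it describes. The toy Python model below keeps a small prefetch buffer in the AMB and, on each demand access, pulls K cache-block-sized neighbours of the demanded block from the DRAM side; K, the block size, the DRAM row clamp, the buffer capacity, and the LRU replacement are all illustrative assumptions rather than parameters taken from the paper.

```python
# Toy model of AMB-level prefetching: on each demand access, fetch K
# cache-block-sized neighbours around the demanded block from the DRAM chips
# into a small prefetch buffer inside the AMB. All parameters below are
# illustrative assumptions, not values from the paper.
from collections import OrderedDict

BLOCK = 64          # assumed L2 cache block size in bytes
ROW = 4096          # assumed DRAM row size; prefetch candidates stay in one row
K = 4               # number of blocks fetched around the demanded block

class AMBPrefetchBuffer:
    def __init__(self, capacity_blocks=64):
        self.buf = OrderedDict()          # block address -> True, in LRU order
        self.capacity = capacity_blocks

    def _insert(self, blk):
        self.buf.pop(blk, None)
        self.buf[blk] = True
        if len(self.buf) > self.capacity:
            self.buf.popitem(last=False)  # evict least recently used block

    def candidates(self, addr):
        """Blocks around the demanded block, clamped to the same DRAM row."""
        blk = addr - addr % BLOCK
        row_start = addr - addr % ROW
        lo = max(row_start, blk - (K // 2) * BLOCK)
        hi = min(row_start + ROW, lo + K * BLOCK)
        return range(lo, hi, BLOCK)

    def access(self, addr):
        """Demand access: True on AMB hit (no DRAM access over the DDR2 bus)."""
        blk = addr - addr % BLOCK
        hit = blk in self.buf
        for c in self.candidates(addr):   # prefetch uses DRAM<->AMB bandwidth only
            self._insert(c)
        return hit

if __name__ == "__main__":
    amb = AMBPrefetchBuffer()
    trace = [0x1000, 0x1040, 0x1080, 0x2000, 0x1040]
    hits = sum(amb.access(a) for a in trace)
    print(f"AMB hits: {hits}/{len(trace)}")
```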
An Analysis of Performance Interference Effects in Virtual Environments
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363750
Younggyun Koh, Rob C. Knauerhase, P. Brett, M. Bowman, Z. Wen, C. Pu
{"title":"An Analysis of Performance Interference Effects in Virtual Environments","authors":"Younggyun Koh, Rob C. Knauerhase, P. Brett, M. Bowman, Z. Wen, C. Pu","doi":"10.1109/ISPASS.2007.363750","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363750","url":null,"abstract":"Virtualization is an essential technology in modern datacenters. Despite advantages such as security isolation, fault isolation, and environment isolation, current virtualization techniques do not provide effective performance isolation between virtual machines (VMs). Specifically, hidden contention for physical resources impacts performance differently in different workload configurations, causing significant variance in observed system throughput. To this end, characterizing workloads that generate performance interference is important in order to maximize overall utility. In this paper, we study the effects of performance interference by looking at system-level workload characteristics. In a physical host, we allocate two VMs, each of which runs a sample application chosen from a wide range of benchmark and real-world workloads. For each combination, we collect performance metrics and runtime characteristics using an instrumented Ken hypervisor. Through subsequent analysis of collected data, we identify clusters of applications that generate certain types of performance interference. Furthermore, we develop mathematical models to predict the performance of a new application from its workload characteristics. Our evaluation shows our techniques were able to predict performance with average error of approximately 5%","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129901535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 353
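The abstract does not state the form of the prediction model, so the sketch below simply assumes an ordinary least-squares linear model over a handful of system-level features; the feature names and the training numbers are invented for illustration, not the paper's data.

```python
# Hedged sketch of predicting an application's normalized performance under
# interference from system-level workload characteristics of co-located VMs.
# The linear least-squares form, the features, and the sample values are
# assumptions for illustration only.
import numpy as np

# Assumed features per run: [CPU util, cache misses/kilo-instr, disk MB/s, net MB/s]
X_train = np.array([
    [0.9, 12.0,  5.0,  1.0],
    [0.4,  2.0, 80.0,  0.5],
    [0.7, 25.0,  1.0, 40.0],
    [0.8,  8.0, 30.0, 10.0],
    [0.3,  1.0,  2.0, 90.0],
])
# Observed throughput relative to running alone (1.0 = no interference).
y_train = np.array([0.71, 0.64, 0.58, 0.66, 0.69])

# Fit y ~ b0 + b.x by ordinary least squares.
A = np.hstack([np.ones((X_train.shape[0], 1)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def predict(features):
    """Predict normalized performance for a new workload's characteristics."""
    return float(coef[0] + np.dot(coef[1:], features))

new_app = [0.6, 15.0, 10.0, 20.0]
print(f"predicted normalized performance: {predict(new_app):.2f}")
```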
Combining Simulation and Virtualization through Dynamic Sampling
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363738
Ayose Falcón, P. Faraboschi, Daniel Ortega
{"title":"Combining Simulation and Virtualization through Dynamic Sampling","authors":"Ayose Falcón, P. Faraboschi, Daniel Ortega","doi":"10.1109/ISPASS.2007.363738","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363738","url":null,"abstract":"The high speed and faithfulness of state-of-the-art virtual machines (VMs) make them the ideal front-end for a system simulation framework. However, VMs only emulate the functional behavior and just provide the minimal timing for the system to run correctly. In a simulation framework supporting the exploration of different configurations, a timing backend is still necessary to accurately determine the performance of the simulated target. As it has been extensively researched, sampling is an excellent approach for fast timing simulation. However, existing sampling mechanisms require capturing information for every instruction and memory access. Hence, coupling a standard sampling technique to a VM implies disabling most of the \"tricks\" used by a VM to accelerate execution, such as the caching and linking of dynamically compiled code. Without code caching, the performance of a VM is severely impacted. In this paper we present a novel dynamic sampling mechanism that overcomes this problem and enables the use of VMs for timing simulation. By making use of the internal information collected by the VM during functional simulation, we can quickly assess important characteristics of the simulated applications (such as phase changes), and activate or deactivate the timing simulation accordingly. This allows us to run unmodified OS and applications over emulated hardware at near-native speed, yet providing a way to insert timing measurements that yield a final accuracy similar to state-of-the-art sampling methods","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131619066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
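A minimal sketch of the dynamic-sampling control loop described above: run the fast functional VM, watch a cheap statistic it already collects, and switch the detailed timing model on only when that statistic suggests a phase change. The choice of signature (basic-block count vectors), the distance metric, the threshold, and the interval lengths are all assumptions for illustration, not the paper's actual heuristics.

```python
# Toggle detailed timing simulation based on phase changes detected from
# per-interval basic-block execution count vectors (an assumed signature).
THRESHOLD = 0.3        # normalized distance that counts as a phase change
DETAIL_INTERVALS = 2   # intervals of detailed timing to run after a change

def signature_distance(a, b):
    """Normalized Manhattan distance between two basic-block count vectors."""
    total = sum(a) + sum(b) or 1
    return sum(abs(x - y) for x, y in zip(a, b)) / total

def simulate(intervals):
    prev, budget = None, 0
    for i, sig in enumerate(intervals):
        if prev is not None and signature_distance(prev, sig) > THRESHOLD:
            budget = DETAIL_INTERVALS          # phase change: sample in detail
        mode = "detailed timing" if budget > 0 else "fast functional"
        budget = max(0, budget - 1)
        print(f"interval {i:2d}: {mode}")
        prev = sig

if __name__ == "__main__":
    phase_a = [[100, 10, 0, 5] for _ in range(4)]
    phase_b = [[5, 80, 40, 0] for _ in range(4)]
    simulate(phase_a + phase_b + phase_a)
```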
Complete System Power Estimation: A Trickle-Down Approach Based on Performance Events
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363746
W. Bircher, L. John
{"title":"Complete System Power Estimation: A Trickle-Down Approach Based on Performance Events","authors":"W. Bircher, L. John","doi":"10.1109/ISPASS.2007.363746","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363746","url":null,"abstract":"This paper proposes the use of microprocessor performance counters for online measurement of complete system power consumption. While past studies have demonstrated the use of performance counters for microprocessor power, to the best of our knowledge, we are the first to create power models for the entire system based on processor performance events. Our approach takes advantage of the \"trickle-down\" effect of performance events in a microprocessor. We show how well known performance-related events within a microprocessor such as cache misses and DMA transactions are highly correlated to power consumption outside of the microprocessor. Using measurement of an actual system running scientific and commercial workloads we develop and validate power models for five subsystems: memory, chipset, I/O, disk and microprocessor. These models are shown to have an average error of less than 9% per subsystem across the considered workloads. Through the use of these models and existing on-chip performance event counters, it is possible to estimate system power consumption without the need for additional power sensing hardware","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130804068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 202
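The trickle-down idea lends itself to a small sketch: estimate each subsystem's power as a linear function of processor performance-counter rates that correlate with activity reaching it (cache misses driving memory power, DMA transactions driving I/O power, and so on). The counter names and coefficients below are invented placeholders; the paper derives its coefficients by fitting against measured power on a real system.

```python
# Per-subsystem linear power models driven by performance-counter rates.
SUBSYSTEM_MODELS = {
    #  subsystem : (idle_watts, {counter_rate: watts_per_unit})  -- assumed values
    "cpu":     (18.0, {"uops_per_us": 0.004}),
    "memory":  ( 6.0, {"llc_misses_per_us": 0.010}),
    "chipset": ( 4.0, {"bus_txns_per_us": 0.006}),
    "io":      ( 3.0, {"dma_txns_per_us": 0.015}),
    "disk":    ( 5.0, {"disk_reqs_per_us": 0.400}),
}

def estimate_power(counter_rates):
    """Return per-subsystem and total power estimates from counter rates."""
    breakdown = {}
    for name, (idle, weights) in SUBSYSTEM_MODELS.items():
        breakdown[name] = idle + sum(w * counter_rates.get(c, 0.0)
                                     for c, w in weights.items())
    return breakdown, sum(breakdown.values())

if __name__ == "__main__":
    rates = {"uops_per_us": 2500, "llc_misses_per_us": 40,
             "bus_txns_per_us": 120, "dma_txns_per_us": 15,
             "disk_reqs_per_us": 0.2}
    per_sub, total = estimate_power(rates)
    for name, watts in per_sub.items():
        print(f"{name:8s} {watts:6.1f} W")
    print(f"{'total':8s} {total:6.1f} W")
```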
Simplifying Active Memory Clusters by Leveraging Directory Protocol Threads
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363754
Dhiraj D. Kalamkar, Mainak Chaudhuri, M. Heinrich
{"title":"Simplifying Active Memory Clusters by Leveraging Directory Protocol Threads","authors":"Dhiraj D. Kalamkar, Mainak Chaudhuri, M. Heinrich","doi":"10.1109/ISPASS.2007.363754","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363754","url":null,"abstract":"Address re-mapping techniques in so-called active memory systems have been shown to dramatically increase the performance of applications with poor cache and/or communication behavior on shared memory multiprocessors. However, these systems require custom hardware in the memory controller for cache line assembly/disassembly, address translation between re-mapped and normal addresses, and coherence logic. In this paper we make the important observation that on a traditional flexible distributed shared memory (DSM) multiprocessor node, equipped with a coherence protocol thread context as in SMTp or a simple dedicated in-order protocol processing core as in a CMP, the address re-mapping techniques can be implemented in software running on the protocol thread or core without custom hardware in the memory controller while delivering high performance. We implement the active memory address re-mapping techniques of parallel reduction and matrix transpose (two popular kernels in scientific, multimedia, and data mining applications) on these systems, outline the novel coherence protocol extensions needed to make them run efficiently in software protocols, and evaluate these protocols on four different DSM multiprocessor architectures with multi-threaded and/or dual-core nodes. The proposed protocol extensions yield speedup of 1.45 for parallel reduction and 1.29 for matrix transpose on a 16-node DSM multiprocessor when compared to non-active memory baseline systems and achieve performance comparable to the existing active memory architectures that rely on custom hardware in the memory controller","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129124838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PTLsim: A Cycle Accurate Full System x86-64 Microarchitectural Simulator
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363733
Matt T. Yourst
{"title":"PTLsim: A Cycle Accurate Full System x86-64 Microarchitectural Simulator","authors":"Matt T. Yourst","doi":"10.1109/ISPASS.2007.363733","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363733","url":null,"abstract":"In this paper, we introduce PTLsim, a cycle accurate full system x86-64 microprocessor simulator and virtual machine. PTLsim models a modern superscalar out of order x86-64 processor core at a configurable level of detail ranging from RTL-level models of all key pipeline structures, caches and devices up to full-speed native execution on the host CPU. Unlike other microarchitectural simulators, PTLsim targets the real commercially available x86 ISA, rather than a discontinued architecture with limited tools and an uncertain future. PTLsim supports several flavors: a single threaded userspace version and a full system version providing an SMT model and the infrastructure for multi-core support. We first describe what it takes to perform cycle accurate modeling of a complete x86 machine at the muop (micro-operation) level, along with the challenges and requirements for effective full system multi-processor capable simulation. We then describe the internal architecture of full system PTLsim and how it interacts with the Xen hypervisor and PTLsim's native mode co-simulation technology. We experimentally evaluate PTLsim's real world accuracy by configuring it like an AMD Athlon 64 machine before running a demanding full system client-server networked benchmark inside PTLsim. We compare the statistics generated by our model with the actual numbers from the real processor to demonstrate PTLsim is accurate to within 5% across all major parameters. We provide a discussion of prior simulation tools, along with their strengths and weaknesses. We describe why PTLsim's x86 focus is highly relevant, and we use our full system simulation results to demonstrate the pitfalls of userspace only simulation. Finally, we conclude by detailing future work","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125246619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 409
Understanding the Memory Performance of Data-Mining Workloads on Small, Medium, and Large-Scale CMPs Using Hardware-Software Co-simulation
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363734
Wenlong Li, E. Li, A. Jaleel, Jiulong Shan, Yurong Chen, Qigang Wang, R. Iyer, R. Illikkal, Yimin Zhang, Dong Liu, Michael Liao, Wei Wei, Jinhua Du
{"title":"Understanding the Memory Performance of Data-Mining Workloads on Small, Medium, and Large-Scale CMPs Using Hardware-Software Co-simulation","authors":"Wenlong Li, E. Li, A. Jaleel, Jiulong Shan, Yurong Chen, Qigang Wang, R. Iyer, R. Illikkal, Yimin Zhang, Dong Liu, Michael Liao, Wei Wei, Jinhua Du","doi":"10.1109/ISPASS.2007.363734","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363734","url":null,"abstract":"With the amount of data continuing to grow, extracting \"data of interest\" is becoming popular, pervasive, and more important than ever. Data mining, as this process is known as, seeks to draw meaningful conclusions, extract knowledge, and acquire models from vast amounts of data. These compute-intensive data-mining applications, where thread-level parallelism can be effectively exploited, are the design targets of future multi-core systems. As a result, future multi-core systems will be required to process terabyte-level workloads. To understand the memory system performance of data-mining applications, this paper presents the use of hardware-software co-simulation to explore the cache design space of several multi-threaded data mining applications. Our study reveals that the workloads are memory intensive, have large working-set sizes, and exhibit good data locality. We find that large DRAM caches can be useful to address their large working-set sizes","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127477027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
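A small sketch of the working-set style analysis behind this study: replay an address trace through a fully associative LRU cache model at several capacities and watch where the miss rate falls off. The synthetic trace, the block size, and the capacities are made-up stand-ins for the data-mining traces and DRAM-cache sizes examined in the paper.

```python
# Miss rate versus cache capacity for a fully associative LRU cache model.
from collections import OrderedDict

BLOCK = 64

def miss_rate(trace, capacity_bytes):
    cache, misses = OrderedDict(), 0
    max_blocks = capacity_bytes // BLOCK
    for addr in trace:
        blk = addr // BLOCK
        if blk in cache:
            cache.move_to_end(blk)            # LRU update on hit
        else:
            misses += 1
            cache[blk] = True
            if len(cache) > max_blocks:
                cache.popitem(last=False)     # evict LRU block
    return misses / len(trace)

if __name__ == "__main__":
    # Synthetic trace: repeated sweeps over a 4 MB working set.
    working_set = 4 * 1024 * 1024
    trace = [a for _ in range(4) for a in range(0, working_set, BLOCK)]
    for mb in (1, 2, 4, 8):
        mr = miss_rate(trace, mb * 1024 * 1024)
        print(f"{mb} MB cache: miss rate {mr:.2f}")
```

The miss rate stays near 1.0 until the capacity covers the working set, which is the behaviour that motivates the paper's observation about large DRAM caches.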
Performance Analysis of Cell Broadband Engine for High Memory Bandwidth Applications
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363751
Daniel Jiménez-González, X. Martorell, Alex Ramírez
{"title":"Performance Analysis of Cell Broadband Engine for High Memory Bandwidth Applications","authors":"Daniel Jiménez-González, X. Martorell, Alex Ramírez","doi":"10.1109/ISPASS.2007.363751","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363751","url":null,"abstract":"The cell broadband engine (CBE) is designed to be a general purpose platform exposing an enormous arithmetic performance due to its eight SIMD-only synergistic processor elements (SPEs), capable of achieving 134.4 GFLOPS (16.8 GFLOPS * 8) at 2.1 GHz, and a 64-bit power processor element (PPE). Each SPE has a 256Kb non-coherent local memory, and communicates to other SPEs and main memory through its DMA controller. CBE main memory is connected to all the CBE processor elements (PPE and SPEs) through the element interconnect bus (EIB), which has a 134.4 GB/s bandwidth performance peak at half the processor speed. Therefore, CBE platform is suitable to be used by applications using MPI and streaming programming models with a potential high performance peak. In this paper we focus on the communication part of those applications, and measure the actual memory bandwidth that each of the CBE processor components can sustain. We have measured the sustained bandwidth between PPE and memory, SPE and memory, two individual SPEs to determine if this bandwidth depends on their physical location, pairs of SPEs to achieve maximum bandwidth in nearly-ideal conditions, and in a cycle of SPEs representing a streaming kind of computation. Our results on a real machine show that following some strict programming rules, individual SPE to SPE communication almost achieves the peak bandwidth when using the DMA controllers to transfer memory chunks of at least 1024 Bytes. In addition, SPE to memory bandwidth should be considered in streaming programming. For instance, implementing two data streams using 4 SPEs each can be more efficient than having a single data stream using the 8 SPEs","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126737976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
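A back-of-the-envelope model of why DMA transfer size matters here: each DMA carries a roughly fixed setup cost, so effective bandwidth is size / (setup time + size / peak bandwidth). The setup latency and the per-port peak used below are illustrative assumptions, not measurements from the paper, and this serialized model only shows the amortization trend; on the real hardware, keeping several DMAs in flight hides the setup cost further, which is part of why transfers of at least 1024 bytes approach peak.

```python
# Effective bandwidth of a single serialized DMA transfer as a function of size.
PEAK_GBPS = 25.6          # assumed peak bandwidth of one SPE DMA port, GB/s
SETUP_NS = 20.0           # assumed fixed per-DMA setup/queueing cost, ns

def effective_bandwidth(size_bytes):
    """Effective GB/s for one DMA transfer of size_bytes (1 GB/s ~= 1 byte/ns)."""
    transfer_ns = size_bytes / PEAK_GBPS
    return size_bytes / (SETUP_NS + transfer_ns)

if __name__ == "__main__":
    for size in (128, 256, 512, 1024, 4096, 16384):
        bw = effective_bandwidth(size)
        print(f"{size:6d} B per DMA -> {bw:5.1f} GB/s "
              f"({100 * bw / PEAK_GBPS:4.1f}% of peak)")
```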
Reverse State Reconstruction for Sampled Microarchitectural Simulation
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363749
Paul D. Bryan, Michel C. Rosier, T. Conte
{"title":"Reverse State Reconstruction for Sampled Microarchitectural Simulation","authors":"Paul D. Bryan, Michel C. Rosier, T. Conte","doi":"10.1109/ISPASS.2007.363749","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363749","url":null,"abstract":"For simulation, a tradeoff exists between speed and accuracy. The more instructions simulated from the workload, the more accurate the results - but at a higher cost. To reduce processor simulation times, a variety of techniques have been introduced. Statistically sampled simulation is one method that mitigates the cost of simulation while retaining high accuracy. A contiguous group of instructions, called a cluster, is simulated and then a fast type of simulation is used to skip to the next group. As instructions are skipped, non-sampling bias is introduced and must be removed for accurate measurements to be taken. In this paper, the reverse state reconstruction warm-up method is introduced. While skipping between clusters, the data necessary for reconstruction are recorded. Later, these data are scanned in reverse order so that processor state can be approximated without functionally applying every skipped instruction. By trading storage for speed, the proposed method introduces the concept of on-demand state reconstruction for sampled simulations. Using this technique, the method isolates ineffectual instructions from the skipped instructions without the use of profiling. Compared to SMARTS, reverse state reconstruction achieves a maximum and average speedup ratio of 2.45 and 1.64, respectively, with minimal sacrifice to accuracy (less than 0.3%)","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"96 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127994429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
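The core idea is easy to illustrate for one structure, a set-associative LRU cache: the state at the end of a skipped region depends only on the last few distinct blocks touched in each set, so scanning the skipped-access log backwards and keeping the first WAYS distinct blocks seen per set rebuilds the state, and every other logged access is ineffectual. The cache geometry and log format below are illustrative assumptions; the paper applies the idea to sampled-simulation warm-up more generally.

```python
# Reverse-scan reconstruction of a set-associative LRU cache's end-of-region state.
BLOCK, SETS, WAYS = 64, 256, 8

def reconstruct_cache(skipped_accesses):
    """skipped_accesses: addresses in program order for the skipped region.
    Returns {set_index: [blocks in MRU..LRU order]} at the end of the region."""
    cache = {s: [] for s in range(SETS)}
    full_sets = 0
    for addr in reversed(skipped_accesses):       # scan the log backwards
        blk = addr // BLOCK
        s = blk % SETS
        ways = cache[s]
        if len(ways) == WAYS or blk in ways:
            continue                              # ineffectual: state already known
        ways.append(blk)                          # first sighting in reverse = MRU rank
        if len(ways) == WAYS:
            full_sets += 1
            if full_sets == SETS:
                break                             # every set fully reconstructed
    return cache

if __name__ == "__main__":
    log = [i * BLOCK for i in range(10000)] + [0, 64, 128]
    state = reconstruct_cache(log)
    print("set 0 contents (MRU first):", state[0])
    print("set 1 contents (MRU first):", state[1])
```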
Modeling and Characterizing Power Variability in Multicore Architectures
2007 IEEE International Symposium on Performance Analysis of Systems & Software Pub Date : 2007-04-25 DOI: 10.1109/ISPASS.2007.363745
Ke Meng, Frank Huebbers, R. Joseph, Y. Ismail
{"title":"Modeling and Characterizing Power Variability in Multicore Architectures","authors":"Ke Meng, Frank Huebbers, R. Joseph, Y. Ismail","doi":"10.1109/ISPASS.2007.363745","DOIUrl":"https://doi.org/10.1109/ISPASS.2007.363745","url":null,"abstract":"Parameter variation due to manufacturing error is an unavoidable consequence of technology scaling in future generations. The impact of random variation in physical factors such as gate length and interconnect spacing have a profound impact on not only performance of chips, but also their power behavior. While circuit-level techniques such as adaptive body-biasing can help to mitigate mal-fabricated chips, they cannot completely alleviate severe within die variations forecasted for near future designs. Despite the large impact that power variability have on future designs, there is a lack of published work that examines architectural implications of this phenomenon. In this work, we develop architecture level models that model power variability due to manufacturing error and examine its influence on multicore designs. We introduce VariPower, a tool for modeling power variability based on an microarchitectural description and floorplan of a chip. In particular, our models are based on layout level SPICE simulations and project power variability for different microarchitectural blocks using statistical analysis. Using VariPower: (1) we characterize power variability for multicore processors, (2) explore application sensitivity to power variability, and (3) examine clustering techniques that can appropriately classify groups of processors and chips that have similar variability characteristics","PeriodicalId":439151,"journal":{"name":"2007 IEEE International Symposium on Performance Analysis of Systems & Software","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133808363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
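A rough Monte Carlo sketch of the modeling idea: give each microarchitectural block on each simulated chip a random power multiplier drawn around its nominal value, sum to get per-chip power, and group chips with similar power behaviour. The block list, nominal watts, the Gaussian sigma, and the simple binning rule are all invented for illustration; the paper derives its distributions from layout-level SPICE simulation and uses proper statistical clustering.

```python
# Monte Carlo model of per-block power variation and coarse chip classification.
import random
import statistics

NOMINAL_WATTS = {"core0": 22.0, "core1": 22.0, "l2": 9.0,
                 "noc": 4.0, "memctrl": 6.0}
SIGMA = 0.08          # assumed relative std-dev of per-block power variation

def sample_chip(rng):
    """One fabricated chip: per-block power with random variation applied."""
    return {blk: w * rng.gauss(1.0, SIGMA) for blk, w in NOMINAL_WATTS.items()}

def classify(total, mean, stdev):
    if total < mean - stdev:
        return "low-power"
    if total > mean + stdev:
        return "high-power"
    return "typical"

if __name__ == "__main__":
    rng = random.Random(42)
    chips = [sample_chip(rng) for _ in range(1000)]
    totals = [sum(c.values()) for c in chips]
    mean, stdev = statistics.mean(totals), statistics.stdev(totals)
    bins = {"low-power": 0, "typical": 0, "high-power": 0}
    for t in totals:
        bins[classify(t, mean, stdev)] += 1
    print(f"mean {mean:.1f} W, stdev {stdev:.2f} W")
    for name, count in bins.items():
        print(f"{name:10s} {count:4d} chips")
```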