2017 IEEE International Symposium on High Performance Computer Architecture (HPCA): Latest Publications

Boomerang: A Metadata-Free Architecture for Control Flow Delivery
Rakesh Kumar, Cheng-Chieh Huang, Boris Grot, V. Nagarajan
DOI: 10.1109/HPCA.2017.53 | Published: 2017-05-08
Abstract: Contemporary server workloads feature massive instruction footprints stemming from deep, layered software stacks. The active instruction working set of the entire stack can easily reach into megabytes, resulting in frequent front-end stalls due to instruction cache misses and pipeline flushes due to branch target buffer (BTB) misses. While a number of techniques have been proposed to address these problems, every one of them requires dedicated metadata structures, translating into significant storage and complexity costs. In this paper, we ask the question whether it is possible to achieve high-performance control flow delivery without the metadata costs of prior techniques. We revisit a previously proposed approach of branch-predictor-directed prefetching, which leverages just the branch predictor and BTB to discover and prefetch the missing instruction cache blocks by exploring the program control flow ahead of the core front-end. Contrary to conventional wisdom, we find that this approach can be effective in covering instruction cache misses in modern CMPs with long LLC access latencies and multi-MB server binaries. Our first contribution lies in explaining the reasons for the efficacy of branch-predictor-directed prefetching. Our second contribution is in Boomerang, a metadata-free architecture for control flow delivery. Boomerang leverages a branch-predictor-directed prefetcher to discover and prefill not only the instruction cache blocks, but also the missing BTB entries. Crucially, we demonstrate that the additional hardware cost required to identify and fill BTB misses is negligible. Our experimental evaluation shows that Boomerang matches the performance of the state-of-the-art control flow delivery scheme without the latter's high metadata and complexity overheads.
Citations: 42
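To make the mechanism in the Boomerang entry above concrete, here is a toy Python model of branch-predictor-directed prefetching. The data structures, the block size, and the detail of decoding a prefetched block to refill the BTB are simplified assumptions of mine, not the paper's hardware design.

```python
# Toy model of a runahead engine that walks the predicted control flow ahead of
# the fetch unit, prefetching missing I-cache blocks and (the Boomerang twist)
# prefilling BTB entries discovered from the prefetched code bytes.

BLOCK = 64  # assumed I-cache block size in bytes

def runahead(pc, btb, predict_taken, icache, decode_branch, depth=8):
    """Walk up to `depth` predicted basic blocks ahead of the core front end."""
    prefetched = []
    for _ in range(depth):
        block = pc & ~(BLOCK - 1)
        if block not in icache:
            prefetched.append(block)   # would cover a future demand miss
            icache.add(block)          # model the prefetch filling the cache
        if pc not in btb:
            # On a BTB miss, decode the (already prefetched) block to rediscover
            # the branch and prefill the BTB instead of stopping the walk.
            entry = decode_branch(pc)
            if entry is None:
                break                  # control flow unknown; end the walk
            btb[pc] = entry
        taken_target, fallthrough = btb[pc]
        pc = taken_target if predict_taken(pc) else fallthrough
    return prefetched

# Toy walk: the branch at 0x2000 is initially missing from the BTB.
btb = {0x1000: (0x2000, 0x1040)}
icache = {0x1000}
print(runahead(0x1000, btb,
               predict_taken=lambda pc: True,
               icache=icache,
               decode_branch=lambda pc: (0x1000, pc + 4) if pc == 0x2000 else None,
               depth=4))   # -> [8192], i.e. block 0x2000 prefetched once
```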
ATOM: Atomic Durability in Non-volatile Memory through Hardware Logging
A. Joshi, V. Nagarajan, Stratis Viglas, M. Cintra
DOI: 10.1109/HPCA.2017.50 | Published: 2017-05-08
Abstract: Non-volatile memory (NVM) is emerging as a fast byte-addressable alternative for storing persistent data. Ensuring atomic durability in NVM requires logging. Existing techniques have proposed software logging either by using streaming stores for an undo log, or by relying on the combination of clflush and mfence for a redo log. These techniques are suboptimal because they waste precious execution cycles to implement logging, which is fundamentally a data movement operation. We propose ATOM, a hardware log manager based on undo logging that performs the logging operation out of the critical path. We present the design principles behind ATOM and two techniques to optimize its performance. Our results show that ATOM achieves an improvement of 27% to 33% for micro-benchmarks and 60% for TPC-C over a baseline undo log design.
Citations: 104
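The ordering that ATOM accelerates can be illustrated with a small software model of undo logging: the old value must be durable in the log before the in-place update, and a crash before commit is resolved by rolling back in reverse order. This is illustrative only (ATOM does the logging in the memory controller, off the critical path), and the class and field names below are hypothetical.

```python
# Minimal software model of undo logging for atomic durability in NVM.

class UndoLogTx:
    def __init__(self, nvm):
        self.nvm = nvm          # dict standing in for persistent memory
        self.log = []           # persisted undo records (addr, old_value)

    def write(self, addr, value):
        self.log.append((addr, self.nvm.get(addr)))  # log the old value first...
        # ...a software scheme would flush/fence here (e.g. clwb + sfence)
        # before the store below; that ordering cost is what ATOM moves into
        # hardware, off the critical path.
        self.nvm[addr] = value

    def commit(self):
        self.log.clear()        # log truncation marks the transaction durable

    def recover(self):
        for addr, old in reversed(self.log):   # crash before commit: roll back
            if old is None:
                self.nvm.pop(addr, None)
            else:
                self.nvm[addr] = old
        self.log.clear()

# Usage: a crash before commit() leaves the original values intact.
nvm = {"balance_a": 100, "balance_b": 0}
tx = UndoLogTx(nvm)
tx.write("balance_a", 50)
tx.write("balance_b", 50)
tx.recover()                    # simulate a crash instead of tx.commit()
print(nvm)                      # {'balance_a': 100, 'balance_b': 0}
```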
Enabling Effective Module-Oblivious Power Gating for Embedded Processors
Hari Cherupalli, Henry Duwe, Weidong Ye, Rakesh Kumar, J. Sartori
DOI: 10.1109/HPCA.2017.48 | Published: 2017-05-05
Abstract: The increasingly-stringent power and energy requirements of emerging embedded applications have led to a strong recent interest in aggressive power gating techniques. Conventional techniques for aggressive power gating perform module-based power gating in processors, where power domains correspond to RTL modules. We observe that there can be significant power benefits from module-oblivious power gating, where power domains can include an arbitrary set of gates, possibly from multiple RTL modules. However, since it is not possible to infer the activity of module-oblivious power domains from software alone, conventional software-based power management techniques cannot be applied for module-oblivious power gating in processors. Also, since module-oblivious domains are not encapsulated with a well-defined port list and functionality like RTL modules, hardware-based management of module-oblivious domains is prohibitively expensive. In this paper, we present a technique for low-cost management of module-oblivious power domains in embedded processors. The technique involves symbolic simulation-based co-analysis of a processor's hardware design and a software binary to derive profitable and safe power gating decisions for a given set of module-oblivious domains when the software binary is run on the processor. Our technique is automated, does not require programmer intervention, and incurs low management overhead. We demonstrate that module-oblivious power gating based on our technique reduces leakage energy by 2x with respect to state-of-the-art aggressive module-based power gating for a common embedded processor.
Citations: 6
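The "profitable and safe" gating decision mentioned in the abstract above boils down to a break-even check per idle interval: gating pays off only if the leakage saved over the idle period exceeds the cost of switching the domain off and back on, and the domain must be awake again before it is next used. The sketch below is my own simplification with assumed energy numbers, not the paper's symbolic-simulation flow.

```python
# Break-even check for power gating a module-oblivious domain during the idle
# intervals that a hardware/software co-analysis would report.

def profitable_gating_intervals(idle_intervals, leakage_per_cycle,
                                switch_energy, wakeup_cycles):
    """Return the idle intervals (start, end) during which gating saves energy."""
    decisions = []
    for start, end in idle_intervals:
        usable = (end - start) - wakeup_cycles   # domain must be back on in time
        if usable > 0 and usable * leakage_per_cycle > switch_energy:
            decisions.append((start, end - wakeup_cycles))
    return decisions

# Hypothetical domain with one short and one long idle interval (cycles).
print(profitable_gating_intervals(
    idle_intervals=[(100, 140), (500, 5000)],
    leakage_per_cycle=0.2,      # pJ per cycle (assumed)
    switch_energy=150.0,        # pJ to gate and ungate the domain (assumed)
    wakeup_cycles=20))          # -> only the long interval is worth gating
```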
High-Bandwidth Low-Latency Approximate Interconnection Networks
Daichi Fujiki, K. Ishii, I. Fujiwara, Hiroki Matsutani, H. Amano, H. Casanova, M. Koibuchi
DOI: 10.1109/HPCA.2017.38 | Published: 2017-05-05
Abstract: Computational applications are subject to various kinds of numerical errors, ranging from deterministic round-off errors to soft errors caused by non-deterministic bit flips, which do not lead to application failure but corrupt application results. Non-deterministic bit flips are typically mitigated in hardware using various error correcting codes (ECC). But in practice, due to performance and cost concerns, these techniques do not guarantee error-free execution. On large-scale computing platforms, soft errors occur with non-negligible probability in RAM and on the CPU, and it has become clear that applications must tolerate them. For some applications, this tolerance is intrinsic, as result quality can remain acceptable even in the presence of soft errors (e.g., data analysis applications, multimedia applications). Tolerance can also be built into the application, resolving data corruptions in software during application execution. By contrast, today's optical networks hold on to a rigid error-free standard, which imposes limits on network performance scalability. In this work we propose high-bandwidth, low-latency approximate networks with the following three features: (1) optical links that exploit multi-level quadrature amplitude modulation (QAM) for achieving high bandwidth, (2) avoidance of forward error correction (FEC), which makes the optical link error-prone but affords lower latency, and (3) the use of symbol mapping coding between bit sequence and QAM to ensure data integrity that is sufficient for practical soft-error-tolerant applications. Discrete-event simulation results for application benchmarks show that approximate networks achieve speedups of up to 2.94 when compared to conventional networks.
Citations: 21
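One way to see why the symbol mapping matters on an FEC-free QAM link: with Gray mapping, the most likely channel error — sliding to an adjacent amplitude level — flips only a single bit of the transmitted pattern. The snippet below is my illustration of that general property, not necessarily the exact mapping used in the paper.

```python
# Gray-coded amplitude levels on one I/Q axis of a 16-QAM symbol (2 bits per
# axis): an adjacent-level error always corrupts exactly one bit.

def gray(n: int) -> int:
    return n ^ (n >> 1)

levels = {level: gray(level) for level in range(4)}   # level -> Gray bits

for level in range(3):
    a, b = levels[level], levels[level + 1]
    flipped = bin(a ^ b).count("1")
    print(f"level {level} -> {level + 1}: bits {a:02b} -> {b:02b}, "
          f"{flipped} bit flip(s)")   # always 1 with Gray mapping
```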
Partial Row Activation for Low-Power DRAM System
Yebin Lee, Hyeonggyu Kim, Seokin Hong, Soontae Kim
DOI: 10.1109/HPCA.2017.35 | Published: 2017-02-06
Abstract: Owing to the increasing demand for faster and larger DRAM systems, the DRAM system accounts for a large portion of the total power consumption of computing systems. As memory traffic and DRAM bandwidth grow, the row activation and I/O power consumptions are becoming major contributors to total DRAM power consumption. Thus, reducing row activation and I/O power consumption has significant potential for improving the power and energy efficiency of computing systems. To this end, we propose a partial row activation scheme for memory writes, in which DRAM is re-architected to mitigate the row overfetching problem of modern DRAMs and to reduce row activation power consumption. In addition, accompanying I/O power consumption in memory writes is also reduced by transferring only the part of cache line data that must be written to partially opened rows. In our proposed scheme, partial rows ranging from a one-eighth row to a full row can be activated to minimize row activation granularity for memory writes, and the full bandwidth of the conventional DRAM can be maintained for memory reads. Our partial row activation scheme is shown to reduce total DRAM power consumption by up to 32% and 23% on average, which outperforms previously proposed schemes in DRAM power saving with almost no performance loss.
Citations: 21
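A back-of-the-envelope model of the idea above: row-activation energy scales roughly with the fraction of the row opened, so a single-cache-line write needs only the minimum one-eighth fragment. The energy numbers and the rounding policy below are assumptions for illustration, not the paper's DRAM model.

```python
# Rough model: energy of opening the smallest allowed row fragment (1/8, 1/4,
# 1/2, or the full row) that covers a write of a given size.

FULL_ROW_ACT_ENERGY = 1.0   # normalized energy of a full 8 KB row activation

def activation_energy(write_bytes, row_bytes=8192, min_fraction=1 / 8):
    fraction = max(min_fraction, write_bytes / row_bytes)
    frag = min_fraction
    while frag < fraction:          # round up to the next power-of-two fragment
        frag *= 2
    return FULL_ROW_ACT_ENERGY * min(frag, 1.0)

print(activation_energy(64))     # one 64 B cache line write -> 1/8 row -> 0.125
print(activation_energy(4096))   # half a row's worth of writes        -> 0.5
```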
Design and Analysis of an APU for Exascale Computing
T. Vijayaraghavan, Yasuko Eckert, G. Loh, M. Schulte, Mike Ignatowski, Bradford M. Beckmann, W. Brantley, J. Greathouse, Wei Huang, Arun Karunanithi, Onur Kayiran, Mitesh R. Meswani, Indrani Paul, Matthew Poremba, Steven E. Raasch, S. Reinhardt, G. Sadowski, Vilas Sridharan
DOI: 10.1109/HPCA.2017.42 | Published: 2017-02-06
Abstract: The challenges to push computing to exaflop levels are difficult given desired targets for memory capacity, memory bandwidth, power efficiency, reliability, and cost. This paper presents a vision for an architecture that can be used to construct exascale systems. We describe a conceptual Exascale Node Architecture (ENA), which is the computational building block for an exascale supercomputer. The ENA consists of an Exascale Heterogeneous Processor (EHP) coupled with an advanced memory system. The EHP provides a high-performance accelerated processing unit (CPU+GPU), in-package high-bandwidth 3D memory, and aggressive use of die-stacking and chiplet technologies to meet the requirements for exascale computing in a balanced manner. We present initial experimental analysis to demonstrate the promise of our approach, and we discuss remaining open research challenges for the community.
Citations: 60
Designing Low-Power, Low-Latency Networks-on-Chip by Optimally Combining Electrical and Optical Links
Sebastian Werner, J. Navaridas, M. Luján
DOI: 10.1109/HPCA.2017.23 | Published: 2017-02-04
Abstract: Optical on-chip communication is considered a promising candidate to overcome latency and energy bottlenecks of electrical interconnects. Although recently proposed hybrid Networks-on-Chip (NoCs), which implement both electrical and optical links, improve power efficiency, they often fail to combine these two interconnect technologies efficiently and suffer from considerable laser power overheads caused by high-bandwidth optical links. We argue that these overheads can be avoided by inserting a higher quantity of low-bandwidth optical links in a topology, as this yields lower optical loss and in turn laser power. Moreover, when optimally combined with electrical links for short distances, this can be done without trading off latency. We present the effectiveness of this concept with Lego, our hybrid, mesh-based NoC that provides high power efficiency by utilizing electrical links for local traffic, and low-bandwidth optical links for long distances. Electrical links are placed systematically to outweigh the serialization delay introduced by the optical links, simplify router microarchitecture, and allow to save optical resources. Our routing algorithm always chooses the link that offers the lowest latency and energy. Compared to state-of-the-art proposals, Lego increases throughput-per-watt by at least 40%, and lowers latency by 35% on average for synthetic traffic. On SPLASH-2/PARSEC workloads, Lego improves power efficiency by at least 37% (up to 3.5x).
Citations: 36
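The per-hop choice described in the Lego abstract ("always chooses the link that offers the lowest latency and energy") can be sketched as a simple cost comparison: electrical hops win for short distances, while the optical link's fixed serialization delay is amortized over long distances. The cycle counts below are placeholders I picked for illustration, not Lego's calibrated parameters.

```python
# Per-hop link selection between an electrical path and a low-bandwidth
# optical path, using assumed latency components.

def pick_link(hops,
              elec_cycles_per_hop=2,     # assumed electrical router+link delay
              opt_serialization=6,       # assumed cycles to serialize onto optics
              opt_propagation=1):        # assumed optical flight time (distance-independent)
    electrical = hops * elec_cycles_per_hop
    optical = opt_serialization + opt_propagation
    return ("electrical", electrical) if electrical <= optical else ("optical", optical)

for hops in (1, 3, 5, 8):
    print(hops, pick_link(hops))   # electrical for short paths, optical beyond
```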
Design and Evaluation of AWGR-Based Photonic NoC Architectures for 2.5D Integrated High Performance Computing Systems
P. Grani, R. Proietti, V. Akella, S. Yoo
DOI: 10.1109/HPCA.2017.17 | Published: 2017-02-01
Abstract: In the future, performance improvement of the basic building block of supercomputers has to come through increased integration enabled by 3D (vertical) and 2.5D (horizontal) die-stacking. But to take advantage of this integration, we need an interconnection network between the memory and compute die that can not only provide an order of magnitude higher bandwidth but also consume an order of magnitude less power than today's state-of-the-art electronic interconnects. We show how Arrayed Waveguide Grating Router-based photonic interconnects implemented on the silicon interposer can be used to realize a 16 × 16 photonic Network-on-Chip (NoC) with a bisection bandwidth of 16 Tb/s. We propose a baseline network, which consumes 2.57 pJ/bit assuming 100% utilization. We show that the power is dominated by the electro-optical interface of the transmitter, which can be reduced by a more aggressive design that improves the energy per bit to 0.454 pJ/bit at 100% utilization. Compared to recently proposed interposer-based electrical NoCs, we show an average performance improvement of 25% on the PARSEC benchmark suite on a 64-core system using the Gem5 simulation framework.
Citations: 14
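A quick sanity check using only the figures quoted in the abstract above: multiplying energy per bit by the 16 Tb/s bisection bandwidth gives the network power at 100% utilization. This assumes the quoted pJ/bit applies uniformly across the full bisection load, which is my simplification.

```python
# Power at full bisection load = energy per bit x bisection bandwidth.

bisection_bw = 16e12            # 16 Tb/s, as stated in the abstract
for name, pj_per_bit in (("baseline", 2.57), ("aggressive", 0.454)):
    watts = bisection_bw * pj_per_bit * 1e-12
    print(f"{name}: {pj_per_bit} pJ/bit -> {watts:.1f} W at 100% utilization")
    # baseline   ~41.1 W, aggressive ~7.3 W
```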
Tiny Directory: Efficient Shared Memory in Many-Core Systems with Ultra-Low-Overhead Coherence Tracking
Sudhanshu Shukla, Mainak Chaudhuri
DOI: 10.1109/HPCA.2017.24 | Published: 2017-02-01
Abstract: The sparse directory has emerged as a critical component for supporting the shared memory abstraction in multi- and many-core chip-multiprocessors. Recent research efforts have explored ways to reduce the number of entries in the sparse directory. These include tracking coherence of private regions at a coarse grain, not tracking blocks that belong to pages identified as private by the operating system (OS), and not tracking a subset of blocks that are speculated to be private by the hardware. These techniques require support for multi-grain coherence, assistance of the OS, or broadcast-based recovery on sharing an untracked block that is wrongly speculated as private. In this paper, we design a robust minimally-sized sparse directory that can offer adequate performance while enjoying the simplicity, scalability, and OS-independence of traditional broadcast-free block-grain coherence. We begin our exploration with a naive design that does not have a sparse directory and the location/sharers of a block are tracked by borrowing a portion of the block's last-level cache (LLC) data way. Such a design, however, lengthens the critical path from two transactions to three transactions (two hops to three hops) for the blocks that experience frequent shared read accesses. We address this problem by architecting a tiny sparse directory that dynamically identifies and tracks a selected subset of the blocks that experience a large volume of shared accesses. We augment the tiny directory proposal with an option of selectively spilling into the LLC space for tracking the coherence of the critical shared blocks that the tiny directory fails to accommodate. Detailed simulation-based study on a 128-core system with a large set of multi-threaded applications spanning scientific, general-purpose, and commercial computing shows that our coherence tracking proposal operating with (1/32)x to (1/256)x sparse directories offers performance within a percentage of a traditional 2x sparse directory.
Citations: 7
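The allocation idea in the entry above can be sketched as a simple promotion policy: blocks that keep seeing shared accesses earn one of the few tiny-directory entries (the fast two-hop path), and everything else falls back to tracking state borrowed from the block's LLC data way (the slower three-hop path). The thresholds and structure below are my own guesses, guided only by the abstract.

```python
# Simplified promotion policy for a tiny sparse directory.

class TinyDirectory:
    def __init__(self, capacity=4, promote_threshold=3):
        self.capacity = capacity
        self.threshold = promote_threshold
        self.entries = {}      # block -> sharer set (fast path, 2 hops)
        self.shared_hits = {}  # block -> count of shared accesses observed

    def track(self, block, sharer):
        if block in self.entries:
            self.entries[block].add(sharer)
            return "tiny directory (2-hop)"
        self.shared_hits[block] = self.shared_hits.get(block, 0) + 1
        if (self.shared_hits[block] >= self.threshold
                and len(self.entries) < self.capacity):
            self.entries[block] = {sharer}        # promote a hot shared block
            return "promoted to tiny directory"
        return "tracked in LLC data way (3-hop)"  # fallback / spill path

d = TinyDirectory()
for sharer in range(5):
    print(d.track(block=0x40, sharer=sharer))
```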
SOUP-N-SALAD: Allocation-Oblivious Access Latency Reduction with Asymmetric DRAM Microarchitectures
Yuhwan Ro, Hyunyoon Cho, Eojin Lee, Daejin Jung, Y. Son, Jung Ho Ahn, Jae W. Lee
DOI: 10.1109/HPCA.2017.31 | Published: 2017-02-01
Abstract: Memory access latency has a significant impact on application performance. Unfortunately, the random access latency of DRAM has been scaling relatively slowly, and often directly affects the critical path of execution, especially for applications with insufficient locality or memory-level parallelism. The existing low-latency DRAM organizations either incur significant area overhead or burden the software stack with non-uniform access latency. This paper proposes two microarchitectural techniques to provide uniformly low access time over the entire DRAM chip. The first technique is SALAD, a new DRAM device architecture that provides symmetric access latency with asymmetric DRAM bank organizations. Because local regions have lower data transfer time due to their proximity to the I/O pads, SALAD applies high aspect-ratio (i.e., low-latency) mats only to remote regions to offset the difference in data transfer time, resulting in symmetrically low latency across regions. The second technique is SOUP (skewed organization of µ banks with pipelined accesses), which leverages asymmetry in column access latency within a region due to non-uniform distance to the column decoders. By starting I/O transfers as soon as data from near cells arrive, instead of waiting for the entire column data, SOUP further saves two memory clock cycles for column accesses for all regions. The resulting design, called SOUP-N-SALAD, improves IPC and EDP by 9.6% (11.2%) and 18.2% (21.8%) over the baseline DDR4 device, respectively, for memory-intensive SPEC CPU2006 workloads without any software modifications, while incurring only 3% (6%) area overhead.
Citations: 3
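A toy latency model of the SALAD idea from the entry above: remote regions pay more data-transfer time to reach the I/O pads, so only they receive the faster high-aspect-ratio mats, which equalizes total access latency across regions. All cycle counts below are assumed for illustration, not taken from the paper.

```python
# Total access latency = mat (array) access latency + data transfer to I/O pads.
# Giving fast mats only to the far regions evens out the sum.

MAT_LATENCY = {"normal": 10, "high_aspect_ratio": 7}   # assumed cycles

def region_latency(transfer_cycles, mat_kind):
    return MAT_LATENCY[mat_kind] + transfer_cycles

# Near region: short transfer, normal (dense) mats.
# Remote region: long transfer, fast high-aspect-ratio mats.
print(region_latency(transfer_cycles=2, mat_kind="normal"))             # 12
print(region_latency(transfer_cycles=5, mat_kind="high_aspect_ratio"))  # 12
```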