2016 IEEE International Symposium on High Performance Computer Architecture (HPCA): Latest Publications

Efficient footprint caching for Tagless DRAM Caches
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446068
Hakbeom Jang, Yongjun Lee, JongWon Kim, Youngsok Kim, Jang-Hyun Kim, Jinkyu Jeong, Jae W. Lee
Abstract: Efficient cache tag management is a primary design objective for large, in-package DRAM caches. Recently, Tagless DRAM Caches (TDCs) have been proposed to completely eliminate tagging structures from both on-die SRAM and in-package DRAM, which are a major scalability bottleneck for future multi-gigabyte DRAM caches. However, TDC constrains the DRAM cache block size to be the same as the OS page size (e.g., 4KB), as it takes a unified approach to address translation and cache tag management. Caching at a page granularity, or page-based caching, incurs significant off-package DRAM bandwidth waste by over-fetching blocks within a page that are never actually used. Footprint caching is an effective solution to this problem: it fetches only those blocks that will likely be touched during the page's lifetime in the DRAM cache, referred to as the page's footprint. In this paper we demonstrate that TDC opens up unique opportunities to realize efficient footprint caching with higher prediction accuracy and lower hardware cost than the original footprint caching scheme. Since there are no cache tags in TDC, the footprints of cached pages are tracked at the TLB, instead of a cache tag array, incurring much lower on-die storage overhead than the original design. Moreover, when a cached page is evicted, its footprint is stored in the corresponding page table entry, instead of an auxiliary on-die structure (i.e., a Footprint History Table), preventing footprint thrashing among different pages and thus yielding higher footprint prediction accuracy. The resulting design, called Footprint-augmented Tagless DRAM Cache (F-TDC), significantly improves the bandwidth efficiency of TDC, and hence its performance and energy efficiency. Our evaluation with 3D Through-Silicon-Via-based in-package DRAM demonstrates an average off-package bandwidth reduction of 32.0%, which in turn improves IPC and EDP by 17.7% and 25.4%, respectively, over the state-of-the-art TDC with no footprint caching.
Citations: 37
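The footprint idea described above can be sketched as a per-page bitvector: record which blocks of a resident page are touched, save that bitvector on eviction, and on the next miss fetch only the previously touched blocks. This is a toy illustration; the class name, sizes, and dictionary-based storage are assumptions for clarity, not the paper's TLB/page-table implementation.

```python
PAGE_SIZE = 4096
BLOCK_SIZE = 64
BLOCKS_PER_PAGE = PAGE_SIZE // BLOCK_SIZE  # 64 blocks -> a 64-bit footprint

class FootprintPredictor:
    """Track, per cached page, which blocks were touched. On eviction the
    footprint is saved; on the next miss to the same page only the
    previously touched blocks are fetched instead of the whole page."""

    def __init__(self):
        self.live = {}      # page -> footprint of the current residency
        self.history = {}   # page -> footprint saved at last eviction

    def touch(self, page, offset):
        block = (offset % PAGE_SIZE) // BLOCK_SIZE
        self.live[page] = self.live.get(page, 0) | (1 << block)

    def evict(self, page):
        self.history[page] = self.live.pop(page, 0)

    def blocks_to_fetch(self, page):
        fp = self.history.get(page)
        if fp is None:                      # no history: fetch the full page
            return list(range(BLOCKS_PER_PAGE))
        return [b for b in range(BLOCKS_PER_PAGE) if fp & (1 << b)]

pred = FootprintPredictor()
pred.touch(0x1000, 0)      # offset 0   -> block 0
pred.touch(0x1000, 100)    # offset 100 -> block 1
pred.evict(0x1000)
print(pred.blocks_to_fetch(0x1000))  # [0, 1]
```

A page with no recorded history falls back to fetching all 64 blocks, which matches the conservative behavior a real design would need on a cold miss.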
A complete key recovery timing attack on a GPU
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446081
Z. Jiang, Yunsi Fei, D. Kaeli
Abstract: Graphics Processing Units (GPUs) have become mainstream parallel computing devices. They are deployed on diverse platforms, and an increasing number of applications have been moved to GPUs to exploit their massive parallel computational resources. GPUs are starting to be used for security services, where high-volume data is encrypted to ensure integrity and confidentiality. However, the security of GPUs has only begun to receive attention. Issues such as side-channel vulnerability have not been addressed. The goal of this paper is to evaluate the side-channel security of GPUs and demonstrate a complete AES (Advanced Encryption Standard) key recovery using known ciphertext through a timing channel. To the best of our knowledge, this is the first work that clearly demonstrates the vulnerability of a commercial GPU architecture to side-channel timing attacks. Specifically, for AES-128, we have been able to recover all key bytes utilizing a timing side channel in under 30 minutes.
Citations: 82
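The core of a timing attack like the one described is statistical: for each key-byte guess, partition the measured samples by what the guess predicts should be slow, and score the guess by the timing gap between the partitions. The sketch below is a deliberately simplified first-round, known-plaintext model of a cache-timing attack (the paper works with known ciphertext and the last round); the victim, constants, and leakage model are invented for illustration.

```python
import random

random.seed(1)
SECRET = 0x3C          # key byte the attacker wants to recover
LINE_SHIFT = 4         # 16-byte cache lines: index >> 4 selects a line

def encrypt_time(pt):
    # Toy victim: a table lookup at index pt ^ SECRET; one cache line is
    # "slow" (e.g. evicted), adding measurable latency, plus noise.
    idx = pt ^ SECRET
    return 100.0 + (5.0 if (idx >> LINE_SHIFT) == 0 else 0.0) + random.random()

# Collect (plaintext byte, time) samples.
samples = [(pt, encrypt_time(pt)) for pt in range(256) for _ in range(20)]

def score(guess):
    # Gap between mean times of samples the guess predicts slow vs. fast.
    slow = [t for pt, t in samples if ((pt ^ guess) >> LINE_SHIFT) == 0]
    fast = [t for pt, t in samples if ((pt ^ guess) >> LINE_SHIFT) != 0]
    return sum(slow) / len(slow) - sum(fast) / len(fast)

# Line-granularity leakage recovers the key byte up to the cache-line
# offset (its high nibble here) -- the classic limit of such attacks.
best = max(range(256), key=score)
print(hex(best))  # 0x30 == SECRET & 0xF0
```

The correct-guess partition separates the truly slow samples cleanly, so its score is large; wrong guesses mix slow and fast samples and score near zero.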
A large-scale study of soft-errors on GPUs in the field
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446091
Bin Nie, Devesh Tiwari, Saurabh Gupta, E. Smirni, James H. Rogers
Abstract: Parallelism provided by the GPU architecture has enabled domain scientists to simulate physical phenomena at a much faster rate and finer granularity than what was previously possible by CPU-based large-scale clusters. Architecture researchers have been investigating reliability characteristics of GPUs and innovating techniques to increase the reliability of these emerging computing devices. Such efforts are often guided by technology projections and simplistic scientific kernels, and performed using architectural simulators and modeling tools. Lack of large-scale field data impedes the effectiveness of such efforts. This study attempts to bridge this gap by presenting a large-scale field data analysis of GPU reliability. We characterize and quantify different kinds of soft-errors on the Titan supercomputer's GPU nodes. Our study uncovers several interesting and previously unknown insights about the characteristics and impact of soft-errors.
Citations: 71
Efficient GPU hardware transactional memory through early conflict resolution
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446071
Sui Chen, Lu Peng
Abstract: In recent years it has been proposed that Transactional Memory be added to Graphics Processing Units (GPUs). One proposed hardware design, Warp TM, can scale to thousands of concurrent transactions. As a programming method that can atomicize an arbitrary number of memory locations and greatly reduce the effort of programming parallel applications, transactional memory handles the complexity of inter-thread synchronization. However, when thousands of transactions run concurrently on a GPU, conflicts and resource contention arise, causing performance loss. In this paper, we identify and analyze the causes of conflicts and contention and propose two enhancements that resolve conflicts early: (1) Early-Abort global conflict resolution, which allows conflicts to be detected before they reach the Commit Units, reducing contention in the Commit Units, and (2) a Pause-and-Go execution scheme, which reduces both the chance of conflict and the performance penalty of re-executing long transactions. These two enhancements are enabled by a single hardware modification. Our evaluation shows the combination of the two enhancements greatly improves overall execution speed while reducing energy consumption.
Citations: 19
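The "early" in early conflict resolution can be illustrated with a toy global conflict table: a transaction probes the table on each access and aborts (or pauses) as soon as another transaction owns the address, instead of discovering the conflict later at the Commit Units. This is a simplified single-writer model invented for illustration, not the Warp TM hardware design.

```python
class EarlyConflictTable:
    """Toy model of Early-Abort detection: a global table maps addresses
    to the transaction currently writing them, so conflicts surface at
    access time rather than at commit time."""

    def __init__(self):
        self.writers = {}   # address -> owning transaction id

    def access(self, txid, addr, is_write):
        owner = self.writers.get(addr)
        if owner is not None and owner != txid:
            return "abort-early"        # conflict found before commit
        if is_write:
            self.writers[addr] = txid   # claim the address
        return "ok"

    def commit(self, txid):
        # Release every address this transaction claimed.
        self.writers = {a: t for a, t in self.writers.items() if t != txid}

tbl = EarlyConflictTable()
print(tbl.access(1, 0x10, True))    # ok: tx 1 claims 0x10
print(tbl.access(2, 0x10, False))   # abort-early: tx 2 conflicts with tx 1
```

In the real scheme the aborting transaction could instead pause and retry (Pause-and-Go) rather than fully re-execute; that policy choice is omitted here.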
Restore truncation for performance improvement in future DRAM systems
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446093
Xianwei Zhang, Youtao Zhang, B. Childers, Jun Yang
Abstract: Scaling DRAM below 20nm has become a major challenge due to intrinsic limitations in the structure of a bit cell. Future DRAM chips are likely to suffer from significant variations and degraded timings, such as taking much more time to restore cell data after read and write access. In this paper, we propose restore truncation (RT), a low-cost restore strategy to improve performance of DRAM modules that adopt relaxed restore timing. After an access, RT restores a bit cell's voltage only to the level required to persist data until the next scheduled refresh rather than to the default full voltage. Because restore time is shortened, the performance of the cell is improved under process variations. We devise two schemes to balance performance, energy consumption, and hardware overhead. We simulate our proposed RT schemes and compare them with the state of the art. Experimental results show that, on average, RT improves performance by 19.5% and reduces energy consumption by 17%.
Citations: 43
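The key arithmetic behind restore truncation is simple: a cell accessed late in the refresh window only needs enough charge to survive until the next refresh. A minimal sketch under an assumed linear worst-case leakage model (the voltage levels, refresh interval, and leakage rate below are illustrative numbers, not the paper's parameters):

```python
V_FULL = 1.0        # default full restore level
V_MIN = 0.4         # lowest level at which data is still read correctly
T_REFRESH = 64.0    # refresh interval in ms
LEAK = (V_FULL - V_MIN) / T_REFRESH   # assumed worst-case leakage per ms

def truncated_target(t_since_refresh):
    """Restore only to the level that keeps the cell above V_MIN until
    the next scheduled refresh, instead of always charging to V_FULL."""
    t_remaining = T_REFRESH - t_since_refresh
    return min(V_FULL, V_MIN + LEAK * t_remaining)

# A cell accessed 48 ms into the 64 ms window only has to hold its data
# for 16 more ms, so it can stop restoring at a lower voltage:
print(truncated_target(48.0))   # 0.55
print(truncated_target(0.0))    # 1.0 (just refreshed: full restore)
```

Because restore time grows with the target voltage, a lower target directly shortens the restore phase of the access, which is where the performance gain comes from.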
Pushing the limits of accelerator efficiency while retaining programmability
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446051
Tony Nowatzki, Vinay Gangadhar, K. Sankaralingam, G. Wright
Abstract: The waning benefits of device scaling have caused a push towards domain specific accelerators (DSAs), which sacrifice programmability for efficiency. While providing huge benefits, DSAs are prone to obsolescence due to domain volatility, have recurring design and verification costs, and have large area footprints when multiple DSAs are required in a single device. Because of the benefits of generality, this work explores how far a programmable architecture can be pushed, and whether it can come close to the performance, energy, and area efficiency of a DSA-based approach. Our insight is that DSAs employ common specialization principles for concurrency, computation, communication, data-reuse and coordination, and that these same principles can be exploited in a programmable architecture using a composition of known microarchitectural mechanisms. Specifically, we propose and study an architecture called LSSD, which is composed of many low-power and tiny cores, each having a configurable spatial architecture, scratchpads, and DMA. Our results show that a programmable, specialized architecture can indeed be competitive with a domain-specific approach. Compared to four prominent and diverse DSAs, LSSD can match the DSAs' 10× to 150× speedup over an OOO core, with only up to 4× more area and power than a single DSA, while retaining programmability.
Citations: 43
McVerSi: A test generation framework for fast memory consistency verification in simulation
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446099
M. Elver, V. Nagarajan
Abstract: The memory consistency model (MCM), which formally specifies the behaviour of the memory system, is used by programmers to reason about parallel programs. It is imperative that hardware adheres to the promised MCM. For this reason, hardware designs must be verified against the specified MCM. One common way to do this is via executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the MCM. It would be extremely beneficial to execute such tests under simulation, i.e. when the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments, which when applied under simulation would be too slow. We propose McVerSi, a test generation framework for fast MCM verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to MCM test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering MCM bugs. To guide tests towards exercising as much logic as possible, the simulator's reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the gem5 cycle-accurate simulator in full-system mode with Ruby. We discover 2 new bugs due to the faulty interaction of the pipeline and the cache coherence protocol. Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
Citations: 28
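The crossover idea — prefer memory operations that contribute to non-determinism — can be sketched over a toy genome in which a test is a list of per-thread operation sequences. Here "contributes to non-determinism" is approximated as "targets an address touched by more than one thread"; the representation and bias rule are simplifications invented for illustration, not McVerSi's actual crossover.

```python
import random

def racy_addresses(test):
    """Addresses touched by more than one thread are the racy ones: only
    they can produce non-deterministic, MCM-revealing interleavings."""
    seen = {}
    for tid, ops in enumerate(test):
        for _op, addr in ops:
            seen.setdefault(addr, set()).add(tid)
    return {a for a, tids in seen.items() if len(tids) > 1}

def crossover(parent_a, parent_b, rng):
    """Biased crossover: at each slot, take the op from the parent whose
    op targets a racy address; otherwise fall back to a coin flip."""
    racy = racy_addresses(parent_a) | racy_addresses(parent_b)
    child = []
    for ops_a, ops_b in zip(parent_a, parent_b):
        thread = []
        for oa, ob in zip(ops_a, ops_b):
            if oa[1] in racy and ob[1] not in racy:
                thread.append(oa)
            elif ob[1] in racy and oa[1] not in racy:
                thread.append(ob)
            else:
                thread.append(oa if rng.random() < 0.5 else ob)
        child.append(thread)
    return child

rng = random.Random(0)
pa = [[("ld", 1), ("st", 2)], [("st", 1), ("ld", 3)]]  # addr 1 is shared
pb = [[("ld", 4), ("st", 5)], [("ld", 6), ("st", 7)]]  # all private
child = crossover(pa, pb, rng)
print(child[0][0])  # ('ld', 1) -- the racy op wins its slot
```

In the real framework the fitness function (simulator-reported coverage) then decides which such children survive into the next generation.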
Modeling cache performance beyond LRU
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446067
Nathan Beckmann, Daniel Sánchez
Abstract: Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies. Our model achieves median error of less than 1% across several high-performance policies on both synthetic and SPEC CPU2006 benchmarks. Finally, we present a case study showing how to use the model to improve shared cache performance.
Citations: 51
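The model's key input, the absolute reuse distance, differs from the stack distance in that it counts all intervening accesses, not just distinct addresses, which is what frees it from LRU. A short sketch of computing both from a trace (the helper names are ours, chosen for illustration):

```python
def absolute_reuse_distances(trace):
    """Absolute reuse distance: number of accesses (to any address)
    between successive references to the same address."""
    last, dists = {}, []
    for i, addr in enumerate(trace):
        if addr in last:
            dists.append(i - last[addr])
        last[addr] = i
    return dists

def stack_distances(trace):
    """Stack distance: number of *distinct* addresses accessed between
    successive references to the same address -- inherently tied to LRU."""
    last, dists = {}, []
    for i, addr in enumerate(trace):
        if addr in last:
            dists.append(len(set(trace[last[addr] + 1:i])))
        last[addr] = i
    return dists

trace = ["A", "B", "C", "A", "B", "A"]
print(absolute_reuse_distances(trace))  # [3, 3, 2]
print(stack_distances(trace))           # [2, 2, 1]
```

An age-based policy can be modeled directly on the absolute-distance distribution, whereas stack distances only answer "would LRU hit at capacity k".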
Improving smartphone user experience by balancing performance and energy with probabilistic QoS guarantee
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446053
Benjamin Gaudette, Carole-Jean Wu, S. Vrudhula
Abstract: User satisfaction is pivotal to the success of a mobile application. A recent study has shown that 49% of users would abandon a web-based application if it failed to load within 10 seconds. At the same time, it is imperative to maximize energy efficiency to ensure maximum usage of the limited energy source available to smartphones while maintaining the necessary levels of user satisfaction. An important factor to consider, that has been previously neglected, is variability of execution times of an application, requiring them to be modeled as stochastic quantities. This changes the nature of the objective function and the constraints of the underlying optimization problem. In this paper, we present a new approach to optimal energy control of mobile applications running on modern smartphone devices, focusing on the need to ensure a specified level of user satisfaction. The proposed statistical models address both single and multi-stage applications and are used in the formulation of an optimization problem, the solution to which is a static, lightweight controller that optimizes energy efficiency of mobile applications, subject to constraints on the likelihood that the application execution time meets a given deadline. We demonstrate the proposed models and the corresponding optimization method on three common mobile applications running on a real Qualcomm Snapdragon 8074 mobile chipset. The results show that the proposed statistical estimates of application execution times are within 99.34% of the measured values. Additionally, on the actual Qualcomm Snapdragon 8074 mobile chipset, the proposed control scheme achieves a 29% power savings over commonly-used Linux governors while maintaining an average web page load time of 2 seconds with a likelihood of 90%.
Citations: 40
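The probabilistic QoS constraint can be made concrete with an empirical version of the selection problem: pick the lowest-power setting whose measured load-time samples meet the deadline with at least the target probability. The settings and sample values below are invented for illustration; the paper derives its controller from fitted statistical models, not raw empirical fractions.

```python
def pick_setting(samples_by_setting, deadline, p_target=0.9):
    """Choose the lowest-power setting whose samples meet the deadline
    with probability >= p_target. `samples_by_setting` is ordered from
    lowest to highest power."""
    for setting, samples in samples_by_setting:
        hit_rate = sum(1 for t in samples if t <= deadline) / len(samples)
        if hit_rate >= p_target:
            return setting
    return samples_by_setting[-1][0]   # fall back to the fastest setting

# Hypothetical page-load times (seconds) measured at each power setting:
samples = [
    ("low",  [2.8, 2.6, 1.9, 3.0, 2.4]),   # 1/5 meet a 2 s deadline
    ("mid",  [1.9, 1.8, 2.1, 1.7, 1.6]),   # 4/5 meet it: 80%, still short
    ("high", [1.2, 1.4, 1.3, 1.9, 1.1]),   # 5/5 meet it
]
print(pick_setting(samples, deadline=2.0))  # high
```

Relaxing the target to 80% would let the controller drop to the "mid" setting and save the corresponding power, which is exactly the performance/energy trade the probabilistic constraint exposes.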
DUANG: Fast and lightweight page migration in asymmetric memory systems
2016 IEEE International Symposium on High Performance Computer Architecture (HPCA) | Pub Date: 2016-03-12 | DOI: 10.1109/HPCA.2016.7446088
Hao Wang, Jie Zhang, S. Shridhar, Gieseo Park, Myoungsoo Jung, N. Kim
Abstract: Main memory systems have gone through dramatic increases in bandwidth and capacity. At the same time, their random access latency has remained relatively constant. For given memory technology, optimizing the latency typically requires sacrificing the density (i.e., cost per bit), which is one of the most critical concerns for memory industry. Recent studies have proposed memory architectures comprised of asymmetric (fast/low-density and slow/high-density) regions to optimize between overall latency and negative impact on density. Such memory architectures attempt to cost-effectively offer both high capacity and high performance. Yet they present a unique challenge, requiring direct placements of hot memory pages in the fast region and/or expensive runtime page migrations. In this paper, we propose a novel resistive memory architecture sharing a set of row buffers between a pair of neighboring banks. It enables two attractive techniques: (1) migrating memory pages between slow and fast banks with little performance overhead and (2) adaptively allocating more row buffers to busier banks based on memory access patterns. For an asymmetric memory architecture with both slow/high-density and fast/low-density banks, our shared row-buffer architecture can capture 87-93% of the performance of a memory architecture with only fast banks.
Citations: 10
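Why shared row buffers make migration cheap can be seen in a toy model: once a row from the slow bank sits in a buffer both banks can reach, "migration" is writing it back into the fast bank from that buffer, with no separate copy over the data bus. The class below is a behavioral sketch with invented names, not the proposed circuit design.

```python
class BankPair:
    """Toy model of a slow/high-density and fast/low-density bank pair
    that share a set of row buffers."""

    def __init__(self):
        self.slow, self.fast = {}, {}   # row -> data, per bank
        self.row_buffers = {}           # buffer id -> (bank, row, data)

    def activate(self, bank, row, buf=0):
        # Bring a row into a shared buffer (a normal DRAM-style activate).
        data = (self.slow if bank == "slow" else self.fast)[row]
        self.row_buffers[buf] = (bank, row, data)

    def migrate_to_fast(self, row, buf=0):
        # The row is already in a buffer the fast bank can reach, so
        # migration is just writing it back to the other bank.
        bank, brow, data = self.row_buffers[buf]
        assert bank == "slow" and brow == row, "row must be open from slow bank"
        del self.slow[row]
        self.fast[row] = data
        self.row_buffers[buf] = ("fast", row, data)

pair = BankPair()
pair.slow[7] = "page-data"
pair.activate("slow", 7)   # open the hot row from the slow bank
pair.migrate_to_fast(7)    # retarget the write-back to the fast bank
print(pair.fast[7])        # page-data
```

The second technique in the abstract — giving busier banks more of the shared buffers — would correspond here to a policy over which `buf` ids each bank may use, which this sketch leaves out.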