Latest publications from the 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA)

Improving cache performance using read-write partitioning
S. Khan, Alaa R. Alameldeen, C. Wilkerson, O. Mutlu, Daniel A. Jiménez
DOI: 10.1109/HPCA.2014.6835954 (published 2014-06-19)
Abstract: Cache read misses stall the processor if there are no independent instructions to execute. In contrast, most cache write misses are off the critical path of execution, since writes can be buffered in the cache or the store buffer. With few exceptions, cache lines that serve loads are more critical for performance than cache lines that serve only stores. Unfortunately, traditional cache management mechanisms do not take this disparity in read-write criticality into account. This paper proposes a Read-Write Partitioning (RWP) policy that minimizes read misses by dynamically partitioning the cache into clean and dirty partitions, where a partition grows if it is more likely to receive future read requests. We show that exploiting the differences in read-write criticality provides better performance than prior cache management mechanisms. For a single-core system, RWP provides a 5% average speedup across the entire SPEC CPU2006 suite, and a 14% average speedup for cache-sensitive benchmarks, over the baseline LRU replacement policy. We also show that RWP performs within 3% of a new but more complex instruction-address-based technique, the Read Reference Predictor (RRP), which bypasses cache lines that are unlikely to receive any read requests, while requiring only 5.4% of RRP's state overhead. On a 4-core system, RWP improves system throughput by 6% over the baseline and outperforms the three other state-of-the-art mechanisms we evaluate.
Citations: 68
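The partitioning idea is simple enough to sketch. Below is a minimal model of one cache set with clean/dirty victim selection, assuming a fixed dirty-partition target in place of the paper's dynamic sizing; `RWPSet` and its parameters are illustrative names, not from the paper.

```python
# Minimal sketch of read-write partitioned victim selection in one cache set.
# A fixed `target_dirty_ways` stands in for the paper's dynamic partitioning.
from collections import OrderedDict

class RWPSet:
    def __init__(self, num_ways=8, target_dirty_ways=2):
        self.num_ways = num_ways
        self.target_dirty_ways = target_dirty_ways   # dirty-partition size
        self.lines = OrderedDict()  # tag -> dirty flag; order = LRU -> MRU

    def access(self, tag, is_write):
        if tag in self.lines:
            self.lines[tag] = self.lines[tag] or is_write
            self.lines.move_to_end(tag)              # promote to MRU
            return True                              # hit
        if len(self.lines) == self.num_ways:
            self._evict()
        self.lines[tag] = is_write
        return False                                 # miss

    def _evict(self):
        dirty_count = sum(1 for d in self.lines.values() if d)
        # Evict LRU-first from whichever partition exceeds its target, so
        # lines likely to serve future reads stay in the clean partition.
        want_dirty = dirty_count > self.target_dirty_ways
        victim = next((t for t, d in self.lines.items() if d == want_dirty),
                      None)
        if victim is None:
            victim = next(iter(self.lines))          # fallback: plain LRU
        del self.lines[victim]
```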
Exploiting thermal energy storage to reduce data center capital and operating expenses
Wenli Zheng, Kai Ma, Xiaorui Wang
DOI: 10.1109/HPCA.2014.6835924 (published 2014-06-19)
Abstract: Power shaving has recently been proposed to dynamically shave the power peaks of a data center with energy storage devices (ESDs), so that more servers can be safely hosted. In addition to reducing capital investment (cap-ex), power shaving also helps cut a data center's electricity bills (op-ex) by reducing the high utility tariffs tied to peak power. However, existing work on power shaving focuses exclusively on electrical ESDs (e.g., UPS batteries) to shave the server-side power demand. In this paper, we propose TE-Shave, a generalized power-shaving framework that exploits both UPS batteries and a new knob: thermal energy storage (TES) tanks, which are already installed in many data centers. Specifically, TE-Shave uses stored cold water or ice to manipulate the cooling power, which accounts for 30-40% of a data center's total power cost. Our extensive evaluation with real-world workload traces shows that TE-Shave saves cap-ex and op-ex of up to $2,668/day and $825/day, respectively, for a data center with 17,920 servers. Even for future data centers projected to have more efficient cooling, and thus a smaller share of cooling power (e.g., a quarter of today's level), TE-Shave still yields 28% more savings than existing work that targets only server-side power. TE-Shave also coordinates with traditional TES solutions to further reduce op-ex.
Citations: 59
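A toy version of the shaving loop illustrates the two knobs. The fixed cooling fraction, buffer capacities, and all names below are assumptions for illustration, not values from the paper.

```python
# Toy peak-shaving loop: cooling is modeled as a fixed fraction of server
# power, and the TES tank / UPS battery are simple energy buffers.
def shave(demand_kw, cap_kw, tes_kwh=500.0, ups_kwh=200.0,
          cooling_frac=0.35, dt_h=0.25):
    grid = []
    for server_kw in demand_kw:
        total = server_kw * (1.0 + cooling_frac)
        excess = max(0.0, total - cap_kw)
        # Prefer the TES tank: stored cold water offsets cooling power only.
        tes_kw = min(excess, server_kw * cooling_frac, tes_kwh / dt_h)
        tes_kwh -= tes_kw * dt_h
        excess -= tes_kw
        # Fall back to the UPS battery for the remaining server-side peak.
        ups_kw = min(excess, ups_kwh / dt_h)
        ups_kwh -= ups_kw * dt_h
        grid.append(total - tes_kw - ups_kw)
    return grid   # grid draw per interval; exceeds cap only if buffers empty
```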
Up by their bootstraps: Online learning in Artificial Neural Networks for CMP uncore power management
Jae-Yeon Won, X. Chen, Paul V. Gratz, Jiang Hu, V. Soteriou
DOI: 10.1109/HPCA.2014.6835941 (published 2014-06-19)
Abstract: With increasing core counts in Chip Multi-Processor (CMP) designs, the size of the on-chip communication fabric and shared Last-Level Caches (LLC), which we term the uncore, is also growing, consuming as much as 30% of die area and a significant portion of the chip power budget. In this work, we focus on improving uncore energy efficiency using dynamic voltage and frequency scaling. Previous approaches are mostly restricted to reactive techniques, which may respond poorly to abrupt changes in workload and uncore utility. We find, however, that there are predictable patterns in uncore utility, which point toward the potential of a proactive approach to uncore power management. In this work, we utilize artificial intelligence principles to proactively leverage uncore utility pattern prediction via an Artificial Neural Network (ANN). ANNs, however, require training to produce accurate predictions, and architecting an efficient training mechanism without a priori knowledge of the workload is a major challenge. We propose a novel technique in which a simple Proportional-Integral (PI) controller is used as a secondary classifier during ANN training, dynamically pulling the ANN up by its bootstraps to achieve accurate predictions. Both the ANN and the PI controller then work in tandem once the ANN training phase is complete. The advantage of using a PI controller to initially train the ANN is a dramatic acceleration of the ANN's initial learning phase; in a real system, this allows quick power-control adaptation to rapid application phase changes and context switches during execution. We show that the proposed technique produces results comparable to those of pure offline training without needing prerecorded training sets. Full-system simulations using the PARSEC benchmark suite show that the bootstrapped ANN improves the energy-delay product of the uncore system by 27% versus existing state-of-the-art methodologies.
Citations: 49
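The bootstrapping loop can be sketched with a PI controller acting as the teacher for a small online-trained predictor. The single-layer model and all constants below are illustrative stand-ins for the paper's ANN, not its actual design.

```python
# Sketch of PI-bootstrapped online learning: the PI controller produces DVFS
# targets that serve as training labels for an online-trained predictor.
import numpy as np

class PIController:
    def __init__(self, kp=0.5, ki=0.1, setpoint=0.7):
        self.kp, self.ki, self.setpoint, self.acc = kp, ki, setpoint, 0.0

    def target(self, utilization):
        err = utilization - self.setpoint
        self.acc += err
        return self.kp * err + self.ki * self.acc   # frequency adjustment

class OnlinePredictor:
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def train(self, x, label):
        # One SGD step toward the PI "teacher" label.
        self.w += self.lr * (label - self.predict(x)) * x

pi, ann = PIController(), OnlinePredictor(n_features=4)
for _ in range(1000):               # training phase: PI drives, ANN learns
    x = np.random.rand(4)           # stand-in for uncore utility history
    ann.train(x, pi.target(x.mean()))
# Once trained, ann.predict() can drive DVFS proactively, with the PI
# controller retained as a fallback, as the paper's tandem scheme suggests.
```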
GPUdmm: A high-performance and memory-oblivious GPU architecture using dynamic memory management
Youngsok Kim, Jaewon Lee, Jae-Eon Jo, Jangwoo Kim
DOI: 10.1109/HPCA.2014.6835963 (published 2014-06-19)
Abstract: GPU programmers suffer from programmer-managed GPU memory because both performance and programmability depend heavily on GPU memory allocation and CPU-GPU data transfer mechanisms. To improve performance and programmability, programmers should be able to place only the data frequently accessed by the GPU in GPU memory, while overlapping CPU-GPU data transfers with GPU execution as much as possible. However, current GPU architectures and programming models blindly place all data in GPU memory, requiring a very large GPU memory; otherwise, they trigger unnecessary CPU-GPU data transfers when GPU memory is insufficient. In this paper, we propose GPUdmm, a novel GPU architecture that enables high-performance and memory-oblivious GPU programming. First, GPUdmm uses GPU memory as a cache of CPU memory to give programmers a view of a CPU-memory-sized programming space. Second, GPUdmm achieves high performance by exploiting data locality and dynamically transferring data between CPU and GPU memories while effectively overlapping CPU-GPU data transfers and GPU execution. Third, GPUdmm can further reduce unnecessary CPU-GPU data transfers by exploiting simple programmer hints. Our carefully designed and validated experiments (e.g., PCIe/DMA timing) against representative benchmarks show that GPUdmm can achieve up to five times higher performance for the same GPU memory size, or reduce the GPU memory size requirement by up to 75% while maintaining the same performance.
Citations: 24
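At page granularity, the cache-of-CPU-memory idea reduces to an LRU-managed resident set with write-back on eviction. The sketch below omits the transfer/execution overlap and the programmer hints; names and sizes are illustrative.

```python
# Page-granularity sketch of using GPU memory as a cache of CPU memory,
# with LRU eviction and write-back of dirty pages.
from collections import OrderedDict

class GpuPageCache:
    def __init__(self, gpu_pages=4):
        self.capacity = gpu_pages
        self.resident = OrderedDict()    # cpu_page -> dirty flag; LRU -> MRU

    def access(self, cpu_page, is_write):
        if cpu_page not in self.resident:
            if len(self.resident) == self.capacity:
                victim, dirty = self.resident.popitem(last=False)
                if dirty:
                    self.transfer_to_cpu(victim)    # write back dirty page
            self.transfer_to_gpu(cpu_page)          # fill on demand
            self.resident[cpu_page] = False
        self.resident.move_to_end(cpu_page)         # promote to MRU
        self.resident[cpu_page] = self.resident[cpu_page] or is_write

    def transfer_to_gpu(self, page):
        pass   # placeholder for an async CPU-to-GPU DMA copy

    def transfer_to_cpu(self, page):
        pass   # placeholder for the reverse copy
```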
QuickRelease: A throughput-oriented approach to release consistency on GPUs
Blake A. Hechtman, Shuai Che, Derek Hower, Yingying Tian, Bradford M. Beckmann, M. Hill, S. Reinhardt, D. Wood
DOI: 10.1109/HPCA.2014.6835930 (published 2014-06-19)
Abstract: Graphics processing units (GPUs) have specialized throughput-oriented memory systems optimized for streaming writes, with scratchpad memories to capture locality explicitly. Expanding the utility of GPUs beyond graphics encourages designs that simplify programming (e.g., using caches instead of scratchpads) and better support irregular applications with finer-grain synchronization. Our hypothesis is that, like CPUs, GPUs will benefit from caches and coherence, but that CPU-style "read for ownership" (RFO) coherence is inappropriate for maintaining support for regular streaming workloads. This paper proposes QuickRelease (QR), which improves on conventional GPU memory systems in two ways. First, QR uses a FIFO to enforce the partial order of writes so that synchronization operations can complete without frequent cache flushes; thus, non-synchronizing threads in QR can reuse cached data even while other threads are performing synchronization. Second, QR partitions the resources required by reads and writes to reduce the penalty of writes on read performance. Simulation results across a wide variety of general-purpose GPU workloads show that QR achieves a 7% average performance improvement over a conventional GPU memory system. Furthermore, for emerging workloads with finer-grain synchronization, QR achieves up to 42% performance improvement over a conventional GPU memory system, without the scalability challenges of RFO coherence. QR thus offers a throughput-oriented path to fine-grain synchronization on GPUs.
Citations: 63
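The write-FIFO mechanism can be sketched as stores draining in program order at a release. The structures below are illustrative: real QR operates on cache blocks in hardware, not on a Python dict.

```python
# Sketch of the write-FIFO idea: stores enter a FIFO in program order, and a
# release operation drains only the FIFO (making prior writes visible)
# instead of flushing the whole cache.
from collections import deque

class WriteFifo:
    def __init__(self):
        self.pending = deque()   # (address, value) in program order
        self.visible = {}        # memory as seen by other threads

    def store(self, addr, value):
        self.pending.append((addr, value))

    def release(self):
        # Drain in order: every write before the release becomes visible
        # before the release completes, preserving the partial order.
        while self.pending:
            addr, value = self.pending.popleft()
            self.visible[addr] = value

fifo = WriteFifo()
fifo.store(0x10, 1)
fifo.store(0x14, 2)
fifo.release()   # both stores now globally visible; no cache flush needed
```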
Over-clocked SSD: Safely running beyond flash memory chip I/O clock specs
Kai Zhao, K. S. Venkataraman, Xuebin Zhang, Jiangpeng Li, Ning Zheng, Tong Zhang
DOI: 10.1109/HPCA.2014.6835962 (published 2014-06-19)
Abstract: This paper presents a design strategy that enables aggressive over-clocking of the flash memory chip I/O link in solid-state drives (SSDs) without sacrificing storage reliability. The gradual wear-out and process variation of NAND flash memory leave the worst-case-oriented error correction code (ECC) in SSDs largely under-utilized most of the time. This work proposes to opportunistically leverage that under-utilized error-correction strength to tolerate an error-prone, over-clocked flash memory I/O link. The rationale and key design issues are presented and studied, and the approach's effectiveness is verified through hardware experiments and system simulations. Using sub-22nm NAND flash memory chips with an I/O spec of 166 MBps, we carried out extensive experiments and show that the proposed design strategy can enable SSDs to operate safely with the error-prone I/O link running at 275 MBps. Trace-driven SSD simulations over a variety of workload traces show that system read response time can be reduced by over 20%.
Citations: 9
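The safety argument reduces to a margin check: run the link faster only while the observed raw bit error rate stays within the ECC's correction budget. All constants and the example BER curve below are made up for illustration.

```python
# Sketch of a margin-based over-clocking decision. If the raw bit error rate
# at a faster I/O clock stays within what the ECC can correct (scaled by a
# guard band), the faster clock is considered safe.
def pick_io_clock(measured_ber_at, clocks_mbps=(166, 200, 233, 275),
                  ecc_correctable_ber=1e-3, guard=0.5):
    best = clocks_mbps[0]                 # the spec clock is always safe
    for clk in clocks_mbps[1:]:
        # Wear-out errors and link errors must jointly stay under budget.
        if measured_ber_at(clk) <= ecc_correctable_ber * guard:
            best = clk
        else:
            break                         # higher clocks only get worse
    return best

# Example with a made-up BER curve that rises steeply with clock speed:
print(pick_io_clock(lambda clk: 1e-5 * (clk / 166.0) ** 4))   # -> 275
```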
MP3: Minimizing performance penalty for power-gating of Clos network-on-chip
Lizhong Chen, Lihang Zhao, Ruisheng Wang, T. Pinkston
DOI: 10.1109/HPCA.2014.6835940 (published 2014-06-19)
Abstract: Power-gating is a promising technique to mitigate the increasing static power of on-chip routers. Clos networks are potentially good targets for power-gating because of their path diversity and the decoupling between processing elements and most of the routers. While power-gated Clos networks can perform better than power-gated direct networks such as meshes, a significant performance penalty remains when conventional power-gating techniques are used. In this paper, we propose an effective power-gating scheme called MP3 (Minimal Performance Penalty Power-gating), which achieves minimal (i.e., near-zero) performance penalty and saves more static energy than conventional power-gating applied to Clos networks. MP3 completely removes the wakeup latency from the critical path, reduces long-term and transient contention, and actively steers network traffic to create more power-gating opportunities. Full-system evaluation using PARSEC benchmarks shows that the proposed approach reduces the performance penalty to less than 1% (as opposed to 38% with conventional power-gating) while saving more than 47% of router static energy, with only 2.5% additional area overhead.
Citations: 58
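The traffic-steering idea can be sketched as a routing choice: send new flows through already-awake middle-stage routers when they have capacity, so idle routers can stay gated. The capacity model and consolidation heuristic below are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of traffic steering in a Clos middle stage to create power-gating
# opportunities: consolidate flows onto awake routers; wake one only if no
# awake router can absorb the new load.
def pick_middle_router(routers, load):
    """routers: list of dicts {'id', 'awake', 'util'}; load in [0, 1]."""
    candidates = [r for r in routers if r['awake'] and r['util'] + load <= 1.0]
    if candidates:
        target = max(candidates, key=lambda r: r['util'])   # consolidate
    else:
        gated = [r for r in routers if not r['awake']]
        target = gated[0] if gated else min(routers, key=lambda r: r['util'])
        target['awake'] = True   # the real scheme hides this wakeup latency
    target['util'] += load
    return target['id']

routers = [{'id': 0, 'awake': True, 'util': 0.6},
           {'id': 1, 'awake': False, 'util': 0.0}]
print(pick_middle_router(routers, 0.3))   # -> 0: router 1 stays gated
```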
Strategies for anticipating risk in heterogeneous system design
Marisabel Guevara, Benjamin Lubin, Benjamin C. Lee
DOI: 10.1109/HPCA.2014.6835926 (published 2014-06-19)
Abstract: Heterogeneous design presents an opportunity to improve energy efficiency but raises a challenge in resource management. Prior design methodologies aim for performance and efficiency, yet a deployed system may miss these targets due to run-time effects, which we denote as risk. We propose design strategies that explicitly aim to mitigate risk. We introduce new processor selection criteria, such as the coefficient of variation in performance, to produce heterogeneous configurations that balance performance risks against efficiency rewards. Of the tens of strategies we consider, risk-aware approaches account for more than 70% of the strategies that produce systems with the best service quality. Applying these risk-mitigating strategies to heterogeneous datacenter design can produce a system that violates response-time targets 50% less often.
Citations: 25
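The coefficient-of-variation criterion named in the abstract is easy to make concrete: rank candidate core types by how much their performance varies relative to its mean across workloads. The data below is invented for illustration.

```python
# Ranking core types by coefficient of variation (stdev / mean) of their
# per-workload performance: lower CoV means less performance risk.
import statistics

def coefficient_of_variation(perf_samples):
    return statistics.stdev(perf_samples) / statistics.mean(perf_samples)

def rank_by_risk(candidates):
    """candidates: dict of core name -> per-workload performance samples."""
    return sorted(candidates,
                  key=lambda c: coefficient_of_variation(candidates[c]))

cores = {"big": [10, 4, 12, 5], "little": [3.0, 2.8, 3.2, 2.9]}
print(rank_by_risk(cores))   # ['little', 'big']: steadier, lower-risk first
```

A risk-aware strategy would weigh this ranking against raw efficiency when composing the heterogeneous mix, rather than picking purely by mean performance per watt.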
Dynamic management of TurboMode in modern multi-core chips
David Lo, C. Kozyrakis
DOI: 10.1109/HPCA.2014.6835969 (published 2014-06-19)
Abstract: Dynamic overclocking of CPUs, or TurboMode, is a feature recently introduced on all x86 multi-core chips. It leverages thermal and power headroom from idle execution resources to overclock active cores and increase performance. TurboMode can accelerate CPU-bound applications at the cost of additional power consumption, but naive use of TurboMode can significantly increase power consumption without increasing performance. Thus far, there has been no strategy for managing TurboMode to optimize its use across all workloads and efficiency metrics. This paper analyzes the impact of TurboMode on a wide range of efficiency metrics (performance, power, cost, and combined metrics such as QPS/W and ED2) for representative server workloads on various hardware configurations. We determine that TurboMode is generally beneficial for performance (up to +24%), cost efficiency (QPS/$, up to +8%), energy-delay product (ED, up to +47%), and energy-delay-squared product (ED2, up to +68%). However, TurboMode is inefficient for workloads that exhibit interference on shared resources. We use this information to build and validate a model that predicts the optimal TurboMode setting for each efficiency metric. We then implement autoturbo, a background daemon that dynamically manages TurboMode in real time without any hardware changes. We demonstrate that autoturbo improves QPS/$, ED, and ED2 by 8%, 47%, and 68%, respectively, over not using TurboMode. At the same time, autoturbo virtually eliminates the large drops in those same metrics (-12%, -25%, -25% for QPS/$, ED, and ED2) that occur when TurboMode is used naively (always on).
Citations: 55
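The per-metric decision can be sketched as a predicate over predicted speedup and power ratio; the interference signal and thresholds below are illustrative stand-ins for the paper's trained model. For ED2, note that with speedup s and power ratio r, ED2 = P·D³ scales by r/s³, so turbo helps when s³ > r.

```python
# Sketch of a metric-driven TurboMode policy: enable turbo only when a model
# predicts the chosen efficiency metric improves.
def decide_turbo(metric, speedup_pred, power_ratio_pred, interference):
    """speedup_pred, power_ratio_pred > 1 mean faster / more power with turbo;
    interference in [0, 1] models shared-resource contention."""
    if interference > 0.5:           # contention: turbo tends to burn power
        return False                 # without delivering speedup
    if metric == "perf":
        return speedup_pred > 1.0
    if metric == "qps_per_watt":     # throughput per watt improves if s > r
        return speedup_pred > power_ratio_pred
    if metric == "ed2":              # ED2 = P * D^3 improves if s^3 > r
        return speedup_pred ** 3 > power_ratio_pred
    return False

print(decide_turbo("ed2", speedup_pred=1.15, power_ratio_pred=1.3,
                   interference=0.1))   # True: 1.15^3 ~ 1.52 > 1.3
```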
Sandbox Prefetching: Safe run-time evaluation of aggressive prefetchers
Seth H. Pugsley, Zeshan A. Chishti, C. Wilkerson, Peng-fei Chuang, Robert L. Scott, A. Jaleel, Shih-Lien Lu, K. Chow, R. Balasubramonian
DOI: 10.1109/HPCA.2014.6835971 (published 2014-06-19)
Abstract: Memory latency is a major factor limiting CPU performance, and prefetching is a well-known method for hiding it. Overly aggressive prefetching, however, can waste scarce resources such as memory bandwidth and cache capacity, limiting or even hurting performance. It is therefore important to employ prefetching mechanisms that use these resources prudently while still prefetching required data in a timely manner. In this work, we propose Sandbox Prefetching, a new mechanism that determines at run time the appropriate prefetching mechanism for the currently executing program. Sandbox Prefetching evaluates simple, aggressive offset prefetchers at run time by adding each prefetch address to a Bloom filter rather than actually fetching the data into the cache. Subsequent cache accesses are tested against the contents of the Bloom filter to see whether the aggressive prefetcher under evaluation could have accurately prefetched the data, while simultaneously testing for the existence of prefetchable streams. Real prefetches are performed when the accuracy of an evaluated prefetcher exceeds a threshold. This method combines global pattern confirmation with immediate prefetching action to achieve high performance. Sandbox Prefetching improves performance across the tested workloads by 47.6% compared to not prefetching at all, and by 18.7% compared to the Feedback Directed Prefetching technique. Performance also improves by 1.4% compared to the Access Map Pattern Matching prefetcher, while incurring considerably less logic and storage overhead.
Citations: 106
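The sandbox mechanism is concrete enough to sketch: a candidate offset prefetcher inserts its would-be prefetch addresses into a Bloom filter, and later demand accesses that hit the filter score the candidate. The hash choice, sizes, and threshold below are illustrative, not the paper's parameters.

```python
# Sandbox evaluation of an offset prefetcher: prefetches go into a Bloom
# filter instead of the cache; accuracy is scored against demand accesses.
import hashlib

class BloomFilter:
    def __init__(self, bits=4096, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits)

    def _positions(self, addr):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(h[:4], "little") % self.bits

    def add(self, addr):
        for p in self._positions(addr):
            self.array[p] = 1

    def contains(self, addr):
        return all(self.array[p] for p in self._positions(addr))

class SandboxedPrefetcher:
    def __init__(self, offset, threshold=0.6):
        self.offset, self.threshold = offset, threshold
        self.filter, self.hits, self.accesses = BloomFilter(), 0, 0

    def on_access(self, line_addr):
        self.accesses += 1
        if self.filter.contains(line_addr):   # a past "prefetch" was useful
            self.hits += 1
        self.filter.add(line_addr + self.offset)   # sandboxed, not fetched

    def promoted(self):
        # Issue real prefetches once measured accuracy exceeds the threshold.
        return (self.accesses > 256 and
                self.hits / self.accesses > self.threshold)
```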