2018 IEEE International Symposium on High Performance Computer Architecture (HPCA): Latest Publications

Record-Replay Architecture as a General Security Framework
Y. Shalabi, Mengjia Yan, N. Honarmand, R. Lee, J. Torrellas
{"title":"Record-Replay Architecture as a General Security Framework","authors":"Y. Shalabi, Mengjia Yan, N. Honarmand, R. Lee, J. Torrellas","doi":"10.1109/HPCA.2018.00025","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00025","url":null,"abstract":"Hardware security features need to strike a careful balance between design intrusiveness and completeness of methods. In addition, they need to be flexible, as security threats continuously evolve. To help address these requirements, this paper proposes a novel framework where Record and Deterministic Replay (RnR) is used to complement hardware security features. We call the framework RnR-Safe. RnR-Safe reduces the cost of security hardware by allowing it to be less precise at detecting attacks, potentially reporting false positives. This is because it relies on on-the-fly replay that transparently verifies whether the alarm is a real attack or a false positive. RnR-Safe uses two replayers: an always-on, fast Checkpoint replayer that periodically creates checkpoints, and a detailed-analysis Alarm replayer that is triggered when there is a threat alarm. As an example application, we use RnR-Safe to thwart Return Oriented Programming (ROP) attacks, including on the Linux kernel. Our design augments the Return Address Stack (RAS) with relatively inexpensive hardware. We evaluate RnR-Safe using a variety of workloads on virtual machines running Linux. We find that RnR-Safe is very effective. Thanks to the judicious RAS hardware extensions and hypervisor changes, the checkpointing replayer has an execution speed comparable to the recorded execution. Also, the alarm replayer needs to handle very few false positives.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"15 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113960663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
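The division of labor the abstract describes lends itself to a small illustration. Below is a toy Python sketch, not the paper's hardware design: a cheap, deliberately imprecise detector (a bounded-depth return-address stack that can overflow) raises alarms, and a detailed replay of the recorded call/return trace classifies each alarm as a real attack or a false positive. All structures, depths, and addresses are assumptions for illustration.

```python
# Toy illustration of the RnR-Safe division of labor (assumed model, not the paper's design):
# an imprecise, cheap detector raises alarms; an on-demand detailed replay of the recorded
# trace decides whether each alarm is a real attack or a false positive.

class BoundedRAS:
    """Cheap detector: a return-address stack of limited depth, so deep call
    chains overflow and can produce false-positive mismatches."""
    def __init__(self, depth=4):
        self.depth, self.stack = depth, []

    def on_call(self, ret_addr):
        if len(self.stack) == self.depth:
            self.stack.pop(0)          # overflow: oldest entry lost -> imprecision
        self.stack.append(ret_addr)

    def on_return(self, target):
        expected = self.stack.pop() if self.stack else None
        return expected is None or expected != target   # True -> raise an alarm

def detailed_replay(trace):
    """Alarm replayer: re-walk the recorded trace with a full-depth stack."""
    stack = []
    for kind, addr in trace:
        if kind == "call":
            stack.append(addr)
        elif not stack or stack.pop() != addr:
            return "real attack"
    return "false positive"

# Recorded execution: deep recursion (benign) followed by a corrupted return target.
trace = [("call", 0x100 + i) for i in range(6)]
trace += [("ret", 0x100 + i) for i in reversed(range(6))]
trace += [("call", 0x200), ("ret", 0xBAD)]            # ROP-style mismatch

ras, log = BoundedRAS(), []
for kind, addr in trace:
    log.append((kind, addr))
    if kind == "call":
        ras.on_call(addr)
    elif ras.on_return(addr):
        print("alarm at", hex(addr), "->", detailed_replay(log))
```

The deep recursion overflows the bounded detector and produces alarms that the replay dismisses as false positives, while the corrupted return is confirmed as a real attack.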
LATTE-CC: Latency Tolerance Aware Adaptive Cache Compression Management for Energy Efficient GPUs
A. Arunkumar, Shin-Ying Lee, Vignesh Soundararajan, Carole-Jean Wu
{"title":"LATTE-CC: Latency Tolerance Aware Adaptive Cache Compression Management for Energy Efficient GPUs","authors":"A. Arunkumar, Shin-Ying Lee, Vignesh Soundararajan, Carole-Jean Wu","doi":"10.1109/HPCA.2018.00028","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00028","url":null,"abstract":"General-purpose GPU applications are significantly constrained by the efficiency of the memory subsystem and the availability of data cache capacity on GPUs. Cache compression, while is able to expand the effective cache capacity and improve cache efficiency, comes with the cost of increased hit latency. This has constrained the application of cache compression to mostly lower level caches, leaving it unexplored for L1 caches and for GPUs. Directly applying state-of-the-art high performance cache compression schemes on GPUs results in a wide performance variation from -52% to 48%. To maximize the performance and energy benefits of cache compression for GPUs, we propose a new compression management scheme, called LATTE-CC. LATTE-CC is designed to exploit the dynamically-varying latency tolerance feature of GPUs. LATTE-CC compresses cache lines based on its prediction of the degree of latency tolerance of GPU streaming multiprocessors and by choosing between three distinct compression modes: no compression, low-latency, and high-capacity. LATTE-CC improves the performance of cache sensitive GPGPU applications by as much as 48.4% and by an average of 19.2%, outperforming the static application of compression algorithms. LATTE-CC also reduces GPU energy consumption by an average of 10%, which is twice as much as that of the state-of-the-art compression scheme.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115673131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
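A minimal sketch of the decision step the abstract outlines: pick one of the three compression modes from a prediction of how latency-tolerant the SM currently is. The tolerance proxy (fraction of schedulable warps), the thresholds, and the per-mode latency/capacity numbers are illustrative assumptions, not LATTE-CC's actual parameters.

```python
# Sketch of LATTE-CC's decision step as described in the abstract: choose a compression
# mode per cache fill from a prediction of the SM's current latency tolerance.
# The tolerance metric, thresholds, and mode parameters below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CompressionMode:
    name: str
    extra_hit_latency: int   # cycles added to an L1 hit
    capacity_factor: float   # effective capacity relative to uncompressed

MODES = {
    "none":          CompressionMode("none", 0, 1.0),
    "low_latency":   CompressionMode("low_latency", 1, 1.5),
    "high_capacity": CompressionMode("high_capacity", 3, 2.0),
}

def predict_latency_tolerance(ready_warps, total_warps):
    """Proxy for latency tolerance: fraction of warps the SM can still schedule
    while other warps wait on memory (assumed metric)."""
    return ready_warps / max(total_warps, 1)

def choose_mode(tolerance, low=0.3, high=0.7):
    """More tolerance for latency -> more aggressive compression."""
    if tolerance >= high:
        return MODES["high_capacity"]
    if tolerance >= low:
        return MODES["low_latency"]
    return MODES["none"]

for ready in (2, 16, 30):
    t = predict_latency_tolerance(ready, total_warps=32)
    m = choose_mode(t)
    print(f"ready={ready:2d}/32 tolerance={t:.2f} -> {m.name} "
          f"(+{m.extra_hit_latency} cyc, x{m.capacity_factor} capacity)")
```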
KPart: A Hybrid Cache Partitioning-Sharing Technique for Commodity Multicores
Nosayba El-Sayed, Anurag Mukkara, Po-An Tsai, H. Kasture, Xiaosong Ma, Daniel Sánchez
{"title":"KPart: A Hybrid Cache Partitioning-Sharing Technique for Commodity Multicores","authors":"Nosayba El-Sayed, Anurag Mukkara, Po-An Tsai, H. Kasture, Xiaosong Ma, Daniel Sánchez","doi":"10.1109/HPCA.2018.00019","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00019","url":null,"abstract":"Cache partitioning is now available in commercial hardware. In theory, software can leverage cache partitioning to use the last-level cache better and improve performance. In practice, however, current systems implement way-partitioning, which offers a limited number of partitions and often hurts performance. These limitations squander the performance potential of smart cache management. We present KPart, a hybrid cache partitioning-sharing technique that sidesteps the limitations of way-partitioning and unlocks significant performance on current systems. KPart first groups applications into clusters, then partitions the cache among these clusters. To build clusters, KPart relies on a novel technique to estimate the performance loss an application suffers when sharing a partition. KPart automatically chooses the number of clusters, balancing the isolation benefits of way-partitioning with its potential performance impact. KPart uses detailed profiling information to make these decisions. This information can be gathered either offline, or online at low overhead using a novel profiling mechanism. We evaluate KPart in a real system and in simulation. KPart improves throughput by 24% on average (up to 79%) on an Intel Broadwell-D system, whereas prior per-application partitioning policies improve throughput by just 1.7% on average and hurt 30% of workloads. Simulation results show that KPart achieves most of the performance of more advanced partitioning techniques that are not yet available in hardware.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129464714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 93
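A rough sketch of the clustering idea: given profiled miss curves, estimate the penalty two applications would pay for sharing a partition instead of being partitioned optimally, and group the applications with the lowest penalty. The miss curves, the interference model, and the greedy pairing below are illustrative assumptions rather than KPart's actual estimator.

```python
# Minimal sketch of KPart's structure as summarized above: estimate how much each
# application loses when it shares ways with another, then let low-interference apps
# share a partition while keeping high-interference apps isolated.
# Miss curves (misses per kilo-instruction vs. allocated ways) are assumed data.

ASSOC = 12  # ways in the shared last-level cache (illustrative)

miss_curve = {
    "mcf":     [90, 70, 55, 45, 38, 33, 30, 28, 27, 26, 26, 26, 26],
    "lbm":     [40, 40, 40, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39],   # streaming
    "omnetpp": [60, 45, 35, 28, 24, 21, 19, 18, 17, 17, 16, 16, 16],
    "libq":    [35, 35, 35, 35, 34, 34, 34, 34, 34, 34, 34, 34, 34],   # streaming
}

def best_split(a, b, ways):
    """Misses under the best static partition of `ways` between isolated apps a and b."""
    return min(miss_curve[a][w] + miss_curve[b][ways - w] for w in range(ways + 1))

def sharing_penalty(a, b, ways=ASSOC):
    """Extra misses when a and b share `ways` versus partitioning them optimally.
    Sharing is modeled as capacity going to each app in proportion to its demand,
    a crude stand-in for KPart's profile-based estimator."""
    demand_a, demand_b = miss_curve[a][ways], miss_curve[b][ways]
    eff_a = round(ways * demand_a / (demand_a + demand_b))
    shared = miss_curve[a][eff_a] + miss_curve[b][ways - eff_a]
    return shared - best_split(a, b, ways)

apps = list(miss_curve)
pairs = sorted((sharing_penalty(a, b), a, b)
               for i, a in enumerate(apps) for b in apps[i + 1:])
best, worst = pairs[0], pairs[-1]
print(f"share a partition: {best[1]}+{best[2]} (penalty {best[0]} MPKI)")
print(f"keep isolated:     {worst[1]}+{worst[2]} (penalty {worst[0]} MPKI)")
```

In this toy data set the two streaming applications and the complementary cache-friendly pair can share ways at no cost, while pairing a streaming application with a cache-sensitive one carries the largest penalty, which is the kind of distinction the clustering step is meant to capture.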
RCoal: Mitigating GPU Timing Attack via Subwarp-Based Randomized Coalescing Techniques
Gurunath Kadam, Danfeng Zhang, Adwait Jog
{"title":"RCoal: Mitigating GPU Timing Attack via Subwarp-Based Randomized Coalescing Techniques","authors":"Gurunath Kadam, Danfeng Zhang, Adwait Jog","doi":"10.1109/HPCA.2018.00023","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00023","url":null,"abstract":"Graphics processing units (GPUs) are becoming default accelerators in many domains such as high-performance computing (HPC), deep learning, and virtual/augmented reality. Recently, GPUs have also shown significant speedups for a variety of security-sensitive applications such as encryptions. These speedups have largely benefited from the high memory bandwidth and compute throughput of GPUs. One of the key features to optimize the memory bandwidth consumption in GPUs is intra-warp memory access coalescing, which merges memory requests originating from different threads of a single warp into as few cache lines as possible. However, this coalescing feature is also shown to make the GPUs prone to the correlation timing attacks as it exposes the relationship between the execution time and the number of coalesced accesses. Consequently, an attacker is able to correctly reveal an AES private key via repeatedly gathering encrypted data and execution time on a GPU. In this work, we propose a series of defense mechanisms to alleviate such timing attacks by carefully trading off performance for improved security. Specifically, we propose to randomize the coalescing logic such that the attacker finds it hard to guess the correct number of coalesced accesses generated. To this end, we propose to randomize: a) the granularity (called as subwarp) at which warp threads are grouped together for coalescing, and b) the threads selected by each subwarp for coalescing. Such randomization techniques result in three mechanisms: fixed-sized subwarp (FSS), random-sized subwarp (RSS), and random-threaded subwarp (RTS). We find that the combination of these security mechanisms offers 24- to 961-times improvement in the security against the correlation timing attacks with 5 to 28% performance degradation.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131756860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
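The quantity being randomized is the number of coalesced memory transactions a warp generates, which is what the timing attacker correlates with. The toy model below shows how fixed-size (FSS-style) versus random-threaded (RTS-style) subwarp grouping changes that count; RSS is analogous with a randomized size. Warp size, line size, subwarp sizes, and the access pattern are assumptions.

```python
# Toy model of the coalescing count that RCoal randomizes: a warp's addresses are grouped
# into subwarps, each subwarp coalesces into distinct cache lines, and the total transaction
# count is what a timing attacker observes through execution time.

import random

WARP, LINE = 32, 128

def coalesced_transactions(subwarps):
    """Memory transactions when each subwarp coalesces its addresses independently."""
    return sum(len({addr // LINE for addr in group}) for group in subwarps)

def fixed_subwarps(addresses, size):
    """FSS-style grouping: consecutive threads form subwarps of a fixed size."""
    return [addresses[i:i + size] for i in range(0, len(addresses), size)]

def random_threaded_subwarps(addresses, size):
    """RTS-style grouping: a random permutation of threads fills the subwarps."""
    perm = random.sample(addresses, len(addresses))
    return [perm[i:i + size] for i in range(0, len(perm), size)]

random.seed(0)
# 32 threads touching 4 distinct cache lines, 8 threads per line (a stand-in for a
# key-dependent access pattern in a GPU encryption kernel).
addresses = [0x1000 + LINE * (t // 8) for t in range(WARP)]

print("whole-warp coalescing:", coalesced_transactions(fixed_subwarps(addresses, WARP)))
print("FSS, subwarp size 8: ", coalesced_transactions(fixed_subwarps(addresses, 8)))
print("RTS, subwarp size 8: ",
      [coalesced_transactions(random_threaded_subwarps(addresses, 8)) for _ in range(5)])
```

With deterministic grouping the transaction count is a stable function of the (secret-dependent) addresses; with random-threaded subwarps the count varies from run to run, which is what degrades the correlation the attack relies on.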
OuterSPACE: An Outer Product Based Sparse Matrix Multiplication Accelerator
S. Pal, Jonathan Beaumont, Dong-hyeon Park, Aporva Amarnath, Siying Feng, C. Chakrabarti, Hun-Seok Kim, D. Blaauw, T. Mudge, R. Dreslinski
{"title":"OuterSPACE: An Outer Product Based Sparse Matrix Multiplication Accelerator","authors":"S. Pal, Jonathan Beaumont, Dong-hyeon Park, Aporva Amarnath, Siying Feng, C. Chakrabarti, Hun-Seok Kim, D. Blaauw, T. Mudge, R. Dreslinski","doi":"10.1109/HPCA.2018.00067","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00067","url":null,"abstract":"Sparse matrices are widely used in graph and data analytics, machine learning, engineering and scientific applications. This paper describes and analyzes OuterSPACE, an accelerator targeted at applications that involve large sparse matrices. OuterSPACE is a highly-scalable, energy-efficient, reconfigurable design, consisting of massively parallel Single Program, Multiple Data (SPMD)-style processing units, distributed memories, high-speed crossbars and High Bandwidth Memory (HBM). We identify redundant memory accesses to non-zeros as a key bottleneck in traditional sparse matrix-matrix multiplication algorithms. To ameliorate this, we implement an outer product based matrix multiplication technique that eliminates redundant accesses by decoupling multiplication from accumulation. We demonstrate that traditional architectures, due to limitations in their memory hierarchies and ability to harness parallelism in the algorithm, are unable to take advantage of this reduction without incurring significant overheads. OuterSPACE is designed to specifically overcome these challenges. We simulate the key components of our architecture using gem5 on a diverse set of matrices from the University of Florida's SuiteSparse collection and the Stanford Network Analysis Project and show a mean speedup of 7.9× over Intel Math Kernel Library on a Xeon CPU, 13.0× against cuSPARSE and 14.0× against CUSP when run on an NVIDIA K40 GPU, while achieving an average throughput of 2.9 GFLOPS within a 24 W power budget in an area of 87 mm2.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126966140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 179
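A small software sketch of the outer-product SpGEMM formulation the accelerator is built around: the multiply phase streams over each nonzero of A and B exactly once to form partial outer products, and a separate accumulate phase merges them, which is what removes the redundant accesses to nonzeros. The dictionary-based sparse format here is an illustrative stand-in for the accelerator's data layout.

```python
# Software sketch of outer-product SpGEMM: one sparse outer product per
# (column of A, row of B) pair, followed by a separate accumulate phase.

from collections import defaultdict

def outer_product_spgemm(A_cols, B_rows):
    """A_cols[k] = {i: A[i,k]}, B_rows[k] = {j: B[k,j]}; returns C = A @ B as {(i, j): value}."""
    # Multiply phase: each k contributes the outer product of A's column k and B's row k,
    # touching every nonzero of A and B exactly once.
    partial_products = []
    for k in set(A_cols) & set(B_rows):
        partial_products.append([(i, j, a * b)
                                 for i, a in A_cols[k].items()
                                 for j, b in B_rows[k].items()])
    # Accumulate phase: merge the partial products into the result
    # (performed per output row by the accelerator's processing units).
    C = defaultdict(float)
    for product in partial_products:
        for i, j, v in product:
            C[(i, j)] += v
    return dict(C)

# A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]] in column-of-A / row-of-B form.
A_cols = {0: {0: 1.0}, 1: {1: 2.0}}
B_rows = {0: {1: 3.0}, 1: {0: 4.0}}
print(outer_product_spgemm(A_cols, B_rows))   # {(0, 1): 3.0, (1, 0): 8.0}
```

The decoupling is visible in the structure: the multiply loop never searches for matching nonzeros (the source of redundant accesses in inner-product formulations), and all merging work is deferred to the accumulate loop.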
WIR: Warp Instruction Reuse to Minimize Repeated Computations in GPUs
Keunsoo Kim, W. Ro
{"title":"WIR: Warp Instruction Reuse to Minimize Repeated Computations in GPUs","authors":"Keunsoo Kim, W. Ro","doi":"10.1109/HPCA.2018.00041","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00041","url":null,"abstract":"Warp instructions with an identical arithmetic operation on same input values produce the identical computation results. This paper proposes warp instruction reuse to allow such repeated warp instructions to reuse previous computation results instead of actually executing the instructions. Bypassing register reading, functional unit, and register writing operations improves energy efficiency. This reuse technique is especially beneficial for GPUs since a GPU warp register is usually as wide as thousands of bits. In addition, we propose warp register reuse which allows identical warp register values to share a single physical register through register renaming. The register reuse technique enables to see if different logical warp registers have an identical value by only looking at their physical warp register IDs. Based on this observation, warp register reuse helps to perform all necessary operations for warp instruction reuse with register IDs, which is substantially more efficient than directly manipulating register values. Performance evaluation shows that 20.5% SM energy and 10.7% GPU energy can be saved by allowing 18.7% of warp instructions to reuse prior results.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128792266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
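A toy memoization model of the mechanism: once register renaming guarantees that equal warp register values share one physical register, a small table keyed by (opcode, source physical register IDs) can detect a repeated warp instruction and return the earlier result without touching the wide register values. The table organization, opcodes, and allocation helper below are assumptions.

```python
# Toy model of warp instruction reuse: a table keyed by (opcode, source physical
# register IDs) returns a previous destination instead of re-executing the instruction.
# Sizes, opcodes, and statistics are illustrative, not the paper's microarchitecture.

class WarpInstructionReuse:
    def __init__(self):
        self.table = {}                      # (opcode, src phys regs) -> dest phys reg
        self.executed = self.reused = 0

    def issue(self, opcode, src_phys_regs, execute_fn):
        key = (opcode, tuple(src_phys_regs))
        if key in self.table:
            self.reused += 1
            return self.table[key]           # reuse prior result, skip the functional unit
        self.executed += 1
        dest = execute_fn()                  # actually execute the warp instruction
        self.table[key] = dest
        return dest

# Two warps running the same kernel often compute identical values (e.g., common
# address arithmetic), which renaming maps onto identical physical register IDs.
wir = WarpInstructionReuse()
next_phys = [100]
def alloc_and_execute():
    next_phys[0] += 1                        # stand-in for executing and allocating a dest reg
    return next_phys[0]

r1 = wir.issue("IMUL", (10, 11), alloc_and_execute)   # executed
r2 = wir.issue("IMUL", (10, 11), alloc_and_execute)   # reused: same opcode, same sources
r3 = wir.issue("IMUL", (10, 12), alloc_and_execute)   # executed: a source value differs
print(f"executed={wir.executed} reused={wir.reused} (r1 == r2: {r1 == r2})")
```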
Secure DIMM: Moving ORAM Primitives Closer to Memory
Ali Shafiee, R. Balasubramonian, Mohit Tiwari, Feifei Li
{"title":"Secure DIMM: Moving ORAM Primitives Closer to Memory","authors":"Ali Shafiee, R. Balasubramonian, Mohit Tiwari, Feifei Li","doi":"10.1109/HPCA.2018.00044","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00044","url":null,"abstract":"As more critical applications move to the cloud, there is a pressing need to provide privacy guarantees for data and computation. While cloud infrastructures are vulnerable to a variety of attacks, in this work, we focus on an attack model where an untrusted cloud operator has physical access to the server and can monitor the signals emerging from the processor socket. Even if data packets are encrypted, the sequence of addresses touched by the program serves as an information side channel. To eliminate this side channel, Oblivious RAM constructs have been investigated for decades, but continue to pose large overheads. In this work, we make the case that ORAM overheads can be significantly reduced by moving some ORAM functionality into the memory system. We first design a secure DIMM (or SDIMM) that uses commodity low-cost memory and an ASIC as a secure buffer chip. We then design two new ORAM protocols that leverage SDIMMs to reduce bandwidth, latency, and energy per ORAM access. In both protocols, each SDIMM is responsible for part of the ORAM tree. Each SDIMM performs a number of ORAM operations that are not visible to the main memory channel. By having many SDIMMs in the system, we are able to achieve highly parallel ORAM operations. The main memory channel uses its bandwidth primarily to service blocks requested by the CPU, and to perform a small subset of the many shuffle operations required by conventional ORAM. The new protocols guarantee the same obliviousness properties as Path ORAM. On a set of memory-intensive workloads, our two new ORAM protocols – Independent ORAM and Split ORAM – are able to improve performance by 1.9x and energy by 2.55x, compared to Freecursive ORAM.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123159473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
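For context, the sketch below is a minimal, deliberately toy Path ORAM access loop: every access reads and rewrites an entire root-to-leaf path of buckets, which is the traffic the SDIMM protocols keep inside the DIMM so that the memory channel mostly carries just the requested block. This is a simplification for illustration, not the paper's Independent or Split ORAM protocol, and the parameters are assumed.

```python
# Minimal Path ORAM sketch (toy parameters) illustrating the per-access path traffic
# that conventional ORAM puts on the memory channel and that SDIMMs can absorb locally.

import random
random.seed(0)

LEVELS, Z = 4, 2                      # tree height and bucket capacity (toy values)
LEAVES = 2 ** (LEVELS - 1)
tree = {(lvl, idx): [] for lvl in range(LEVELS) for idx in range(2 ** lvl)}
position = {}                         # block id -> assigned leaf
stash = {}                            # block id -> value

def path(leaf):
    """Nodes on the root-to-leaf path, as (level, index within level)."""
    return [(lvl, leaf >> (LEVELS - 1 - lvl)) for lvl in range(LEVELS)]

def access(block, value=None):
    leaf = position.get(block, random.randrange(LEAVES))
    position[block] = random.randrange(LEAVES)           # remap on every access
    for node in path(leaf):                               # read the whole path into the stash
        for b, v in tree[node]:
            stash[b] = v
        tree[node].clear()
    if value is not None:
        stash[block] = value
    result = stash.get(block)
    for node in reversed(path(leaf)):                     # write the path back, deepest first
        fits = [b for b in stash if node in path(position[b])]
        for b in fits[:Z]:
            tree[node].append((b, stash.pop(b)))
    return result

access("secret", value=42)
print("read back:", access("secret"))
print("buckets crossing the channel per access without SDIMMs:", 2 * LEVELS,
      "(path read + write); with SDIMM-resident subtrees (assumed), mostly just the block")
```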
A Case for Packageless Processors
Saptadeep Pal, Daniel Petrisko, A. Bajwa, Puneet Gupta, S. Iyer, Rakesh Kumar
{"title":"A Case for Packageless Processors","authors":"Saptadeep Pal, Daniel Petrisko, A. Bajwa, Puneet Gupta, S. Iyer, Rakesh Kumar","doi":"10.1109/HPCA.2018.00047","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00047","url":null,"abstract":"Demand for increasing performance is far outpacing the capability of traditional methods for performance scaling. Disruptive solutions are needed to advance beyond incremental improvements. Traditionally, processors reside inside packages to enable PCB-based integration. We argue that packages reduce the potential memory bandwidth of a processor by at least one order of magnitude, allowable thermal design power (TDP) by up to 70%, and area efficiency by a factor of 5 to 18. Further, silicon chips have scaled well while packages have not. We propose packageless processors - processors where packages have been removed and dies directly mounted on a silicon board using a novel integration technology, Silicon Interconnection Fabric (Si-IF). We show that Si-IF-based packageless processors outperform their packaged counterparts by up to 58% (16% average), 136%(103% average), and 295% (80% average) due to increased memory bandwidth, increased allowable TDP, and reduced area respectively. We also extend the concept of packageless processing to the entire processor and memory system, where the area footprint reduction was up to 76%.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126787180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Crash Consistency in Encrypted Non-volatile Main Memory Systems
Sihang Liu, Aasheesh Kolli, Jinglei Ren, S. Khan
{"title":"Crash Consistency in Encrypted Non-volatile Main Memory Systems","authors":"Sihang Liu, Aasheesh Kolli, Jinglei Ren, S. Khan","doi":"10.1109/HPCA.2018.00035","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00035","url":null,"abstract":"Non-Volatile Main Memory (NVMM) systems provide high performance by directly manipulating persistent data in-memory, but require crash consistency support to recover data in a consistent state in case of a power failure or system crash. In this work, we focus on the interplay between the crash consistency mechanisms and memory encryption. Memory encryption is necessary for these systems to protect data against the attackers with physical access to the persistent main memory. As decrypting data at every memory read access can significantly degrade the performance, prior works propose to use a memory encryption technique, counter-mode encryption, that reduces the decryption overhead by performing a memory read access in parallel with the decryption process using a counter associated with each cache line. Therefore, a pair of data and counter value is needed to correctly decrypt data after a system crash. We demonstrate that counter-mode encryption does not readily extend to crash consistent NVMM systems as the system will fail to recover data in a consistent state if the encrypted data and associated counter are not written back to memory atomically, a requirement we refer to as counter-atomicity. We show that na¨ıvely enforcing counter-atomicity for all NVMM writes can serialize memory accesses and results in a significant performance degradation. In order to improve the performance, we make an observation that not all writes to NVMM need to be counter-atomic. The crash consistency mechanisms rely on versioning to keep one consistent copy of data intact while manipulating another version directly in-memory. As the recovery process only relies on the unmodified consistent version, it is not necessary to strictly enforce counter-atomicity for the writes that do not affect data recovery. Based on this insight, we propose selective counter-atomicity that allows reordering of writes to data and associated counters when the writes to persistent memory do not alter the recoverable consistent state. We propose efficient software and hardware support to enforce selective counter-atomicity. Our evaluation demonstrates that in a 1/2/4/8- core system, selective counter-atomicity improves performance by 6/11/22/40% compared to a system that enforces counter-atomicity for all NVMM writes. The performance of our selective counter-atomicity design comes within 5% of an ideal NVMM system that provides crash consistency of encrypted data at no cost.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114209526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 73
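The counter-atomicity requirement is easy to see with a toy counter-mode model: the decryption pad is derived from (key, line address, counter), so if a crash persists the new ciphertext without the incremented counter, recovery regenerates the wrong pad and the line decrypts to garbage. In the sketch below, SHA-256 stands in for the AES-based pad generation, and the crash model and data structures are assumptions.

```python
# Toy demonstration of the counter-atomicity requirement described above: with counter-mode
# encryption, a persisted line is only recoverable if its ciphertext and its counter reached
# NVMM together. SHA-256 is an illustrative stand-in for the AES counter-mode pad.

import hashlib

KEY = b"memory-encryption-key"

def pad(addr, counter, length):
    """Derive a per-line keystream pad from the key, line address, and counter."""
    return hashlib.sha256(KEY + addr.to_bytes(8, "little")
                          + counter.to_bytes(8, "little")).digest()[:length]

def encrypt(addr, counter, plaintext):
    return bytes(p ^ k for p, k in zip(plaintext, pad(addr, counter, len(plaintext))))

decrypt = encrypt   # XOR with the same pad inverts the operation

addr, counter = 0x40, 7
nvm_data = encrypt(addr, counter, b"persistent data!")
nvm_counter = counter

# Counter-atomic update: the new ciphertext and the incremented counter persist together.
nvm_data, nvm_counter = encrypt(addr, counter + 1, b"updated contents"), counter + 1
print("atomic update ->", decrypt(addr, nvm_counter, nvm_data))

# Crash between the two writes: the data reached NVMM, the counter update did not.
nvm_data = encrypt(addr, nvm_counter + 1, b"updated contents")   # counter write lost
print("torn update   ->", decrypt(addr, nvm_counter, nvm_data))
```

Selective counter-atomicity, as the abstract explains, relaxes this pairing only for writes whose loss cannot affect the recoverable version, so the consistent copy always decrypts correctly.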
SIPT: Speculatively Indexed, Physically Tagged Caches
Tianhao Zheng, Haishan Zhu, M. Erez
{"title":"SIPT: Speculatively Indexed, Physically Tagged Caches","authors":"Tianhao Zheng, Haishan Zhu, M. Erez","doi":"10.1109/HPCA.2018.00020","DOIUrl":"https://doi.org/10.1109/HPCA.2018.00020","url":null,"abstract":"First-level (L1) data cache access latency is critical to performance because it services the vast majority of loads and stores. To keep L1 latency low while ensuring low-complexity and simple-to-verify operation, current processors most-typically utilize a virtually-indexed physically-tagged (VIPT) cache architecture. While VIPT caches decrease latency by proceeding with cache access and address translation concurrently, each cache way is constrained by the size of a virtual page. Thus, larger L1 caches are highly-associative, which degrades their access latency and energy. We propose speculatively-indexed physically-tagged (SIPT) caches to enable simultaneously larger, faster, and more efficient L1 caches. A SIPT cache speculates on the value of a few address bits beyond the page offset concurrently with address translation, maintaining the overall safe and reliable architecture of a VIPT cache while eliminating the VIPT design constraints. SIPT is a purely microarchitectural approach that can be used with any software and for all accesses. We evaluate SIPT with simulations of applications under standard Linux. SIPT improves performance by 8.1% on average and reduces total cache-hierarchy energy by 15.6%.","PeriodicalId":154694,"journal":{"name":"2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115222964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
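A sketch of the speculation itself: with a large L1, a few index bits lie above the page offset and therefore depend on translation; one simple predictor, assumed here, is to take those bits from the virtual address and verify them against the physical address once the TLB responds, replaying the access on a mismatch. Page size, cache geometry, and the toy translation functions are assumptions.

```python
# Sketch of the SIPT lookup flow described above: the cache index needs a few bits beyond
# the page offset, so those bits are predicted (here, taken from the virtual address) and
# verified against the physical address after translation; a mismatch forces a replay.

PAGE_BITS  = 12                 # 4 KiB pages
LINE_BITS  = 6                  # 64 B lines
INDEX_BITS = 8                  # 256 sets: the index extends 2 bits past the page offset
EXTRA_BITS = LINE_BITS + INDEX_BITS - PAGE_BITS

def index_bits(addr):
    return (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)

def sipt_access(vaddr, translate):
    speculative_index = index_bits(vaddr)            # available before translation finishes
    paddr = translate(vaddr)                         # TLB lookup proceeds in parallel
    if index_bits(paddr) == speculative_index:
        return hex(paddr), speculative_index, "speculative index was correct"
    return hex(paddr), index_bits(paddr), "mispredict: replay access with physical index"

print(f"{EXTRA_BITS} index bit(s) depend on address translation")

# Toy translations: virtual page 0x12345 mapped to two different physical pages, one whose
# low page-number bits differ from the virtual page (mispredict) and one whose bits match.
translate_misaligned = lambda va: (0x00777 << PAGE_BITS) | (va & ((1 << PAGE_BITS) - 1))
translate_aligned    = lambda va: (0x00775 << PAGE_BITS) | (va & ((1 << PAGE_BITS) - 1))
vaddr = (0x12345 << PAGE_BITS) | 0x2C0

print(sipt_access(vaddr, translate_misaligned))
print(sipt_access(vaddr, translate_aligned))
```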