Proceedings of the 2016 International Conference on Supercomputing: Latest Publications

Replichard: Towards Tradeoff between Consistency and Performance for Metadata
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926292
Zhiying Li, Ruini Xue, Lixiang Ao
Abstract: Metadata scalability is critical for distributed systems as storage scale grows rapidly. Because of the strict consistency requirement of metadata, many existing metadata services adopt a fundamentally unscalable design for the sake of easy management, while others provide improved scalability but lead to unacceptable latency and management complexity. Without scalable performance, metadata becomes the bottleneck of the entire system. Based on the observation that real file dependencies are few and idempotent operations usually outnumber non-idempotent ones, we propose a practical strategy, Replichard, that allows a tradeoff between metadata consistency and scalable performance. Replichard provides metadata services through a cluster of metadata servers with a flexible consistency scheme: strict consistency for non-idempotent operations via dynamic write-lock sharding, and relaxed consistency, with accuracy estimates of return values, for idempotent requests to achieve high throughput. Write-locks are dynamically created at subtree level and assigned to independent metadata servers in an application-oriented manner. A subtree metadata update on a particular server is replicated to all metadata servers following the application's "start-end" semantics, resulting in an eventually consistent namespace. An asynchronous notification mechanism lets users deal with potentially stale reads from relaxed-consistency operations. A prototype was implemented on HDFS, and experimental results show promising scalability and performance for both micro-benchmarks and various real-world applications written in Pig, Hive, and MapReduce.
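The routing split between idempotent and non-idempotent operations is the core of the design. Below is a minimal sketch of that dispatch logic, assuming a hypothetical MetadataRouter with write-locks at top-level-directory granularity; the class, operation names, and load metric are illustrative, not the paper's implementation.

```python
import hashlib

IDEMPOTENT = {"getattr", "listdir", "exists"}   # reads: relaxed consistency

class MetadataRouter:
    """Hypothetical Replichard-style dispatcher (illustrative only)."""

    def __init__(self, servers):
        self.servers = servers            # list of metadata server ids
        self.lock_owner = {}              # subtree -> write-lock owner
        self.load = {s: 0 for s in servers}

    def _subtree(self, path):
        # Simplified: lock at top-level-directory granularity.
        return "/" + path.strip("/").split("/")[0]

    def route(self, op, path):
        if op in IDEMPOTENT:
            # Relaxed consistency: any replica may answer; reads may be stale,
            # hence the paper's asynchronous stale-read notifications.
            h = int(hashlib.md5(path.encode()).hexdigest(), 16)
            return self.servers[h % len(self.servers)]
        # Non-idempotent: strict consistency via dynamic write-lock sharding.
        subtree = self._subtree(path)
        owner = self.lock_owner.setdefault(
            subtree, min(self.servers, key=self.load.get))
        self.load[owner] += 1
        return owner

r = MetadataRouter(["mds0", "mds1", "mds2"])
print(r.route("create", "/job1/out/part-0"))  # locks "/job1" on one server
print(r.route("listdir", "/job1"))            # any replica can serve this
```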
Citations: 8
GreenGear: Leveraging and Managing Server Heterogeneity for Improving Energy Efficiency in Green Data Centers
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926272
Xu Zhou, Haoran Cai, Q. Cao, Hong Jiang, Lei Tian, C. Xie
Abstract: In this paper, we propose GreenGear, the first heterogeneous strategy that incorporates wimpy servers into existing green data centers to dynamically deal with power mismatches. Our techniques exploit intelligent green-power scheduling policies to provide efficiency-aware power management. We evaluate the GreenGear design on a prototype installed in a test bed. Compared with a homogeneous server system, GreenGear significantly increases the effective use of renewable and battery power sources without supplemental grid power, extending their runtime by 57%, lengthening UPS lifetime by 2.04X, and improving renewable energy utilization by 51%.
Citations: 9
Variation Among Processors Under Turbo Boost in HPC Systems
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926289
Bilge Acun, P. Miller, L. Kalé
Abstract: The design and manufacture of present-day CPUs causes inherent variation in supercomputer architectures, such as variation in the power and temperature of the chips. The variation also manifests itself as frequency differences among processors under Turbo Boost dynamic overclocking. This variation can lead to unpredictable and suboptimal performance in tightly coupled HPC applications. In this study, we use compute-intensive kernels and applications to analyze the variation among processors in four top supercomputers: Edison, Cab, Stampede, and Blue Waters. We observe an execution time difference of up to 16% among processors on the Turbo Boost-enabled supercomputers (Edison, Cab, Stampede), and less than 1% variation on Blue Waters, which does not have a dynamic overclocking feature. We analyze measurements from temperature and power instrumentation and find that intrinsic differences in the chips' power efficiency are the culprit behind the frequency variation. Moreover, we analyze potential mitigations such as disabling Turbo Boost, leaving cores idle, and replacing slow chips. We also propose a speed-aware dynamic task redistribution (load balancing) algorithm to reduce the negative effects of performance variation. Our speed-aware load balancing algorithm improves performance by up to 18% compared to no load balancing, and by 6% over the non-speed-aware counterpart.
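The proportional idea behind speed-aware redistribution can be sketched in a few lines: give each processor work in proportion to its measured sustained frequency. This is an illustrative simplification assuming homogeneous tasks, not the paper's actual algorithm (which rebalances dynamically at runtime).

```python
def speed_aware_partition(num_tasks, freqs_ghz):
    """Assign task counts proportional to each processor's measured frequency."""
    total = sum(freqs_ghz)
    shares = [num_tasks * f / total for f in freqs_ghz]
    counts = [int(s) for s in shares]
    # Hand the rounding remainder to the largest fractional shares.
    leftovers = sorted(range(len(shares)),
                       key=lambda i: shares[i] - counts[i], reverse=True)
    for i in leftovers[:num_tasks - sum(counts)]:
        counts[i] += 1
    return counts

# A chip sustaining 2.9 GHz under Turbo Boost gets more work than one
# throttling to 2.5 GHz:
print(speed_aware_partition(1000, [2.9, 2.7, 2.5, 2.8]))  # [266, 248, 229, 257]
```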
Citations: 48
Runtime-Guided Mitigation of Manufacturing Variability in Power-Constrained Multi-Socket NUMA Nodes
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926279
Dimitrios Chasapis, Marc Casas, Miquel Moretó, M. Schulz, E. Ayguadé, Jesús Labarta, M. Valero
Abstract: Current large-scale systems show increasing power demands, to the point that power has become a huge strain on facilities and budgets. Researchers in academia, labs, and industry are focusing on this "power wall", striving to find a balance between performance and power consumption. Some commodity processors enable power capping, which opens up new opportunities for applications to directly manage their power behavior at user level. However, while power capping ensures a system never exceeds a given power limit, it also exposes a new form of heterogeneity: natural manufacturing variability, previously hidden by varying power to achieve homogeneous performance, now results in heterogeneous performance caused by the different CPU frequencies, potentially per core, needed to enforce the power limit. In this work we show how a parallel runtime system can effectively deal with this new kind of performance heterogeneity by compensating for the uneven effects of power capping. In the context of a NUMA node composed of several multi-core sockets, our system optimizes the energy and concurrency levels assigned to each socket to maximize performance. Applied transparently within the parallel runtime system, it requires no programmer interaction such as changing the application source code or manually reconfiguring the parallel system. We compare our novel runtime analysis with an offline approach and demonstrate that it achieves equal performance at a fraction of the cost.
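A minimal sketch of what such runtime-guided compensation could look like: greedily shift watts from the fastest socket to the slowest one under a fixed node budget, as long as the critical (minimum) throughput improves. set_power_cap() and measure_throughput() are hypothetical stand-ins for RAPL-style controls, and the real system also tunes per-socket concurrency.

```python
def rebalance(caps_w, set_power_cap, measure_throughput, step_w=5, rounds=20):
    """Greedily shift step_w watts from the fastest socket to the slowest one
    while doing so keeps improving the node's minimum (critical) throughput."""
    def apply_and_measure():
        for s, cap in enumerate(caps_w):
            set_power_cap(s, cap)          # hypothetical RAPL-style knob
        return [measure_throughput(s) for s in range(len(caps_w))]

    perf = apply_and_measure()
    for _ in range(rounds):
        slow, fast = perf.index(min(perf)), perf.index(max(perf))
        if slow == fast or caps_w[fast] <= step_w:
            break
        caps_w[slow] += step_w
        caps_w[fast] -= step_w
        new_perf = apply_and_measure()
        if min(new_perf) <= min(perf):     # stopped helping: undo and finish
            caps_w[slow] -= step_w
            caps_w[fast] += step_w
            break
        perf = new_perf
    return caps_w

# Toy model: socket 1 is 10% less power-efficient due to process variation,
# so it ends up with a larger slice of the fixed 120 W node budget.
caps = [60, 60]
eff = [1.00, 0.90]
print(rebalance(caps, lambda s, w: None, lambda s: eff[s] * caps[s]))  # [55, 65]
```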
Citations: 16
Noise Aware Scheduling in Data Centers
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926268
Hameedah Sultan, Arpit Katiyar, S. Sarangi
Abstract: As the demand for large-scale computing rapidly increases to serve billions of users across the world, more powerful and densely packed server configurations are being used. Often in developing countries, and in small and medium enterprises, it is hard to place such servers in sound-proof server rooms, so servers are typically placed in close proximity to employees. The noise from server cooling fans adversely affects employees' health and reduces their productivity. In this paper, we provide a framework for computer architects to measure the acoustic profile of a data center along with its temperature profile, and to estimate sound power levels at points of interest. Additionally, we studied the noise levels produced by algorithms that aim to homogenize the temperature profile, and found that they result in high noise levels, sometimes above permissible limits. We therefore propose two heuristics that redistribute workloads in a data center so that noise is reduced at chosen target locations. We obtain a noise reduction of 2-13 dB compared with uniform workload distribution, and up to 16 dB compared to temperature-aware workload placement, with a reduction of at least 5-6 dB in 75% of the cases. The performance overhead is limited to 1%.
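The estimation step rests on a standard acoustics fact: sound power levels from incoherent sources combine on a logarithmic scale, which is why redistributing load yields single-digit dB gains rather than linear ones. A worked example of the standard combination formula (not code from the paper):

```python
import math

def combined_level_db(levels_db):
    """Total level of incoherent sources: 10 * log10(sum of 10^(Li/10))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Two 60 dB servers together produce about 63 dB, not 120 dB:
print(round(combined_level_db([60, 60]), 1))   # 63.0
# Silencing one of ten identical 60 dB servers barely moves the total:
print(round(combined_level_db([60] * 10), 1))  # 70.0
print(round(combined_level_db([60] * 9), 1))   # 69.5
```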
Citations: 3
Peruse and Profit: Estimating the Accelerability of Loops
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926269
Snehasish Kumar, V. Srinivasan, A. Sharifian, Nick Sumner, Arrvindh Shriraman
Abstract: A multitude of execution models is available for developers to target today, varying from general-purpose processors to fixed-function hardware accelerators, with many variations in between. There is a growing demand to assess the potential benefits of porting or rewriting an application to a target architecture in order to fully exploit the performance and/or energy efficiency such targets offer. As a first step of this process, however, it is necessary to determine whether the application has characteristics suitable for acceleration. In this paper, we present Peruse, a tool to characterize the features of loops in an application and to help the programmer understand the amenability of loops to acceleration. We consider a diverse set of features ranging from loop characteristics (e.g., loop exit points) and operation mixes (e.g., control vs. data operations) to wider code-region characteristics (e.g., idempotency, vectorizability). Peruse is language-, architecture-, and input-independent and uses the compiler's intermediate representation to do the characterization. Using static analyses makes Peruse scalable and enables analysis of large applications to identify and extract interesting loops suitable for acceleration. We show analysis results for unmodified applications from the SPEC CPU benchmark suite, Polybench, and HPC workloads. For an end-user, it is more desirable to get an estimate of the potential speedup from acceleration. We use Peruse's workload-characterization results as features and develop a machine-learning-based model to predict the potential speedup of a loop when offloaded to a fixed-function hardware accelerator. We use the model to predict the speedup of loops selected by Peruse and achieve an accuracy of 79%.
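The second stage, predicting speedup from static loop features, can be sketched as ordinary supervised regression. The feature names and training data below are synthetic stand-ins; the paper extracts its features from compiler IR and trains on measured accelerator speedups.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical per-loop features: [log trip count, data-op fraction,
# control-op fraction, vectorizable flag, number of exit points]
X = rng.random((200, 5))
# Synthetic ground truth: accelerators reward vectorizable, data-parallel
# loops and penalize control-heavy ones (a made-up relation for this demo).
y = 1.0 + 6.0 * X[:, 1] * X[:, 3] - 2.0 * X[:, 2] + rng.normal(0, 0.2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])                 # train on 150 "profiled" loops
print(model.predict(X[150:153]).round(2))   # predicted speedups for new loops
```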
Citations: 6
Reusing Data Reorganization for Efficient SIMD Parallelization of Adaptive Irregular Applications
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926285
Peng Jiang, Linchuan Chen, G. Agrawal
Abstract: Applying SIMD parallelization to irregular applications with non-contiguous and data-dependent memory accesses is challenging. While an application with a static pattern of indirect accesses (across iterations) can be accelerated by data transformations, such techniques are no longer feasible when the indirect access patterns change over time. In this paper, we propose an indexing method that facilitates the reuse of data reorganization for efficient SIMD parallelization of dynamic irregular applications. This indexing approach is first applied to a class of vertex-centric graph algorithms in which the set of active vertices varies over the execution; here the indexing method helps maintain the set of active edges. Next, we focus on unstructured particle-interaction applications in which the edges change adaptively, and present an incremental indexing method. In our experimental evaluation, the speedups achieved by utilizing SIMD range from 3.04× to 7.19× on graph applications and from 2.54× to 4.43× for molecular dynamics.
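The core trick, maintaining a compact index of active edges so SIMD lanes gather contiguous work instead of re-scanning the whole edge list each iteration, can be illustrated with NumPy-style gathers. This is a toy illustration; the paper's incremental index handles adaptively changing edges.

```python
import numpy as np

src = np.array([0, 0, 1, 2, 2, 3])            # edge sources (flat edge list)
dst = np.array([1, 2, 3, 0, 3, 1])            # edge destinations
dist = np.array([5.0, 9.0, 7.0, 3.0])         # per-vertex values
active = np.array([True, False, True, False]) # which vertices are active now

# Rebuild (or, in the paper, incrementally update) the compact index only
# when the active set changes; iterations in between reuse it.
active_edges = np.flatnonzero(active[src])

# Vectorized relaxation over active edges only: one gather, one scatter.
contrib = dist[src[active_edges]]                  # gather from active sources
np.minimum.at(dist, dst[active_edges], contrib)    # scatter-combine into dests
print(active_edges, dist)                          # [0 1 3 4] [5. 5. 5. 3.]
```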
Citations: 18
DSMR: A Parallel Algorithm for Single-Source Shortest Path Problem
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926287
Saeed Maleki, Donald Nguyen, Andrew Lenharth, M. Garzarán, D. Padua, K. Pingali
Abstract: The Single Source Shortest Path (SSSP) problem consists of finding the shortest paths from a vertex (the source vertex) to all other vertices in a graph. SSSP has numerous applications. For some algorithms and applications, it is useful to solve the SSSP problem in parallel; this is the case for Betweenness Centrality, which solves the SSSP problem for multiple source vertices in large graphs. In this paper, we introduce the Dijkstra Strip Mined Relaxation (DSMR) algorithm, an efficient parallel SSSP algorithm for shared- and distributed-memory systems. We also introduce a set of preprocessing optimizations that significantly reduce communication overhead without dramatically increasing the total amount of work. Our results show that DSMR is faster than the best previous algorithm, parallel Δ-Stepping, by up to 7.38×.
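For context, here is a minimal sequential sketch of Δ-stepping, the baseline DSMR is compared against: vertices are grouped into width-Δ buckets of tentative distance so that each bucket's relaxations can proceed together in a parallel setting. It is simplified (no light/heavy edge split) and sequential for clarity; it is not DSMR itself.

```python
from collections import defaultdict
import math

def delta_stepping(graph, source, delta):
    """graph: {u: [(v, w), ...]} with w >= 0; returns shortest distances."""
    dist = defaultdict(lambda: math.inf)
    dist[source] = 0.0
    buckets = defaultdict(set)    # bucket index -> vertices awaiting relaxation
    buckets[0].add(source)
    i = 0
    while buckets:
        while i not in buckets:   # advance to the next non-empty bucket
            i += 1
        for u in buckets.pop(i):  # these relaxations are the parallel unit
            for v, w in graph.get(u, []):
                if dist[u] + w < dist[v]:
                    if dist[v] < math.inf:
                        buckets[int(dist[v] // delta)].discard(v)
                    dist[v] = dist[u] + w
                    buckets[int(dist[v] // delta)].add(v)
    return dict(dist)

g = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0)], 2: []}
print(delta_stepping(g, 0, 2.0))  # {0: 0.0, 1: 2.0, 2: 3.0}
```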
Citations: 29
GCaR: Garbage Collection aware Cache Management with Improved Performance for Flash-based SSDs
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926263
Suzhen Wu, Yanping Lin, Bo Mao, Hong Jiang
Abstract: Garbage Collection (GC) is an important performance concern for flash-based SSDs because it tends to disrupt an SSD's normal operation. This problem continues to plague flash-based storage systems, particularly in high-performance computing and enterprise environments. An important root cause, as revealed by previous studies, is the serious contention for flash resources and the severe mutually adversarial interference between user I/O requests and GC-induced I/O requests. The on-board buffer cache within an SSD plays an essential role in smoothing the gap between upper-level applications and lower-level flash chips, alleviating this problem to some extent. Nevertheless, existing cache-replacement algorithms are well optimized to reduce the buffer cache's miss rate by minimizing I/O traffic to the flash chips, but they do not consider the GC operations within the flash chips. Consequently, they fail to address the root cause of the problem and are far from sufficient in reducing the expensive I/O traffic to flash chips that are in the GC state. To address this performance issue in flash-based storage systems, particularly in HPC and enterprise environments, we propose a Garbage Collection aware Replacement policy, called GCaR, to improve the performance of flash-based SSDs. The basic idea is to give higher priority to caching the data blocks belonging to flash chips that are in the GC state, which substantially lessens the contention between user I/O operations and GC-induced I/O operations. To verify GCaR's effectiveness, we integrated it into the SSD-extended DiskSim simulator. Simulation results show that GCaR can significantly improve storage performance, reducing average response time by up to 40.7%.
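The GCaR idea layered on plain LRU can be sketched directly: when evicting, prefer the least-recently-used block whose flash chip is not currently garbage collecting. The chip_of and chip_in_gc hooks below are hypothetical oracles that SSD firmware would provide; this is a sketch of the policy, not the paper's implementation.

```python
from collections import OrderedDict

class GCaRCache:
    def __init__(self, capacity, chip_of, chip_in_gc):
        self.capacity = capacity
        self.chip_of = chip_of        # block -> flash chip id
        self.chip_in_gc = chip_in_gc  # chip id -> bool (is it in GC now?)
        self.lru = OrderedDict()      # block -> data, oldest first

    def access(self, block, data=None):
        if block in self.lru:
            self.lru.move_to_end(block)
            return self.lru[block]
        if len(self.lru) >= self.capacity:
            self._evict()
        self.lru[block] = data
        return data

    def _evict(self):
        # First pass: oldest block on a chip that is *not* garbage collecting.
        for block in self.lru:
            if not self.chip_in_gc(self.chip_of(block)):
                del self.lru[block]
                return
        # Every cached block sits on a GC-busy chip: fall back to plain LRU.
        self.lru.popitem(last=False)

cache = GCaRCache(2, chip_of=lambda b: b % 4, chip_in_gc=lambda c: c == 1)
cache.access(1, "a")    # chip 1, which is currently garbage collecting
cache.access(0, "b")    # chip 0
cache.access(2, "c")    # plain LRU would evict block 1; GCaR evicts block 0
print(list(cache.lru))  # [1, 2]
```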
Citations: 49
Exploiting Dynamic Reuse Probability to Manage Shared Last-level Caches in CPU-GPU Heterogeneous Processors
Pub Date: 2016-06-01 | DOI: 10.1145/2925426.2926266
S. Rai, Mainak Chaudhuri
Abstract: Recent commercial chip-multiprocessors (CMPs) integrate CPU as well as GPU cores on the same die. In today's designs, these cores typically share parts of the memory-system resources. However, since CPU and GPU cores have vastly different resource requirements, challenging resource-partitioning problems arise in such heterogeneous CMPs. In one class of designs, the CPU and GPU cores share the large on-die last-level SRAM cache. In this paper, we explore mechanisms to dynamically allocate the shared last-level cache space to CPU and GPU applications in such designs. A CPU core executes an instruction progressively in a pipeline, generating memory accesses (for instruction and data) in only a few pipeline stages. A GPU, on the other hand, can access different data streams with different semantic meanings and disparate access patterns throughout the rendering pipeline. Such data streams include input vertex, pixel depth, pixel color, texture map, shader instructions, and shader data (including shader register spills and fills). Without carefully designed last-level cache management policies, the CPU and GPU data streams can interfere with each other, leading to significant loss in CPU and GPU performance accompanied by degradation in GPU-rendered 3D animation quality. Our proposal dynamically estimates the reuse probabilities of the GPU streams as well as the CPU data by sampling portions of the CPU and GPU working sets and storing the sampled tags in a small working-set sample cache. Since GPU application working sets are typically very large, this sample cache is custom-designed to have large coverage while requiring only a few tens of kilobytes of storage. We use the estimated reuse probabilities to design shared last-level cache policies for handling hits and misses to reads and writes from both types of cores. Studies on a detailed heterogeneous CMP simulator show that, compared to a state-of-the-art baseline with a 16 MB shared last-level cache, our proposal improves the performance (frame rate or execution cycles, as applicable) of eighteen GPU workloads spanning DirectX and OpenGL game titles as well as CUDA applications by 12% on average and up to 51%, while improving the performance of the co-running quad-core CPU workload mixes by 7% on average and up to 19%.
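The sampling idea can be sketched as follows: track tags for only one in N cache sets, and take each stream's hit rate within that sample as an estimate of its reuse probability. Stream naming, the sampling ratio, and the address-slicing parameters are illustrative assumptions, not the paper's hardware design.

```python
class ReuseSampler:
    def __init__(self, sample_ratio=64):
        self.sample_ratio = sample_ratio
        self.sampled_tags = {}   # stream -> set of tags seen in sampled sets
        self.hits = {}
        self.accesses = {}

    def observe(self, stream, addr, set_bits=6, tag_shift=12):
        if (addr >> set_bits) % self.sample_ratio:   # track 1-in-N sets only
            return
        tag = addr >> tag_shift
        tags = self.sampled_tags.setdefault(stream, set())
        self.accesses[stream] = self.accesses.get(stream, 0) + 1
        if tag in tags:                              # sampled tag seen again
            self.hits[stream] = self.hits.get(stream, 0) + 1
        tags.add(tag)

    def reuse_probability(self, stream):
        n = self.accesses.get(stream, 0)
        return self.hits.get(stream, 0) / n if n else 0.0

s = ReuseSampler(sample_ratio=4)
for addr in [0x0000, 0x0000, 0x4000, 0x0000]:   # set index 0 is sampled
    s.observe("texture", addr)
print(s.reuse_probability("texture"))           # 0.5
```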
Citations: 8