2020 IEEE High Performance Extreme Computing Conference (HPEC): Latest Publications

A congestion control mechanism for SDN-based fat-tree networks
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286156
Haitham Ghalwash, Chun-Hsi Huang
Abstract: Data centers are experiencing growing application demands that mandate a software-oriented network architecture. A Software-Defined Network (SDN) is a new technology for overcoming the limitations of traditional networks. QoS is one such limitation that must be well structured for a successful software-oriented network architecture. Several key factors affect QoS, including traffic shaping and congestion control. This paper proposes a congestion control mechanism for SDN-based networks to enhance overall QoS. The proposed mechanism monitors and detects congested parts of the network and reacts automatically to reduce traffic load. Traffic load is redistributed by re-routing a subset of flows, with the re-routing decision based on a passively measured QoS metric: port utilization. Experiments on an SDN-based fat-tree network demonstrate the effectiveness of the proposed mechanism. TCP traffic recorded noticeable improvements: 22.4% in average delay, 21.3% in throughput, 18.6% in maximum delay, and 15.3% in jitter. Moreover, the maximum monitored port utilization in the aggregation and core switches was reduced by 22% on average.
Citations: 1
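The utilization-driven re-route decision can be sketched in a few lines. The threshold, flow records, and port names below are invented for illustration; the paper's actual mechanism runs inside an SDN controller on passively collected statistics.

```python
# Hypothetical sketch: pick flows to re-route away from congested ports.
CONGESTION_THRESHOLD = 0.8  # assumed fraction of link capacity

def select_flows_to_reroute(port_utilization, flows_by_port):
    """Pick a subset of flows on over-threshold ports for re-routing."""
    rerouted = []
    for port, util in port_utilization.items():
        if util > CONGESTION_THRESHOLD:
            # Move the largest flows first until the port drops below threshold
            flows = sorted(flows_by_port.get(port, []),
                           key=lambda f: f["rate"], reverse=True)
            excess = util - CONGESTION_THRESHOLD
            for flow in flows:
                if excess <= 0:
                    break
                rerouted.append(flow["id"])
                excess -= flow["rate"]
    return rerouted

util = {"s1-eth1": 0.95, "s1-eth2": 0.40}
flows = {"s1-eth1": [{"id": "f1", "rate": 0.30}, {"id": "f2", "rate": 0.10}]}
print(select_flows_to_reroute(util, flows))  # → ['f1']
```

A real controller would then install new forwarding rules for the selected flows along a less utilized path.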
GPU Accelerated Anomaly Detection of Large Scale Light Curves
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286242
Austin Chase Minor, Zhihui Du, Yankui Sun, David A. Bader, Chao Wu, Jianyan Wei
Abstract: Identifying anomalies in millions of stars in real time is a great challenge. In this paper, we develop a matched-filtering-based algorithm to detect a typical anomaly, microlensing. The algorithm detects short-timescale microlensing events at an early stage with high accuracy and a very low false-positive rate. Furthermore, we design a GPU-accelerated, scalable computational framework that enables real-time follow-up observation. The framework efficiently divides the algorithm between CPU and GPU, accelerating large-scale light-curve processing to meet low-latency requirements. Experimental results show that the proposed method can process 200,000 stars (the maximum number processed by a single GWAC telescope) in approximately 3.34 seconds on current commodity hardware, achieving 92% accuracy with zero false alarms, with the average detection occurring approximately 14% before the peak of the anomaly. Together with the proposed sharding mechanism, the framework can be extended to multiple GPUs to further improve performance for the higher data-throughput requirements of next-generation telescopes.
Citations: 0
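As a rough illustration of the matched-filtering idea, the sketch below correlates a standardized light curve against a single-peaked template; the template shape, noise level, and scoring are assumptions for this sketch, not the paper's GPU filter bank.

```python
import math
import random

def zscore(xs):
    """Standardize a sequence to zero mean and unit variance."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / s for x in xs]

def matched_filter_score(curve, template):
    """Maximum normalized cross-correlation of a light curve with a template."""
    c, t = zscore(curve), zscore(template)
    n = len(t)
    return max(sum(c[i + k] * t[k] for k in range(n)) / n
               for i in range(len(c) - n + 1))

rng = random.Random(0)
# A smooth single-peaked bump standing in for a microlensing magnification
template = [1.0 / math.sqrt((0.1 * k - 3.0) ** 2 + 0.1) for k in range(61)]
noise = [rng.gauss(0.0, 0.05) for _ in range(200)]
curve = list(noise)
for k in range(61):
    curve[70 + k] += template[k]  # inject an event into the noisy curve

# The injected curve should score far higher than pure noise
print(matched_filter_score(curve, template) > matched_filter_score(noise, template))
```

The paper's contribution lies in running this kind of scoring over hundreds of thousands of curves per cadence on a GPU, which the sketch does not attempt.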
Performance Strategies for Parallel Bitonic Sort on a Migratory Thread Architecture
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286172
K. Velusamy, Thomas B. Rolinger, Janice O. McMahon
Abstract: Large-scale data analytics often represent vast amounts of sparse data as a graph. As a result, the underlying kernels in data analytics can be reduced to operations over graphs, such as searches and traversals. Graph algorithms are notoriously difficult to implement for high performance due to the irregular nature of their memory access patterns, which results in poor utilization of a traditional cache memory hierarchy. In response, new architectures have been proposed that specifically target irregular applications. One example is the cache-less Emu migratory thread architecture developed by Lucata Technology. While it is important to evaluate and understand irregular applications on a system such as Emu, it is equally important to explore applications that are not themselves irregular but are often key pre-processing steps in irregular applications. Sorting a list of values is one such pre-processing step, as well as one of the fundamental operations in data analytics. In this paper, we extend our prior preliminary evaluation of parallel bitonic sort on the Emu architecture. We explore different performance strategies for bitonic sort by leveraging the unique features of Emu. In doing so, we implement three significant capabilities in bitonic sort: a smart data layout that periodically remaps data to avoid remote accesses, efficient thread-spawning strategies, and adaptive loop parallelization to achieve proper load balancing over time. We present a performance evaluation that demonstrates speed-ups of as much as 14.26x from leveraging these capabilities.
Citations: 1
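For reference, the underlying bitonic sorting network the paper parallelizes can be written serially as follows; this sketch omits the Emu-specific strategies (data remapping, thread spawning, adaptive loop parallelization) that the paper contributes.

```python
def bitonic_sort(data):
    """Serial bitonic sort; the input length must be a power of two."""
    n = len(data)
    assert n & (n - 1) == 0, "bitonic sort needs a power-of-two length"
    a = list(data)
    k = 2
    while k <= n:            # size of the bitonic sequences being merged
        j = k // 2
        while j > 0:         # compare-exchange distance within each merge
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

Every compare-exchange at a given (k, j) stage is independent, which is what makes the network attractive for massively threaded hardware such as Emu.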
Accelerating Distributed Inference of Sparse Deep Neural Networks via Mitigating the Straggler Effect
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286189
M. Hasanzadeh-Mofrad, R. Melhem, Muhammad Yousuf Ahmad, Mohammad Hammoud
Abstract: Once a Deep Neural Network (DNN) is trained, an inference algorithm retains the learning and applies it to batches of data. A trained DNN can be sparse because of pruning or because it follows a preset sparse connectivity pattern. Inference in such sparse networks requires lower space and time complexity than in dense ones. Like dense DNNs, sparse DNNs can be parallelized using model or data parallelism, whereby the former partitions the network and the latter partitions the input among multiple threads. Model parallelism efficiently utilizes the Last Level Cache (LLC) but has a heavy synchronization cost because of compulsory reductions per layer. In contrast, data parallelism allows independent execution of partitions but suffers from a straggler effect due to load imbalance among partitions. We combine data and model parallelism through a new type of parallelism that we denote data-then-model. In data-then-model parallelism, each thread starts with data parallelism, thus mitigating the per-layer synchronization cost of model parallelism. After it finishes its partition, it switches to model parallelism to support a slower active thread, thereby alleviating the straggler effect of data parallelism. We compare data-then-model parallelism with data, model, and task-based parallelisms using the IEEE HPEC sparse DNN challenge dataset. On average, we achieve 10% to 65% speedup compared to these parallelisms.
Citations: 3
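The straggler effect that data-then-model parallelism targets can be illustrated with a toy work-queue sketch, in which a thread that finishes its own partition keeps pulling work instead of idling. This mimics the load-balancing intent only; it is not the paper's inference engine.

```python
from queue import Empty, Queue
from threading import Thread

def run_with_work_stealing(batches, n_threads, process):
    """Threads pull batches from a shared queue, so a fast thread keeps
    helping until all work is done instead of waiting on stragglers."""
    q = Queue()
    for b in batches:
        q.put(b)
    results = []
    def worker():
        while True:
            try:
                b = q.get_nowait()
            except Empty:
                return                        # no work left for this thread
            results.append(process(b))        # list.append is thread-safe in CPython
    threads = [Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

out = run_with_work_stealing([[1, 2], [3], [4, 5, 6]], 2, sum)
print(sorted(out))  # → [3, 3, 15]
```

With static partitioning, the thread that drew the largest batch would determine total runtime; the shared queue is one simple way to express the "help the slower thread" idea.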
Optimizing Use of Different Types of Memory for FPGAs in High Performance Computing
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286144
Kai Huang, Mehmet Güngör, Stratis Ioannidis, M. Leeser
Abstract: Accelerators such as Field Programmable Gate Arrays (FPGAs) are increasingly used in high performance computing, and the problems to which they are applied process larger and larger amounts of data. FPGA manufacturers have added new types of on-chip memory to help ease the memory bottleneck; however, the burden is on the designer to determine how data is allocated to the different memory types. We study the use of ultraRAM for a graph application running on Amazon Web Services (AWS) that generates a large amount of intermediate data that is not subsequently accessed sequentially. We investigate different algorithms for mapping data to ultraRAM. Our results show that use of ultraRAM can speed up overall application run time by a factor of 3 or more. Maximizing the amount of ultraRAM used produces the best results, and as problem size grows, judiciously assigning data to ultraRAM vs. DDR yields better performance.
Citations: 1
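One simple way to frame the allocation question the paper studies is a greedy placement by access density; the capacity, buffer names, and heuristic below are assumptions for illustration, not the paper's algorithms.

```python
def assign_memory(buffers, uram_capacity):
    """Greedily place buffers with the highest accesses-per-byte in
    ultraRAM until it fills; everything else falls back to DDR."""
    placement, used = {}, 0
    for name, size, accesses in sorted(buffers,
                                       key=lambda b: b[2] / b[1],
                                       reverse=True):
        if used + size <= uram_capacity:
            placement[name] = "ultraRAM"
            used += size
        else:
            placement[name] = "DDR"
    return placement

# (name, size_in_units, access_count) for some hypothetical kernel buffers
buffers = [("edges", 64, 1000), ("scratch", 16, 900), ("output", 32, 100)]
print(assign_memory(buffers, uram_capacity=80))
# → {'scratch': 'ultraRAM', 'edges': 'ultraRAM', 'output': 'DDR'}
```

The paper's finding that maximizing ultraRAM use helps most suggests the capacity constraint, rather than the ranking heuristic, dominates in practice.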
A Comprehensive Comparison and Analysis of OpenACC and OpenMP 4.5 for NVIDIA GPUs
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286203
R. Usha, P. Pandey, N. Mangala
Abstract: HPC systems with attached accelerators are the new normal. However, programming these accelerators for good performance is complex and tedious. Hence, directive-based programming models such as OpenMP and OpenACC are gaining wide popularity for parallel programming: they simplify the programming experience by abstracting low-level complexities from the user. In this paper, we present an extensive comparison of OpenMP 4.5 and OpenACC for GPU programming, including a performance comparison of the two APIs on NVIDIA Tesla P100 and V100 GPUs. Data transfer times, kernel execution times, total execution times, and performance portability are the criteria for comparison. The challenges faced while parallelizing applications with directives, which can lead to incorrect outputs, are also noted.
Citations: 3
Identifying Execution Anomalies for Data Intensive Workflows Using Lightweight ML Techniques
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286139
Cong Wang, G. Papadimitriou, M. Kiran, A. Mandal, E. Deelman
Abstract: Today's computational science applications are increasingly dependent on many complex, data-intensive operations on distributed datasets that originate from a variety of scientific instruments and repositories. To manage this complexity, science workflows are created to automate the execution of these computational and data transfer tasks, which significantly improves scientific productivity. As the scale of workflows rapidly increases, detecting anomalous behaviors in workflow executions has become critical to ensuring timely and accurate science products. In this paper, we present a set of lightweight machine-learning-based techniques, including both supervised and unsupervised algorithms, to identify anomalous workflow behaviors. We perform anomaly analysis on both workflow-level and task-level datasets collected from real workflow executions on a distributed cloud testbed. Results show that workflow-level analysis employing k-means clustering can accurately cluster anomalous (i.e., failure-prone and poorly performing) workflows into statistically similar classes with reasonable clustering quality, achieving scores above 0.7 for Normalized Mutual Information and Completeness. These results affirm the selection of the workflow-level features for workflow anomaly analysis. For task-level analysis, the Decision Tree classifier achieves over 80% accuracy, while the other tested classifiers achieve over 50% accuracy in most cases. We believe these promising results can serve as a foundation for future research on anomaly detection and failure prediction for scientific workflows running in production environments.
Citations: 5
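A minimal version of the workflow-level clustering step might look like the following; the feature values and the plain Lloyd's k-means are illustrative, while the paper clusters real workflow metrics and evaluates with Normalized Mutual Information and Completeness scores.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means over tuples of floats; returns the clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old center if a cluster empties
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return clusters

# Made-up (runtime_seconds, gigabytes_moved) features:
# three normal runs vs. two slow, failure-prone stragglers
normal = [(10.0, 1.0), (11.0, 1.1), (9.5, 0.9)]
anomalous = [(50.0, 0.2), (55.0, 0.1)]
sizes = sorted(len(c) for c in kmeans(normal + anomalous, k=2))
print(sizes)  # → [2, 3]
```

When the anomalous executions are well separated in feature space, as here, the two clusters recover the normal/anomalous split exactly.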
Multiscale Data Analysis Using Binning, Tensor Decompositions, and Backtracking
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286171
Dimitri Leggas, Thomas Henretty, J. Ezick, M. Baskaran, Brendan von Hofe, Grace H. Cimaszewski, Harper Langston, R. Lethin
Abstract: Large data sets can contain patterns at multiple scales (spatial, temporal, etc.). In practice, it is useful for data exploration techniques to detect patterns at each relevant scale. In this paper, we develop an approach to detect activities at multiple scales using tensor decomposition, an unsupervised high-dimensional data analysis technique that finds correlations between different features in the data. This method typically requires that feature values be discretized during construction of the tensor, in a process called "binning." We develop a method of constructing and decomposing tensors with different binning schemes for various features in order to uncover patterns across a set of user-defined scales. While binning is necessary to obtain interpretable results from tensor decompositions, it also decreases the specificity of the data. We therefore develop backtracking methods that enable recovery of the original source data corresponding to patterns found in the decomposition. These techniques are discussed in the context of spatiotemporal and network traffic data, and in particular Automatic Identification System (AIS) data.
Citations: 4
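The binning step can be illustrated directly: the same continuous feature is discretized at two different scales, yielding two different sets of tensor indices. The bin widths here are arbitrary; the paper's schemes are user-defined.

```python
def bin_index(value, width):
    """Map a continuous value to a discrete bin index of the given width."""
    return int(value // width)

timestamps = [3.2, 7.9, 8.4, 15.0]
coarse = [bin_index(t, 10.0) for t in timestamps]  # coarse-scale bins
fine = [bin_index(t, 1.0) for t in timestamps]     # fine-scale bins
print(coarse)  # → [0, 0, 0, 1]
print(fine)    # → [3, 7, 8, 15]
```

Decomposing one tensor built from the coarse indices and another built from the fine indices is what lets patterns emerge at each scale; the backtracking methods then map a component's bins back to the raw records that populated them.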
Northeast Cyberteam - Building an Environment for Sharing Best Practices and Solutions for Research Computing
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286254
J. Goodhue, Julie Ma, A. Maestro, Sia Najafi, B. Segee, S. Valcourt, Ralph Zottola
Abstract: The Northeast Cyberteam Program is a collaborative effort across Maine, New Hampshire, Vermont, and Massachusetts that seeks to assist researchers at small and medium-sized institutions in the region with making use of cyberinfrastructure, while simultaneously building the next generation of research computing facilitators. Recognizing that research computing facilitators are frequently in short supply, the program also places intentional emphasis on capturing and disseminating best practices to enable opportunities to leverage and build on existing solutions whenever practical. The program combines direct assistance to computationally intensive research projects; experiential learning opportunities that pair experienced mentors with students interested in research computing facilitation; sharing of resources and knowledge across large and small institutions; and tools that enable efficient oversight and possible replication of these ideas in other regions. Each project involves a researcher seeking to better utilize cyberinfrastructure in research, a student facilitator, and a mentor with relevant domain expertise; these individuals may be at the same institution or at separate institutions. The student works with the researcher and the mentor to become a bridge between the infrastructure and the research domain. Through this model, students receive training and opportunities that would otherwise not be available, research projects are taken to a higher level, and the effectiveness of the mentor is multiplied.
Providing tools that enable self-service learning is a key concept in the program's strategy to develop facilitators through experiential learning, recognizing that one of the most fundamental skills of successful facilitators is the ability to quickly learn enough about new domains and applications to draw parallels with existing knowledge and help solve the problem at hand. The Cyberteam Portal provides access to the self-service learning resources developed to deliver just-in-time information to participants as they embark on projects in unfamiliar domains, and also serves as a receptacle for best practices, tools, and techniques developed during a project. Tools include Ask.CI, an interactive site for questions and answers; a learning resources repository that collects online training modules vetted by Cyberteam projects as starting points for subsequent projects or independent activities; and a GitHub repository. The Northeast Cyberteam was created with funding from the National Science Foundation, but has developed strategies for sustainable operations.
Citations: 2
Discrete Integrated Circuit Electronics (DICE)
2020 IEEE High Performance Extreme Computing Conference (HPEC), Pub Date: 2020-09-22, DOI: 10.1109/HPEC43674.2020.9286236
Zach Fredin, J. Zemánek, Camron Blackburn, Erik Strand, A. Abdel-Rahman, Premila Rowles, N. Gershenfeld
Abstract: We introduce DICE (Discrete Integrated Circuit Electronics). Rather than separately developing chips, packages, boards, blades, and systems, DICE spans these scales in a direct-write process with the three-dimensional assembly of computational building blocks. We present DICE parts; discuss their assembly, programming, and design workflow; illustrate applications in machine learning and high performance computing; and project performance.
Citations: 1