2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS): Latest Publications

Scaling of Union of Intersections for Inference of Granger Causal Networks from Observational Data
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00036
M. Balasubramanian, Trevor D. Ruiz, B. Cook, Prabhat, Sharmodeep Bhattacharyya, Aviral Shrivastava, K. Bouchard
{"title":"Scaling of Union of Intersections for Inference of Granger Causal Networks from Observational Data","authors":"M. Balasubramanian, Trevor D. Ruiz, B. Cook, Prabhat, Sharmodeep Bhattacharyya, Aviral Shrivastava, K. Bouchard","doi":"10.1109/IPDPS47924.2020.00036","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00036","url":null,"abstract":"The development of advanced recording and measurement devices in scientific fields is producing high-dimensional time series data. Vector autoregressive (VAR) models are well suited for inferring Granger-causal networks from high dimensional time series data sets, but accurate inference at scale remains a central challenge. We have recently introduced a flexible and scalable statistical machine learning framework, Union of Intersections (UoI), which enables low false-positive and low false-negative feature selection along with low bias and low variance estimation, enhancing interpretation and predictive accuracy. In this paper, we scale the UoI framework for VAR models (algorithm UoIV AR) to infer network connectivity from large time series data sets (TBs). To achieve this, we optimize distributed convex optimization and introduce novel strategies for improved data read and data distribution times. We study the strong and weak scaling of the algorithm on a Xeon-phi based supercomputer (100,000 cores). These advances enable us to estimate the largest VAR model as known (1000 nodes, corresponding to 1M parameters) and apply it to large time series data from neurophysiology (192 neurons) and finance (470 companies).","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"56 1","pages":"264-273"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81954542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
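For readers unfamiliar with VAR-based Granger inference, the following minimal Python sketch shows the basic idea the abstract builds on: fit a VAR(1) model by least squares and keep an edge j -> i whenever the lagged coefficient A[i, j] is non-negligible. The plain least-squares fit and the fixed threshold are simplifying stand-ins and are not the UoI selection and estimation procedure described in the paper.

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of a VAR(1) model x_t = A x_{t-1} + eps.

    X has shape (T, p): T time steps, p variables. Returns the (p, p)
    coefficient matrix A. (Illustration only, not the UoI_VAR estimator.)
    """
    past, future = X[:-1], X[1:]
    # Solve future ~= past @ A.T in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(past, future, rcond=None)
    return A_T.T

def granger_network(A, threshold=0.1):
    """Edge j -> i iff |A[i, j]| exceeds a (hand-picked) threshold."""
    return (np.abs(A) > threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p, T = 5, 2000
    A_true = np.zeros((p, p))
    A_true[1, 0] = 0.8   # variable 0 Granger-causes variable 1
    A_true[3, 2] = -0.6  # variable 2 Granger-causes variable 3
    X = np.zeros((T, p))
    for t in range(1, T):
        X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(p)
    A_hat = fit_var1(X)
    print(granger_network(A_hat))  # recovers the two planted edges
```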
XPlacer: Automatic Analysis of Data Access Patterns on Heterogeneous CPU/GPU Systems
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00106
P. Pirkelbauer, Pei-Hung Lin, T. Vanderbruggen, C. Liao
{"title":"XPlacer: Automatic Analysis of Data Access Patterns on Heterogeneous CPU/GPU Systems","authors":"P. Pirkelbauer, Pei-Hung Lin, T. Vanderbruggen, C. Liao","doi":"10.1109/IPDPS47924.2020.00106","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00106","url":null,"abstract":"This paper presents XPlacer, a framework to automatically analyze problematic data access patterns in C++ and CUDA code. XPlacer records heap memory operations in both host and device code for later analysis. To this end, XPlacer instruments read and write operations, function calls, and kernel launches. Programmers mark points in the program execution where the recorded data is analyzed and anomalies diagnosed. XPlacer reports data access anti-patterns, including alternating CPU/GPU accesses to the same memory, memory with low access density, and unnecessary data transfers. The diagnostic also produces summative information about the recorded accesses, which aids users in identifying code that could degrade performance.The paper evaluates XPlacer using LULESH, a Lawrence Livermore proxy application, Rodina benchmarks, and an implementation of the Smith-Waterman algorithm. XPlacer diagnosed several performance issues in these codes. The elimination of a performance problem in LULESH resulted in a 3x speedup on a heterogeneous platform combining Intel CPUs and Nvidia GPUs.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"6 1","pages":"997-1007"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87297018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
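The anti-pattern detection described above can be pictured with a small post-hoc trace analysis. The sketch below only illustrates the "alternating CPU/GPU accesses to the same memory" diagnostic on a hypothetical (device, address) trace; XPlacer's actual instrumentation and reports are richer than this.

```python
from collections import defaultdict

def alternating_access_report(trace, min_switches=4):
    """Flag addresses whose accesses ping-pong between host and device.

    `trace` is an iterable of (device, address) records, where device is
    "cpu" or "gpu". This mimics the kind of post-hoc analysis the abstract
    describes; real instrumentation would record far more detail.
    """
    last_device = {}
    switches = defaultdict(int)
    for device, addr in trace:
        if addr in last_device and last_device[addr] != device:
            switches[addr] += 1
        last_device[addr] = device
    return {addr: n for addr, n in switches.items() if n >= min_switches}

if __name__ == "__main__":
    # Toy trace: address 0x10 ping-pongs between CPU and GPU, 0x20 stays on the GPU.
    trace = [("cpu", 0x10), ("gpu", 0x10)] * 4 + [("gpu", 0x20)] * 8
    print(alternating_access_report(trace))  # {16: 7}
```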
Spara: An Energy-Efficient ReRAM-Based Accelerator for Sparse Graph Analytics Applications
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00077
Long Zheng, Jieshan Zhao, Yu Huang, Qinggang Wang, Zhen Zeng, Jingling Xue, Xiaofei Liao, Hai Jin
{"title":"Spara: An Energy-Efficient ReRAM-Based Accelerator for Sparse Graph Analytics Applications","authors":"Long Zheng, Jieshan Zhao, Yu Huang, Qinggang Wang, Zhen Zeng, Jingling Xue, Xiaofei Liao, Hai Jin","doi":"10.1109/IPDPS47924.2020.00077","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00077","url":null,"abstract":"Resistive random access memory (ReRAM) addresses the high memory bandwidth requirement challenge of graph analytics by integrating the computing logic in the memory. Due to the matrix-structured crossbar architecture, existing ReRAM-based accelerators, when handling real-world graphs that often have the skewed degree distribution, suffer from the severe sparsity problem arising from zero fillings and activation nondeterminism, incurring substantial ineffectual computations.In this paper, we observe that the sparsity sources lie in the consecutive mapping of source and destination vertex index onto the wordline and bitline of a crossbar. Although exhaustive graph reordering improves the sparsity-induced inefficiency, its totally-random (source and destination) vertex mapping leads to expensive overheads. This work exploits the insight in a mid-point vertex mapping with the random wordlines and consecutive bitlines. A cost-effective preprocessing is proposed to exploit the insight by rapidly exploring the crossbar-fit vertex reorderings but ignores the sparsity arising from activation dynamics. We present a novel ReRAM-based graph analytics accelerator, named Spara, which can maximize the workload density of crossbars dynamically by using a tightly-coupled bank parallel architecture further proposed. Results on real-world and synthesized graphs show that Spara outperforms GraphR and GraphSAR by 8.21 × and 5.01 × in terms of performance, and by 8.97 × and 5.68× in terms of energy savings (on average), while incurring a reasonable (<9.98%) pre-processing overhead.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"7 1","pages":"696-707"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87430908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
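To make the sparsity problem concrete, the sketch below computes a back-of-the-envelope workload-density metric: permute an adjacency matrix by a candidate vertex ordering, tile it into crossbar-sized blocks, and measure the non-zero density of the tiles that would actually be activated. The tile size, the skewed random test graph, and the metric itself are illustrative assumptions, not Spara's mapping or preprocessing algorithm.

```python
import numpy as np

def crossbar_density(adj, perm, c=8):
    """Average non-zero density of the c-by-c crossbar tiles that contain at
    least one edge when the adjacency matrix is mapped under `perm`.

    A rough metric for the sparsity problem the abstract describes; tiles
    with no edges are assumed never to be activated.
    """
    A = adj[np.ix_(perm, perm)]
    n = A.shape[0]
    pad = (-n) % c
    A = np.pad(A, ((0, pad), (0, pad)))
    tiles = A.reshape(A.shape[0] // c, c, A.shape[1] // c, c).swapaxes(1, 2)
    nnz_per_tile = tiles.reshape(-1, c * c).sum(axis=1)
    active = nnz_per_tile[nnz_per_tile > 0]
    return active.mean() / (c * c) if active.size else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 64
    # Hub-heavy random graph to mimic a skewed degree distribution.
    deg_bias = 1.0 / np.arange(1, n + 1)
    prob = np.minimum(1.0, 0.5 * n * np.outer(deg_bias, deg_bias))
    adj = (rng.random((n, n)) < prob).astype(np.int8)
    print("identity order:", crossbar_density(adj, np.arange(n)))
    print("shuffled order:", crossbar_density(adj, rng.permutation(n)))
```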
Improved Intermediate Data Management for MapReduce Frameworks
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00062
Haoyu Wang, Haiying Shen, Charles Reiss, A. Jain, Yunqiao Zhang
{"title":"Improved Intermediate Data Management for MapReduce Frameworks","authors":"Haoyu Wang, Haiying Shen, Charles Reiss, A. Jain, Yunqiao Zhang","doi":"10.1109/IPDPS47924.2020.00062","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00062","url":null,"abstract":"MapReduce is a popular distributed framework for big data analysis. However, the current MapReduce framework is insufficiently efficient in handling intermediate data, which may cause bottlenecks in I/O operations, computation, and network bandwidth. Previous work addresses the I/O problem by aggregating map task outputs (i.e. intermediate data) for each single reduce task on one machine. Unfortunately, when there are a large number of reduce tasks, their concurrent requests for intermediate data generate a large amount of I/O operations. In this paper, we present APA (Aggregation, Partition, and Allocation), a new intermediate data management system for the MapReduce framework. APA aggregates the intermediate data from the map tasks in each rack to one file, and the file host pushes the needed intermediate data to each reduce task. Thus, it reduces the number of disk seeks involved in handling intermediate data within one job. Rather than evenly distributing the intermediate data among reduce tasks based on the keys as in current MapReduce, APA partitions the intermediate data to balance the execution latency of different reduce tasks. APA further decides where to allocate each reduce task to minimize the intermediate data transmission time between map tasks and reduce tasks. Through experiments on a real MapReduce Hadoop cluster using the HiBench benchmark suite, we show that APA improves the performance of the current Hadoop by 40%-50%.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"77 1","pages":"536-545"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85994936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
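The partition step described above can be illustrated with a generic load-balancing heuristic: instead of hashing keys to reducers, assign the largest keys first to the currently least-loaded reducer. The sketch below uses this textbook LPT-style greedy with hypothetical key sizes; it is in the spirit of APA's partitioning goal but is not the paper's actual algorithm.

```python
import heapq

def balanced_partition(key_sizes, num_reducers):
    """Greedy (LPT-style) assignment of intermediate keys to reducers so that
    per-reducer data volume is balanced, instead of hashing keys.

    `key_sizes` maps each intermediate key to its total byte count.
    """
    # Min-heap of (bytes assigned so far, reducer id).
    heap = [(0, r) for r in range(num_reducers)]
    heapq.heapify(heap)
    assignment = {}
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        load, reducer = heapq.heappop(heap)
        assignment[key] = reducer
        heapq.heappush(heap, (load + size, reducer))
    return assignment

if __name__ == "__main__":
    # Hypothetical per-key intermediate-data sizes (bytes).
    sizes = {"the": 900, "a": 850, "of": 400, "to": 390, "and": 380, "in": 50}
    print(balanced_partition(sizes, num_reducers=2))
```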
Accelerated Reply Injection for Removing NoC Bottleneck in GPGPUs
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00013
Yunfan Li, Lizhong Chen
{"title":"Accelerated Reply Injection for Removing NoC Bottleneck in GPGPUs","authors":"Yunfan Li, Lizhong Chen","doi":"10.1109/IPDPS47924.2020.00013","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00013","url":null,"abstract":"The high level of parallelism in GPGPUs has resulted in significantly changed on-chip data traffic behaviors. This demands new research to identify and address the limiting factors of networks-on-chip (NoCs) in the context of GPGPUs. In this paper, we quantitatively analyze the performance of on-chip networks in GPGPUs, and address a serious NoC bottleneck where the reply data from memory controllers experience large contention when being injected to the reply network. To remove this reply injection bottleneck, we propose Accelerated Reply Injection (ARI), a very effective scheme that can supply a fast rate of data traffic from memory controllers to feed the reply injection points, and accelerates the consumption of the injected packets by quickly transferring the packets out of the injection points, thus increasing both supply and consumption of reply traffic injection. Simulation results on a wide range of benchmarks show that the proposed ARI reduces the data stall time in memory controllers by 67.8% on average, and increases IPC by more than 15.4% on average, with less than 1% area overhead.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"96 1","pages":"22-31"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89524681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FlashKey: A High-Performance Flash Friendly Key-Value Store
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00104
Madhurima Ray, K. Kant, Peng Li, S. Trika
{"title":"FlashKey:A High-Performance Flash Friendly Key-Value Store","authors":"Madhurima Ray, K. Kant, Peng Li, S. Trika","doi":"10.1109/IPDPS47924.2020.00104","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00104","url":null,"abstract":"Key-value stores (KVS) provide an efficient storage for increasing amounts of semi-structured or unstructured data generated by many applications. Most KVS in existence have been designed for hard-disk based storage where avoiding random accesses is crucial for good performance. Unfortunately, the resulting storage structures result in high read, write, and space amplifications when used on modern SSDs. In this paper, we introduce a KV store especially designed for SSDs, called FlashKey, and demonstrate that even as an initial implementation, it substantially outperforms the two most popular commercial KVS in existence, namely, Google’s LevelDB and Facebook’s RocksDB. In particular, we show that FlashKey achieves up to 85% improvement in average access latency, 2x improvement in tail latencies, and 12x improvement in write amplification, at comparable or better space-amplification. Furthermore, FlashKey can easily trade off space and write amplifications, thereby providing a new tuning knob that is difficult to implement in LevelDB and RocksDB.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"76 1","pages":"976-985"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88972869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
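As background for why an SSD-oriented layout matters, the sketch below shows the generic log-structured pattern many flash-friendly stores build on: every put is a sequential append and an in-memory index points at the latest record, so updates never rewrite data in place (at the price of space amplification until compaction). The class name and on-disk format here are made up for illustration and do not reflect FlashKey's actual design.

```python
import os
import struct
import tempfile

class TinyLogKV:
    """A minimal append-only key-value store: every put is a sequential
    append to a log file, and an in-memory index maps keys to offsets.
    Illustrative only; not FlashKey's data layout.
    """

    def __init__(self, path):
        self.f = open(path, "a+b")
        self.index = {}  # key -> (offset, length) of the latest value

    def put(self, key: bytes, value: bytes):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        header = struct.pack(">II", len(key), len(value))
        self.f.write(header + key + value)   # sequential, flash-friendly write
        self.f.flush()
        self.index[key] = (offset + len(header) + len(key), len(value))

    def get(self, key: bytes):
        if key not in self.index:
            return None
        offset, length = self.index[key]
        self.f.seek(offset)
        return self.f.read(length)

if __name__ == "__main__":
    fd, path = tempfile.mkstemp(suffix=".log")
    os.close(fd)
    kv = TinyLogKV(path)
    kv.put(b"alpha", b"1")
    kv.put(b"alpha", b"2")   # old record stays in the log (space amplification)
    print(kv.get(b"alpha"))  # b'2'
```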
Robust Server Placement for Edge Computing
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00038
Dongyu Lu, Yuben Qu, Fan Wu, Haipeng Dai, Chao Dong, Guihai Chen
{"title":"Robust Server Placement for Edge Computing","authors":"Dongyu Lu, Yuben Qu, Fan Wu, Haipeng Dai, Chao Dong, Guihai Chen","doi":"10.1109/IPDPS47924.2020.00038","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00038","url":null,"abstract":"In this work, we study the problem of Robust Server Placement (RSP) for edge computing, i.e., in the presence of uncertain edge server failures, how to determine a server placement strategy to maximize the expected overall workload that can be served by edge servers. We mathematically formulate the RSP problem in the form of robust max-min optimization, derived from two consequentially equivalent transformations of the problem that does not consider robustness and followed by a robust conversion. RSP is challenging to solve, because the explicit expression of the objective function in RSP is hard to obtain, and RSP is a robust max-min problem with a matroid constraint and a knapsack constraint, which is still an unexplored problem in the literature. To address the above challenges, we first investigate the special properties of the problem, and reveal that the objective function is monotone submodular. We then prove that the involved constraints form a p-independence system constraint, where p is a constant value related to the ratio of the coefficients in the knapsack constraint. Finally, we propose an algorithm that achieves a provable constant approximation ratio in polynomial time. Both synthetic and trace-driven simulation results show that, given any maximum number of server failures, our proposed algorithm outperforms three state-of-the-art algorithms and approaches the optimal solution, which applies exhaustive exponential searches.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"38 1","pages":"285-294"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89166040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
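Because the objective in the abstract is monotone submodular, a cost-aware greedy gives a feel for how placements are built. The sketch below uses a toy coverage objective, hypothetical site costs, and a budget plus cardinality constraint; it is a generic greedy illustration only, whereas the paper's algorithm additionally handles the robust max-min objective over uncertain server failures and proves an approximation ratio under the resulting p-independence system constraint.

```python
def greedy_placement(sites, weights, budget, max_servers):
    """Cost-aware greedy for a monotone submodular coverage objective under a
    budget (knapsack) and a cardinality constraint.

    `sites` maps a site name to (cost, set_of_covered_regions); `weights`
    gives the workload of each region. Illustration only, not the paper's
    robust max-min algorithm.
    """
    chosen, covered, spent = [], set(), 0.0
    while len(chosen) < max_servers:
        best, best_ratio = None, 0.0
        for name, (cost, regions) in sites.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = sum(weights[r] for r in regions - covered)  # marginal gain
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            break
        chosen.append(best)
        covered |= sites[best][1]
        spent += sites[best][0]
    return chosen, sum(weights[r] for r in covered)

if __name__ == "__main__":
    # Hypothetical candidate sites: (deployment cost, covered workload regions).
    sites = {"s1": (3.0, {"a", "b"}), "s2": (2.0, {"b", "c"}), "s3": (4.0, {"a", "c", "d"})}
    weights = {"a": 10, "b": 5, "c": 8, "d": 7}
    print(greedy_placement(sites, weights, budget=6.0, max_servers=2))
```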
Accelerating Parallel Hierarchical Matrix-Vector Products via Data-Driven Sampling
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00082
Lucas Erlandson, Difeng Cai, Yuanzhe Xi, Edmond Chow
{"title":"Accelerating Parallel Hierarchical Matrix-Vector Products via Data-Driven Sampling","authors":"Lucas Erlandson, Difeng Cai, Yuanzhe Xi, Edmond Chow","doi":"10.1109/IPDPS47924.2020.00082","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00082","url":null,"abstract":"Hierarchical matrices are scalable matrix representations particularly suited to the case where the matrix entries are defined by a smooth kernel function evaluated between pairs of points. In this paper, we present a new scheme to alleviate the computational bottlenecks present in many hierarchical matrix methods. For general kernel functions, a popular approach to construct hierarchical matrices is through interpolation, due to its efficiency compared to computationally expensive algebraic techniques. However, interpolation-based methods often lead to larger ranks, and do not scale well to higher dimensions. We propose a new data-driven method to resolve these issues. The new method is able to accomplish the rank reduction by using a surrogate for the global distribution of points. The surrogate is generated using a hierarchical data-driven sampling. As a result of the lower rank, the construction cost, memory requirements, and matrix-vector product costs decrease. Using state-of-theart dimension independent sampling, the new method makes it possible to tackle problems in higher dimensions. We also discuss an on-the-fly variation of hierarchical matrix construction and matrix-vector products that is able to reduce memory usage by an order of magnitude. This is accomplished by postponing the generation of certain intermediate matrices until they are used, generating them just in time. We provide results demonstrating the effectiveness of our improvements, both individually and in conjunction with each other. For a problem involving 320,000 points in 3D, our data-driven approach reduces the memory usage from 58.75 GiB using state-of-the-art methods (762.9 GiB if stored dense) to 18.60 GiB. In combination with our on-thefly approach, we are able to reduce the total memory usage to 543.74 MiB.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"47 1","pages":"749-758"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81092184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
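The core idea of sampling-based compression can be shown on a single well-separated kernel block: approximate K(X, Y) through a small set of landmark points drawn from Y, so the matrix-vector product touches only thin factors. The sketch below uses a Gaussian kernel, uniformly random landmarks, and a small regularizer as simplifying assumptions; the paper's hierarchical, data-driven sampling and on-the-fly construction go well beyond this.

```python
import numpy as np

def gauss_kernel(X, Y, h=1.0):
    """Smooth kernel k(x, y) = exp(-|x - y|^2 / h^2) between two point sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h**2)

def low_rank_block_matvec(X, Y, v, num_samples=50, reg=1e-8, seed=0):
    """Approximate K(X, Y) @ v through sampled landmark points S from Y,
    using the skeleton-style factorization K ~= K_XS K_SS^{-1} K_SY.

    Only the thin factors K_XS, K_SS, K_SY are formed, which is what makes
    hierarchical matrix-vector products cheap for well-separated clusters.
    """
    rng = np.random.default_rng(seed)
    S = Y[rng.choice(len(Y), size=num_samples, replace=False)]
    K_XS = gauss_kernel(X, S)
    K_SS = gauss_kernel(S, S) + reg * np.eye(num_samples)
    K_SY = gauss_kernel(S, Y)
    return K_XS @ np.linalg.solve(K_SS, K_SY @ v)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((500, 3)) + np.array([3.0, 0.0, 0.0])  # well-separated cluster
    Y = rng.random((500, 3))
    v = rng.standard_normal(500)
    exact = gauss_kernel(X, Y) @ v
    approx = low_rank_block_matvec(X, Y, v)
    print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```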
IPDPS 2020 TOC
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/ipdps47924.2020.00004
Y. Cheng
{"title":"IPDPS 2020 TOC","authors":"Y. Cheng","doi":"10.1109/ipdps47924.2020.00004","DOIUrl":"https://doi.org/10.1109/ipdps47924.2020.00004","url":null,"abstract":"","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79890266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SeeSAw: Optimizing Performance of In-Situ Analytics Applications under Power Constraints
2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS) | Pub Date: 2020-05-01 | DOI: 10.1109/IPDPS47924.2020.00086
I. Marincic, V. Vishwanath, H. Hoffmann
{"title":"SeeSAw: Optimizing Performance of In-Situ Analytics Applications under Power Constraints","authors":"I. Marincic, V. Vishwanath, H. Hoffmann","doi":"10.1109/IPDPS47924.2020.00086","DOIUrl":"https://doi.org/10.1109/IPDPS47924.2020.00086","url":null,"abstract":"Future supercomputers will need to operate under a power budget. At the same time, in-situ analysis—where a set of analysis tasks are concurrently executed and periodically communicate with a scientific simulation—is expected to be a primary HPC workload to overcome the increasing gap between the performance of the storage system relative to the computational capabilities of these machines. Ongoing research focuses on efficient coupling of simulation and analysis considering memory or I/O constraints, but power poses a new constraint that has not yet been addressed for these workflows. There are two state-of-the-art HPC power management approaches: 1) a power-aware scheme that measures and reallocates power based on observed usage and 2) a time-aware scheme that measures the relative time between communicating software modules and reallocates power based on timing differences. We find that considering only one feedback metric has two major drawbacks: 1) both approaches miss opportunities to improve performance and 2) they often make incorrect decisions when facing the unique requirements of in-situ analysis. We therefore propose SeeSAw—an application-aware power management approach, which uses both time and power feedback to balance a power budget and maximize performance for in-situ analysis workloads. We evaluate SeeSAw using the molecular dynamics simulation LAMMPS with a set of built-in analyses running on the Theta supercomputer on up to 1024 nodes. We find that the strictly power-aware approach slows down LAMMPS as much as ∼25%. The strictly time-aware approach shows improvements of up to ∼13% and slowdowns as much as ∼60%. In contrast, SeeSAw achieves ∼4–30% performance improvements.","PeriodicalId":6805,"journal":{"name":"2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"53 1","pages":"789-798"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88269879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
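A toy version of the two feedback signals the abstract contrasts (observed power usage and relative timing) is sketched below: shift unused power from the component that finishes its step early to the bottleneck component, while respecting the global budget. The component names, watt values, and fixed shift size are made-up illustration; this is not SeeSAw's actual policy.

```python
def reallocate_power(caps, usage, step_time, total_budget, shift_watts=5.0):
    """One step of a toy feedback loop that shifts power between a simulation
    and its in-situ analyses using both observed power usage (headroom) and
    relative timing.

    `caps`, `usage`, and `step_time` are dicts keyed by component name
    (e.g. "sim", "analysis").
    """
    # The slowest component is the bottleneck for the coupled workflow.
    slowest = max(step_time, key=step_time.get)
    fastest = min(step_time, key=step_time.get)
    new_caps = dict(caps)
    donor_headroom = caps[fastest] - usage[fastest]
    if slowest != fastest and donor_headroom > 0:
        # Take power the fast side is not using anyway and give it to the
        # bottleneck, keeping the sum within the global budget.
        delta = min(shift_watts, donor_headroom)
        new_caps[fastest] -= delta
        new_caps[slowest] += delta
    assert sum(new_caps.values()) <= total_budget + 1e-9
    return new_caps

if __name__ == "__main__":
    caps = {"sim": 120.0, "analysis": 80.0}       # current power caps (W)
    usage = {"sim": 118.0, "analysis": 60.0}      # watts actually drawn
    step_time = {"sim": 2.4, "analysis": 1.1}     # seconds per coupled step
    print(reallocate_power(caps, usage, step_time, total_budget=200.0))
```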