Latest Publications: 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)

Message from the 2021 General Co-Chairs
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-06-01. DOI: 10.1109/ipdps49936.2021.00005
{"title":"Message from the 2021 General Co-Chairs","authors":"","doi":"10.1109/ipdps49936.2021.00005","DOIUrl":"https://doi.org/10.1109/ipdps49936.2021.00005","url":null,"abstract":"","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122225485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiplicative Weights Algorithms for Parallel Automated Software Repair
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00107
J. Renzullo, Westley Weimer, S. Forrest
{"title":"Multiplicative Weights Algorithms for Parallel Automated Software Repair","authors":"J. Renzullo, Westley Weimer, S. Forrest","doi":"10.1109/IPDPS49936.2021.00107","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00107","url":null,"abstract":"Multiplicative Weights Update (MWU) algorithms are a form of online learning that is applied to multi-armed bandit problems. Such problems involve allocating a fixed number of trials among multiple options to maximize cumulative payoff. MWU is a popular and effective method for dynamically balancing the trade-off between exploring the value of new options and exploiting the information already gained. However, no clear strategy exists to help practitioners choose which of the several algorithmic designs within this family to deploy. In this paper, three variants of parallel MWU algorithms are considered: Two parallel variants that rely on global memory, and one variant that uses distributed memory. The three variants are first analyzed theoretically, and then their effectiveness is assessed empirically on the task of estimating distributions in the context of stochastic search for repairs to bugs in software. Earlier work on APR suffers from various inefficiencies, and the paper shows how to decompose the problem into two stages: one that is embarrassingly parallel and one that is amenable to MWU. We then model the cost of each MWU variant and derive the conditions under which it is likely to be preferred in practice. We find that all three MWU algorithms achieve accuracy above 90% but that there are significant differences in runtime and total cost. When 90% accuracy is sufficient and evaluating options is expensive, such as in our use case, we find that the algorithm that uses global memory and has high communication cost outperforms the other two. We analyze the reasons for this surprising result.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115649258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
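The MWU rule the abstract builds on is compact: each option keeps a weight, options are sampled in proportion to their weights, and a played option's weight is multiplied by a factor that grows with its observed reward. A minimal Python sketch of that textbook rule, assuming rewards in [0, 1] and a generic reward oracle; it does not reproduce the authors' parallel variants:

```python
import math
import random

def mwu_select(weights):
    """Sample an arm index in proportion to its current weight."""
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def mwu_run(reward_fn, n_arms, n_trials, eta=0.1):
    """Multiplicative Weights Update: after each trial, scale the played
    arm's weight by exp(eta * reward), so better arms get sampled more."""
    weights = [1.0] * n_arms
    for _ in range(n_trials):
        arm = mwu_select(weights)
        reward = reward_fn(arm)                 # assumed to lie in [0, 1]
        weights[arm] *= math.exp(eta * reward)
    return weights

# Example: arm 2 pays off most often, so its weight should dominate.
payoffs = [0.2, 0.5, 0.8]
final = mwu_run(lambda a: 1.0 if random.random() < payoffs[a] else 0.0,
                n_arms=3, n_trials=2000)
print(max(range(3), key=lambda a: final[a]))    # usually prints 2
```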
xBGAS: A Global Address Space Extension on RISC-V for High Performance Computing
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00054
Xi Wang, John D. Leidel, Brody Williams, Alan Ehret, Miguel Mark, M. Kinsy, Yong Chen
{"title":"xBGAS: A Global Address Space Extension on RISC-V for High Performance Computing","authors":"Xi Wang, John D. Leidel, Brody Williams, Alan Ehret, Miguel Mark, M. Kinsy, Yong Chen","doi":"10.1109/IPDPS49936.2021.00054","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00054","url":null,"abstract":"The tremendous expansion of data volume has driven the transition from monolithic architectures towards systems integrated with discrete and distributed subcomponents in modern scalable high performance computing (HPC) systems. As such, multi-layered software infrastructures have become essential to bridge the gap between heterogeneous commodity devices. However, operations across synthesized components with divergent interfaces inevitably lead to redundant software footprints and undesired latency. Therefore, a scalable and unified computing platform, capable of supporting efficient interactions between individual components, is desirable for largescale data-intensive applications. In this work, we introduce the Extended Base Global Address Space, or xBGAS, microarchitecture extension to the RISC-V instruction set architecture (ISA) for scalable high performance computing. The xBGAS extension provides native ISA-level support for direct accesses to remote shared memory by mapping remote data objects into a system’s extended address space. We perform both software and hardware evaluations of the xBGAS design. The results show that xBGAS reduces instruction count generated by interprocess communication by 69.26% on average. Overall, xBGAS achieves an average performance gain of 21.96% (up to 37.29%) across the tested workloads.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126065121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
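The core idea, mapping remote data objects into an extended address space, can be pictured as an address split: upper bits name the owning node, lower bits the local offset, so a remote access needs no extra software translation layer. A toy Python model under made-up field widths; the real xBGAS encoding and register semantics differ:

```python
# Toy extended-address codec: upper bits pick a node, lower bits a local
# offset. Field widths are illustrative, not the actual xBGAS layout.
NODE_BITS = 16
ADDR_BITS = 48

def encode(node_id: int, local_addr: int) -> int:
    """Pack (node, local address) into one extended address."""
    assert node_id < (1 << NODE_BITS) and local_addr < (1 << ADDR_BITS)
    return (node_id << ADDR_BITS) | local_addr

def decode(xaddr: int) -> tuple:
    """Split an extended address back into (node, local address)."""
    return xaddr >> ADDR_BITS, xaddr & ((1 << ADDR_BITS) - 1)

node, offset = decode(encode(3, 0xDEADBEEF))
assert (node, offset) == (3, 0xDEADBEEF)
```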
CBNet: Minimizing Adjustments in Concurrent Demand-Aware Tree Networks
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00046
O. A. D. O. Souza, Olga Goussevskaia, Stefan Schmid
{"title":"CBNet: Minimizing Adjustments in Concurrent Demand-Aware Tree Networks","authors":"O. A. D. O. Souza, Olga Goussevskaia, Stefan Schmid","doi":"10.1109/IPDPS49936.2021.00046","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00046","url":null,"abstract":"This paper studies the design of demand-aware network topologies: networks that dynamically adapt themselves toward the demand they currently serve, in an online manner. While demand-aware networks may be significantly more efficient than demand-oblivious networks, frequent adjustments are still costly. Furthermore, a centralized controller of such networks may become a bottleneck.We present CBNet (Counting-Based self-adjusting Network), a demand-aware network that relies on a distributed control plane supporting concurrent adjustments, while significantly reducing the number of reconfigurations, compared to related work. CBNet comes with formal guarantees and is based on concepts of self-adjusting data structures. We evaluate CBNet analytically and empirically and we find that CBNet can effectively exploit locality structure in the traffic demand.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122515229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
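A counting-based adjustment rule of the kind the name suggests can be sketched simply: every node tracks how often it is accessed, and the tree is restructured only when a node's counter overtakes its parent's by a margin, so hot nodes drift toward the root while reconfigurations stay rare. This is a heavily simplified Python illustration of that principle, not CBNet's actual algorithm, concurrency control, or guarantees:

```python
class Node:
    """Tree node with an access counter and a parent link."""
    def __init__(self, key, parent=None):
        self.key = key
        self.count = 0
        self.parent = parent

def access(node, threshold=2.0):
    """Record an access; swap with the parent only when this node's counter
    exceeds the parent's by `threshold`x, keeping adjustments infrequent."""
    node.count += 1
    p = node.parent
    if p is not None and node.count > threshold * max(p.count, 1):
        # Exchange payloads rather than rewiring links: hot keys move up.
        node.key, p.key = p.key, node.key
        node.count, p.count = p.count, node.count

root = Node("a")
leaf = Node("b", parent=root)
for _ in range(5):
    access(leaf)
print(root.key)   # "b" has drifted to the root after repeated accesses
```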
Noise-Resilient Empirical Performance Modeling with Deep Neural Networks
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00012
M. Ritter, A. Geiss, Johannes Wehrstein, A. Calotoiu, Thorsten Reimann, T. Hoefler, F. Wolf
{"title":"Noise-Resilient Empirical Performance Modeling with Deep Neural Networks","authors":"M. Ritter, A. Geiss, Johannes Wehrstein, A. Calotoiu, Thorsten Reimann, T. Hoefler, F. Wolf","doi":"10.1109/IPDPS49936.2021.00012","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00012","url":null,"abstract":"Empirical performance modeling is a proven instrument to analyze the scaling behavior of HPC applications. Using a set of smaller-scale experiments, it can provide important insights into application behavior at larger scales. Extra-P is an empirical modeling tool that applies linear regression to automatically generate human-readable performance models. Similar to other regression-based modeling techniques, the accuracy of the models created by Extra-P decreases as the amount of noise in the underlying data increases. This is why the performance variability observed in many contemporary systems can become a serious challenge. In this paper, we introduce a novel adaptive modeling approach that makes Extra-P more noise resilient, exploiting the ability of deep neural networks to discover the effects of numerical parameters, such as the number of processes or the problem size, on performance when dealing with noisy measurements. Using synthetic analysis and data from three different case studies, we demonstrate that our solution improves the model accuracy at high noise levels by up to 25% while increasing their predictive power by about 15%.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126311426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
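Empirical modeling of this flavor fits small-scale measurements to human-readable functions such as t(p) = c0 + c1 * p^i * log2(p)^j and reports the best-fitting term. A compact Python sketch of that regression idea over a tiny candidate set; it mirrors the general approach, not Extra-P's actual search or the paper's neural-network extension:

```python
import numpy as np

def fit_model(p, t):
    """Try a small grid of (i, j) exponents for t(p) = c0 + c1*p^i*log2(p)^j
    and keep the least-squares best."""
    p, t = np.asarray(p, float), np.asarray(t, float)
    best = None
    for i in (0.5, 1.0, 1.5, 2.0):
        for j in (0, 1, 2):
            term = p**i * np.log2(p)**j
            A = np.column_stack([np.ones_like(p), term])
            coef, *_ = np.linalg.lstsq(A, t, rcond=None)
            err = float(np.sum((A @ coef - t) ** 2))
            if best is None or err < best[0]:
                best = (err, i, j, coef)
    return best

p = [2, 4, 8, 16, 32]                    # process counts
t = [3 + 0.1 * x**2 for x in p]          # synthetic noiseless runtimes
err, i, j, (c0, c1) = fit_model(p, t)
print(f"t(p) ~ {c0:.2f} + {c1:.3f} * p^{i} * log2(p)^{j}")   # recovers p^2
```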
Introducing Application Awareness Into a Unified Power Management Stack
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00040
D. Wilson, Siddhartha Jana, Aniruddha Marathe, S. Brink, C. Cantalupo, D. Guttman, B. Geltz, Lowren H. Lawson, Asma H. Al-rawi, A. Mohammad, Fuat Keceli, Federico Ardanaz, J. Eastep, A. Coskun
{"title":"Introducing Application Awareness Into a Unified Power Management Stack","authors":"D. Wilson, Siddhartha Jana, Aniruddha Marathe, S. Brink, C. Cantalupo, D. Guttman, B. Geltz, Lowren H. Lawson, Asma H. Al-rawi, A. Mohammad, Fuat Keceli, Federico Ardanaz, J. Eastep, A. Coskun","doi":"10.1109/IPDPS49936.2021.00040","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00040","url":null,"abstract":"Effective power management in a data center is critical to ensure that power delivery constraints are met while maximizing the performance of users’ workloads. Power limiting is needed in order to respond to greater-than-expected power demand. HPC sites have generally tackled this by adopting one of two approaches: (1) a system-level power management approach that is aware of the facility or site-level power requirements, but is agnostic to the application demands; OR (2) a job-level power management solution that is aware of the application design patterns and requirements, but is agnostic to the site-level power constraints. Simultaneously incorporating solutions from both domains often leads to conflicts in power management mechanisms. This, in turn, affects system stability and leads to irreproducibility of performance. To avoid this irreproducibility, HPC sites have to choose between one of the two approaches, thereby leading to missed opportunities for efficiency gains.This paper demonstrates the need for the HPC community to collaborate towards seamless integration of system-aware and application-aware power management approaches. This is achieved by proposing a new dynamic policy that inherits the benefits of both approaches from tight integration of a resource manager and a performance-aware job runtime environment. An empirical comparison of this integrated management approach against state-of-the-art solutions exposes the benefits of investing in end-to-end solutions to optimize for system-wide performance or efficiency objectives. With our proposed system–application integrated policy, we observed up to 7% reduction in system time dedicated to jobs and up to 11% savings in compute energy, compared to a baseline that is agnostic to system power and application design constraints.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121943390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
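The tension between the two approaches can be made concrete with a toy policy: a site-level power cap (the system-aware input) is divided among jobs in proportion to an application-reported sensitivity weight (the application-aware input), after reserving each job's floor. The names and the proportional rule below are illustrative assumptions, not the paper's actual integrated policy:

```python
def allocate_power(site_cap_w, jobs):
    """jobs maps name -> (floor_watts, sensitivity_weight). Reserve every
    job's floor, then split the spare watts by sensitivity weight."""
    floors = sum(fl for fl, _ in jobs.values())
    spare = max(site_cap_w - floors, 0.0)
    total_weight = sum(w for _, w in jobs.values()) or 1.0
    return {name: fl + spare * w / total_weight
            for name, (fl, w) in jobs.items()}

# The compute-bound job gains more per watt than the I/O-bound one,
# so it reports a higher sensitivity weight.
caps = allocate_power(1000.0, {"compute_job": (200.0, 3.0),
                               "io_job": (150.0, 1.0)})
print(caps)   # {'compute_job': 687.5, 'io_job': 312.5}
```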
Cori: Dancing to the Right Beat of Periodic Data Movements over Hybrid Memory Systems
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00043
Thaleia Dimitra Doudali, Daniel Zahka, Ada Gavrilovska
{"title":"Cori: Dancing to the Right Beat of Periodic Data Movements over Hybrid Memory Systems","authors":"Thaleia Dimitra Doudali, Daniel Zahka, Ada Gavrilovska","doi":"10.1109/IPDPS49936.2021.00043","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00043","url":null,"abstract":"Emerging hybrid memory systems that comprise technologies such as Intel’s Optane DC Persistent Memory, exhibit disparities in the access speeds and capacity ratios of their heterogeneous memory components. This breaks many assumptions and heuristics designed for traditional DRAM-only platforms. High application performance is feasible via dynamic data movement across memory units, which maximizes the capacity use of DRAM while ensuring efficient use of the aggregate system resources. Newly proposed solutions use performance models and machine intelligence to optimize which and how much data to move dynamically. However, the decision of when to move this data is based on empirical selection of time intervals, or left to the applications. Our experimental evaluation shows that failure to properly conFigure the data movement frequency can lead to 10%-100% performance degradation for a given data movement policy; yet, there is no established methodology on how to properly conFigure this value for a given workload, platform and policy. We propose Cori, a system-level tuning solution that identifies and extracts the necessary application-level data reuse information, and guides the selection of data movement frequency to deliver gains in application performance and system resource efficiency. Experimental evaluation shows that Cori configures data movement frequencies that provide application performance within 3% of the optimal one, and that it can achieve this up to $5 times$ more quickly than random or brute-force approaches. System-level validation of Cori on a platform with DRAM and Intel’s Optane DC PMEM confirms its practicality and tuning efficiency.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127928885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
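The frequency-selection idea can be illustrated with a toy: if a page's access counts repeat with some period, the dominant lag of the autocorrelation is a natural guess for the data-movement interval. A Python sketch under that simplification; Cori's actual reuse extraction and tuning loop are more sophisticated:

```python
import numpy as np

def dominant_period(accesses_per_tick):
    """Guess the reuse period as the autocorrelation's strongest lag."""
    x = np.asarray(accesses_per_tick, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x):]   # lags 1 .. n-1
    return int(np.argmax(ac)) + 1

# A page that gets hot every third tick: move it on a period-3 cadence.
signal = [5, 1, 1, 5, 1, 1, 5, 1, 1, 5, 1, 1]
print(dominant_period(signal))   # -> 3
```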
Designing High-Performance MPI Libraries with On-the-fly Compression for Modern GPU Clusters*
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00053
Q. Zhou, C. Chu, N. S. Kumar, Pouya Kousha, S. M. Ghazimirsaeed, H. Subramoni, D. Panda
{"title":"Designing High-Performance MPI Libraries with On-the-fly Compression for Modern GPU Clusters*","authors":"Q. Zhou, C. Chu, N. S. Kumar, Pouya Kousha, S. M. Ghazimirsaeed, H. Subramoni, D. Panda","doi":"10.1109/IPDPS49936.2021.00053","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00053","url":null,"abstract":"While the memory bandwidth of accelerators such as GPU has significantly improved over the last decade, the commodity networks such as Ethernet and InfiniBand are lagging in terms of raw throughput creating. Although there are significant research efforts on improving the large message data transfers for GPU-resident data, the inter-node communication remains the major performance bottleneck due to the data explosion created by the emerging High-Performance Computing (HPC) applications. On the other hand, the recent developments in GPU-based compression algorithms exemplify the potential of using high-performance message compression techniques to reduce the volume of data transferred thereby reducing the load on an already overloaded inter-node communication fabric. The existing GPU-based compression schemes are not designed for “on-the-fly” execution and lead to severe performance degradation when integrated into the communication libraries. In this paper, we take up this challenge and redesign the MVAPICH2 MPI library to enable high-performance, on-the-fly message compression for modern, dense GPU clusters. We also enhance existing implementations of lossless and lossy compression algorithms, MPC and ZFP, to provide high-performance, on-the-fly message compression and decompression. We demonstrate that our proposed designs can offer significant benefits at the microbenchmark and application-levels. The proposed design is able to provide up to 19% and 37% improvement in the GPU computing flops of AWP-ODC with the enhanced MPCOPT and ZFP-OPT schemes, respectively. Moreover, we gain up to 1.56x improvement in Dask throughput. To the best of our knowledge, this is the first work that leverages the GPU-based compression techniques to significantly improve the GPU communication performance for various MPI primitives, MPI-based data science, and HPC applications.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114549784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
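Stripped to its essence, on-the-fly compression wraps the send path: compress just before the wire, decompress right after, and fall back to raw bytes when compression does not pay. A host-side Python sketch with zlib to show the shape of the idea; the paper's designs run lossless/lossy kernels (MPC, ZFP) on the GPU inside MVAPICH2, which this does not attempt:

```python
import zlib

def send_compressed(raw: bytes, send_fn):
    """Compress before sending; ship raw (tagged) if it didn't shrink."""
    payload = zlib.compress(raw, level=1)    # fast level: latency matters
    if len(payload) < len(raw):
        send_fn(b"Z" + payload)              # tag: compressed
    else:
        send_fn(b"R" + raw)                  # tag: raw fallback

def recv_decompressed(buf: bytes) -> bytes:
    """Inspect the tag byte and undo compression if it was applied."""
    tag, body = buf[:1], buf[1:]
    return zlib.decompress(body) if tag == b"Z" else body

wire = []                                    # stand-in for the transport
send_compressed(b"x" * 4096, wire.append)
assert recv_decompressed(wire[0]) == b"x" * 4096
```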
Argus: Efficient Job Scheduling in RDMA-assisted Big Data Processing
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00092
Sijie Wu, Hanhua Chen, Yonghui Wang, Hai Jin
{"title":"Argus: Efficient Job Scheduling in RDMA-assisted Big Data Processing","authors":"Sijie Wu, Hanhua Chen, Yonghui Wang, Hai Jin","doi":"10.1109/IPDPS49936.2021.00092","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00092","url":null,"abstract":"Efficient job scheduling is an important and challenging issue in big data processing systems. Traditional designs commonly give priority to data locality during scheduling and follow a network-optimized principle to avoid costly data moving across the network. The emergence of the high-performance Remote Direct Memory Access (RDMA) network brings new opportunities for big data processing systems. However, the existing RDMA-assisted designs ignore the dependency among stages during scheduling and this can result in unsatisfied system efficiency. In this work, we propose Argus, a novel RDMA-assisted job scheduler which achieves high resource utilization by fully exploiting the structure feature of stage dependency. Argus prioritizes the stages whose completion can enable more schedulable stages. We implement Argus on top of RDMA-Spark, and conduct comprehensive experiments to evaluate the performance using large-scale traces collected from real-world systems. Results show that compared to state-of-the-art designs, Argus reduces the job completion time and makespan by 38% and 31%, respectively.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128247459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
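The priority rule the abstract describes, favoring stages whose completion readies the most downstream work, can be sketched over a dependency map: among currently runnable stages, pick the one that would unlock the most new stages. A minimal Python illustration of that bookkeeping only; the real scheduler also accounts for RDMA transfers and cluster resources:

```python
def pick_stage(deps, done):
    """deps maps stage -> set of prerequisite stages; done is completed."""
    ready = [s for s in deps if s not in done and deps[s] <= done]

    def newly_unlocked(s):
        after = done | {s}
        # Count stages that become ready only once s finishes.
        return sum(1 for t in deps
                   if t not in after and deps[t] <= after
                   and not deps[t] <= done)

    return max(ready, key=newly_unlocked, default=None)

deps = {"A": set(), "B": set(), "C": {"A"}, "D": {"A"}, "E": {"B", "C"}}
print(pick_stage(deps, done=set()))   # "A": finishing it readies C and D
```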
BiPS: Hotness-aware Bi-tier Parameter Synchronization for Recommendation Models
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). Pub Date: 2021-05-01. DOI: 10.1109/IPDPS49936.2021.00069
Qiming Zheng, Quan Chen, Kaihao Bai, Huifeng Guo, Yong Gao, Xiuqiang He, M. Guo
{"title":"BiPS: Hotness-aware Bi-tier Parameter Synchronization for Recommendation Models","authors":"Qiming Zheng, Quan Chen, Kaihao Bai, Huifeng Guo, Yong Gao, Xiuqiang He, M. Guo","doi":"10.1109/IPDPS49936.2021.00069","DOIUrl":"https://doi.org/10.1109/IPDPS49936.2021.00069","url":null,"abstract":"While current deep learning frameworks are mainly optimized for dense-accessed models, they show low throughput and poor scalability in training sparse-accessed recommendation models. Our investigation shows that the poor performance is due to the parameter synchronization bottleneck. We therefore propose BiPS, a bi-tier parameter synchronization system that alleviates the parameter update and the sparse-accessed parameters communication bottleneck. BiPS includes a bi-tier parameter server that accelerates the traditional CPU-based parameter update process, a hotness-aware parameter placement and communication policy to balance the workloads between CPU and GPU and optimize the communication of sparse-accessed parameters. BiPS overlaps the worker computation with the synchronization stage to enable parameter updates in advance. We implement BiPS and incorporate it into mainstream DL frameworks including TensorFlow, MXNet, and PyTorch. The experimental results based on various deep learning frameworks show that BiPS greatly speeds up the training of recommenders (5 - 9$times$) as the model scale increases, without degrading the accuracy.","PeriodicalId":372234,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122763923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
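Hotness-aware placement is easy to picture: sparse-accessed embedding rows follow a skewed distribution, so pinning the few hottest rows in the fast (GPU) tier covers most accesses while the long tail stays on the host. A toy Python sketch of the placement step only; BiPS additionally synchronizes updates across the two tiers, which this omits:

```python
from collections import Counter

def place_hot_rows(access_log, gpu_slots):
    """Return the row ids whose access counts earn them a GPU slot."""
    return {row for row, _ in Counter(access_log).most_common(gpu_slots)}

log = [7, 7, 7, 3, 3, 9, 1, 7, 3, 2]          # skewed row accesses
gpu_rows = place_hot_rows(log, gpu_slots=2)
tier = {row: ("GPU" if row in gpu_rows else "CPU") for row in set(log)}
print(tier)   # rows 7 and 3 land on the GPU tier, the rest stay on CPU
```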