2017 46th International Conference on Parallel Processing (ICPP): Latest Publications

A Dynamic Resource Controller for a Lambda Architecture
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.42
M. HoseinyFarahabady, J. Taheri, Z. Tari, Albert Y. Zomaya

Abstract: Lambda architecture is a novel event-driven serverless paradigm that allows companies to build scalable and reliable enterprise applications. As an attractive alternative to traditional service oriented architecture (SOA), Lambda architecture can be used in many use cases including BI tools, in-memory graph databases, OLAP, and streaming data processing. In practice, an important aim of Lambda's service providers is devising an efficient way to co-locate multiple Lambda functions with different attributes into a set of available computing resources. However, previous studies showed that consolidated workloads can compete fiercely for shared resources, resulting in severe performance variability/degradation. This paper proposes a resource allocation mechanism for a Lambda platform based on the model predictive control framework. Performance evaluation is carried out by comparing the proposed solution with multiple resource allocation heuristics, namely enhanced versions of spread and binpack, and best-effort approaches. Results confirm that the proposed controller increases the overall resource utilization by 37% on average and achieves a significant improvement in preventing QoS violation incidents compared to others.
Citations: 14
Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.25
Ching-Hsiang Chu, Xiaoyi Lu, A. Awan, H. Subramoni, J. Hashmi, B. Elton, D. Panda

Abstract: Broadcast operations (e.g., MPI_Bcast) are widely used in deep learning applications to exchange large amounts of data among multiple graphics processing units (GPUs). Recent studies have shown that leveraging the InfiniBand hardware-based multicast (IB-MCAST) protocol can enhance the scalability of GPU-based broadcast operations. However, these initial IB-MCAST designs are not optimized for multi-source broadcast operations with large messages, the common communication scenario in deep learning applications. In this paper, we first model existing broadcast schemes and analyze their performance bottlenecks on GPU clusters. We then propose a novel broadcast design based on message streaming that better exploits IB-MCAST and NVIDIA GPUDirect RDMA (GDR) technology for efficient large-message transfers, providing high overlap among multi-source broadcast operations. Experimental results show up to a 68% latency reduction compared to state-of-the-art solutions in a benchmark-level evaluation, and near-constant latency for a single broadcast operation as the system grows. Furthermore, the design yields up to 24% performance improvement in the popular deep learning framework Microsoft CNTK, which uses multi-source broadcast operations; notably, the gains are achieved without modifications to applications. Our model validation shows that the proposed analytical model and experimental results match within a 10% range, and the model predicts that the design outperforms existing schemes as the number of broadcast sources grows in large-scale GPU clusters.
Citations: 19
Application-Aware Power Coordination on Power Bounded NUMA Multicore Systems
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.68
Rong Ge, Pengfei Zou, Xizhou Feng

Abstract: Power is a critical factor that limits the performance and scalability of modern high performance computer systems. Treating power as a first-order constraint and a scarce system resource, power-bounded computing offers a new perspective on the power challenge in HPC. In this work we present an application-aware, multi-dimensional power allocation framework to support power-bounded parallel computing on NUMA-enabled multicore systems. The framework utilizes multiple complementary software and hardware power management mechanisms to manage power distribution among sockets, cores, and NUMA memory nodes under a total power budget. More importantly, it implements a hierarchical power coordination method that leverages an application's performance and power scalability to efficiently identify an ideal power distribution. We describe the design of the framework and evaluate its performance on a NUMA-enabled multicore system with 24 cores. Experimental results show that the proposed framework performs close to the oracle solution for parallel programs under various power budgets.
Citations: 12
Scalable Write Allocation in the WAFL File System
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.35
Matthew Curtis-Maury, R. Kesavan, Mrinal K. Bhattacharjee

Abstract: Enterprise storage systems must scale to increasing core counts to meet stringent performance requirements. Both the NetApp® Data ONTAP® storage operating system and its WAFL® file system have been incrementally parallelized over the years, but some components remain single-threaded. The WAFL write allocator, which is responsible for assigning blocks on persistent storage to dirty data in a way that maximizes write throughput to the storage media, is single-threaded and has become a major scalability bottleneck. This paper presents a new write allocation architecture, White Alligator, for the WAFL file system that scales performance on many cores. We also place the new architecture in the context of the historical parallelization of WAFL and discuss the architectural decisions that have facilitated this parallelism. The resulting system demonstrates increased scalability that results in throughput gains of up to 274% on a many-core storage system.
Citations: 5
High-Performance and Memory-Saving Sparse General Matrix-Matrix Multiplication for NVIDIA Pascal GPU
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.19
Yusuke Nagasaka, Akira Nukada, S. Matsuoka

Abstract: Sparse general matrix-matrix multiplication (SpGEMM) is a key kernel of preconditioners such as the algebraic multigrid method and of graph algorithms. However, SpGEMM performance is quite low on modern processors due to random memory accesses to both the input and output matrices. Moreover, the number and pattern of non-zero elements in the output matrix, which are important for achieving locality, are unknown before execution. State-of-the-art GPU implementations of SpGEMM also require large amounts of memory for temporary results, limiting the matrix sizes computable in fast GPU device memory. We propose a new fast SpGEMM algorithm that requires little memory and achieves high performance. Calculation of the pattern and values of the output matrix is optimized using the GPU's on-chip shared memory and a hash table. Additionally, our algorithm launches multiple kernels running concurrently to improve the utilization of GPU resources; the kernel for each output row is chosen based on its number of non-zero elements. Performance evaluation using matrices from the University of Florida Sparse Matrix Collection on an NVIDIA Pascal generation GPU shows that our approach achieves speedups of up to 4.3x in single precision and 4.4x in double precision compared to existing SpGEMM libraries. Furthermore, memory usage is reduced by 14.7% in single precision and 10.9% in double precision on average, allowing larger matrices to be computed.
Citations: 49
Parallel Algorithms for the Computation of Cycles in Relative Neighborhood Graphs
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.28
H. Sundar, P. Khurd

Abstract: We present parallel algorithms for computing cycle orders and cycle perimeters in relative neighborhood graphs. These algorithms have wide-ranging applications from microscopic to macroscopic domains, e.g., in histopathological image analysis and wireless network routing. Our approach consists of the following steps (sub-algorithms): (1) uniform partitioning of the graph vertices across processes, (2) parallel Delaunay triangulation, and (3) parallel computation of the relative neighborhood graph and the cycle orders and perimeters. We evaluated our algorithm on a large dataset with 6.5 million points and demonstrate excellent fixed-size scalability, as well as excellent isogranular scalability up to 131K processes. Our largest run processed a dataset with 13 billion points on 131K processes on ORNL's Cray XK7 Titan supercomputer.
Citations: 0
Network Aware Multi-User Computation Partitioning in Mobile Edge Clouds
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.39
Lei Yang, Jiannong Cao, Zhenyu Wang, Weigang Wu

Abstract: Mobile edge clouds have drawn increasing attention from researchers because they sit closer to mobile users than the traditional Internet cloud. Offloading computation from mobile devices to a nearby edge cloud is an effective technique to accelerate applications and/or save energy on the devices. However, a mobile edge cloud usually has limited computation resources and constrained access bandwidth shared by the multiple users in its proximity, so the allocation of resources and bandwidth among users is significant to overall application performance. In this paper, we study the network-aware multi-user computation partitioning problem in mobile edge clouds: deciding for each user which parts of the application should be offloaded onto the edge cloud and which should be executed locally, while simultaneously allocating the access bandwidth among users, such that the average application performance across users is maximized. The problem is novel in that we consider the competition among users for both computing resources and bandwidth, and jointly optimize the partitioning decisions with the allocation of resources and bandwidth, whereas most existing work either focuses on single-user computation partitioning or studies multi-user partitioning without regard to the constrained network bandwidth. We first formulate the problem, then transform it into the classic Multi-class Multi-dimensional Knapsack Problem, and develop an effective algorithm, the Performance Function Matrix based Heuristic (PFM-H), to solve it. Comprehensive simulations show that our proposed algorithm significantly outperforms the benchmark algorithms in average application performance.
Citations: 17
WA-Dataspaces: Exploring the Data Staging Abstractions for Wide-Area Distributed Scientific Workflows
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.34
M. Aktaş, J. Montes, I. Rodero, M. Parashar

Abstract: Data staging has been shown to be very effective for supporting data-intensive in-situ workflows and the coupling of applications. Experimental science is increasingly collaborative among geographically distributed teams and includes experimental instruments and HPC facilities. This new way of doing science poses new challenges due to data sizes, the complexity of computation, and the use of wide-area networks between couplings. In this paper, we explore how the staging abstraction can be extended to support such workflows. Specifically, we develop a NUMA-like abstraction that orchestrates multiple distributed local-area staging abstractions and provides asynchronous data put/get semantics to enable data sharing across them. To mask data movement overhead and provide in-time data access, we propose predictive prefetching approaches that leverage the iterative nature of the coupling. We evaluate our prototype implementation using a fusion workflow and show that our design can effectively and transparently support wide-area coupled workflows. Additionally, results show that prefetching leads to significant gains in access times for data that must be moved over the wide-area network.
Citations: 2
Boosting the Efficiency of HPCG and Graph500 with Near-Data Processing
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.12
E. Vermij, Leandro Fiorin, C. Hagleitner, K. Bertels

Abstract: HPCG and Graph500 can be regarded as the two most relevant benchmarks for high-performance computing systems. Existing supercomputer designs, however, tend to focus on floating-point peak performance, a metric of little relevance to these two benchmarks, leaving resources underutilized and yielding little performance improvement on them over time. In this work, we analyze the implementation of both benchmarks on a novel shared-memory near-data processing architecture. We study several aspects: (1) a system-parameter design exploration, (2) software optimizations, and (3) the exploitation of unique architectural features such as user-enhanced coherence, as well as the exploitation of data locality for traffic between near-data processors. For the HPCG benchmark, we show a 2.5x application-level speedup with respect to a CPU and a 2.5x power-efficiency improvement with respect to a GPU. For the Graph500 benchmark, we show up to a 3.5x speedup with respect to a CPU. Furthermore, we show that, with many of the existing data-locality optimizations for this graph workload applied, local memory bandwidth is not the crucial parameter; a high-bandwidth, low-latency interconnect is arguably more important, shedding new light on the near-data processing characteristics most relevant to this type of heavily optimized graph processing.
Citations: 9
Favorable Block First: A Comprehensive Cache Scheme to Accelerate Partial Stripe Recovery of Triple Disk Failure Tolerant Arrays
Pub Date: 2017-08-01 · DOI: 10.1109/ICPP.2017.31
Luyu Li, Houxiang Ji, Chentao Wu, Jie Li, M. Guo

Abstract: With the development of cloud computing, disk arrays tolerating triple disk failures (3DFTs) are receiving more attention because they provide high data reliability at low monetary cost. A challenging issue in these arrays is how to efficiently reconstruct lost data, especially for partial stripe errors (e.g., sector and chunk errors), which are among the most common scenarios in practice. Existing cache strategies are inefficient for partial stripe reconstruction in 3DFTs because the complex relationships among data and parities are typically ignored during recovery. To address this problem, we propose a comprehensive cache policy called Favorable Block First (FBF), which speeds up partial stripe reconstruction in 3DFTs. FBF exploits the relationships among parity chains by assigning priorities to shared chunks: during recovery, chunks shared by more parity chains receive higher priority, so FBF dynamically holds the most significant data in the buffer cache for partial stripe reconstruction. This increases the cache hit ratio and reduces reconstruction time. To demonstrate the effectiveness of FBF, we conduct several simulations in DiskSim. The results show that, compared to typical recovery schemes combined with classic cache policies (e.g., LRU, LFU, and ARC), FBF improves the hit ratio by up to 2.47x and accelerates reconstruction by up to 14.90%.
Citations: 6