2014 21st International Conference on High Performance Computing (HiPC) — Latest Publications

Design and evaluation of parallel hashing over large-scale data
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-20 DOI: 10.1109/HiPC.2014.7116909
Long Cheng, S. Kotoulas, Tomas E. Ward, G. Theodoropoulos
Abstract: High-performance analytical data processing systems often run on servers with large amounts of memory. A common data structure in such environments is the hash table. This paper investigates efficient parallel hash algorithms for processing large-scale data. Currently, hash tables on distributed architectures are accessed one key at a time by local or remote threads, while shared-memory approaches focus on accessing a single table with multiple threads. A relatively straightforward "bulk-operation" approach seems to have been neglected by researchers. Using such a method, we propose a high-level parallel hashing framework, Structured Parallel Hashing, targeting efficient processing of massive data on distributed memory. We present a theoretical analysis of the proposed method and describe the design of our hashing implementations. The evaluation reveals a very interesting result: the proposed straightforward method can vastly outperform distributed hashing methods and can even offer performance comparable with approaches based on shared-memory supercomputers which use specialized hardware predicates. Moreover, we characterize the performance of our hash implementations through extensive experiments, allowing system developers to make a more informed choice for their high-performance applications.
Citations: 8
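The "bulk-operation" idea the abstract contrasts with per-key access can be illustrated with a small sketch (not the paper's implementation): a whole batch of keys is first partitioned by hash value, so each partition has a single owner and can be built or probed without locks or per-key messages.

```python
from collections import defaultdict

def partition_keys(keys, num_partitions):
    """Split a batch of keys by hash so each partition can be
    processed independently (one owner per partition, no locks)."""
    parts = [[] for _ in range(num_partitions)]
    for k in keys:
        parts[hash(k) % num_partitions].append(k)
    return parts

def bulk_build(keys, num_partitions=4):
    """Build per-partition hash tables from a whole batch at once;
    in a real system each partition maps to a thread or node."""
    tables = [defaultdict(int) for _ in range(num_partitions)]
    for pid, part in enumerate(partition_keys(keys, num_partitions)):
        for k in part:
            tables[pid][k] += 1                 # toy payload: a count
    return tables

def bulk_probe(tables, keys):
    """Probe a batch of keys against the partitioned tables."""
    n = len(tables)
    return [tables[hash(k) % n].get(k, 0) for k in keys]
```

The point of the structure is that phases, not individual keys, are the unit of coordination: one partitioning pass, then fully independent per-partition work.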
Saving energy by exploiting residual imbalances on iterative applications
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116895
E. Padoin, M. Castro, L. Pilla, P. Navaux, J. Méhaut
Abstract: The power consumption of High Performance Computing (HPC) systems is an increasing concern as large-scale systems grow in size and, consequently, consume more energy. In response to this challenge, we propose two variants of a new energy-aware load balancer that aim at reducing the energy consumption of parallel platforms running imbalanced scientific applications without degrading their performance. Our research combines dynamic load balancing with DVFS techniques in order to reduce the clock frequency of underloaded computing cores which experience some residual imbalance even after tasks are remapped. Experimental results with benchmarks and a real-world application showed energy savings of up to 32% with our fine-grained variant that performs per-core DVFS, and of up to 34% with our coarse-grained variant that performs per-chip DVFS.
Citations: 14
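The core frequency-selection idea can be sketched abstractly (this is an illustration of the principle, not the paper's algorithm): after remapping, any core with less work than the most loaded core can drop to the lowest frequency that still finishes within the iteration's critical path, assuming the maximum frequency is among the available levels.

```python
def pick_core_frequencies(core_loads, f_max, f_levels):
    """For each core, choose the lowest available frequency that still
    completes its load no later than the most loaded core at f_max.
    core_loads: work units per core; f_levels: available frequencies
    (must include f_max so a feasible choice always exists)."""
    t_critical = max(core_loads) / f_max    # iteration time, set by slowest core
    freqs = []
    for load in core_loads:
        # lowest frequency whose completion time fits the critical path
        f = next(f for f in sorted(f_levels) if load / f <= t_critical)
        freqs.append(f)
    return freqs
```

Underloaded cores slow down (saving energy) while the iteration time, dictated by the critical core, is unchanged.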
Optimization of scan algorithms on multi- and many-core processors
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116883
Qiao Sun, Chao Yang
Abstract: Scan is a basic building block widely utilized in many applications. With the emergence of multi-core and many-core processors, the study of highly scalable parallel scan algorithms becomes increasingly important. In this paper, we first propose a novel parallel scan algorithm based on fine-grained dynamic task scheduling in QUARK, and then derive a cache-friendly framework for any parallel scan kernel. The QUARK-scan is superior to the fastest available counterpart proposed by Zhang in 2012 and many other parallel scans in several aspects, including greatly improved load balance and a substantially reduced number of global barriers. On the other hand, the cache-friendly framework helps improve cache line usage and is flexible enough to apply to any parallel scan kernel. A variety of optimization techniques such as SIMD vectorization, loop unrolling, adjacent synchronization and thread affinity are exploited in QUARK-scan and in the cache-friendly versions of both QUARK-scan and Zhang's scan. Experiments on three typical multi- and many-core platforms indicate that the proposed QUARK-scan and the cache-friendly Zhang's scan are superior in different scenarios.
Citations: 2
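For readers unfamiliar with the kernel being optimized, the standard blocked parallel scan has three phases — independent local scans, a small scan over block totals, then independent offset additions. A minimal sequential sketch of that structure (not the paper's QUARK-scan itself):

```python
from itertools import accumulate

def blocked_inclusive_scan(data, num_blocks=4):
    """Three-phase blocked scan: phases 1 and 3 are embarrassingly
    parallel across blocks; phase 2 is a tiny scan of block sums."""
    n = len(data)
    size = -(-n // num_blocks)                  # ceil division
    blocks = [data[i:i + size] for i in range(0, n, size)]
    # Phase 1: independent local inclusive scans (one task per block)
    local = [list(accumulate(b)) for b in blocks]
    # Phase 2: exclusive scan over the block totals
    offsets, total = [], 0
    for b in local:
        offsets.append(total)
        total += b[-1]
    # Phase 3: add each block's offset (again parallel across blocks)
    return [x + off for b, off in zip(local, offsets) for x in b]
```

The global barriers the abstract mentions sit between these phases, which is why reducing their number matters for scalability.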
Interface for heterogeneous kernels: A framework to enable hybrid OS designs targeting high performance computing on manycore architectures
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116885
Taku Shimosawa, Balazs Gerofi, Masamichi Takagi, Gou Nakamura, Tomoki Shirasawa, Yuji Saeki, M. Shimizu, A. Hori, Y. Ishikawa
Abstract: Turning towards exascale systems and beyond, it has been widely argued that currently available systems software is not going to be feasible due to various requirements, such as the ability to deal with heterogeneous architectures, the need for system-level optimization targeting specific applications, elimination of OS noise, and, at the same time, compatibility with legacy applications. To cope with these issues, a hybrid design of operating systems, where lightweight specialized kernels can cooperate with a traditional OS kernel, seems adequate, and a number of recent research projects are now heading in this direction. This paper presents Interface for Heterogeneous Kernels (IHK), a general framework enabling hybrid kernel designs in systems equipped with manycore processors and/or accelerators. IHK provides a range of capabilities, such as resource partitioning, management of heterogeneous OS kernels, as well as a low-level communication layer among the kernels. We describe IHK's interface and demonstrate its feasibility for hybrid kernel designs by executing various different lightweight OS kernels on top of it, each specialized for certain types of applications. We use the Intel Xeon Phi, Intel's latest manycore coprocessor, as our experimental platform.
Citations: 36
A high performance broadcast design with hardware multicast and GPUDirect RDMA for streaming applications on Infiniband clusters
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116875
Akshay Venkatesh, H. Subramoni, Khaled Hamidouche, D. Panda
Abstract: Several streaming applications in the field of high performance computing are obtaining significant speedups in execution time by leveraging the raw compute power offered by modern GPGPUs. This raw compute power, coupled with the high network throughput offered by high performance interconnects such as InfiniBand (IB), is allowing streaming applications to scale rapidly. A frequently used operation central to the execution of multi-node streaming applications is the broadcast operation, where data from a single source is transmitted to multiple sinks, typically from a live data site. Although high performance networks like IB offer novel features like hardware-based multicast to speed up the broadcast operation, their benefits have been limited to host-based applications due to the inability of IB Host Channel Adapters (HCAs) to directly access the memory of GPGPUs. This poses a significant performance bottleneck to high performance streaming applications that rely heavily on broadcast operations from GPU memories. The recently introduced GPUDirect RDMA feature alleviates this bottleneck by enabling IB HCAs to perform data transfers directly to/from GPU memory (bypassing host memory). It thus presents an attractive alternative for designing high performance broadcast operations for GPGPU-based streaming applications. In this work, we propose a novel method for fully utilizing GPUDirect RDMA and hardware multicast features in tandem to design a high performance broadcast operation for streaming applications. Experiments conducted with the proposed design show up to a 60% decrease in latency and a 3X-4X improvement in a throughput benchmark compared to the naive scheme on 64 GPU nodes.
Citations: 13
A fast implementation of MLR-MCL algorithm on multi-core processors
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116888
Q. Niu, Pai-Wei Lai, S. M. Faisal, S. Parthasarathy, P. Sadayappan
Abstract: Widespread use of stochastic flow based graph clustering algorithms, e.g. Markov Clustering (MCL), has been hampered by their lack of scalability and fragmentation of output. Multi-Level Regularized Markov Clustering (MLR-MCL) is an improvement over Markov Clustering (MCL), providing faster performance and better quality of clusters for large graphs. However, a closer look at MLR-MCL's performance reveals potential for further improvement. In this paper we present a fast parallel implementation of the MLR-MCL algorithm via static work partitioning based on analysis of memory footprints. By parallelizing the most time-consuming region of the sequential MLR-MCL algorithm, we report up to 10.43x (5.22x on average) speedup on CPU, using 8 datasets from SNAP and 3 PPI datasets. In addition, our algorithm can be adapted to perform general sparse matrix-matrix multiplication (SpGEMM), and our experimental evaluation shows up to 3.50x (1.92x on average) speedup on CPU, and up to 5.12x (2.20x on average) speedup on MIC, compared to the SpGEMM kernel provided by the Intel Math Kernel Library (MKL).
Citations: 12
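The SpGEMM kernel at the heart of MCL-style flow expansion is commonly expressed row-wise (Gustavson's formulation), where each output row is computed independently — which is exactly what makes static work partitioning across threads possible. A small sketch of that formulation (illustrative only, using dict-of-dicts rather than CSR):

```python
def spgemm_rows(A, B):
    """Row-wise (Gustavson) sparse matrix product C = A * B.
    A, B: dict row -> {col: value}. Each output row depends only on
    one row of A, so rows can be statically partitioned across threads."""
    C = {}
    for i, a_row in A.items():
        acc = {}
        for k, a_ik in a_row.items():           # accumulate a_ik * B[k, :]
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            C[i] = acc
    return C
```

The memory-footprint analysis the abstract mentions would correspond to estimating the size of each row's accumulator `acc` ahead of time so that rows are distributed evenly.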
Mixed-precision models for calculation of high-order virial coefficients on GPUs
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116898
Chao Feng, A. Schultz, V. Chaudhary, D. Kofke
Abstract: The virial equation of state (VEOS) is a density expansion of the thermodynamic pressure with respect to an ideal-gas reference. Its coefficients can be computed from a molecular model, and become more expensive to calculate at higher order. In this paper, we use GPUs to calculate the 8th, 9th and 10th virial coefficients of the Lennard-Jones (LJ) potential model by the Mayer Sampling Monte Carlo (MSMC) method and Wheatley's algorithm. Two mixed-precision models are proposed to overcome a potential precision limitation of current GPUs while maintaining the performance benefit. On the latest Kepler architecture GPU, the Tesla K40, an average speedup of 20x to 40x is achieved for these calculations.
Citations: 0
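The paper's specific mixed-precision models are not described in this listing, but the precision limitation it targets is easy to illustrate: accumulating many tiny Monte Carlo contributions in single precision loses them to rounding. A classic remedy that keeps single-precision arithmetic but carries a correction term is compensated (Kahan) summation — shown here as a generic illustration, not the authors' method:

```python
import numpy as np

def kahan_sum_f32(values):
    """Compensated (Kahan) summation in float32: a running correction
    term captures the low-order bits that plain float32 addition drops."""
    s = np.float32(0.0)
    c = np.float32(0.0)          # compensation for lost low-order bits
    for v in np.asarray(values, dtype=np.float32):
        y = v - c
        t = np.float32(s + y)
        c = np.float32((t - s) - y)   # recovers what the add just lost
        s = t
    return s
```

With a naive float32 loop, adding 1e-8 repeatedly to 1.0 leaves the sum at exactly 1.0 (the increment is below float32's resolution near 1); the compensated version recovers the accumulated contribution.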
Matrix-matrix multiplication on a large register file architecture with indirection
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116709
D. Sreedhar, J. Derby, R. Montoye, C. Johnson
Abstract: Dense matrix-matrix multiply is an important kernel in many high performance computing applications, including the emerging deep neural network based cognitive computing applications. Graphics processing units (GPUs) have been very successful in handling dense matrix-matrix multiply in a variety of applications. However, recent research has shown that GPUs are very inefficient in using the available compute resources on the silicon for matrix multiply in terms of utilization of peak floating point operations per second (FLOPS). In this paper, we show that an architecture with a large register file supported by "indirection" can utilize the floating point computing resources on the processor much more efficiently. A key feature of our proposed in-line accelerator is a bank-based very-large register file with embedded SIMD support. This processor-in-regfile (PIR) strategy is implemented as local computation elements (LCEs) attached to each bank, overcoming the limited number of register file ports. Because each LCE is a SIMD computation element, and all of them can proceed concurrently, the PIR approach constitutes a highly-parallel super-wide-SIMD device. We show that we can achieve more than 25% better performance than the best known results for matrix multiply using GPUs, using far fewer floating point computing units and hence less silicon area and power. We also show that the architecture blends well with the Strassen and Winograd matrix multiply algorithms. We optimize the selective data parallelism that the LCEs enable for these algorithms and study the area-performance trade-offs.
Citations: 0
Xevolver: An XML-based code translation framework for supporting HPC application migration
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116902
H. Takizawa, S. Hirasawa, Yasuharu Hayashi, Ryusuke Egawa, Hiroaki Kobayashi
Abstract: This paper proposes an extensible programming framework to separate platform-specific optimizations from application codes. The framework allows programmers to define their own code translation rules for the special demands of individual systems, compilers, libraries, and applications. Code translation rules associated with user-defined compiler directives are defined in an external file, and the application code is simply annotated with the directives. For code transformations based on the rules, the framework exposes the abstract syntax tree (AST) of an application code as an XML document to expert programmers. Hence, the XML document of an AST can be transformed using any XML-based technologies. Our case studies using real applications demonstrate that the framework is effective in separating platform-specific optimizations from application codes, and in incrementally improving the performance of an existing application without messing up the code.
Citations: 35
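The core mechanism — a directive in the code triggering a user-defined transformation of the XML-exposed AST — can be sketched with a toy rule. The element names, the `xev_unroll` directive, and the use of Python's `xml.etree` are all hypothetical stand-ins for illustration; Xevolver's actual AST schema and tooling may differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical AST fragment: a loop annotated by a user-defined directive.
SRC = """<ast>
  <pragma name="xev_unroll" factor="2"/>
  <loop var="i" start="0" end="4">
    <stmt>a[i] = b[i]</stmt>
  </loop>
</ast>"""

def apply_unroll(root):
    """Toy translation rule: where an xev_unroll pragma precedes a loop,
    duplicate the loop body 'factor' times and widen the loop step."""
    children = list(root)
    for idx, node in enumerate(children):
        if node.tag == "pragma" and node.get("name") == "xev_unroll":
            loop = children[idx + 1]
            factor = int(node.get("factor"))
            body = list(loop)
            for rep in range(1, factor):
                for stmt in body:
                    clone = ET.SubElement(loop, stmt.tag)
                    clone.text = stmt.text.replace("[i]", f"[i+{rep}]")
            loop.set("step", str(factor))
            root.remove(node)                   # directive is consumed
    return root

root = apply_unroll(ET.fromstring(SRC))
```

The key design point survives even in this toy: the rule lives outside the application source, and the source carries only an annotation.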
Smart multi-task scheduling for OpenCL programs on CPU/GPU heterogeneous platforms
2014 21st International Conference on High Performance Computing (HiPC) Pub Date : 2014-12-01 DOI: 10.1109/HiPC.2014.7116910
Y. Wen, Zheng Wang, M. O’Boyle
Abstract: Heterogeneous systems consisting of multiple CPUs and GPUs are increasingly attractive as platforms for high performance computing. Such platforms are usually programmed using OpenCL, which provides program portability by allowing the same program to execute on different types of device. As such systems become more mainstream, they will move from application-dedicated devices to platforms that need to support multiple concurrent user applications. Here there is a need to determine when and where to map different applications so as to best utilize the available heterogeneous hardware resources. In this paper, we present an efficient OpenCL task scheduling scheme which schedules multiple kernels from multiple programs on CPU/GPU heterogeneous platforms. It does this by determining at runtime which kernels are likely to best utilize a device. We show that speedup is a good scheduling priority function and develop a novel model that predicts a kernel's speedup based on its static code structure. Our scheduler uses this prediction and runtime input data size to prioritize and schedule tasks. This technique is applied to a large set of concurrent OpenCL kernels. We evaluated our approach for system throughput and average turnaround time against competitive techniques on two different platforms: a Core i7/NVIDIA GTX590 platform and a Core i7/AMD Tahiti 7970 platform. For system throughput, we achieve, on average, a 1.21x and 1.25x improvement over the best competitors on the NVIDIA and AMD platforms respectively. Our approach reduces the turnaround time, on average, by at least 1.5x and 1.2x on the NVIDIA and AMD platforms respectively, when compared to alternative approaches.
Citations: 134
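The speedup-as-priority idea can be sketched in a few lines (an illustration of the scheduling policy only — the paper's actual predictor is a model over static code features, which is omitted here): kernels whose predicted GPU speedup clears a threshold queue on the GPU, the rest on the CPU, and each queue is ordered so the kernels that benefit most from their assigned device run first.

```python
def schedule_kernels(kernels, gpu_threshold=2.0):
    """Split kernels between CPU and GPU by predicted speedup, then
    order each queue so the biggest beneficiaries run first.
    kernels: list of (name, predicted_gpu_speedup) pairs (hypothetical)."""
    gpu = sorted((k for k in kernels if k[1] >= gpu_threshold),
                 key=lambda k: -k[1])           # highest GPU speedup first
    cpu = sorted((k for k in kernels if k[1] < gpu_threshold),
                 key=lambda k: k[1])            # lowest GPU speedup first
    return {"gpu": [k[0] for k in gpu], "cpu": [k[0] for k in cpu]}
```

Using predicted speedup as the priority function means a kernel that barely benefits from the GPU never blocks one that benefits greatly, which is where the throughput gains come from.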