2010 39th International Conference on Parallel Processing: Latest Publications

Cyberaide onServe: Software as a Service on Production Grids
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.47
Tobias Kurze, Lizhe Wang, G. Laszewski, J. Tao, M. Kunze, David Kramer, Wolfgang Karl
{"title":"Cyberaide onServe: Software as a Service on Production Grids","authors":"Tobias Kurze, Lizhe Wang, G. Laszewski, J. Tao, M. Kunze, David Kramer, Wolfgang Karl","doi":"10.1109/ICPP.2010.47","DOIUrl":"https://doi.org/10.1109/ICPP.2010.47","url":null,"abstract":"The Software as a Service (SaaS) methodology is a key paradigm of Cloud computing. In this paper, we focus on an interesting topic – to implement a Cloud computing functionality, the SaaS model, on existing production Grid infrastructures. In general, production Grids employ a Job-Submission-Execution (JSE) model with rigid access interfaces. In this paper we develop the Cyberaide onServe, a lightweight middleware with a virtual appliance. The Cyberaide onServe implements the SaaS methodology on production Grids by translating the SaaS model to the JSE model. The Cyberaide onServe virtual appliance is deployed on demand, hosts applications as Web services, accepts Web service invocations, and finally the Cyberaide onServe executes them on production Grids. We have deployed the Cyberaide onServe on the TeraGrid infrastructure and test results show Cyberaide onServe can provide the SaaS functionality with good performance.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122196477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
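To make the SaaS-to-JSE translation concrete, here is a minimal sketch of the idea: a service invocation is turned into a batch-job submission against the Grid's job interface. The application registry, the generated job script, and the `qsub` command are hypothetical stand-ins, not Cyberaide onServe's actual interfaces.

```python
# Illustrative sketch only: a SaaS-to-JSE translation layer in the spirit of
# Cyberaide onServe. Registry entries and the "qsub" submission command are
# hypothetical stand-ins for whatever the production Grid actually exposes.
import subprocess
import tempfile

# Hypothetical registry mapping a hosted "service" name to a Grid executable.
APP_REGISTRY = {
    "blast": "/opt/apps/blast/bin/blastall",
    "matmul": "/opt/apps/bench/matmul",
}

def invoke_service(service_name: str, args: list[str]) -> str:
    """Translate a Web-service style invocation into a batch job submission."""
    executable = APP_REGISTRY[service_name]
    # Build a throwaway batch script for the Grid's job-submission-execution interface.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as script:
        script.write("#!/bin/bash\n")
        script.write(" ".join([executable] + args) + "\n")
        script_path = script.name
    # Submit through the site's batch system and hand the job id back to the SaaS caller.
    result = subprocess.run(["qsub", script_path], capture_output=True, text=True, check=True)
    return result.stdout.strip()
```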
Exploitation of Dynamic Communication Patterns through Static Analysis
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.14
Robert Preissl, B. Supinski, M. Schulz, D. Quinlan, D. Kranzlmüller, T. Panas
{"title":"Exploitation of Dynamic Communication Patterns through Static Analysis","authors":"Robert Preissl, B. Supinski, M. Schulz, D. Quinlan, D. Kranzlmüller, T. Panas","doi":"10.1109/ICPP.2010.14","DOIUrl":"https://doi.org/10.1109/ICPP.2010.14","url":null,"abstract":"Collective operations can have a large impact on the performance of parallel applications. However, the ideal implementation of a particular collective communication often depends on both the application and the targeted machine structure. Our approach combines dynamic and static analysis techniques to identify common collective communication patterns expressed as point-to-point calls and transforms them into equivalent MPI collectives. We first detect potential collective communication patterns in runtime traces and associate them with the corresponding source code regions. If our static analysis verifies that the introduction of collectives is safe for any program flow, we then replace the original communication primitives with their collective counterpart. In this paper we introduce the necessary algorithms to determine the safety of these transformations and we demonstrate several use cases, including automatic use of new extensions to the MPI standard such as nonblocking collective operations. The use of dynamic analysis significantly reduces compile times, resulting in a speed-up of about 50 for source transformations of HPL due to more directed analysis capabilities and also dramatically decreases complexity of the underlying static analysis.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123160994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
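The transformation the paper automates can be illustrated with a hand-written example (not the tool's output): a point-to-point gather pattern and the MPI collective it can safely be replaced with, written here with mpi4py for brevity.

```python
# Illustration of the pattern-to-collective rewrite: every rank sends its value to
# rank 0 with point-to-point calls, which is semantically a gather.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local_value = rank * rank  # some per-rank result

# Hand-written pattern the analysis would detect.
if rank == 0:
    values = [local_value]
    for src in range(1, size):
        values.append(comm.recv(source=src, tag=7))
else:
    comm.send(local_value, dest=0, tag=7)

# Equivalent collective the transformation would introduce once safety is verified.
values_collective = comm.gather(local_value, root=0)
if rank == 0:
    assert values == values_collective
```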
Integer Number Crunching on the Cell Processor
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.59
Hsieh-Chung Chen, Chen-Mou Cheng, Shih-Hao Hung, Zong-Cing Lin
{"title":"Integer Number Crunching on the Cell Processor","authors":"Hsieh-Chung Chen, Chen-Mou Cheng, Shih-Hao Hung, Zong-Cing Lin","doi":"10.1109/ICPP.2010.59","DOIUrl":"https://doi.org/10.1109/ICPP.2010.59","url":null,"abstract":"We describe our implementation of the Elliptic Curve Method (ECM) of integer factorization on the Cell processor. ECM is the method of choice for finding medium-sized prime factors, e.g., between $2^{30}$ and $2^{100}$. A good ECM implementation is of paramount importance for evaluating the security of cryptosystems like RSA because it is a critical step in the modern versions of the Number Field Sieves (NFS), currently the most efficient cryptanalysis technique against RSA. We use ECM as a benchmark to understand how the performance of integer number crunching applications can benefit from several architectural design features of the Cell including wide arithmetic pipeline, auxiliary pipeline for handling managerial tasks, and large on-die memory per thread of execution. As a result, our ECM implementation on the PowerXCell~8i Cell processor outperforms all previously published implementations on other hardware platforms including graphics processing units (GPUs). For example, compared with the best published result on an NVIDIA GTX 295 graphics card, ours is more than three times faster on absolute basis. This is in spite of the fact that GPUs have greater raw number-crunching capability, not to mention that the Cell consumes less power and hence delivers much better performance per watt.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133685404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
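For readers unfamiliar with ECM, the sketch below is a plain-Python stage 1 on short Weierstrass curves, where a failed modular inversion reveals a factor. It illustrates the algorithm only; the authors' implementation uses vectorized big-integer arithmetic tuned to the Cell's SPUs, and the bound B1 and curve count here are arbitrary.

```python
# Minimal ECM stage 1 (affine Weierstrass coordinates). A non-invertible denominator
# modulo n exposes a factor of n. Not the paper's optimized Cell code.
import math
import random

class FactorFound(Exception):
    def __init__(self, factor):
        self.factor = factor

def inv_mod(d, n):
    d %= n
    g = math.gcd(d, n)
    if g != 1:
        raise FactorFound(g)          # failed inversion reveals a factor (possibly n itself)
    return pow(d, -1, n)

def ec_add(P, Q, a, n):
    """Add points on y^2 = x^3 + a*x + b (mod n); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, B1=10_000, curves=50):
    primes = [p for p in range(2, B1) if all(p % q for q in range(2, int(p**0.5) + 1))]
    for _ in range(curves):
        x, y, a = (random.randrange(n) for _ in range(3))
        P = (x, y)                    # b is implicitly y^2 - x^3 - a*x (mod n)
        try:
            for p in primes:
                pk = p
                while pk * p <= B1:   # largest prime power of p not exceeding B1
                    pk *= p
                P = ec_mul(pk, P, a, n)
        except FactorFound as e:
            if e.factor != n:
                return e.factor
    return None

# Example: factor a small semiprime (stage 1 can occasionally miss; rerun if None).
print(ecm_stage1(10007 * 10009))
```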
Efficient PageRank and SpMV Computation on AMD GPUs
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.17
Tianji Wu, Bo Wang, Yi Shan, Feng Yan, Yu Wang, Ningyi Xu
{"title":"Efficient PageRank and SpMV Computation on AMD GPUs","authors":"Tianji Wu, Bo Wang, Yi Shan, Feng Yan, Yu Wang, Ningyi Xu","doi":"10.1109/ICPP.2010.17","DOIUrl":"https://doi.org/10.1109/ICPP.2010.17","url":null,"abstract":"Google's famous PageRank algorithm is widely used to determine the importance of web pages in search engines. Given the large number of web pages on the World Wide Web, efficient computation of PageRank becomes a challenging problem. We accelerated the power method for computing PageRank on AMD GPUs. The core component of the power method is the Sparse Matrix-Vector Multiplication (SpMV). Its performance is largely determined by the characteristics of the sparse matrix, such as sparseness and distribution of non-zero values. Based on careful analysis on the web linkage matrices, we design a fast and scalable SpMV routine with three passes, using a modified Compressed Sparse Row format. Our PageRank computation achieves 15x speedup on a Radeon 5870 Graphic Card compared with a PhenomII 965 CPU at 3.4GHz. Our method can easily adapt to large scale data sets. We also compare the performance of the same method on the OpenCL platform with our low-level implementation.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128793465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 54
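As a reference point for the computation being accelerated, here is a CPU-side power method built on a CSR SpMV with SciPy. This is only a baseline sketch with an illustrative damping factor and toy graph; it does not reproduce the authors' modified CSR layout or their three-pass GPU kernel.

```python
# Power-method PageRank where each iteration is one SpMV on a CSR matrix.
import numpy as np
from scipy.sparse import csr_matrix

def pagerank(links, n, damping=0.85, tol=1e-8, max_iter=100):
    """links: list of (src, dst) edges; returns the PageRank vector of length n."""
    src = np.array([s for s, _ in links])
    dst = np.array([d for _, d in links])
    out_deg = np.bincount(src, minlength=n).astype(float)
    # Column-stochastic transition matrix in CSR form: M[d, s] = 1 / outdeg(s).
    data = 1.0 / out_deg[src]
    M = csr_matrix((data, (dst, src)), shape=(n, n))
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = damping * (M @ rank) + (1.0 - damping) / n
        # Redistribute the mass held by dangling nodes (no outgoing links).
        new_rank += damping * rank[out_deg == 0].sum() / n
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Tiny example graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
print(pagerank([(0, 1), (0, 2), (1, 2), (2, 0)], n=3))
```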
Massive Social Network Analysis: Mining Twitter for Social Good
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.66
David Ediger, Karl Jiang, E. J. Riedy, David A. Bader, Courtney Corley, R. Farber, William N. Reynolds
{"title":"Massive Social Network Analysis: Mining Twitter for Social Good","authors":"David Ediger, Karl Jiang, E. J. Riedy, David A. Bader, Courtney Corley, R. Farber, William N. Reynolds","doi":"10.1109/ICPP.2010.66","DOIUrl":"https://doi.org/10.1109/ICPP.2010.66","url":null,"abstract":"Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes and a real-world graph (Kwak, et al.) with 61.6 million vertices and 1.47 billion edges in 105 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured as a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123330157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 171
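The per-source work that such betweenness estimation parallelizes is Brandes-style dependency accumulation. The sketch below estimates betweenness from a sample of source vertices on a toy undirected graph, purely to illustrate the computation, not GraphCT's Cray XMT implementation.

```python
# Sampled Brandes-style betweenness estimator on an adjacency-list graph.
import random
from collections import deque

def approx_betweenness(adj, samples):
    """adj: dict node -> list of neighbours; samples: number of source vertices."""
    bc = {v: 0.0 for v in adj}
    for s in random.sample(list(adj), samples):
        # Forward BFS: shortest-path counts (sigma) and predecessor lists.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Backward pass: accumulate dependencies onto each vertex.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(approx_betweenness(graph, samples=3))
```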
Parameterized Schedulability Analysis on Uniform Multiprocessors
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.40
R. Pathan, Jan Jonsson
{"title":"Parameterized Schedulability Analysis on Uniform Multiprocessors","authors":"R. Pathan, Jan Jonsson","doi":"10.1109/ICPP.2010.40","DOIUrl":"https://doi.org/10.1109/ICPP.2010.40","url":null,"abstract":"This paper addresses global Rate-Monotonic (RM) scheduling of implicit-deadline periodic real-time tasks on uniform multiprocessor platforms. In particular, we propose new schedulability conditions that include a set of easily computable task-set parameters for achieving better system utilization while meeting the deadlines of all the tasks. First, an individual sufficient schedulability condition is derived for each task. Then, the collection of schedulability conditions for the tasks are condensed to provide two different simple sufficient schedulability conditions for the entire task system --- one for uniform multiprocessors, and one for unit-capacity multiprocessors, respectively. Finally, we show that our proposed simple rate-monotonic schedulability conditions for uniform and unit-capacity multiprocessors have higher worst-case system utilization than all other state-of-the-art simple schedulability conditions for global rate-monotonic scheduling of implicit-deadline tasks.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125385474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
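The paper's parameterized conditions are not reproduced here. As background on this style of easily computable test, the sketch below checks the classic Liu and Layland sufficient utilization bound for uniprocessor RM, U <= n(2^(1/n) - 1), with a purely illustrative task set.

```python
# Classic single-processor RM utilization test (Liu & Layland); shown only as
# background, not as the paper's multiprocessor conditions.
def rm_utilization_test(tasks):
    """tasks: list of (worst_case_execution_time, period) for implicit-deadline tasks."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization, bound, utilization <= bound

# Three illustrative tasks as (C, T) pairs.
u, bound, schedulable = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable by the test: {schedulable}")
```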
Performance Management of Accelerated MapReduce Workloads in Heterogeneous Clusters
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.73
Jordà Polo, David Carrera, Y. Becerra, Vicenç Beltran, J. Torres, E. Ayguadé
{"title":"Performance Management of Accelerated MapReduce Workloads in Heterogeneous Clusters","authors":"Jordà Polo, David Carrera, Y. Becerra, Vicencc Beltran, J. Torres, E. Ayguadé","doi":"10.1109/ICPP.2010.73","DOIUrl":"https://doi.org/10.1109/ICPP.2010.73","url":null,"abstract":"Next generation data centers will be composed of thousands of hybrid systems in an attempt to increase overall cluster performance and to minimize energy consumption. New programming models, such as MapReduce, specifically designed to make the most of very large infrastructures will be leveraged to develop massively distributed services. At the same time, data centers will bring an unprecedented degree of workload consolidation, hosting in the same infrastructure distributed services from many different users. In this paper we present our advancements in leveraging the Adaptive MapReduce Scheduler to meet user defined high level performance goals while transparently and efficiently exploiting the capabilities of hybrid systems. While the Adaptive Scheduler was already able to dynamically allocate resources to co-located MapReduce jobs based on their completion time goals, it was completely unaware of specific hardware capabilities. In our work we describe the changes introduced in the Adaptive Scheduler to enable it with hardware awareness and with the ability to co-schedule accelerable and non-accelerable jobs on the same heterogeneous MapReduce cluster, making the most of the underlying hybrid systems. The developed prototype is tested in a cluster of Cell/BE blades and relies on the use of accelerated and non-accelerated versions of the MapReduce tasks of different deployed applications to dynamically select the best version to run on each node. Decisions are made after workload composition and jobs' completion time goals. Results show that the augmented Adaptive Scheduler provides dynamic resource allocation across jobs, hardware affinity when possible, and is even able to spread jobs' tasks across accelerated and non-accelerated nodes in order to meet performance goals in extreme conditions. To our knowledge this is the first MapReduce scheduler and prototype that is able to manage high-level performance goals even in presence of hybrid systems and accelerable jobs.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126673691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 64
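One way to picture a completion-time-goal scheduler is the resource estimate below: from observed task durations and the deadline, compute how many slots a job needs, discounting task time when an accelerated version can run on a hybrid node. The field names and the speedup factor are hypothetical, not the Adaptive Scheduler's actual model.

```python
# Toy slot-demand estimate for deadline-driven MapReduce jobs on a hybrid cluster.
import time
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    pending_tasks: int
    avg_task_seconds: float      # observed mean task duration (non-accelerated version)
    deadline: float              # absolute completion-time goal (epoch seconds)
    accel_speedup: float = 1.0   # >1 if an accelerated task version exists for this node type

def slots_needed(job: Job, now: float) -> int:
    remaining = max(job.deadline - now, 1.0)
    effective_task_time = job.avg_task_seconds / job.accel_speedup
    # Work still to do, divided by the time left, gives the parallelism required.
    return max(1, round(job.pending_tasks * effective_task_time / remaining))

now = time.time()
jobs = [
    Job("sort", pending_tasks=400, avg_task_seconds=30, deadline=now + 1200),
    Job("grep", pending_tasks=50, avg_task_seconds=10, deadline=now + 600, accel_speedup=4.0),
]
for job in sorted(jobs, key=lambda j: j.deadline):
    print(job.name, slots_needed(job, now))
```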
Starling: Minimizing Communication Overhead in Virtualized Computing Platforms Using Decentralized Affinity-Aware Migration
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.30
Jason D. Sonnek, James Greensky, Robert Reutiman, A. Chandra
{"title":"Starling: Minimizing Communication Overhead in Virtualized Computing Platforms Using Decentralized Affinity-Aware Migration","authors":"Jason D. Sonnek, James Greensky, Robert Reutiman, A. Chandra","doi":"10.1109/ICPP.2010.30","DOIUrl":"https://doi.org/10.1109/ICPP.2010.30","url":null,"abstract":"Virtualization is being widely used in large-scale computing environments, such as clouds, data centers, and grids, to provide application portability and facilitate resource multiplexing while retaining application isolation. In many existing virtualized platforms, it has been found that the network bandwidth often becomes the bottleneck resource, causing both high network contention and reduced performance for communication and data-intensive applications. In this paper, we present a decentralized affinity-aware migration technique that incorporates heterogeneity and dynamism in network topology and job communication patterns to allocate virtual machines on the available physical resources. Our technique monitors network affinity between pairs of VMs and uses a distributed bartering algorithm, coupled with migration, to dynamically adjust VM placement such that communication overhead is minimized. Our experimental results running the Intel MPI benchmark and a scientific application on a 7-node Xen cluster show that we can get up to 42% improvement in the runtime of the application over a no-migration technique, while achieving up to 85% reduction in network communication cost. In addition, our technique is able to adjust to dynamic variations in communication patterns and provides both good performance and low network contention with minimal overhead.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121628277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 140
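The objective Starling optimizes can be illustrated with a toy, centralized version: given a pairwise VM traffic matrix and a placement, greedily swap the VM pair whose exchange most reduces cross-host traffic. The real system reaches similar decisions through a decentralized bartering protocol; the traffic numbers and host names below are invented.

```python
# Greedy affinity-aware swap loop over a pairwise VM traffic matrix.
from itertools import combinations

def cross_host_traffic(traffic, placement):
    return sum(rate for (a, b), rate in traffic.items() if placement[a] != placement[b])

def greedy_swaps(traffic, placement, max_rounds=10):
    for _ in range(max_rounds):
        base = cross_host_traffic(traffic, placement)
        best_gain, best_pair = 0.0, None
        for a, b in combinations(placement, 2):
            if placement[a] == placement[b]:
                continue
            placement[a], placement[b] = placement[b], placement[a]
            gain = base - cross_host_traffic(traffic, placement)
            placement[a], placement[b] = placement[b], placement[a]   # undo trial swap
            if gain > best_gain:
                best_gain, best_pair = gain, (a, b)
        if best_pair is None:
            break
        a, b = best_pair
        placement[a], placement[b] = placement[b], placement[a]
    return placement

traffic = {("vm1", "vm2"): 90.0, ("vm1", "vm3"): 5.0, ("vm2", "vm4"): 3.0}
placement = {"vm1": "hostA", "vm2": "hostB", "vm3": "hostA", "vm4": "hostB"}
print(greedy_swaps(traffic, placement))
```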
Improving Application Performance and Predictability Using Multiple Virtual Lanes in Modern Multi-core InfiniBand Clusters
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.54
H. Subramoni, P. Lai, S. Sur, D. Panda
{"title":"Improving Application Performance and Predictability Using Multiple Virtual Lanes in Modern Multi-core InfiniBand Clusters","authors":"H. Subramoni, P. Lai, S. Sur, D. Panda","doi":"10.1109/ICPP.2010.54","DOIUrl":"https://doi.org/10.1109/ICPP.2010.54","url":null,"abstract":"Network congestion is an important factor affecting the performance of large scale jobs in supercomputing clusters, especially with the wide deployment of multi-core processors. The blocking nature of current day collectives makes such congestion a critical factor in their performance. On the other hand, modern interconnects like InfiniBand provide us with many novel features such as Virtual Lanes aimed at delivering better performance to end applications. Theoretical research in the field of network congestion indicate Head of Line (HoL) blocking as a common causes for congestion and the use of multiple virtual lanes as one of the ways to alleviate it. In this context, we make use of the multiple virtual lanes provided by the InfiniBand standard as a means to alleviate network congestion and thereby improve the performance of various high performance computing applications on modern multi-core clusters. We integrate our scheme into the MVAPICH2 MPI library. To the best of our knowledge, this is the first such implementation that takes advantage of the use of multiple virtual lanes at the MPI level. We perform various experiments at native InfiniBand, microbenchmark as well as at the application levels. The results of our experimental evaluation show that the use of multiple virtual lanes can improve the predictability of message arrival by up to 10 times in the presence of network congestion. Our microbenchmark level evaluation with multiple communication streams show that the use of multiple virtual lanes can improve the bandwidth / latency / message rate of medium sized messages by up to 13%. Through the use of multiple virtual lanes, we are also able to improve the performance of the Alltoall collective operation for medium message sizes by up to 20%. Performance improvement of up to 12% is also observed for Alltoall collective operation through segregation of traffic into multiple virtual lanes when multiple jobs compete for the same network resource. We also see that our scheme can improve the performance of collective operations used inside the CPMD application by 11% and the overall performance of the CPMD application itself by up to 6%.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"2280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130275701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
A Stack-on-Demand Execution Model for Elastic Computing
2010 39th International Conference on Parallel Processing | Pub Date: 2010-09-13 | DOI: 10.1109/ICPP.2010.79
R. Ma, King Tin Lam, Cho-Li Wang, Chenggang Zhang
{"title":"A Stack-on-Demand Execution Model for Elastic Computing","authors":"R. Ma, King Tin Lam, Cho-Li Wang, Chenggang Zhang","doi":"10.1109/ICPP.2010.79","DOIUrl":"https://doi.org/10.1109/ICPP.2010.79","url":null,"abstract":"Cloud computing is all the rage these days; its confluence with mobile computing would bring an even more pervasive influence. Clouds per se are elastic computing infrastructure where mobile applications can offload or draw tasks in an on-demand push-pull manner. Lightweight and portable task migration support enabling better resource utilization and data access locality is the key for success of mobile cloud computing. Existing task migration mechanisms are however too coarse-grained and costly, offsetting the benefits from migration and hampering flexible task partitioning among the mobile and cloud resources. We propose a new computation migration technique called stack-on-demand (SOD) that exports partial execution states of a stack machine to achieve agile mobility, easing into small-capacity devices and flexible distributed execution in a multi-domain workflow style. Our design also couples SOD with a novel object faulting technique for efficient access to remote objects. We implement the SOD concept into a middleware system for transparent execution migration of Java programs. It is shown that SOD migration cost is pretty low, comparing to several existing migration mechanisms. We also conduct experiments with an iPhone handset to demonstrate the elasticity of SOD by which server-side heavyweight processes can run adaptively on the cell phone.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"602 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116341074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 28
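The stack-on-demand idea, shipping only the top of the execution stack and faulting in deeper frames if control ever returns into them, can be sketched with explicit frame objects. The frame format, the workflow names, and the pickle round-trip standing in for a remote node are illustrative, not the SOD middleware's JVM-level state capture or wire protocol.

```python
# Conceptual sketch: a virtual call stack kept as explicit frames, of which only the
# topmost frame is serialized and shipped; deeper frames are fetched on demand.
import pickle
from dataclasses import dataclass, field

@dataclass
class Frame:
    func: str                      # which step of the workflow this frame is executing
    locals: dict = field(default_factory=dict)

@dataclass
class Task:
    frames: list                   # bottom ... top of the virtual call stack

def migrate_top(task: Task, depth: int = 1) -> bytes:
    """Serialize only the top `depth` frames; the rest stay on the home node."""
    return pickle.dumps(task.frames[-depth:])

def fault_in(task: Task, depth_needed: int) -> bytes:
    """Home-node handler: ship deeper frames when the remote execution returns into them."""
    return pickle.dumps(task.frames[:-1][-depth_needed:])

task = Task(frames=[
    Frame("main", {"dataset": "s3://bucket/input"}),
    Frame("filter_records", {"predicate": "score > 0.9"}),
    Frame("render_report", {"page": 3}),
])
shipped = migrate_top(task)                       # small payload: only the top frame moves
print([f.func for f in pickle.loads(shipped)])
deeper = fault_in(task, depth_needed=1)           # later, pull the caller's frame on demand
print([f.func for f in pickle.loads(deeper)])
```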