2011 IEEE International Conference on Cluster Computing: Latest Publications

Exploring Fine-Grained Task-Based Execution on Multi-GPU Systems
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.50
Long Chen, Oreste Villa, G. Gao
Abstract: Using multi-GPU systems, including GPU clusters, is gaining popularity in scientific computing. However, when using multiple GPUs concurrently, conventional data-parallel GPU programming paradigms such as CUDA cannot satisfactorily address issues such as load balancing, GPU resource utilization, and overlapping fine-grained computation with communication. In this paper, we present a fine-grained task-based execution framework for multi-GPU systems. By scheduling finer-grained tasks than the conventional CUDA programming method supports among multiple GPUs, and by allowing concurrent task execution on a single GPU, our framework provides means for solving the above issues and efficiently utilizing multi-GPU systems. Experiments with a molecular dynamics application show that, for nonuniformly distributed workloads, solutions based on our framework achieve good load balance and considerable performance improvements over solutions based on the standard CUDA programming methodologies.
Citations: 22
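The paper's actual framework is not reproduced here; as a rough illustration of the general idea the abstract describes (fine-grained tasks pulled dynamically from a shared pool by multiple GPUs so that nonuniform work stays balanced), the following Python sketch simulates the scheduling pattern only. The GPU work is faked with sleeps, and all names (gpu_worker, make_tasks) and numbers are invented for the example.

```python
# Minimal sketch of dynamic, fine-grained task scheduling across multiple GPUs.
# The GPU computation is simulated with time.sleep(); names are illustrative only.
import queue
import threading
import time

NUM_GPUS = 4

def make_tasks():
    # Nonuniform workload: task cost varies widely, which is what defeats
    # static, coarse-grained partitioning.
    return [0.001 * (i % 17 + 1) for i in range(200)]

def gpu_worker(gpu_id, tasks, done_counts):
    # Each "GPU" repeatedly pulls the next fine-grained task from the shared pool,
    # so faster (or less loaded) devices naturally take on more work.
    while True:
        try:
            cost = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(cost)              # stand-in for a kernel launch
        done_counts[gpu_id] += 1
        tasks.task_done()

def run():
    tasks = queue.Queue()
    for cost in make_tasks():
        tasks.put(cost)
    done_counts = [0] * NUM_GPUS
    workers = [threading.Thread(target=gpu_worker, args=(g, tasks, done_counts))
               for g in range(NUM_GPUS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("tasks completed per GPU:", done_counts)

if __name__ == "__main__":
    run()
```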
Automatic Task Re-organization in MapReduce
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.44
Zhenhua Guo, M. Pierce, G. Fox, Mo Zhou
Abstract: MapReduce is increasingly considered a useful parallel programming model for large-scale data processing. It exploits parallelism among executions of the primitive map and reduce operations. Hadoop is an open-source implementation of MapReduce that has been used in both academic research and industry production. However, its strategy of having one map task process one data block limits the degree of concurrency and degrades performance because available resources cannot be fully utilized. In addition, its assumption that task execution times within each phase do not vary much does not always hold, which renders speculative execution ineffective. In this paper, we present mechanisms to dynamically split and consolidate tasks to cope with load balancing and break through the concurrency limit imposed by fixed task granularity. For single-job systems, two algorithms are proposed for the cases where prior knowledge is available and where it is not. For multi-job cases, we propose a modified shortest-job-first strategy, which, combined with task splitting, theoretically minimizes job turnaround time. We compared the effectiveness of our approach against the default task scheduling strategy using both synthesized and trace-based workloads. Simulation results show that our approach improves performance significantly.
Citations: 13
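The paper's own algorithms are not given in the abstract; below is a small, hypothetical Python sketch of the two ingredients it names: splitting the unprocessed part of a task across idle slots, and ordering jobs shortest-first. The runtime estimates and helper names (split_task, shortest_job_first) are made up for illustration.

```python
# Illustrative sketch of two ideas from the abstract: splitting a long-running
# task's remaining input across idle slots, and ordering jobs shortest-first.

def split_task(remaining_records, idle_slots):
    """Split the unprocessed part of a task into roughly equal sub-tasks."""
    parts = max(1, idle_slots + 1)          # the original task keeps one share
    chunk = len(remaining_records) // parts
    splits = [remaining_records[i * chunk:(i + 1) * chunk] for i in range(parts - 1)]
    splits.append(remaining_records[(parts - 1) * chunk:])  # remainder goes last
    return splits

def shortest_job_first(jobs):
    """Order jobs by estimated run time to reduce mean turnaround time."""
    return sorted(jobs, key=lambda job: job["estimated_runtime"])

if __name__ == "__main__":
    records = list(range(100))
    print([len(s) for s in split_task(records, idle_slots=3)])
    pending = [{"name": "A", "estimated_runtime": 40},
               {"name": "B", "estimated_runtime": 5},
               {"name": "C", "estimated_runtime": 12}]
    print([j["name"] for j in shortest_job_first(pending)])
```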
Performance of a Virtual Cluster in a General-Purpose Teaching Laboratory
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.76
E. Johnson, Patrick Garrity, Timothy Yates, Richard A. Brown
Abstract: Through virtualization, a Beowulf cluster running on the physical hardware and network of a general-purpose teaching laboratory can provide significant computational power without compromising (in fact, increasing) the educational value of the lab machines. After describing our implementation of such a virtual cluster in a teaching lab, we analyze its performance by comparing benchmark results to those of an equivalent dedicated (native) cluster and investigate expedient improvements to our design.
Citations: 9
DARE: Adaptive Data Replication for Efficient Cluster Scheduling
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.26
Cristina L. Abad, Yi Lu, R. Campbell
Abstract: Placing data as close as possible to computation is a common practice of data-intensive systems, commonly referred to as the data locality problem. By analyzing existing production systems, we confirm the benefit of data locality and find that data items differ in popularity and vary in the correlation of their accesses. We propose DARE, a distributed adaptive data replication algorithm that helps the scheduler achieve better data locality. DARE solves two problems, how many replicas to allocate to each file and where to place them, using probabilistic sampling and a competitive aging algorithm independently at each node. It takes advantage of existing remote data accesses in the system and incurs no extra network usage. Using two mixed-workload traces from Facebook, we show that DARE improves data locality by more than 7 times with the FIFO scheduler in Hadoop and achieves more than 85% data locality for the FAIR scheduler with delay scheduling. Turnaround time and job slowdown are reduced by 19% and 25%, respectively.
Citations: 169
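DARE's actual policy is not spelled out in the abstract; the toy Python sketch below only illustrates the combination it mentions, per-node probabilistic sampling of remote reads plus competitive aging of replica scores. The class name, sampling probability, decay factor, and thresholds are all invented.

```python
# Toy sketch of a DARE-like, per-node replication policy: remote reads are
# sampled probabilistically to decide when to retain a local replica, and an
# aging step evicts replicas of blocks that have stopped being popular.
import random

class NodeReplicaCache:
    def __init__(self, capacity, sample_prob=0.1, decay=0.5):
        self.capacity = capacity
        self.sample_prob = sample_prob
        self.decay = decay
        self.scores = {}            # block_id -> popularity score

    def on_remote_read(self, block_id):
        # The data already crossed the network for the task, so keeping a copy
        # costs no extra transfer; sampling keeps only persistently hot blocks.
        if block_id in self.scores or random.random() < self.sample_prob:
            self.scores[block_id] = self.scores.get(block_id, 0.0) + 1.0
            self._evict_if_needed()

    def age(self):
        # Competitive aging: all scores decay, so stale replicas lose out.
        for block_id in list(self.scores):
            self.scores[block_id] *= self.decay
            if self.scores[block_id] < 0.1:
                del self.scores[block_id]

    def _evict_if_needed(self):
        while len(self.scores) > self.capacity:
            coldest = min(self.scores, key=self.scores.get)
            del self.scores[coldest]

if __name__ == "__main__":
    cache = NodeReplicaCache(capacity=3)
    for _ in range(500):
        cache.on_remote_read(random.choice(["a", "a", "a", "b", "c", "d"]))
    cache.age()
    print(sorted(cache.scores, key=cache.scores.get, reverse=True))
```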
A Framework for Data-Intensive Computing with Cloud Bursting
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1145/2148600.2148604
Tekin Bicer, David Chiu, G. Agrawal
Abstract: For many organizations, one attractive use of cloud resources is what is referred to as cloud bursting or the hybrid cloud. These terms refer to scenarios where an organization acquires and manages in-house resources to meet its base need but uses additional resources from a cloud provider to maintain an acceptable response time during workload peaks. Cloud bursting has so far been discussed in the context of using additional computing resources from a cloud provider. However, as next-generation applications are expected to see orders-of-magnitude increases in data set sizes, cloud resources can also be used to store additional data after local resources are exhausted. In this paper, we consider the challenge of data analysis in a scenario where data is stored across a local cluster and cloud resources. We describe a software framework that enables data-intensive computing with cloud bursting, i.e., using a combination of compute resources from a local cluster and a cloud environment to perform Map-Reduce type processing on a data set that is geographically distributed. Our evaluation with three different applications shows that data-intensive computing with cloud bursting is feasible and scalable. In particular, compared to a situation where the data set is stored at one location and processed using resources at that end, the average slowdown of our system (using distributed but the same aggregate number of compute resources) is only 15.55%. Thus, the overheads due to global reduction, remote data retrieval, and potential load imbalance are quite manageable. Our system scales with an average speedup of 81% when the number of compute resources is doubled.
Citations: 49
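The framework itself is not shown in the abstract; the short Python sketch below illustrates only the general pattern it describes, each site reducing its locally resident data and a global reduction combining the compact per-site results across the wide-area link. Word count stands in for the real applications, and all function names are hypothetical.

```python
# Minimal sketch of map-reduce style processing over data split across a local
# cluster and a cloud site: each site reduces its own partition, then a cheap
# global reduction combines the per-site results.
from collections import Counter
from functools import reduce

def site_map_reduce(records):
    """Run the map and local-reduce phases over the data resident at one site."""
    return reduce(lambda acc, c: acc + c, (Counter(r.split()) for r in records), Counter())

def global_reduce(per_site_results):
    """Combine the compact per-site results; only these cross the wide-area link."""
    return reduce(lambda acc, c: acc + c, per_site_results, Counter())

if __name__ == "__main__":
    local_cluster_data = ["cloud bursting with local data", "local cluster data"]
    cloud_data = ["cloud resources hold the overflow data"]
    result = global_reduce([site_map_reduce(local_cluster_data),
                            site_map_reduce(cloud_data)])
    print(result.most_common(3))
```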
ResourceExchange: Latency-Aware Scheduling in Virtualized Environments with High Performance Fabrics
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.14
A. Ranadive, Ada Gavrilovska, K. Schwan
Abstract: Virtualized infrastructures have seen strong acceptance in data center systems and applications, but have not yet been adopted for latency-sensitive codes, which require I/O to arrive predictably or responses to be generated within certain timeliness guarantees. Examples of such applications include certain classes of parallel HPC codes, server systems performing phone-call or multimedia delivery, and financial services in electronic trading platforms such as ICE and CME. In this paper, we argue that the use of high-performance, VMM-bypass-capable devices can help create the virtualized infrastructures needed for the latency-sensitive applications listed above. However, to enable consolidation, the problems to be solved go beyond efficient I/O virtualization and include dealing with the shared use of I/O and compute resources in ways that minimize or eliminate interference. Toward this end, we describe ResEx, a resource management approach for virtualized RDMA-based platforms that incorporates concepts from supply-demand theory and congestion pricing to dynamically control the allocation of CPU and I/O resources to guest VMs. ResEx and its mechanisms and abstractions allow multiple 'pricing policies' to be deployed on these types of virtualized platforms, including policies that reduce interference and enhance isolation by identifying and taxing VMs responsible for resource congestion. While the main ideas behind ResEx are more general, the design presented in this paper is specific to InfiniBand RDMA-based virtualized platforms because of the asynchronous monitoring needed to determine the VMs' I/O usage and the methods used to establish the trading rate for the underlying CPU and I/O resources. The latter is particularly necessary since the hypervisor's only mechanism for controlling I/O usage is to make appropriate adjustments to a VM's CPU resources. The experimental evaluation of our solution uses InfiniBand platforms virtualized with the open-source Xen hypervisor and BenchEx, an RDMA-based latency-sensitive benchmark based on a model of a financial trading platform. The results demonstrate the utility of the ResEx approach in making RDMA-based virtualized platforms more manageable and better suited for hosting even latency-sensitive workloads. ResEx can reduce latency interference by as much as 30% in some cases.
Citations: 7
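ResEx's pricing policies are not detailed in the abstract; the toy Python sketch below only illustrates the congestion-pricing idea it names, taxing the CPU allocation of VMs that overuse the fabric while the link is congested, since CPU is the hypervisor's only indirect lever over VMM-bypass I/O. All constants, field names, and the fairness rule are invented.

```python
# Toy sketch of a congestion-pricing style controller: VMs that use more than
# their fair share of fabric bandwidth while the link is congested are "taxed"
# by shrinking their CPU cap.

def reprice_cpu(vms, link_utilization, congestion_threshold=0.8, tax_rate=0.2):
    """Return new CPU caps (fraction of a core) for each VM."""
    if link_utilization < congestion_threshold:
        return {vm["name"]: vm["cpu_cap"] for vm in vms}   # no congestion, no tax
    fair_share = sum(vm["io_usage"] for vm in vms) / len(vms)
    new_caps = {}
    for vm in vms:
        overuse = max(0.0, vm["io_usage"] - fair_share) / max(fair_share, 1e-9)
        tax = min(0.5, tax_rate * overuse)                  # cap the penalty
        new_caps[vm["name"]] = round(vm["cpu_cap"] * (1.0 - tax), 3)
    return new_caps

if __name__ == "__main__":
    vms = [{"name": "trading_vm", "cpu_cap": 1.0, "io_usage": 900},   # MB/s
           {"name": "batch_vm",   "cpu_cap": 1.0, "io_usage": 2500}]
    print(reprice_cpu(vms, link_utilization=0.93))
```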
Reservation-Based Overbooking for HPC Clusters
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.74
Georg Birkenheuer, A. Brinkmann
Abstract: HPC environments are no longer used only in research and academia, but are also becoming commercially available and successful. HPC resource providers, which offer HPC services over the Internet, have to ensure a very high utilization rate to be competitive and profitable. One way to improve utilization is overbooking. This paper presents an improved overbooking approach for HPC providers that serves this purpose. Resources are not assigned to a job until it actually starts. This enhances the scheduler's degree of freedom and therefore improves overbooking performance. We evaluated the potential of this approach using real-world job traces. Given sufficient expected demand, overbooking is applicable and provides additional profit.
Citations: 8
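The paper's admission policy is not given in the abstract; as a rough, hypothetical sketch of what an overbooking check can look like, the Python snippet below accepts reservations beyond nominal capacity as long as the expected demand, discounted by how much users typically overestimate their runtimes, fits within a safety margin. The overestimation factor, margin, and field names are invented.

```python
# Toy sketch of a reservation-based overbooking check: accept a new reservation
# if the *expected* demand (requested node-hours discounted by a historical
# overestimation factor) still fits under a safety margin of the capacity.

def expected_usage(reservations, overestimation_factor=0.7):
    """Expected node-hours actually consumed by the accepted reservations."""
    return sum(r["nodes"] * r["walltime_h"] for r in reservations) * overestimation_factor

def admit(new_req, reservations, capacity_node_hours, safety_margin=0.9):
    """Accept the request if expected demand stays under the safety margin."""
    demand = expected_usage(reservations + [new_req])
    return demand <= safety_margin * capacity_node_hours

if __name__ == "__main__":
    booked = [{"nodes": 64, "walltime_h": 4}, {"nodes": 32, "walltime_h": 8}]
    request = {"nodes": 48, "walltime_h": 6}
    # 128-node cluster over an 8-hour window -> 1024 node-hours of capacity.
    print(admit(request, booked, capacity_node_hours=128 * 8))
```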
Experimental and Numerical Study of the Effect of Geometric Parameters on Liquid Single-Phase Pressure Drop in Micro-Scale Pin-Fin Arrays
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.91
Valerie Pezzullo, S. Voinier
Abstract: The purpose of this project was to study, experimentally and numerically, the effects of changing geometric parameters on the single-phase pressure drop of water across arrays of micro-scale pin-fin heat sinks. An experimental study was performed on a cylindrical pin-fin heat sink at various flow rates and temperatures. Computational analysis was also performed on a cylindrical pin-fin array to study the effects of changing certain geometric parameters and to compare these results with previous experimental and numerical results. The experimental results were not as expected due to malfunctioning equipment, and the numerical results had to be compared against existing experimental data in order to validate the numerical model.
Citations: 0
HEaRS: A Hierarchical Energy-Aware Resource Scheduler for Virtualized Data Centers
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.60
Hui Chen, Meina Song, Junde Song, Ada Gavrilovska, K. Schwan
Abstract: With the increasing popularity of Internet-based cloud services, energy efficiency in large-scale Internet data centers has become important not only to curtail energy costs and alleviate environmental concerns, but also because such systems can quickly reach the limits of the power available to them. This paper investigates to what extent, and how, energy usage improvements through consolidation can benefit from taking into account the environmental influences and effects seen in data center systems. Toward that end, we present experimental results obtained in a fully instrumented, small-scale data center and then use these results to propose a hierarchical energy-aware resource scheduler (HEaRS) for cluster workload placement and server provisioning that also considers the physical environment in which data center systems operate. Specifically, at the rack level, HEaRS tries to maintain a 'thermal balance' across the rack to avoid hot spots and reduce cooling costs. At the chassis level, HEaRS uses a proportional-integral controller to balance the electrical current drawn by the chassis's two power domains, which helps the chassis reach its most energy-efficient state. Finally, at the server level, HEaRS can employ known methods such as dynamic voltage and frequency scaling or core idling to reduce power consumption. The result is a hierarchical set of controllers that jointly implement holistic, energy-aware resource scheduling for an entire rack, and this hierarchical solution can then be further extended to entire data centers. Our initial experimental results show opportunities for gains of up to 16% in energy usage compared to methods that are not aware of the physical environment, and up to 15% improvements in application performance.
Citations: 11
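HEaRS's actual controller is not specified in the abstract beyond being a proportional-plus-integral controller over the two chassis power domains; the generic PI loop below is only a sketch of that kind of control, with invented gains (untuned) and a trivially simplified "shift load between domains" plant model.

```python
# Generic proportional-integral (PI) control loop of the kind the abstract
# describes for balancing current draw between a chassis's two power domains:
# the error is the imbalance, and the output shifts load from one domain to
# the other. Gains and the load-shift model are invented and not tuned.

def pi_balance(domain_a_amps, domain_b_amps, steps=20, kp=0.5, ki=0.05):
    integral = 0.0
    for step in range(steps):
        error = domain_a_amps - domain_b_amps        # imbalance between domains
        integral += error
        shift = kp * error + ki * integral           # amps worth of load to move
        domain_a_amps -= shift / 2.0
        domain_b_amps += shift / 2.0
        print(f"step {step:2d}: A={domain_a_amps:6.2f} A, B={domain_b_amps:6.2f} A")
    return domain_a_amps, domain_b_amps

if __name__ == "__main__":
    pi_balance(domain_a_amps=18.0, domain_b_amps=10.0)
```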
Design and Evaluation of Network Topology-/Speed-Aware Broadcast Algorithms for InfiniBand Clusters
2011 IEEE International Conference on Cluster Computing. Pub Date: 2011-09-26. DOI: 10.1109/CLUSTER.2011.43
H. Subramoni, K. Kandalla, Jérôme Vienne, S. Sur, B. Barth, K. Tomko, R. McLay, K. Schulz, D. Panda
Abstract: It is an established fact that the network topology can have an impact on the performance of scientific parallel applications. However, little work has been done to design an easy-to-use solution inside a communication library supporting a parallel programming model, where the complexity of making application performance agnostic to the network topology is hidden from the end user. Similarly, rapid improvements in networking technology and speed are making many commodity clusters heterogeneous with respect to networking speed. For example, switches and adapters belonging to different generations (SDR - 8 Gbps, DDR - 16 Gbps, and QDR - 36 Gbps speeds in InfiniBand) are integrated into a single system. This poses the additional challenge of making the communication library aware of the performance implications of heterogeneous link speeds, so that it can perform optimizations that take link speed into account. In this paper, we propose a framework to automatically detect the topology and speed of an InfiniBand network and make this information available to users through an easy-to-use interface. We also make design changes inside the MPI library to dynamically query this topology detection service and form a topology model of the underlying network. We have redesigned the broadcast algorithm to take this network topology information into account and dynamically adapt the communication pattern to best fit the characteristics of the underlying network. To the best of our knowledge, this is the first such work for InfiniBand clusters. Our experimental results show that, for large homogeneous systems and large message sizes, we get up to 14% improvement in the latency of the broadcast operation using our proposed network topology-aware scheme over the default scheme at the micro-benchmark level. At the application level, the proposed framework delivers up to 8% improvement in total application run time, especially as job size scales up. The proposed network speed-aware algorithms allow micro-benchmark runs on the heterogeneous SDR-DDR InfiniBand cluster to perform on par with runs on the DDR-only portion of the cluster for small to medium sized messages. We also demonstrate that the network speed-aware algorithms perform 70% to 100% better than the naive algorithms when both are run on the heterogeneous SDR-DDR InfiniBand cluster.
Citations: 38
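The paper's MPI-level design is not reproduced here; the toy Python sketch below only illustrates one common way a broadcast can be made topology-aware, a two-level tree in which the root sends across switches to one leader per leaf switch and each leader fans out locally, keeping most traffic off inter-switch links. The topology map and function names are invented.

```python
# Toy sketch of a two-level, topology-aware broadcast: the root sends to one
# "leader" process per remote switch, and each leader forwards to the other
# processes attached to its switch.

def topology_aware_bcast(root, process_to_switch):
    """Return the ordered list of point-to-point sends for the broadcast."""
    sends = []
    switches = {}
    for proc, switch in process_to_switch.items():
        switches.setdefault(switch, []).append(proc)

    root_switch = process_to_switch[root]
    # Phase 1: root sends across switches to one leader per remote switch.
    leaders = {sw: procs[0] for sw, procs in switches.items()}
    leaders[root_switch] = root
    for sw, leader in leaders.items():
        if sw != root_switch:
            sends.append((root, leader, "inter-switch"))
    # Phase 2: each leader fans out within its own switch.
    for sw, procs in switches.items():
        leader = leaders[sw]
        for proc in procs:
            if proc != leader:
                sends.append((leader, proc, "intra-switch"))
    return sends

if __name__ == "__main__":
    placement = {0: "leaf0", 1: "leaf0", 2: "leaf0",
                 3: "leaf1", 4: "leaf1",
                 5: "leaf2", 6: "leaf2", 7: "leaf2"}
    for src, dst, link in topology_aware_bcast(root=0, process_to_switch=placement):
        print(f"{src} -> {dst} ({link})")
```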