Latest Publications: 2010 IEEE International Conference on Cluster Computing

TRACER: A Trace Replay Tool to Evaluate Energy-Efficiency of Mass Storage Systems
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.40
Zhuo Liu, Fei Wu, X. Qin, C. Xie, Jian Zhou, Jianzong Wang
Abstract: Improving the energy efficiency of mass storage systems has become an important and pressing research issue in large HPC centers and data centers. New energy-conservation techniques in storage systems constantly spring up; however, there is no systematic and uniform way of accurately evaluating energy-efficient storage systems and objectively comparing a wide range of energy-saving techniques. This research presents a new integrated scheme, called TRACER, for evaluating the energy efficiency of mass storage systems and judging energy-saving techniques. The TRACER scheme consists of a toolkit used to measure the energy efficiency of storage systems, as well as performance and energy metrics. In addition, TRACER contains a novel and accurate workload-control module to acquire power as it varies with workload mode and I/O load intensity. The workload generator in TRACER facilitates a block-level trace replay mechanism. The main goal of the workload-control module is to select a certain percentage (e.g., anywhere from 10% to 100%) of trace entries from a real-world I/O trace file uniformly and to replay the filtered trace entries to reach any level of I/O load intensity. TRACER is experimentally validated on a general RAID5 enterprise disk array. Our experiments demonstrate that energy-efficient mass storage systems can be accurately evaluated at full scale by TRACER. We applied TRACER to investigate the impacts of workload modes and load intensity on the energy efficiency of storage devices. This work shows that TRACER can enable storage-system developers to evaluate energy-efficiency designs for storage systems.
Citations: 7
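The abstract's key mechanism is uniform selection of a percentage of trace entries to dial in I/O load intensity. As a rough illustration of that idea (not TRACER's actual implementation, which operates on block-level I/O trace records), a uniform percentage filter might look like:

```python
def sample_trace(entries, percent):
    """Uniformly select roughly `percent`% of trace entries.

    Hypothetical sketch of the uniform filtering TRACER's
    workload-control module is described as performing; the evenly
    spaced positions preserve the temporal distribution of the
    original workload.
    """
    if not 0 < percent <= 100:
        raise ValueError("percent must be in (0, 100]")
    step = 100.0 / percent
    picked, next_pos = [], 0.0
    for i, entry in enumerate(entries):
        if i >= next_pos:  # take the entry nearest each sampling position
            picked.append(entry)
            next_pos += step
    return picked
```

Replaying `sample_trace(trace, 50)` would then exercise the storage system at roughly half the original load intensity.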
Performance Analysis of Multi-level Time Sharing Task Assignment Policies on Cluster-Based Systems
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.32
Malith Jayasinghe, Z. Tari, P. Zeephongsekul
Abstract: There is extensive evidence that modern computer workloads exhibit high variability in their processing requirements. Under such workloads, traditional task assignment policies do not perform well; size-based policies perform significantly better. The main limitation of existing size-based policies, however, is that they target batch computing systems. In this paper, we provide a performance analysis of three novel task assignment policies based on multi-level time sharing: MLMS (Multi-level Multi-server Task Assignment Policy), MLMS-M (Multi-level Multi-server Task Assignment Policy with Task Migration), and MLMS-M* (Multi-tier Multi-level Multi-server Task Assignment Policy with Task Migration). These policies attempt to improve performance first by giving preferential treatment to small tasks and second by reducing task-size variability in host queues. MLMS reduces task variability only locally, while MLMS-M and MLMS-M* use both local and global variance-reduction mechanisms. MLMS outperforms existing size-based policies such as TAGS under specific workload conditions. MLMS-M outperforms TAGS under all the scenarios considered. MLMS-M* outperforms TAGS and MLMS-M under specific workload conditions, and vice versa.
Citations: 4
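The shared intuition behind these policies (and behind TAGS, the baseline) is that a task runs on a host only up to that host's service cutoff and migrates onward if unfinished, so small tasks finish early and each host's queue sees a narrower range of task sizes. A toy model of that cutoff-and-migrate idea, with illustrative cutoff values and none of the papers' queueing detail:

```python
def assign_with_cutoffs(task_sizes, cutoffs):
    """Toy model of size-based assignment with migration (TAGS-like).

    Host h serves a task for at most cutoffs[h] units of work; an
    unfinished task migrates to host h+1. Returns the total work each
    host performs, illustrating how early hosts see only small
    (truncated) service demands. This is an illustrative sketch, not
    the MLMS/MLMS-M policies themselves.
    """
    work_per_host = [0.0] * len(cutoffs)
    for size in task_sizes:
        remaining = size
        for h, cutoff in enumerate(cutoffs):
            served = min(remaining, cutoff)
            work_per_host[h] += served
            remaining -= served
            if remaining <= 0:
                break
    return work_per_host
```

With cutoffs `[2, 10, inf]`, a 100-unit task contributes only 2 units of work at the first host, keeping its queue's size variability low.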
Reducing Communication Overhead in Large Eddy Simulation of Jet Engine Noise
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.31
Yingchong Situ, Lixia Liu, Chandra S. Martha, Matthew E. Louis, Zhiyuan Li, A. Sameh, G. Blaisdell, A. Lyrintzis
Abstract: Computational aeroacoustics (CAA) has emerged as a tool to complement theoretical and experimental approaches for robust and accurate prediction of sound levels from aircraft airframes and engines. CAA, unlike computational fluid dynamics (CFD), involves the accurate prediction of small-amplitude acoustic fluctuations and their correct propagation to the far field. In that respect, CAA poses significant challenges for researchers because the computational scheme must have high accuracy, good spectral resolution, and low dispersion and diffusion errors. A high-order compact finite difference scheme, which is implicit in space, can be used for such simulations because it fulfills the requirements of CAA. Usually, this method is parallelized using a transposition scheme; however, that approach has a high communication overhead. In this paper, we discuss the use of a parallel tridiagonal linear system solver based on the truncated SPIKE algorithm to reduce the communication overhead in our large eddy simulations. We report experimental results collected on two parallel computing platforms.
Citations: 5
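SPIKE-style solvers partition the tridiagonal system across processors and solve each partition locally with a serial kernel, exchanging only small coupling blocks. The standard serial kernel is the Thomas algorithm; a sketch of that per-partition solve (the SPIKE reduction that couples partitions is not shown) is:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    This is the serial kernel a SPIKE-style solver runs on each
    partition; it is O(n) with no communication.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    # Forward sweep: eliminate the sub-diagonal.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The communication savings come from replacing the all-to-all transposition with an exchange of only the partition-interface unknowns.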
Computing Contingency Statistics in Parallel: Design Trade-Offs and Limiting Cases
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.43
P. Pébay, D. Thompson, Janine Bennett
Abstract: Statistical analysis is typically used to reduce the dimensionality of, and infer meaning from, data. A key challenge for any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner amenable to a map-reduce style implementation. In this paper we focus on contingency tables, from which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open-source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the input data are not quasi-diffuse.
Citations: 15
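The map-reduce structure the abstract alludes to can be sketched in a few lines: each processor counts (x, y) co-occurrences locally, and the reduce step sums tables cell by cell. The sketch below is illustrative, not the paper's implementation:

```python
from collections import Counter

def local_contingency(pairs):
    """Map step: count (x, y) co-occurrences seen by one processor."""
    return Counter(pairs)

def merge_tables(tables):
    """Reduce step: sum per-processor contingency tables cell by cell.

    The data exchanged here is proportional to the number of distinct
    (x, y) cells, which is why, unlike moment-based statistics,
    contingency tables lose parallel efficiency on quasi-diffuse data,
    where almost every pair is unique.
    """
    total = Counter()
    for t in tables:
        total.update(t)  # Counter.update adds counts rather than replacing
    return total
```

From the merged table, joint probabilities follow by dividing each cell by the total count, and marginals by summing over one coordinate.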
Improving Parallel I/O Performance with Data Layout Awareness
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.35
Yong Chen, Xian-He Sun, R. Thakur, Huaiming Song, Hui Jin
Abstract: Parallel applications can benefit greatly from massive computational capability, but their performance suffers from the large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel computing systems. In this study, we propose a data layout-aware optimization strategy to promote better integration of the parallel I/O middleware and the parallel file system, the two major components of current parallel I/O systems, and to improve data access performance. We explore layout-aware optimization in both independent I/O and collective I/O, the two primary forms of I/O in parallel applications. We show that layout-aware I/O optimization can effectively improve the performance of current parallel I/O strategies. The experimental results verify that the proposed strategy improves parallel I/O performance by nearly 40% on average. The proposed layout-aware parallel I/O has promising potential for improving the I/O performance of parallel systems.
Citations: 29
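The layout knowledge such middleware exploits is, at its simplest, the mapping from a file byte offset to the server holding it. Assuming a plain round-robin striping layout (a common pattern in parallel file systems; the paper's specific layout model is not reproduced here), that mapping is:

```python
def stripe_location(offset, stripe_size, num_servers):
    """Map a file byte offset to (server index, offset within stripe).

    Assumes round-robin striping: stripe k of the file lives on server
    k mod num_servers. Middleware that knows this mapping can align
    requests with stripe boundaries instead of splitting one access
    across servers unnecessarily.
    """
    stripe_index = offset // stripe_size
    server = stripe_index % num_servers
    return server, offset % stripe_size
```

For example, with a 64-byte stripe over 4 servers, offsets 0-63 land on server 0 and offsets 64-127 on server 1, so a 128-byte read is best issued as two aligned 64-byte requests.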
SHelp: Automatic Self-Healing for Multiple Application Instances in a Virtual Machine Environment
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.18
Gang Chen, Hai Jin, Deqing Zou, B. Zhou, Weizhong Qiang, Gang Hu
Abstract: When multiple instances of an application run on multiple virtual machines, an interesting problem is how to use the fault-handling result from one application instance to heal the same fault occurring on sibling instances, and hence to ensure high service availability in a cloud computing environment. This paper presents SHelp, a lightweight runtime system that can survive software failures in a virtual machine framework. It applies weighted rescue points and error virtualization techniques to effectively make applications bypass faulty paths. A two-level storage hierarchy is adopted in the rescue-point database so that applications running on different virtual machines can share error-handling information, reducing redundancy and recovering more effectively and quickly from future faults caused by the same bugs. A Linux prototype is implemented and evaluated using four web server applications that contain various types of bugs. Our experimental results show that SHelp can make server applications recover from these bugs in just a few seconds with modest performance overhead.
Citations: 49
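"Error virtualization" means mapping an unanticipated fault onto an error return the surrounding code already handles, so execution continues on a known error path instead of crashing. A toy Python illustration of that idea (SHelp itself does this at checkpointed rescue points inside a VM, not with a decorator; `fallback` stands in for the rescue point's chosen error code):

```python
import functools

def error_virtualize(fallback):
    """Decorator: convert a crash in the wrapped function into an
    error-code return the caller already knows how to handle.

    Toy illustration of error virtualization only; the catch-all
    except is deliberate here, since the point is to survive
    unanticipated faults.
    """
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                return fallback  # resume on the existing error path
        return wrapper
    return decorate
```

A request handler wrapped this way returns its normal error code on a fault, and the server's existing error handling takes over.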
Integration Experiences and Performance Studies of A COTS Parallel Archive System
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.23
Hsing-bung Chen, G. Grider, Cody Scott, Milton Turley, Aaron Torres, Kathy Sanchez, J. Bremer
Abstract: Present and future archive storage systems are challenged to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management, (d) scale to the changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial off-the-shelf (COTS) hardware. Parallel file systems face the same demands, but at one or more orders of magnitude higher performance. Archive systems continue to become substantially more like file systems in their design due to the need for speed and bandwidth, especially in metadata searching, with more caching and less robust semantics. Currently, the number of highly scalable parallel archive solutions is very limited, especially for moving a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community, and that this is much faster than creating and maintaining a complete end-to-end custom parallel archive software solution. We relay our experience integrating a global parallel file system and a standard backup/archive product with an innovative parallel software layer to construct a scalable, parallel archive storage system. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single-parallel-file) archives for backup/archive/content management, and (e) leveraging the free file-movement tools in Linux such as copy, move, ls, and tar. We have successfully applied our working COTS parallel archive system to the world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address the requirements of future archival storage systems. This new parallel archive system is now used on LANL's Turquoise Network.
Citations: 7
Getting Rid of Coherency Overhead for Memory-Hungry Applications
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.14
Héctor Montaner, F. Silla, H. Fröning, J. Duato
Abstract: Current commercial solutions intended to provide additional resources to an application executing in a cluster usually aggregate processors and memory from different nodes. In this paper we present a 16-node prototype of a shared-memory cluster architecture that follows a different approach: it decouples the amount of memory available to an application from the processing resources assigned to it. In this way, we provide a new degree of freedom, so that the memory granted to a process can be expanded with memory from other nodes in the cluster without increasing the number of processors used by the program. This feature is especially suitable for memory-hungry applications that demand large amounts of memory but whose parallelization level prevents them from using more cores than are available in a single node. The main advantage of this approach is that an application can use memory from other nodes without involving the processors, and caches, of those nodes. As a result, using more memory no longer implies increasing coherence-protocol overhead, because the number of caches in the coherence domain becomes independent of the amount of available memory. The prototype we present in this paper leverages this idea by sharing 128GB of memory across the cluster. Real executions show the feasibility of our prototype and its scalability.
Citations: 5
Minimizing MPI Resource Contention in Multithreaded Multicore Environments
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.11
David Goodell, P. Balaji, Darius Buntinas, G. Dózsa, W. Gropp, Sameer Kumar, B. Supinski, R. Thakur
Abstract: With ever-increasing numbers of cores per node in high-performance computing systems, a growing number of applications use threads to exploit shared memory within a node and MPI across nodes. This hybrid programming model needs efficient support for multithreaded MPI communication. In this paper, we describe the optimization of one aspect of a multithreaded MPI implementation: concurrent accesses from multiple threads to various MPI objects, such as communicators, datatypes, and requests. The semantics of the creation, usage, and destruction of these objects imply, but do not strictly require, the use of reference counting to prevent memory leaks and premature object destruction. We demonstrate how a naive multithreaded implementation of MPI object management via reference counting incurs a significant performance penalty. We then detail two solutions, implemented in MPICH2, that mitigate this problem almost entirely, including one based on a novel garbage collection scheme. In our performance experiments, the new scheme improved the multithreaded messaging rate by up to 31% over naive reference counting.
Citations: 18
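The "naive" baseline the paper measures against is easy to picture: every retain and release of a shared object synchronizes on one lock or atomic counter, so threads hammering the same communicator or datatype serialize on it. A minimal sketch of that baseline pattern (illustrative only; MPICH2's actual object code is in C, and the paper's optimized schemes avoid exactly this cost):

```python
import threading

class RefCounted:
    """Naive thread-safe reference counting: one lock per object.

    Every retain/release from any thread contends on self._lock, the
    hot spot that grows with thread count. Shown only as the baseline
    scheme the paper improves on.
    """
    def __init__(self):
        self._count = 1          # creator holds the first reference
        self._lock = threading.Lock()

    def retain(self):
        with self._lock:
            self._count += 1

    def release(self):
        """Drop one reference; return True when the object is dead."""
        with self._lock:
            self._count -= 1
            return self._count == 0
```

Because MPI operations retain and release such objects on every message, this per-object synchronization sits directly on the messaging fast path, which is why removing it yields the reported messaging-rate gains.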
Enforcing SLAs in Scientific Clouds
2010 IEEE International Conference on Cluster Computing Pub Date : 2010-09-20 DOI: 10.1109/CLUSTER.2010.42
Oliver Niehörster, A. Brinkmann, G. Fels, Jens Krüger, J. Simon
Abstract: Software as a Service (SaaS) providers enable the on-demand use of software, an intriguing concept for business and scientific applications. Typically, service level agreements (SLAs) are specified between the provider and the user, defining the required quality of service (QoS). Today, SLA-aware solutions exist only for business applications. We present a general SaaS architecture for scientific software that offers an easy-to-use web interface. Scientists define their problem description and QoS requirements and can access the results through this portal. Our algorithms autonomously test the feasibility of the SLA and, if it is accepted, guarantee its fulfillment. This approach is independent of the underlying cloud infrastructure and successfully deals with performance fluctuations of cloud instances. Experiments are performed with a scientific application in private and public clouds, and we also present the implementation of a high-performance computing (HPC) cloud dedicated to scientific applications.
Citations: 31