2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID): Latest Publications

Real-Time Scheduling Policy Selection from Queue and Machine States
Luis Sant'Ana, Danilo Carastan-Santos, Daniel Cordeiro, R. Camargo
DOI: 10.1109/CCGRID.2019.00052
Abstract: Task scheduling in large-scale HPC platforms is normally accomplished with simple heuristics combined with a backfilling algorithm. Some strategies, such as First-Come-First-Serve (FCFS) with backfilling, provide reasonable results across a variety of scenarios, including different HPC platforms and task set characteristics. But for each scenario, a different strategy may be the most appropriate for minimizing some metric, such as average task waiting time or turnaround time. In this work, we present a real-time scheduling policy selection algorithm that takes as input the characteristics of the jobs in the running queue and the machine state. We evaluated logistic regression and support-vector machines for mapping queue and machine states to a selected scheduling policy. The machine learning algorithms are trained and evaluated using simulations configured from HPC platform traces. When selecting among eight scheduling policies, we obtained an accuracy above 80% compared to the best selection. When simulating online real-time selection of policies over a period of one year, we obtained a reduction in the mean queue waiting time of tasks of up to 40% over FCFS and 10% over randomly selected policies. Moreover, the method performed close to the best possible selection of policies, with at most a 9% increase in mean queue waiting time.
Citations: 6
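A minimal sketch of the core idea, mapping a queue/machine state snapshot to one of eight policies with logistic regression, one of the two classifiers the paper evaluates. The feature set, policy names, and scikit-learn setup below are illustrative assumptions, not the authors' code; real training data would come from trace-driven simulations rather than random numbers.

```python
# Sketch: classify (queue, machine) state snapshots into scheduling policies.
# Policy set and features are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

POLICIES = ["FCFS", "SJF", "LJF", "SAF", "LAF", "WFP", "UNI", "F2"]  # assumed set

rng = np.random.default_rng(0)
# Stand-in for simulation-derived training data: each row is a normalized
# snapshot (queue length, mean requested runtime, mean requested cores,
# fraction of idle nodes), labeled with the best policy found offline.
X = rng.random((5000, 4))
y = rng.integers(0, len(POLICIES), 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

state = [[0.8, 0.4, 0.6, 0.3]]  # current queue/machine snapshot
print("selected policy:", POLICIES[int(clf.predict(state)[0])])
```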
An Intelligent, Adaptive, and Flexible Data Compression Framework
H. Devarajan, Anthony Kougkas, Xian-He Sun
DOI: 10.1109/CCGRID.2019.00019
Abstract: The data explosion in modern applications puts tremendous stress on storage systems. Developers use data compression, a size-reduction technique, to address this issue. However, each compression library exhibits different strengths and weaknesses depending on the input data type and format. We present Ares, an intelligent, adaptive, and flexible compression framework that dynamically chooses a compression library for given input data based on the type of workload, and provides the infrastructure for users to fine-tune the chosen library. Ares is a modular framework that unifies several compression libraries while allowing users to add more. As a unified compression engine, Ares abstracts the complexity of using a different compression library for each workload. Evaluation results show that under real-world applications from both scientific and cloud domains, Ares performs 2-6x faster than competitive solutions at a low cost of additional data analysis (i.e., overheads around 10%), and up to 10x faster than a baseline with no compression at all.
Citations: 9
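A toy version of the decision Ares makes per input: trial-compress a small sample with each candidate codec and keep the best ratio/speed trade-off. The candidate set (Python's stdlib codecs) and the scoring weight are assumptions for illustration; Ares itself unifies external compression libraries behind one engine and profiles workloads far more carefully.

```python
# Sketch: pick a compression codec by trial-compressing a sample.
import bz2
import lzma
import time
import zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def pick_codec(data: bytes, sample_size: int = 1 << 16) -> str:
    sample = data[:sample_size]
    best, best_score = "zlib", float("inf")
    for name, compress in CODECS.items():
        t0 = time.perf_counter()
        ratio = len(compress(sample)) / max(len(sample), 1)
        elapsed = time.perf_counter() - t0
        score = ratio + 0.1 * elapsed  # ratio/speed weighting is an arbitrary assumption
        if score < best_score:
            best, best_score = name, score
    return best

payload = b"timestamp,value\n" + b"1557792000,42.0\n" * 10000
print("chosen codec:", pick_codec(payload))
```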
Exploiting CPU Voltage Margins to Increase the Profit of Cloud Infrastructure Providers
Christos Kalogirou, Panos K. Koutsovasilis, C. Antonopoulos, Nikolaos Bellas, S. Lalis, S. Venugopal, Christian Pinto
DOI: 10.1109/CCGRID.2019.00044
Abstract: Energy efficiency is a major concern for cloud computing, with CPUs accounting for a significant fraction of datacenter nodes' power consumption. CPU manufacturers introduce voltage margins to guarantee correct operation. However, these margins are unnecessarily wide for real-world execution scenarios and translate to increased power consumption. In this paper, we investigate how such margins can be exploited by infrastructure operators by selectively undervolting nodes, at the controlled risk of inducing failures and activating service-level agreement (SLA) violation penalties. We model the problem formally, capturing the most important aspects that drive VM management and system configuration decisions. We then introduce the XM-VFS policy, which reduces infrastructure operator costs by reducing voltage margins, and compare it with the state of the art, which employs dynamic voltage-frequency scaling (DVFS) and workload consolidation. We perform simulations to quantify the cost reduction, considering both energy consumption and potential SLA violations. Our results show significant gains, up to 17.35% in energy and 16.32% in cost reduction. Our simulations use realistic assumptions for voltage margins, energy consumption, and the performance degradation of applications due to frequency scaling, based on the characterization of commercial Intel- and ARM-based machines. Our model and scheduling policy are generic and scalable.
Citations: 6
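A back-of-the-envelope version of the trade-off this kind of policy navigates: undervolting cuts the energy bill but raises failure probability, which carries an expected SLA penalty. All numbers below are made up for illustration and are not from the paper's model.

```python
# Sketch: expected operator cost under undervolting vs. stock voltage margins.
def expected_cost(energy_cost: float, saving_frac: float,
                  p_failure: float, sla_penalty: float) -> float:
    """Expected cost per billing interval at a given voltage margin."""
    return energy_cost * (1.0 - saving_frac) + p_failure * sla_penalty

nominal = expected_cost(100.0, 0.00, 0.000, 500.0)      # stock margins
undervolted = expected_cost(100.0, 0.20, 0.010, 500.0)  # reduced margins
print(f"nominal: {nominal:.2f}, undervolted: {undervolted:.2f}")
# Undervolting wins here (85.00 vs 100.00): the expected SLA penalty
# (0.01 * 500 = 5) is smaller than the 20-unit energy saving.
```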
ePipe: Near Real-Time Polyglot Persistence of HopsFS Metadata
Mahmoud Ismail, Mikael Ronström, Seif Haridi, J. Dowling
DOI: 10.1109/CCGRID.2019.00020
Abstract: Distributed OLTP databases are now used to manage metadata for distributed file systems, but they cannot also efficiently support complex queries or aggregations. To solve this problem, we introduce ePipe, a databus that both creates a consistent change stream for a distributed, hierarchical file system (HopsFS) and eventually delivers the correctly ordered stream with low latency to downstream clients. ePipe can be used to provide polyglot storage for file system metadata, allowing metadata queries to be handled by the most efficient engine for each query. For file system notifications, we show that ePipe achieves up to 56x higher throughput than HDFS INotify and Trumpet, with up to three orders of magnitude lower latency. For Spotify's Hadoop workload, we show that ePipe can replicate all file system changes from HopsFS to Elasticsearch with an average replication lag of only 330 ms.
Citations: 3
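A toy illustration of the "eventually delivers the correctly ordered stream" guarantee: buffer change events that arrive out of order and release them to downstream consumers strictly in transaction-id order. The event shape and field names are assumptions, not ePipe's actual wire format.

```python
# Sketch: reorder an out-of-order change stream before downstream delivery.
import heapq

def ordered_delivery(events, start_txid=1):
    """Yield (txid, op, path) tuples in txid order despite arrival disorder."""
    heap, next_txid = [], start_txid
    for event in events:
        heapq.heappush(heap, event)
        # Release every event whose turn has come.
        while heap and heap[0][0] == next_txid:
            yield heapq.heappop(heap)
            next_txid += 1

arrivals = [(2, "mkdir", "/a/b"), (1, "create", "/a"),
            (4, "rm", "/a/c"), (3, "create", "/a/c")]
for txid, op, path in ordered_delivery(arrivals):
    print(txid, op, path)
```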
Million-Core-Scalable Simulation of the Elastic Migration Algorithm on Sunway TaihuLight Supercomputer
L. Gan, Jingheng Xu, Xin Wang, Sihai Wu, Xiaohui Duan, Yuxuan Li, H. Fu, Guangwen Yang
DOI: 10.1109/CCGRID.2019.00016
Abstract: Migration algorithms are among the most essential methods in seismic applications for imaging underground geology, helping scientists and researchers in geophysics exploration better understand the earth system. However, the demand in migration algorithms for covering larger regions at higher resolution poses tough challenges for current state-of-the-art computing systems. This work optimizes and scales the elastic migration algorithm on the Sunway TaihuLight supercomputer, one of the most powerful systems in the world. Targeting the major process, the reverse time migration (RTM) algorithm, we propose a set of algorithmic, process-level, and thread-level optimizations that significantly improve performance (up to a 163x speedup in time-to-solution) on the Sunway CPU. Our design successfully scales to over two million cores (2,662,400 cores in total) on the Sunway TaihuLight supercomputer with nearly ideal weak-scaling efficiency. The largest run achieves a sustained performance of processing over 859 billion cells per second.
Citations: 3
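The computational heart of reverse time migration is a wave-equation stencil swept over a grid for many time steps. As a rough stand-in for the far more involved elastic kernel the paper optimizes, here is a minimal 2D acoustic stencil in NumPy (periodic boundaries via np.roll, constant velocity); all sizes and coefficients are arbitrary illustrative choices.

```python
# Sketch: explicit time stepping of the 2D acoustic wave equation.
import numpy as np

def step(p_prev, p_curr, c2dt2):
    """One time step with a 5-point Laplacian, unit grid spacing."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    return 2.0 * p_curr - p_prev + c2dt2 * lap

n = 256
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0  # point source at the center
for _ in range(100):
    p_prev, p_curr = p_curr, step(p_prev, p_curr, c2dt2=0.1)
print("wavefield energy:", float((p_curr ** 2).sum()))
```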
Distributed MCMC Inference in Dirichlet Process Mixture Models Using Julia
Or Dinari, A. Yu, O. Freifeld, John W. Fisher III
DOI: 10.1109/CCGRID.2019.00066
Abstract: Due to the increasing availability of large data sets, the need for general-purpose massively-parallel analysis tools becomes ever greater. In unsupervised learning, Bayesian nonparametric mixture models, exemplified by the Dirichlet Process Mixture Model (DPMM), provide a principled Bayesian approach for adapting model complexity to the data. Despite their potential, however, DPMMs have yet to become a popular tool. This is partly due to the lack of friendly software tools that can handle large datasets efficiently. Here we show how, using Julia, one can achieve an efficient and easily modifiable implementation of distributed inference in DPMMs. In particular, we show how a recent parallel MCMC inference algorithm, originally implemented in C++ for a single multi-core machine, can be distributed efficiently across multiple multi-core machines using a distributed-memory model. This leads to speedups, alleviates memory and storage limitations, and lets us learn DPMMs from significantly larger datasets of higher dimensionality. It also turns out that even on a single machine, the proposed Julia implementation handles higher dimensions more gracefully (at least for Gaussians) than the original C++ implementation. Finally, we use the proposed implementation to learn a model of image patches and apply the learned model to image denoising. While we speculate that a highly optimized distributed implementation in, say, C++ could be faster than the proposed Julia implementation, from our perspective as machine-learning researchers (as opposed to HPC researchers), the latter also offers practical and monetary value due to its ease of development and level of abstraction. Our code is publicly available at https://github.com/dinarior/dpmm_subclusters.jl
Citations: 14
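For flavor, a heavily simplified, single-machine Gibbs assignment sweep for a 1D Gaussian DPMM (known unit noise variance, standard-normal prior on means, point-estimate cluster likelihoods rather than the proper collapsed predictive, all for brevity). The paper's contribution, distributing a parallel variant of such MCMC across multi-core machines in Julia, is not attempted here.

```python
# Sketch: CRP-style Gibbs reassignment in a toy 1-D Gaussian DPMM.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-4, 1, 100), rng.normal(4, 1, 100)])
alpha = 1.0                          # DP concentration parameter
z = np.zeros(len(data), dtype=int)   # all points start in one cluster

def gibbs_sweep(data, z, alpha):
    for i in range(len(data)):
        z[i] = -1  # remove point i from its cluster
        labels, counts = np.unique(z[z >= 0], return_counts=True)
        # Point-estimate likelihood under each existing cluster's mean.
        lik = np.array([np.exp(-0.5 * (data[i] - data[z == k].mean()) ** 2)
                        for k in labels])
        # CRP prior: existing clusters by size; new cluster with weight alpha
        # times the prior predictive N(0, 2) (up to a shared constant).
        new = alpha * np.exp(-0.25 * data[i] ** 2) / np.sqrt(2.0)
        weights = np.append(counts * lik, new)
        weights /= weights.sum()
        choice = rng.choice(len(weights), p=weights)
        z[i] = labels[choice] if choice < len(labels) else z.max() + 1
    return z

for _ in range(20):
    z = gibbs_sweep(data, z, alpha)
print("clusters found:", len(np.unique(z)))
```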
TensorFlow on State-of-the-Art HPC Clusters: A Machine Learning use Case
Guillem Ramirez-Gargallo, M. Garcia-Gasulla, F. Mantovani
DOI: 10.1109/CCGRID.2019.00067
Abstract: The recent rapid growth of the data-flow programming paradigm has enabled the development of specialized architectures, e.g., for machine learning; the best-known example is Google's Tensor Processing Unit (TPU). Standard data centers, however, still cannot foresee large partitions dedicated to machine-learning-specific architectures. Within data centers, High-Performance Computing (HPC) clusters are highly parallel machines targeting a broad class of compute-intensive workflows, and as such they can be used to tackle machine learning challenges. On top of this, HPC architectures are rapidly changing, incorporating accelerators and instruction sets other than the classical x86 CPUs. In this blurry scenario, identifying the best hardware/software configurations for efficiently supporting machine learning workloads on HPC clusters is not trivial. In this paper, we consider the TensorFlow workflow for image recognition. We highlight the strong dependency of training-phase performance on the availability of arithmetic libraries optimized for the underlying architecture. Following Intel's example of leveraging the MKL libraries to improve TensorFlow performance, we plugged the Arm Performance Libraries into TensorFlow and tested on an HPC cluster based on Marvell ThunderX2 CPUs. We also performed a scalability study on three state-of-the-art HPC clusters based on different CPU architectures: x86 Intel Skylake, Arm-v8 Marvell ThunderX2, and PowerPC IBM Power9.
Citations: 12
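A minimal sketch of the kind of measurement behind such a study: CPU training throughput (images/sec) for a small network while pinning TensorFlow's intra-/inter-op thread pools, the knobs through which the underlying arithmetic library's (MKL, Arm Performance Libraries) parallelism surfaces. The model, data sizes, and thread counts are arbitrary choices, not the paper's benchmark.

```python
# Sketch: measure CPU training throughput under explicit thread settings.
import time
import tensorflow as tf

# Must be set before any ops run.
tf.config.threading.set_intra_op_parallelism_threads(8)
tf.config.threading.set_inter_op_parallelism_threads(2)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="sgd",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Synthetic images and labels stand in for a real dataset.
x = tf.random.uniform((2048, 32, 32, 3))
y = tf.random.uniform((2048,), maxval=10, dtype=tf.int32)

t0 = time.perf_counter()
model.fit(x, y, batch_size=128, epochs=1, verbose=0)
print(f"throughput: {2048 / (time.perf_counter() - t0):.1f} images/sec")
```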
Optimizing Performance and Computing Resource Management of In-memory Big Data Analytics with Disaggregated Persistent Memory
Shouwei Chen, Wensheng Wang, Xueyang Wu, Zhen Fan, Kunwu Huang, Peiyu Zhuang, Yue Li, I. Rodero, M. Parashar, Dennis Weng
DOI: 10.1109/CCGRID.2019.00012
Abstract: The performance of modern Big Data frameworks, e.g., Spark, depends greatly on high-speed storage and shuffling, which impose a significant memory burden on production data centers. In many production situations, persistence- and shuffle-intensive applications can suffer a major performance loss due to lack of memory. Thus, the common practice is to over-allocate the memory assigned to data workers for production applications, which in turn reduces overall resource utilization. One efficient way to resolve the dilemma between performance and cost efficiency for Big Data applications is to disaggregate data center computing resources. This paper proposes and implements a system that couples the Spark Big Data framework with a novel in-memory distributed file system to achieve memory disaggregation for data persistence and shuffling. We address the challenge of optimizing performance at affordable cost by co-designing the proposed in-memory distributed file system with large-volume DIMM-based persistent memory (PMEM) and RDMA technology. The disaggregated design allows each part of the system to scale independently, which is particularly suitable for cloud deployments. The proposed system is evaluated on a production-level cluster using real enterprise-level Spark production applications. The results of an empirical evaluation show that the system can achieve up to a 3.5-fold performance improvement for shuffle-intensive applications with the same amount of memory, compared to the default Spark setup. Moreover, by leveraging PMEM, we demonstrate that our system can effectively increase the memory capacity of the computing cluster at affordable cost, with a reasonable execution-time overhead with respect to using local DRAM only.
Citations: 1
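The paper's co-designed in-memory file system over PMEM and RDMA is not reproducible in a few lines, but the configuration surface involved can be hinted at. This sketch points Spark's shuffle/spill directory at a hypothetical PMEM-backed mount and persists an RDD off-heap; the path and sizes are assumptions, and this uses only stock Spark options, not the paper's system.

```python
# Sketch: steer Spark shuffle spill and persistence toward PMEM-backed storage.
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("pmem-shuffle-sketch")
         .config("spark.local.dir", "/mnt/pmem0/spark-tmp")  # assumed PMEM mount
         .config("spark.memory.offHeap.enabled", "true")
         .config("spark.memory.offHeap.size", "4g")
         .getOrCreate())

rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=64)
rdd.persist(StorageLevel.OFF_HEAP)
# groupByKey forces a shuffle, whose spill files now land on the PMEM mount.
print(rdd.map(lambda x: (x % 1024, x)).groupByKey().count())
spark.stop()
```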
Privacy-Preserving Record Linkage with Spark
O. Valkering, A. Belloum
DOI: 10.1109/CCGRID.2019.00058
Abstract: Privacy considerations obligate careful and secure processing of personal data. This is especially true when personal data is linked against databases from other organizations. During such endeavours, privacy-preserving record linkage (PPRL) can be used to prevent needless exposure of sensitive information to other organizations. With the increase in personal data being gathered and analyzed, scalable PPRL capable of handling massive databases is much desired. In this work, we evaluate Apache Spark as an option for scaling PPRL. Not only is a scalable PPRL implementation valuable in itself, but one based on Spark would also be widely deployable and could take advantage of further development of the ecosystem. Our results show that a PPRL solution based on Spark outperforms alternatives when handling multiple millions of records, can scale to dozens of nodes, and is on par with regular record linkage implementations in terms of achieved results.
Citations: 2
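A standard PPRL building block, shown here single-machine: encode each name's character bigrams into a fixed-size Bloom filter and compare records via Dice similarity on the filters, so raw identifiers are never exchanged. The parameters (filter size, hash count) are illustrative; whether the paper uses exactly this encoding is not stated in the abstract, and its contribution is scaling the linkage on Spark.

```python
# Sketch: Bloom-filter encoding of names plus Dice similarity for PPRL.
import hashlib

BITS, HASHES = 256, 4

def bloom(name: str) -> int:
    """Encode a name's character bigrams into a BITS-wide Bloom filter."""
    bf = 0
    for g in (name[i:i + 2] for i in range(len(name) - 1)):
        for seed in range(HASHES):
            h = int(hashlib.sha256(f"{seed}:{g}".encode()).hexdigest(), 16)
            bf |= 1 << (h % BITS)
    return bf

def dice(a: int, b: int) -> float:
    """Dice coefficient over the set bits of two Bloom filters."""
    common = bin(a & b).count("1")
    return 2 * common / (bin(a).count("1") + bin(b).count("1"))

print(dice(bloom("jonathan smith"), bloom("jonathon smith")))  # near-match
print(dice(bloom("jonathan smith"), bloom("maria garcia")))    # non-match
```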
DCA-IO: A Dynamic I/O Control Scheme for Parallel and Distributed File Systems
Sunggon Kim, A. Sim, Kesheng Wu, S. Byna, Teng Wang, Yongseok Son, Hyeonsang Eom
DOI: 10.1109/CCGRID.2019.00049
Abstract: In high-performance computing, storage is a resource shared by all users, whose applications have widely different requirements and whose knowledge of storage varies. Consequently, the optimal storage configuration varies with the I/O behavior of each application. While system logs are a helpful resource for understanding storage behavior, it is non-trivial for each user to analyze the logs and adjust complex configurations. Even for experienced users, it is difficult to understand the full I/O stack and find the optimal configuration for a specific application. In this work, we analyzed the I/O activities of Cori, an HPC system at the National Energy Research Scientific Computing Center (NERSC). Our analysis shows that most users do not adjust storage configurations and instead use the default settings, and that only a few applications are executed repeatedly in the HPC environment. Based on these results, we developed DCA-IO, a dynamic distributed file system configuration adjustment algorithm that uses system log information and widely adopted rules to adjust storage configurations automatically, without any user intervention. DCA-IO relies on existing system logs and requires no code modifications or additional libraries. To demonstrate its effectiveness, we performed experiments using I/O kernels of real applications in both an isolated small Lustre environment and Cori. Our experimental results show that our scheme can improve the performance of HPC applications by up to 75% in the isolated environment and 50% in the real HPC environment, without user intervention.
Citations: 11
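In the same spirit as DCA-IO's log-driven, rule-based tuning, a sketch that applies one widely adopted Lustre rule of thumb: stripe large, shared files across more OSTs, keep small single-writer files unstriped. The thresholds and the "from history" inputs are assumptions; `lfs setstripe` is the real Lustre CLI, applied to a directory so new files inherit the layout (this requires a Lustre mount to actually run).

```python
# Sketch: derive a stripe count from observed job behavior and apply it.
import subprocess

def choose_stripe_count(expected_bytes: int, writers: int) -> int:
    if expected_bytes < 1 << 30 and writers == 1:
        return 1                       # small, single-writer: avoid striping
    return min(max(writers, 4), 16)    # shared/large: spread across OSTs

def apply_striping(directory: str, expected_bytes: int, writers: int) -> None:
    count = choose_stripe_count(expected_bytes, writers)
    subprocess.run(["lfs", "setstripe", "-c", str(count), directory], check=True)

# e.g., a job whose logs show it writes ~100 GiB from 32 ranks:
apply_striping("/lustre/project/output", 100 << 30, 32)
```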