2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS): Latest Publications

Rack-Scaling: An efficient rack-based redistribution method to accelerate the scaling of cloud disk arrays
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00098
Zhehan Lin, Hanchen Guo, Chentao Wu, Jie Li, Guangtao Xue, M. Guo
Abstract: In cloud storage systems, disk arrays are widely used because of their high reliability and low monetary cost. Due to bursts of I/O in computational sprinting scenarios (e.g., online retailer services on Black Friday or Cyber Monday), large-scale cloud storage systems such as AWS S3 and GFS need to sustain 10X I/O workloads, so rack-level scaling of cloud disk arrays becomes urgent for such sprint services. Although several existing methods, such as Round-Robin (RR) and Scale-RS, have been proposed to accelerate the scaling process, their efficiency is limited because cross-rack data migrations are not well considered in their designs. To address this problem, we propose Rack-Scaling, a novel data redistribution method to accelerate the rack-level scaling process in cloud storage systems. The basic idea of Rack-Scaling is to migrate appropriate data blocks within and among racks to achieve a uniform data distribution while minimizing cross-rack migration, which costs more than intra-rack migration. We conduct simulations via DiskSim, and we also implement Rack-Scaling on Hadoop to demonstrate its effectiveness. The results show that, compared to typical methods such as Round-Robin (RR), Semi-RR, Scale-RS, and BDR, Rack-Scaling reduces the number of I/O operations and the amount of cross-rack data transmission by up to 90.4% and 99.9%, respectively, and speeds up scaling by up to 8.77X.
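The cost gap the paper targets can be made concrete with a toy model. The sketch below re-stripes blocks round-robin after adding a disk and counts intra-rack versus cross-rack moves; the placement model, function names, and rack layout are illustrative assumptions, not the paper's design:

```python
def rack_of(disk, disks_per_rack):
    # Map a disk index to its rack index (toy layout assumption).
    return disk // disks_per_rack

def round_robin_migrations(num_blocks, old_disks, new_disks, disks_per_rack):
    """Re-stripe block i from disk i % old_disks to disk i % new_disks,
    and count how many moves stay inside a rack vs. cross racks."""
    intra = cross = 0
    for i in range(num_blocks):
        src, dst = i % old_disks, i % new_disks
        if src == dst:
            continue  # block stays put
        if rack_of(src, disks_per_rack) == rack_of(dst, disks_per_rack):
            intra += 1
        else:
            cross += 1  # the expensive case Rack-Scaling minimizes
    return intra, cross

# Scaling 3 disks to 4 (2 disks per rack) moves 9 of 12 blocks,
# 6 of them across racks -- the overhead rack-aware methods avoid.
print(round_robin_migrations(12, 3, 4, 2))
```

Only 3 of the 12 blocks actually need to move to re-balance the array, which is why rack-aware redistribution can cut migration I/O so sharply.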
Citations: 1
Max-Stretch Minimization on an Edge-Cloud Platform
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00086
A. Benoit, Redouane Elghazi, Y. Robert
Abstract: We consider the problem of scheduling independent jobs that are generated by processing units at the edge of the network. These jobs can either be executed locally or sent to a centralized cloud platform that can execute them at greater speed. Such edge-generated jobs may come from various applications, such as e-health, disaster recovery, autonomous vehicles, or flying drones. The problem is to decide where and when to schedule each job, with the objective of minimizing the maximum stretch incurred by any job. The stretch of a job is the ratio of the time spent by that job in the system to the minimum time it could have taken if the job were alone in the system. We formalize the problem and explain how it differs from other models in the literature. We prove that minimizing the max-stretch is NP-complete, even in the simpler instance with no release dates (all jobs known in advance). This result follows from the proof that minimizing the max-stretch with homogeneous processors and without release dates is NP-complete, a complexity problem that was left open before this work. We design several algorithms that provide efficient solutions to the general problem, and we conduct simulations based on real platform parameters to evaluate their performance.
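The stretch metric defined in the abstract is easy to state in code. This minimal sketch (names are illustrative, not from the paper) computes per-job stretch and the max-stretch objective:

```python
def stretch(release, finish, alone_time):
    """Stretch = (time the job spent in the system) / (minimum time
    it would have needed if it ran alone in the system)."""
    return (finish - release) / alone_time

def max_stretch(jobs):
    """jobs: iterable of (release, finish, alone_time) triples.
    The scheduler's objective is to minimize this worst-case ratio."""
    return max(stretch(r, f, a) for r, f, a in jobs)

# A job released at t=0 that finishes at t=10 but needed only 5s alone
# has stretch 2.0; a second job (release 2, finish 5, needs 3s) has 1.0.
print(max_stretch([(0, 10, 5), (2, 5, 3)]))
```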
Citations: 3
From Parallelization to Customization – Challenges and Opportunities
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00077
J. Cong
Abstract: With the large-scale deployment of FPGAs in both private and public clouds in the past few years, customizable computing is transitioning from advanced research into mainstream computing. In this talk, I shall first showcase a few big data and machine learning applications that benefit significantly from customization. Next, I shall discuss the challenges of FPGA programming for efficient accelerator designs, which presents a significant barrier to many software programmers despite recent advances in high-level synthesis. Then, I shall highlight our recent progress on automated compilation for customized architectures, such as systolic arrays, stencils, and more general CPP (composable parallel and pipelined) architectures. I shall also present our ongoing work on HeteroCL, a highly productive multi-paradigm programming framework targeting accelerator-rich heterogeneous architectures, which is being used as a focal point to integrate various optimization techniques and to support high-level domain-specific languages (DSLs) such as Halide and PyTorch. Our goal is to “democratize customizable computing” so that most (if not all) software programmers can design optimized accelerators on FPGAs.
Citations: 1
Detecting Malicious Model Updates from Federated Learning on Conditional Variational Autoencoder
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00075
Zhipin Gu, Yuexiang Yang
Abstract: In federated learning, the central server combines local model updates from the clients in the network to create an aggregated model. To protect clients’ privacy, the server is designed to have no visibility into how these updates are generated. The nature of federated learning makes detecting and defending against malicious model updates a challenging task. Unlike existing works that struggle to defend against Byzantine clients, this paper considers defending against targeted model poisoning attacks in the federated learning setting, where the adversary aims to reduce the model's performance on targeted subtasks while maintaining its performance on the main task. We propose Fedcvae, a robust and unsupervised federated learning framework in which the central server uses a conditional variational autoencoder to detect and exclude malicious model updates. Since the reconstruction error of malicious updates is much larger than that of benign ones, it can be used as an anomaly score; based on this idea, we formulate a dynamic threshold on the reconstruction error to differentiate malicious updates from normal ones. Fedcvae is tested with extensive experiments on IID and non-IID federated benchmarks, showing competitive performance over existing aggregation methods under both Byzantine attacks and targeted model poisoning attacks.
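The core detection rule, reconstruction error as an anomaly score cut by a dynamic threshold, can be sketched without the CVAE itself. The mean-plus-k-stdev threshold below is an illustrative assumption; the paper formulates its own dynamic threshold rule:

```python
import statistics

def flag_malicious(recon_errors, k=1.5):
    """Given per-client reconstruction errors for one round, flag clients
    whose error exceeds a dynamic threshold (mean + k * stdev of the
    round's errors). Malicious updates reconstruct poorly under a model
    trained on benign behavior, so their error stands out."""
    mu = statistics.mean(recon_errors)
    sd = statistics.pstdev(recon_errors)
    threshold = mu + k * sd
    return [i for i, e in enumerate(recon_errors) if e > threshold]

# Four benign updates and one poisoned one (index 4) with a large error:
print(flag_malicious([0.1, 0.2, 0.1, 0.2, 3.0]))
```

The flagged clients would simply be excluded from that round's aggregation.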
Citations: 19
Distributed-Memory k-mer Counting on GPUs
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00061
Israt Nisa, P. Pandey, Marquita Ellis, L. Oliker, A. Buluç, K. Yelick
Abstract: A fundamental step in many bioinformatics computations is to count the frequency of fixed-length sequences, called k-mers, a problem that has received considerable attention as an important target for shared-memory parallelization. With datasets growing at an exponential rate, distributed-memory parallelization is becoming increasingly critical. Existing distributed-memory k-mer counters do not take advantage of GPUs for accelerating computations, nor do they employ domain-specific optimizations to reduce communication volume in a distributed environment. In this paper, we present the first GPU-accelerated distributed-memory parallel k-mer counter. We identify communication volume as the major bottleneck in scaling k-mer counting to multiple GPU-equipped compute nodes and implement a supermer-based optimization to reduce the communication volume and enhance scalability. Our empirical analysis examines the balance of communication to computation on a state-of-the-art system, the Summit supercomputer at Oak Ridge National Laboratory. Results show overall speedups of up to two orders of magnitude with GPU optimization over CPU-based k-mer counters, and an additional 1.5X speedup from the supermer-based communication optimization.
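For readers outside bioinformatics, the per-node computation being scaled here is simple to state. A minimal shared-memory baseline (a sketch, not the paper's GPU kernel) just counts length-k substrings:

```python
from collections import Counter

def count_kmers(seq, k):
    """Count every length-k substring (k-mer) of a sequence.
    Distributed counters partition k-mers across nodes (e.g., by hash);
    the supermer optimization instead ships a run of overlapping k-mers
    as one longer string, cutting the bytes sent over the network."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

print(count_kmers("ACGTACGT", 3))
```

For example, the 6-character supermer "ACGTAC" encodes four overlapping 3-mers in 6 bytes instead of 12.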
Citations: 2
Optimal Task Assignment for Heterogeneous Federated Learning Devices
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00074
L. Pilla
Abstract: Federated Learning provides new opportunities for training machine learning models while respecting data privacy. This technique is based on heterogeneous devices that work together to iteratively train a model while never sharing their own data. Given the synchronous nature of this training, the performance of Federated Learning systems is dictated by the slowest devices, also known as stragglers. In this paper, we investigate the problem of minimizing the duration of Federated Learning rounds by controlling how much data each device uses for training. We formulate this as a makespan minimization problem with identical, independent, and atomic tasks that have to be assigned to heterogeneous resources with non-decreasing cost functions, while also respecting lower and upper limits on tasks per resource. Based on this formulation, we propose a polynomial-time algorithm named OLAR and prove that it provides optimal schedules. We evaluate OLAR in an extensive series of simulation experiments, including comparisons to other state-of-the-art algorithms and new extensions to them. Our results indicate that OLAR provides optimal solutions with a small execution time. They also show that the presence of lower and upper limits on tasks per resource erases any benefit that suboptimal heuristics could provide in terms of algorithm execution time.
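The problem formulation lends itself to a greedy sketch: because tasks are identical and cost functions are non-decreasing, handing each next task to the resource with the smallest resulting cost is a natural strategy. The heap-based version below illustrates the problem setup under those assumptions; it is not OLAR itself:

```python
import heapq

def greedy_assign(num_tasks, costs, lower, upper):
    """costs[i] maps a task count n to resource i's cost for n tasks
    (non-decreasing in n). Start every resource at its lower limit,
    then give each remaining task to the resource whose cost after
    receiving it is smallest, respecting upper limits."""
    n = list(lower)
    remaining = num_tasks - sum(lower)
    # Heap entries: (cost if resource i takes one more task, i).
    heap = [(costs[i](n[i] + 1), i) for i in range(len(costs)) if n[i] < upper[i]]
    heapq.heapify(heap)
    for _ in range(remaining):
        cost, i = heapq.heappop(heap)
        n[i] += 1
        if n[i] < upper[i]:
            heapq.heappush(heap, (costs[i](n[i] + 1), i))
    return n

# Resource 0 is twice as fast as resource 1, so it gets twice the tasks.
print(greedy_assign(6, [lambda n: n, lambda n: 2 * n], [0, 0], [10, 10]))
```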
Citations: 2
A Tale of Two C’s: Convergence and Composability
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/ipdps49936.2021.00001
I. Altintas
Abstract: Cyberinfrastructure is everywhere, in diverse forms, in service of applications in science, business, and society. From IoT to extreme-scale computing, data and computation have never been so distributed, with the potential for real-time integration into these applications. The common theme of these applications, mostly composed of (big) data-integrated workloads, is their need to run in specialized environments, for reasons such as the on-demand or 24x7 nature of the tasks they perform and difficulties regarding their portability, latency, privacy, and performance optimization. Moreover, in many data-driven scientific applications, there is a need for heterogeneous integration of tasks requiring specialized computing capabilities with traditional high-throughput or high-performance computing tasks. Although some key middleware technologies have enabled demonstrations of standalone heterogeneous applications, such integration requires the convergence of expertise from a large group of people in very specialized settings. There are still many challenges to the streamlined, scalable, repeatable, responsible, and explainable integration of data-integrated applications. Key opportunities for further innovation include intelligent systems and automated workflow management software that can compose and steer dynamic applications that adapt to changing conditions in a data-driven fashion while integrating many tools to explore, analyze, and utilize data. This talk will discuss some examples of data-integrated applications, describe emerging systems that enabled them, and overview our recent research on enabling composable applications, including a convergence application development methodology, intelligent middleware, and workflow composition.
Citations: 0
Astra: Autonomous Serverless Analytics with Cost-Efficiency and QoS-Awareness
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00085
Jananie Jarachanthan, Li Chen, Fei Xu, Bo Li
Abstract: With the ability to simplify code deployment with one-click upload and lightweight execution, serverless computing has emerged as a promising paradigm with increasing popularity. However, open challenges remain when adapting data-intensive analytics applications to the serverless context, in which users encounter difficulty coordinating computation across different stages and provisioning resources in a large configuration space. This paper presents our design and implementation of Astra, which configures and orchestrates serverless analytics jobs in an autonomous manner while taking into account flexibly specified user requirements. Astra relies on modeling of performance and cost that characterizes the intricate interplay among multi-dimensional factors (e.g., function memory size and the degree of parallelism at each stage). We formulate an optimization problem based on user-specified requirements for performance enhancement or cost reduction, and develop a set of graph-theoretic algorithms to obtain optimal job executions. We deploy Astra on the AWS Lambda platform and conduct real-world experiments on three representative benchmarks at different scales. Results demonstrate that Astra achieves the optimal execution decision for serverless analytics, improving performance by 21% to 60% under a given budget constraint and reducing cost by 20% to 80% without violating performance requirements, compared with three baseline configuration algorithms.
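At its simplest, the user-facing decision Astra automates looks like a constrained search over configurations. The toy selector below, whose profile data and names are hypothetical, picks the cheapest memory setting that meets a deadline; Astra's actual search also covers per-stage parallelism via graph algorithms:

```python
def cheapest_feasible(profiles, deadline_s):
    """profiles: list of (memory_mb, est_runtime_s, est_cost_usd) tuples,
    e.g. from profiling runs. Return the cheapest profile that meets the
    deadline, or None if the deadline is infeasible."""
    feasible = [p for p in profiles if p[1] <= deadline_s]
    return min(feasible, key=lambda p: p[2]) if feasible else None

# Hypothetical Lambda profiles: more memory runs faster but costs more.
profiles = [(512, 12.0, 0.10), (1024, 6.0, 0.12), (2048, 3.0, 0.20)]
print(cheapest_feasible(profiles, deadline_s=7.0))
```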
Citations: 7
Adaptive Spatially Aware I/O for Multiresolution Particle Data Layouts
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00063
W. Usher, Xuan Huang, Steve Petruzza, Sidharth Kumar, S. Slattery, S. Reeve, Feng Wang, Christopher R. Johnson, Valerio Pascucci
Abstract: Large-scale simulations on nonuniform particle distributions that evolve over time are widely used in cosmology, molecular dynamics, and engineering. Such data are often saved in an unstructured format that neither preserves spatial locality nor provides metadata for accelerating spatial or attribute subset queries, leading to poor performance of visualization tasks. Furthermore, the parallel I/O strategy typically used writes a file per process or a single shared file, neither of which is portable or scalable across different HPC systems. We present a portable technique for scalable, spatially aware adaptive aggregation that preserves spatial locality in the output. We evaluate our approach on two supercomputers, Stampede2 and Summit, and demonstrate that it outperforms prior approaches at scale, achieving up to 2.5X faster writes and reads for nonuniform distributions. Furthermore, the layout written by our method is directly suitable for visual analytics, supporting low-latency reads and attribute-based filtering with little overhead.
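One standard way to preserve spatial locality in a linear file layout is a space-filling curve; the Z-order (Morton) key below is a common choice shown for illustration, not necessarily the paper's encoding. Particles sorted by this key land near their spatial neighbors on disk, which is what makes spatial subset queries cheap:

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of non-negative integer coordinates (x, y)
    into a single Z-order (Morton) key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)       # x bits go to even positions
        key |= ((y >> b) & 1) << (2 * b + 1)   # y bits go to odd positions
    return key

# The four cells of a 2x2 block map to four consecutive keys, so sorting
# particles by morton2d(x, y) clusters spatial neighbors together.
print([morton2d(x, y) for x, y in [(0, 0), (1, 0), (0, 1), (1, 1)]])
```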
Citations: 2
Improving checkpointing intervals by considering individual job failure probabilities
2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS) Pub Date: 2021-05-01 DOI: 10.1109/IPDPS49936.2021.00038
Alvaro Frank, Manuel Baumgartner, Reza Salkhordeh, A. Brinkmann
Abstract: Checkpointing is a popular resilience method in HPC, and its efficiency highly depends on the choice of the checkpoint interval. Standard analytical approaches optimize intervals for big, long-running jobs that fail with high probability, but they are unable to minimize checkpointing overheads for jobs with a low or medium probability of failing. Nevertheless, our analysis of batch traces from four HPC systems shows that such jobs are extremely common. We therefore propose an iterative checkpointing algorithm to compute efficient intervals for jobs with a medium risk of failure. The method also supports big, long-running jobs by converging to the results of various traditional methods for them. We validated our algorithm using batch-system simulations including traces from four HPC systems and compared it to five alternative checkpoint methods. The evaluations show up to 40% checkpoint savings for individual jobs when using our method, while improving checkpointing costs of complete HPC systems by between 2.8% and 24.4% compared to the best alternative approach.
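The classical baseline that such per-job work refines is the Young/Daly interval, which balances the cost of writing a checkpoint against the expected rework after a failure. A minimal version of that standard formula (the paper's iterative, per-job method goes beyond it):

```python
import math

def young_daly_interval(checkpoint_cost_s, mtbf_s):
    """First-order optimal interval between checkpoints:
    sqrt(2 * C * MTBF), where C is the time to write one checkpoint
    and MTBF is the platform's mean time between failures. It assumes
    failures are likely during the job, which is exactly where it
    breaks down for low- and medium-risk jobs."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# A 60 s checkpoint on a platform with a 30-hour MTBF -> checkpoint hourly.
print(young_daly_interval(60, 30 * 3600))
```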
Citations: 5