2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid): Latest Publications

Serverless Approach to Sensitivity Analysis of Computational Models
P. Kica, Magdalena Otta, K. Czechowicz, Karol Zajac, P. Nowakowski, A. Narracott, I. Halliday, M. Malawski
{"title":"Serverless Approach to Sensitivity Analysis of Computational Models","authors":"P. Kica, Magdalena Otta, K. Czechowicz, Karol Zajac, P. Nowakowski, A. Narracott, I. Halliday, M. Malawski","doi":"10.1109/CCGrid57682.2023.00064","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00064","url":null,"abstract":"Digital twins are virtual representations of physical objects or systems used for the purpose of analysis, most often via computer simulations, in many engineering and scientific disciplines. Recently, this approach has been introduced to computational medicine, within the concept of Digital Twin in Healthcare (DTH). Such research requires verification and validation of its models, as well as the corresponding sensitivity analysis and uncertainty quantification (VVUQ). From the computing perspective, VVUQ is a computationally intensive process, as it requires numerous runs with variations of input parameters. Researchers often use high-performance computing (HPC) solutions to run VVUQ studies where the number of parameter combinations can easily reach tens of thousands. However, there is a viable alternative to HPC for a substantial subset of computational models - serverless computing. In this paper we hypothesize that using the serverless computing model can be a practical and efficient approach to selected cases of running VVUQ calculations. We show this on the example of the EasyVVUQ library, which we extend by providing support for many serverless services. The resulting library - CloudVVUQ - is evaluated using two real-world applications from the computational medicine domain adapted for serverless execution. Our experiments demonstrate the scalability of the proposed approach.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126244870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
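For readers unfamiliar with how a VVUQ parameter sweep maps onto serverless execution, the following minimal Python sketch fans out one stateless invocation per parameter combination and gathers the results. It is a generic illustration under assumed names (invoke_model and the parameter names are hypothetical) and does not use the actual EasyVVUQ/CloudVVUQ API.

```python
# Illustrative sketch only: a generic fan-out of VVUQ-style parameter samples to
# stateless workers, mimicking how a serverless backend would invoke one function
# per parameter combination. invoke_model and the parameter names are hypothetical.
import itertools
import json
from concurrent.futures import ThreadPoolExecutor

def invoke_model(params):
    # Stand-in for a serverless invocation (e.g., an HTTP call to a cloud function)
    # that runs one simulation with the given inputs and returns its output.
    return {"params": params, "output": params["stiffness"] * params["pressure"]}

def run_sweep(stiffness_values, pressure_values, max_workers=64):
    samples = [{"stiffness": s, "pressure": p}
               for s, p in itertools.product(stiffness_values, pressure_values)]
    # Fan out: one invocation per sample, bounded by max_workers concurrent calls.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(invoke_model, samples))
    return results

if __name__ == "__main__":
    out = run_sweep([0.8, 1.0, 1.2], [100, 120, 140])
    print(json.dumps(out[:2], indent=2))
```
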
Predicting the Performance-Cost Trade-off of Applications Across Multiple Systems
Amir Nassereldine, Safaa Diab, M. Baydoun, Kenneth Leach, M. Alt, D. Milojicic, I. E. Hajj
{"title":"Predicting the Performance-Cost Trade-off of Applications Across Multiple Systems","authors":"Amir Nassereldine, Safaa Diab, M. Baydoun, Kenneth Leach, M. Alt, D. Milojicic, I. E. Hajj","doi":"10.1109/CCGrid57682.2023.00029","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00029","url":null,"abstract":"In modern computing environments, users may have multiple systems accessible to them such as local clusters, private clouds, or public clouds. This abundance of choices makes it difficult for users to select the system and configuration for running an application that best meet their performance and cost objectives. To assist such users, we propose a prediction tool that predicts the full performance-cost trade-off space of an application across multiple systems. Our tool runs and profiles a submitted application on a small number of configurations from some of the systems, and uses that information to predict the application's performance on all configurations in all systems. The prediction models are trained offline with data collected from running a large number of applications on a wide variety of configurations. Notable aspects of our tool include: providing different scopes of prediction with varying online profiling requirements, automating the selection of the small number of configurations and systems used for online profiling, performing online profiling using partial runs thereby make predictions for applications without running them to completion, employing a classifier to distinguish applications that scale well from those that scale poorly, and predicting the sensitivity of applications to interference from other users. We evaluate our tool using 69 data analytics and scientific computing benchmarks executing on three different single-node CPU systems with 8–9 configurations each and show that it can achieve low prediction error with modest profiling overhead.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132800376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
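A minimal sketch of the general idea of predicting a performance-cost trade-off space: an offline-trained regressor maps profiling features plus configuration features to runtime, and cost follows from runtime times an hourly price. The feature set, training data, and prices below are invented for illustration and are not the paper's actual models.

```python
# Minimal sketch, not the paper's models: an offline regressor maps
# (application profile features, configuration features) to runtime; cost is
# runtime x hourly price, giving one performance-cost point per configuration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Offline training data: [profiled_flops, profiled_mem_gb, cores, mem_gb] -> runtime (s)
X_train = np.array([[1e9, 2, 4, 16], [1e9, 2, 8, 32], [5e9, 8, 4, 16], [5e9, 8, 16, 64]])
y_train = np.array([120.0, 70.0, 600.0, 180.0])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

def tradeoff_space(app_profile, configs):
    """Predict (runtime, cost) for every candidate configuration."""
    points = []
    for cores, mem_gb, price_per_hour in configs:
        runtime = float(model.predict([[*app_profile, cores, mem_gb]])[0])
        cost = runtime / 3600.0 * price_per_hour
        points.append({"cores": cores, "runtime_s": runtime, "cost_usd": cost})
    return points

print(tradeoff_space([2e9, 4], [(4, 16, 0.20), (8, 32, 0.40), (16, 64, 0.80)]))
```
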
PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients
Manas Wadhwa, Gagan Raj Gupta, Ashutosh Sahu, Rahul Saini, Vidhi Mittal
{"title":"PFSL: Personalized & Fair Split Learning with Data & Label Privacy for thin clients","authors":"Manas Wadhwa, Gagan Raj Gupta, Ashutosh Sahu, Rahul Saini, Vidhi Mittal","doi":"10.1109/CCGrid57682.2023.00043","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00043","url":null,"abstract":"The traditional framework of federated learning (FL) requires each client to re-train their models in every iteration, making it infeasible for resource-constrained mobile devices to train deep-learning (DL) models. Split learning (SL) provides an alternative by using a centralized server to offload the computation of activations and gradients for a subset of the model but suffers from problems of slow convergence and lower accuracy. In this paper, we implement PFSL, a new framework of distributed split learning where a large number of thin clients perform transfer learning in parallel, starting with a pre-trained DL model without sharing their data or labels with a central server. We implement a lightweight step of personalization of client models to provide high performance for their respective data distributions. Furthermore, we evaluate performance fairness amongst clients under a work fairness constraint for various scenarios of non-i.i.d. data distributions and unequal sample sizes. Our accuracy far exceeds that of current SL algorithms and is very close to that of centralized learning on several real-life benchmarks. It has a very low computation cost compared to FL variants and promises to deliver the full benefits of DL to extremely thin, resource-constrained clients.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"55 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114002083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
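The split-learning setup that PFSL builds on can be illustrated with a single-client, U-shaped split, where the client keeps the first and last layers so neither raw data nor labels leave the device. The sketch below shows only that general pattern; it omits PFSL's parallel thin clients, personalization step, and fairness mechanisms, and all layer sizes are arbitrary.

```python
# A minimal, single-client sketch of U-shaped split learning: the client keeps the
# first and last layers, so raw data and labels never leave the device. This is an
# illustration of the general idea, not PFSL's actual protocol.
import torch
import torch.nn as nn

client_front = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
server_body  = nn.Sequential(nn.Linear(128, 128), nn.ReLU())   # runs on the server
client_head  = nn.Linear(128, 10)                              # client keeps labels here

params = list(client_front.parameters()) + list(server_body.parameters()) + list(client_head.parameters())
opt = torch.optim.SGD(params, lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))

# Forward: client -> server -> client. In a real deployment only the intermediate
# activations (and their gradients) cross the network, never x or y.
smashed = client_front(x)          # sent to the server
server_out = server_body(smashed)  # sent back to the client
loss = loss_fn(client_head(server_out), y)

opt.zero_grad()
loss.backward()                    # gradients flow back across the same cut points
opt.step()
print(float(loss))
```
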
Scavenger: A Cloud Service For Optimizing Cost and Performance of ML Training
S. Tyagi, Prateek Sharma
{"title":"Scavenger: A Cloud Service For Optimizing Cost and Performance of ML Training","authors":"S. Tyagi, Prateek Sharma","doi":"10.1109/CCGrid57682.2023.00045","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00045","url":null,"abstract":"Cloud computing platforms can provide the compu-tational resources required for training large machine learning models such as deep neural networks. While the pay-as-you- go nature of cloud virtual machines (VMs) makes it easy to spin-up large clusters for training models, it can also lead to ballooning costs. The 100s of virtual machine sizes provided by cloud platforms also makes it extremely challenging to select the “right” cloud cluster configuration for training. Furthermore, the training time and cost of distributed model training is highly sensitive to the cluster configurations, and presents a large and complex tradeoff-space. In this paper, we develop principled and practical techniques for optimizing the training time and cost of distributed ML model training on the cloud. Our key insight is that both the parallel and statistical efficiency must be considered when selecting the optimum job configuration parameters such as the number of workers and the batch size. By combining conventional parallel scaling concepts and new insights into SGD noise, we develop models for estimating the time and cost on different cluster configurations. Using the repetitive nature of training and our performance models, our Scavenger cloud service can search for optimum cloud configurations in a black-box, online manner. Our approach reduces training times by 2 x and costs by more than 50 %. Our performance models are accurate to within 2 %, and our search imposes only a 10% overhead compared to an ideal oracle- based approach.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130251762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
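As a rough illustration of combining parallel and statistical efficiency in one estimate, the sketch below models steps-to-converge as a function of batch size (a gradient-noise-style term) and time-per-step as a function of worker count, then derives time and cost per configuration. The functional forms and constants are assumptions for illustration, not Scavenger's fitted performance models.

```python
# Minimal sketch of a combined time/cost model: parallel efficiency (time per step
# vs. workers) plus statistical efficiency (steps to reach a target loss vs. batch
# size, via a gradient-noise-style term). All constants are illustrative assumptions.
def steps_to_converge(batch_size, base_steps=10_000, noise_scale=2_048):
    # Larger batches reduce gradient noise, so fewer steps are needed (diminishing returns).
    return base_steps * (1 + noise_scale / batch_size) / (1 + noise_scale / 32)

def time_per_step(workers, batch_size, compute_rate=4_096, comm_overhead=0.05):
    # Compute scales with the per-worker batch; communication adds per-step overhead.
    return (batch_size / workers) / compute_rate + comm_overhead * workers ** 0.5

def estimate(workers, batch_size, price_per_worker_hour=0.9):
    steps = steps_to_converge(batch_size)
    seconds = steps * time_per_step(workers, batch_size)
    cost = seconds / 3600 * workers * price_per_worker_hour
    return seconds, cost

for w, b in [(2, 256), (8, 1024), (32, 4096)]:
    t, c = estimate(w, b)
    print(f"workers={w:2d} batch={b:5d} time={t/60:7.1f} min cost=${c:6.2f}")
```
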
CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization
Nang Hung Nguyen, Duc Long Nguyen, Trong Bang Nguyen, T. Nguyen, H. Pham, Truong Thao Nguyen, Phi-Le Nguyen
{"title":"CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization","authors":"Nang Hung Nguyen, Duc Long Nguyen, Trong Bang Nguyen, T. Nguyen, H. Pham, Truong Thao Nguyen, Phi-Le Nguyen","doi":"10.1109/CCGrid57682.2023.00032","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00032","url":null,"abstract":"Federated learning enables edge devices to train a global model collaboratively without exposing their data. Despite achieving outstanding advantages in computing efficiency and privacy protection, federated learning faces a significant challenge when dealing with non-IID data, i.e., data generated by clients that are typically not independent and identically distributed. In this paper, we tackle a new type of Non-IID data, called cluster-skewed non-IID, discovered in actual data sets. The cluster-skewed non-IID is a phenomenon in which clients can be grouped into clusters with similar data distributions. By performing an in-depth analysis of the behavior of a classification model's penultimate layer, we introduce a metric that quantifies the similarity between two clients' data distributions without violating their privacy. We then propose an aggregation scheme that guarantees equality between clusters. In addition, we offer a novel local training regularization based on the knowledge-distillation technique that reduces the overfitting problem at clients and dramatically boosts the training scheme's performance. We theoretically prove the superiority of the proposed aggregation over the benchmark FedAvg. Extensive experimental results on both standard public datasets and our in-house real-world dataset demonstrate that the proposed approach improves accuracy by up to 16% compared to the FedAvg algorithm.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
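The cluster-then-aggregate idea can be sketched as follows: clients are compared by cosine similarity of a per-client penultimate-layer summary, greedily grouped, and aggregation averages within each cluster before weighting clusters equally. The summary statistic, similarity threshold, and greedy grouping below are illustrative assumptions rather than CADIS's exact metric or scheme.

```python
# Illustrative sketch: cosine similarity over per-client penultimate-layer summaries,
# greedy clustering, then cluster-equal aggregation of model weights.
import numpy as np

def cluster_clients(stats, threshold=0.95):
    """stats: list of 1-D arrays (one penultimate-layer summary per client)."""
    clusters = []
    for i, s in enumerate(stats):
        for c in clusters:
            rep = stats[c[0]]
            cos = float(s @ rep / (np.linalg.norm(s) * np.linalg.norm(rep) + 1e-12))
            if cos >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])  # no similar cluster found: start a new one
    return clusters

def aggregate(client_weights, clusters):
    """Average within each cluster, then give every cluster equal weight."""
    cluster_means = [np.mean([client_weights[i] for i in c], axis=0) for c in clusters]
    return np.mean(cluster_means, axis=0)

stats = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
weights = [np.full(4, 1.0), np.full(4, 1.2), np.full(4, 5.0)]
clusters = cluster_clients(stats)
print(clusters, aggregate(weights, clusters))
```
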
Taming Metadata-intensive HPC Jobs Through Dynamic, Application-agnostic QoS Control
Ricardo Macedo, Mariana Miranda, Y. Tanimura, J. Haga, Amit Ruhela, Stephen Lien Harrell, R. T. Evans, J. Pereira, J. Paulo
{"title":"Taming Metadata-intensive HPC Jobs Through Dynamic, Application-agnostic QoS Control","authors":"Ricardo Macedo, Mariana Miranda, Y. Tanimura, J. Haga, Amit Ruhela, Stephen Lien Harrell, R. T. Evans, J. Pereira, J. Paulo","doi":"10.1109/CCGrid57682.2023.00015","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00015","url":null,"abstract":"Modern I/O applications that run on HPC infrastructures are increasingly becoming read and metadata intensive. However, having multiple applications submitting large amounts of metadata operations can easily saturate the shared parallel file system's metadata resources, leading to overall performance degradation and I/O unfairness. We present PADLL, an application and file system agnostic storage middleware that enables QoS control of data and metadata workflows in HPC storage systems. It adopts ideas from Software-Defined Storage, building data plane stages that mediate and rate limit POSIX requests submitted to the shared file system, and a control plane that holistically coordinates how all I/O workflows are handled. We demonstrate its performance and feasibility under multiple QoS policies using synthetic benchmarks, real-world applications, and traces collected from a production file system. Results show that PADLL can enforce complex storage QoS policies over concurrent metadata-aggressive jobs, ensuring fairness and prioritization.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128981439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
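Rate limiting metadata operations, the core data-plane mechanism described above, can be approximated with a token bucket. The sketch below wraps an ordinary file-system call with such a limiter; PADLL itself intercepts POSIX requests transparently inside a storage middleware, so the wrapper, path, and rates here are only illustrative.

```python
# Minimal token-bucket sketch of the data-plane idea: metadata operations from a job
# are only admitted at a configured rate, so one metadata-aggressive job cannot
# saturate the shared file system. Rates and the wrapped call are illustrative.
import time

class TokenBucket:
    def __init__(self, ops_per_sec, burst):
        self.rate, self.capacity = ops_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def acquire(self):
        # Refill according to elapsed time, then block until one token is available.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

limiter = TokenBucket(ops_per_sec=100, burst=20)

def rate_limited_open(path):
    limiter.acquire()          # every metadata op pays one token before hitting the FS
    return open(path, "a")     # stand-in for the real open()/stat()/mkdir() call

start = time.monotonic()
for _ in range(50):
    rate_limited_open("/tmp/padll_demo.txt").close()
print(f"50 opens took {time.monotonic() - start:.2f}s under a 100 ops/s limit")
```
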
The SPEC-RG Reference Architecture for The Compute Continuum
Matthijs Jansen, Auday Al-Dulaimy, A. Papadopoulos, A. Trivedi, A. Iosup
{"title":"The SPEC-RG Reference Architecture for The Compute Continuum","authors":"Matthijs Jansen, Auday Al-Dulaimy, A. Papadopoulos, A. Trivedi, A. Iosup","doi":"10.1109/CCGrid57682.2023.00051","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00051","url":null,"abstract":"As the next generation of diverse workloads like autonomous driving and augmented/virtual reality evolves, computation is shifting from cloud-based services to the edge, leading to the emergence of a cloud-edge compute continuum. This continuum promises a wide spectrum of deployment opportunities for workloads that can leverage the strengths of cloud (scalable infrastructure, high reliability) and edge (energy efficient, low latencies). Despite its promises, the continuum has only been studied in silos of various computing models, thus lacking strong end-to-end theoretical and engineering foundations for computing and resource management across the continuum. Consequently, devel-opers resort to ad hoc approaches to reason about performance and resource utilization of workloads in the continuum. In this work, we conduct a first-of-its-kind systematic study of various computing models, identify salient properties, and make a case to unify them under a compute continuum reference architecture. This architecture provides an end-to-end analysis framework for developers to reason about resource management, workload distribution, and performance analysis. We demonstrate the utility of the reference architecture by analyzing two popular continuum workloads, deep learning and industrial IoT. We have developed an accompanying deployment and benchmarking framework and first-order analytical model for quantitative reasoning of continuum workloads. The framework is open-sourced and available at https://github.com/atlarge-research/continuum.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126223055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
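A back-of-the-envelope example of the kind of first-order analytical reasoning a continuum reference architecture enables: the end-to-end latency of one request is compute time on the chosen tier plus the network cost of reaching it. All tier parameters below are made-up assumptions, not values from the SPEC-RG framework or its benchmarking suite.

```python
# First-order placement model: latency = compute_time / tier_speedup + rtt + transfer.
# Every number here is an illustrative assumption.
TIERS = {
    #            relative compute speed, round-trip latency (s), uplink bytes/s
    "endpoint": {"speedup": 1.0,  "rtt": 0.000, "bandwidth": None},
    "edge":     {"speedup": 8.0,  "rtt": 0.010, "bandwidth": 50e6},
    "cloud":    {"speedup": 40.0, "rtt": 0.080, "bandwidth": 25e6},
}

def end_to_end_latency(tier, local_compute_s, payload_bytes):
    t = TIERS[tier]
    transfer = 0.0 if t["bandwidth"] is None else payload_bytes / t["bandwidth"]
    return local_compute_s / t["speedup"] + t["rtt"] + transfer

# Example: a 200 ms (on the endpoint) deep learning inference over a 100 KB camera frame.
for tier in TIERS:
    print(f"{tier:8s} -> {end_to_end_latency(tier, 0.2, 100_000) * 1000:6.1f} ms")
```
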
Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs
Benjamin Fuhrer, Yuval Shpigelman, Chen Tessler, Shie Mannor, Gal Chechik, E. Zahavi, Gal Dalal
{"title":"Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs","authors":"Benjamin Fuhrer, Yuval Shpigelman, Chen Tessler, Shie Mannor, Gal Chechik, E. Zahavi, Gal Dalal","doi":"10.1109/CCGrid57682.2023.00039","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00039","url":null,"abstract":"As communication protocols evolve, datacenter network utilization increases. As a result, congestion is more frequent, causing higher latency and packet loss. Combined with the increasing complexity of workloads, manual design of congestion control (CC) algorithms becomes extremely difficult. This calls for the development of AI approaches to replace the human effort. Unfortunately, it is currently not possible to deploy AI models on network devices due to their limited computational capabilities. Here, we offer a solution to this problem by building a computationally-light solution based on a recent reinforcement learning CC algorithm [1, RL-CC]. We reduce the inference time of RL-CC by x500 by distilling its complex neural network into decision trees. This transformation enables real-time inference within the μ-sec decision-time requirement, with a negligible effect on quality. We deploy the transformed policy on NVIDIA NICs in a live cluster. Compared to popular CC algorithms used in production, RL-CC is the only method that performs well on all benchmarks tested over a large range of number of flows. It balances multiple metrics simultaneously: bandwidth, latency, and packet drops. These results suggest that data-driven methods for CC are feasible, challenging the prior belief that handcrafted heuristics are necessary to achieve optimal performance.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116169261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
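Policy distillation of the kind described above can be sketched by querying a (here, mocked) teacher policy on sampled congestion-control states and fitting a shallow decision tree to imitate its rate adjustments. The state features and teacher function are placeholders, not the actual RL-CC policy or its NIC integration.

```python
# Sketch of policy distillation: label sampled states with a teacher policy's actions,
# then fit a small decision tree cheap enough to evaluate within a microsecond budget.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def teacher_policy(states):
    # Placeholder for the trained neural network: maps [rtt_inflation, bytes_in_flight,
    # nack_ratio] to a multiplicative sending-rate adjustment.
    rtt, inflight, nacks = states.T
    return np.clip(1.2 - 0.5 * rtt - 0.3 * nacks - 0.1 * inflight, 0.5, 1.5)

# 1) Sample states and label them with the teacher's actions.
states = rng.uniform(0, 1, size=(50_000, 3))
actions = teacher_policy(states)

# 2) Fit a shallow tree that imitates the teacher.
student = DecisionTreeRegressor(max_depth=8).fit(states, actions)
print("imitation MSE:", float(np.mean((student.predict(states) - actions) ** 2)))
```
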
Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient
Sixing Yu, P. Nguyen, Ali Anwar, A. Jannesari
{"title":"Heterogeneous Federated Learning using Dynamic Model Pruning and Adaptive Gradient","authors":"Sixing Yu, P. Nguyen, Ali Anwar, A. Jannesari","doi":"10.1109/CCGrid57682.2023.00038","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00038","url":null,"abstract":"Federated Learning (FL) has emerged as a new paradigm for training machine learning models distributively without sacrificing data security and privacy. Learning models on edge devices such as mobile phones is one of the most common use cases for FL. However, Non-identical independent distributed (non-IID) data in edge devices easily leads to training failures. Especially, over-parameterized machine learning models can easily be over-fitted on such data, hence, resulting in inefficient federated learning and poor model performance. To overcome the over-fitting issue, we proposed an adaptive dynamic pruning approach for FL, which can dynamically slim the model by dropping out unimportant parameters, hence, preventing over-fittings. Since the machine learning model's parameters react differently for different training samples, adaptive dynamic pruning will evaluate the salience of the model's parameter according to the input training sample, and only retain the salient parameter's gradients when doing back-propagation. We performed comprehensive experiments to evaluate our approach. The results show that our approach by removing the redundant parameters in neural networks can significantly reduce the over-fitting issue and greatly improves the training efficiency. In particular, when training the ResNet-32 on CIFAR-10, our approach reduces the communication cost by 57%. We further demonstrate the inference acceleration capability of the proposed algorithm. Our approach reduces up to 50% FLOPs inference of DNNs on edge devices while maintaining the model's quality.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124789846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
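Salience-based gradient masking, the core of the dynamic pruning step, can be sketched as: after back-propagation, keep only the gradients of the most salient parameters and zero the rest. The salience score (|weight x gradient|) and keep ratio below are illustrative choices, not necessarily the paper's exact criterion.

```python
# Minimal sketch of salience-based gradient masking: zero the gradients of
# non-salient parameters before the optimizer step, approximating a dynamically
# pruned update. Score and keep ratio are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

def masked_step(x, y, keep_ratio=0.5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    for p in model.parameters():
        if p.grad is None:
            continue
        salience = (p.detach() * p.grad).abs().flatten()
        k = max(1, int(keep_ratio * salience.numel()))
        threshold = salience.kthvalue(salience.numel() - k + 1).values
        mask = ((p.detach() * p.grad).abs() >= threshold).float()
        p.grad.mul_(mask)          # drop gradients of non-salient parameters
    opt.step()

x, y = torch.randn(32, 20), torch.randint(0, 10, (32,))
masked_step(x, y)
print("one masked update applied")
```
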
Steering Committee Members
N. MiahMd
{"title":"Steering Committee Members","authors":"N. MiahMd","doi":"10.1109/ccgrid57682.2023.00010","DOIUrl":"https://doi.org/10.1109/ccgrid57682.2023.00010","url":null,"abstract":"Lauren Donofrio, Michigan Department of the Attorney General—Co-chair, ex-officio Michael Moody, Michigan Department of the Attorney General—Co-chair, ex-officio Kwafo Adarkwa, ITC Holdings George Andraos, Ford Motor Company Jim Ault, Michigan Electric and Gas Association Chrissy Beckwith, Semco Energy Mathias Bell, Opower Greg Bergtold, Dow Chemical Company Craig Borr, Wolverine Power Cooperative Laura Chappelle, Energy Michigan Greg Clark, Indiana Michigan Power James Clift. Michigan Environmental Council Dan Dundas, Senate Majority Policy Office Anand Gangadharan, NOVI Energy Jason Geer, Michigan Chamber of Commerce Brandon Hofmeister, Consumers Energy John LaMacchia, Michigan Municipal League Greg Poulos, EnerNOC Jean Redfield, NextEnergy Don Stanczak, DTE Energy Jill Steiner, The Cadmus Group Andrew Vermeesch, Michigan Farm Bureau Jim Weeks, Michigan Municipal Electric Association Liesl Clark, Michigan Energy Innovation Business Jeffrey Wiggins, House Republican Policy Council Office","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128989970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1