Latest Articles in IEEE Cloud Computing

Fast and Efficient Performance Tuning of Microservices
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00067
V. Mostofi, Diwakar Krishnamurthy, M. Arlitt
Abstract: The microservice architecture is being increasingly adopted. Microservices often rely on containerization technology, facilitating agile development and permitting flexible deployment on cloud platforms. Many microservice applications are interactive. Consequently, there is a need for pre-deployment performance tuning techniques to ensure that an application will meet its end-user response time requirements post-deployment. Additionally, the tuning process should be efficient, i.e., allocate just enough resources to minimize costs in cloud-based deployments. Furthermore, the tuning process needs to be fast to facilitate agile deployments. We design and evaluate a technique called MOAT (Microservice Application Performance Tuner) that embodies these requirements. MOAT conducts iterative performance tests to determine resource allocations for the individual microservices in an application for any given workload. It exploits a novel optimization technique that identifies resource allocations while requiring only a limited number of performance tests to explore the tuning space. Validation using an experimental system shows that MOAT outperforms a competing approach based on Bayesian optimization in terms of both solution speed and resource allocation efficiency.
Pages: 515-520
Citations: 5
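MOAT's actual optimizer is not reproduced in the abstract, but the tuning-loop structure it describes (iterative performance tests converging on just-enough allocations) can be illustrated with a toy greedy search. The latency model, starting allocation, step size, and target below are all invented for the sketch.

```python
# Illustrative sketch only: a toy tuning loop that searches per-service
# CPU allocations until a simulated end-to-end response time meets the
# target, counting how many "performance tests" were needed.

def simulated_response_time(cpus):
    # Hypothetical stand-in for a real performance test: each service's
    # latency falls as its CPU share grows (1/cpu, M/M/1-like shape).
    return sum(100.0 / c for c in cpus)  # milliseconds

def tune(n_services, target_ms, step=0.25, max_tests=50):
    cpus = [0.5] * n_services            # start from minimal allocations
    tests = 0
    while tests < max_tests:
        rt = simulated_response_time(cpus)
        tests += 1
        if rt <= target_ms:
            return cpus, rt, tests
        # Grow the service whose extra CPU reduces latency most (greedy).
        gains = [100.0 / c - 100.0 / (c + step) for c in cpus]
        cpus[gains.index(max(gains))] += step
    return cpus, simulated_response_time(cpus), tests

cpus, rt, tests = tune(n_services=3, target_ms=250.0)
print(cpus, round(rt, 1), tests)
```

Because the loop stops as soon as the target is met, the final allocation is near-minimal for this latency model, mirroring the "just enough resources" goal.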
Federated or Split? A Performance and Privacy Analysis of Hybrid Split and Federated Learning Architectures
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00038
Valeria Turina, Zongshun Zhang, Flavio Esposito, I. Matta
Abstract: Mobile phones, wearable devices, and other sensors produce a large amount of distributed and sensitive data every day. Classical machine learning approaches process these large datasets, usually on a single machine, training complex models to obtain useful predictions. To better preserve user and data privacy while guaranteeing high performance, distributed machine learning techniques such as Federated and Split Learning have recently been proposed. Both of these distributed learning architectures have merits but also drawbacks. In this work, we analyze such tradeoffs and propose a new hybrid Federated Split Learning (FSL) architecture that combines the benefits of both in terms of efficiency and privacy. Our evaluation shows how Federated Split Learning may reduce the computational power required by each client running Federated Learning and enable Split Learning parallelization, while maintaining high prediction accuracy with unbalanced datasets during training. Furthermore, FSL provides a better accuracy-privacy tradeoff than Parallel Split Learning under specific privacy approaches.
Pages: 250-260
Citations: 19
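The split-learning half of the hybrid architecture can be illustrated in a few lines: the client computes only the first layers and ships the intermediate activation ("smashed data") to the server, which finishes the forward pass, so the raw input never leaves the device. This numpy sketch uses invented layer sizes and random weights; it is not the paper's model.

```python
# Toy split-learning forward pass: client-side layers on the device,
# server-side layers in the cloud. Sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(4, 8))   # client-side layer
W_server = rng.normal(size=(8, 3))   # server-side layer

def client_forward(x):
    # Only this part (and the raw input) ever runs on the device.
    return np.maximum(x @ W_client, 0.0)     # ReLU activation

def server_forward(smashed):
    # The server sees only the activation, never the raw input.
    logits = smashed @ W_server
    return np.argmax(logits, axis=1)

x = rng.normal(size=(2, 4))          # two raw client samples
preds = server_forward(client_forward(x))
print(preds.shape)                   # one class index per sample
```

In the federated-split hybrid the paper analyzes, many such clients run the client-side portion in parallel, which is what reduces per-client compute relative to plain Federated Learning.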
Origami Inference: Private Inference Using Hardware Enclaves
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00021
Krishnagiri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramanian, M. Annavaram
Abstract: This work presents Origami, a framework that provides privacy-preserving inference for large deep neural network (DNN) models through a combination of enclave execution and cryptographic blinding, interspersed with accelerator-based computation. Origami partitions the ML model into multiple partitions. The first partition receives the encrypted user input within an SGX enclave. The enclave decrypts the input and then applies cryptographic blinding to the input data and the model parameters. The layer computation is offloaded to a GPU/CPU, and the computed output is returned to the enclave, which decodes the computation on the noisy data using the unblinding factors privately stored within SGX. This process may be repeated for each DNN layer, as done in the prior work Slalom. However, the overhead of blinding and unblinding the data is a limiting factor for scalability. Origami relies on the empirical observation that the feature maps after the first several layers cannot be used, even by a powerful conditional GAN adversary, to reconstruct the input. Hence, Origami dynamically switches to executing the rest of the DNN layers directly on an accelerator. We empirically demonstrate that using Origami, a conditional GAN adversary, even with an unlimited inference budget, cannot reconstruct the input. Compared to running the entire VGG-19 model within SGX, Origami improves the performance of private inference from 11x (using Slalom) to 15.1x.
Pages: 78-84
Citations: 5
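The blind-offload-unblind round trip that Origami (following Slalom) performs per layer is easy to sketch for a single linear layer, where blinding is exact: the enclave masks the input with a one-time random vector, the untrusted accelerator computes on the masked data, and the enclave subtracts the precomputed mask term. All sizes and random values below are illustrative; real systems must also handle non-linear layers inside the enclave.

```python
# Toy blinding round trip for one linear layer W @ x.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))          # layer weights (visible to the GPU)
x = rng.normal(size=(5,))            # private input (enclave only)

# Inside the enclave: blind the input with a random mask r and
# precompute the unblinding factor W @ r.
r = rng.normal(size=(5,))
unblind = W @ r

# On the untrusted accelerator: compute on blinded data only.
noisy_out = W @ (x + r)

# Back in the enclave: linearity gives W @ (x + r) - W @ r == W @ x.
out = noisy_out - unblind
print(np.allclose(out, W @ x))       # True
```

The per-layer cost of generating masks and unblinding is exactly the overhead the abstract cites as the motivation for switching to direct accelerator execution after the first few layers.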
NL2Vul: Natural Language to Standard Vulnerability Score for Cloud Security Posture Management
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00073
Muhammed Fatih Bulut, Jinho Hwang
Abstract: Cloud Security Posture Management (CSPM) tools have been gaining popularity to automate, monitor, and visualize the security posture of multi-cloud environments. The foundation for assessing risk lies in being able to analyze each vulnerability and quantify its risk. However, the number of vulnerabilities in the National Vulnerability Database (NVD) has skyrocketed in recent years, surpassing 144K as of late 2020. The current standard vulnerability tracking system relies mostly on human-driven efforts. Moreover, open-source libraries do not necessarily follow the vulnerability reporting standards set by CVE and NIST, but rather use GitHub issues for reporting. In this paper, we propose a framework, NL2Vul, to score vulnerabilities with minimal human effort. NL2Vul uses deep neural networks trained on descriptions of software vulnerabilities from the NVD to predict vulnerability scores. To flexibly extend the trained NVD model to the different data sources used to evaluate risk posture in CSPM, NL2Vul uses transfer learning for quick re-training. We have evaluated NL2Vul on vanilla NVD data, public GitHub issues of open-source projects, and compliance technology specification documents.
Pages: 566-571
Citations: 0
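As a stand-in for NL2Vul's deep model, a minimal bag-of-words regressor shows the basic text-to-score setup: featurize a vulnerability description, then learn weights that map features to a CVSS-like score. The training pairs, vocabulary, and hyper-parameters below are invented for the sketch; the real system fine-tunes deep networks on NVD data.

```python
# Tiny description-to-score regressor, trained with plain SGD on
# mean-squared error. All data is made up for illustration.
data = [
    ("remote code execution via crafted packet", 9.8),
    ("remote attacker may cause denial of service", 7.5),
    ("local information disclosure in log file", 3.3),
    ("local denial of service via crafted file", 5.5),
]
vocab = sorted({w for text, _ in data for w in text.split()})

def featurize(text):
    words = text.split()
    return [float(words.count(w)) for w in vocab]

weights = [0.0] * len(vocab)
for _ in range(2000):
    for text, score in data:
        x = featurize(text)
        err = sum(w * xi for w, xi in zip(weights, x)) - score
        weights = [w - 0.05 * err * xi for w, xi in zip(weights, x)]

# Score a new (hypothetical) description with the learned weights.
pred = sum(w * xi
           for w, xi in zip(weights, featurize("remote code execution")))
print(round(pred, 1))
```

Transfer learning in NL2Vul plays the role that re-training on a new `data` list would play here: the learned representation is reused and only quickly adapted to a new source such as GitHub issues.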
Polaris Scheduler: Edge Sensitive and SLO Aware Workload Scheduling in Cloud-Edge-IoT Clusters
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00034
Stefan Nastic, Thomas W. Pusztai, A. Morichetta, Victor Casamayor-Pujol, S. Dustdar, D. Vij, Ying Xiong
Abstract: Application workload scheduling in hybrid Cloud-Edge-IoT infrastructures has been extensively researched in recent years. The recent trend of containerizing application workloads, both in the cloud and on the edge, has further fueled the need for more advanced scheduling solutions in these hybrid infrastructures. Unfortunately, most current approaches are not fully sensitive to edge properties and also lack adequate support for Service Level Objective (SLO) awareness. Previously, we introduced software-defined gateways (SDGs), which enable managing novel edge resources at scale. Around the same time, Kubernetes was initially released. Despite not being specifically developed for the edge, Kubernetes implements many of the design principles introduced by our SDGs, making it suitable for building SDG extensions on top of it. In this paper we present Polaris Scheduler, a novel scheduling framework that enables edge-sensitive and SLO-aware scheduling in the Cloud-Edge-IoT Continuum. Polaris Scheduler is being developed as part of the Linux Foundation's Centaurus project. We discuss the main research challenges, the approach, and the vision of SLO-aware, edge-sensitive scheduling.
Pages: 206-216
Citations: 15
A Deep Reinforcement Learning Approach to Resource Management in Hybrid Clouds Harnessing Renewable Energy and Task Scheduling
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00037
Jie Zhao, M. A. Rodriguez, R. Buyya
Abstract: The use of cloud computing for delivering application services over the Internet has gained rapid traction. Since the beginning of the COVID-19 global pandemic, work-from-home arrangements and an increased business presence online have created more demand for computing resources. Many enterprises and organizations are expanding their private data centres and utilizing hybrid or multi-cloud environments for their IT infrastructure. Because of the ever-increasing demand for computing resources, energy consumption and carbon emissions have become a pressing issue. Renewable energy sources have been recognized as clean and sustainable alternatives to fossil-fuel-based brown energy. However, the intermittent availability of renewable energy sources makes it challenging to automatically and efficiently schedule tasks under renewable energy constraints and deadlines. Task scheduling with traditional heuristic algorithms cannot adapt quickly to changing energy availability and stochastic task arrivals. In this regard, this work builds a novel scheduling policy with deep reinforcement learning that automatically applies scheduling techniques such as workload shifting and cloud-bursting in a geographically distributed hybrid multi-cloud environment consisting of multiple private and public clouds. Our primary goals are maximizing renewable energy utilization and avoiding deadline-constraint violations. We also introduce user-configurable hyper-parameters to enable multi-objective scheduling on cloud cost, makespan, and utilization. Our experimental results show that the proposed scheduling approach can achieve these objectives dynamically under varying renewable energy availability.
Pages: 240-249
Citations: 9
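The abstract names the objectives (maximize renewable utilization, avoid deadline violations, with user-configurable weights for cost and makespan) but not the exact reward formulation. A hypothetical reward function in that spirit, with every weight and input value invented for illustration, might look like:

```python
# Hypothetical multi-objective RL reward: rewards the green-energy
# fraction, penalizes deadline violations, and applies user-tunable
# weights for cloud cost and makespan. Not the paper's formula.

def reward(renewable_kwh, total_kwh, deadline_violations,
           cost, makespan, w_energy=1.0, w_violation=5.0,
           w_cost=0.1, w_makespan=0.01):
    # Fraction of consumed energy that came from renewable sources.
    green_fraction = renewable_kwh / total_kwh if total_kwh else 0.0
    return (w_energy * green_fraction
            - w_violation * deadline_violations
            - w_cost * cost
            - w_makespan * makespan)

# A greener schedule with no violations should score higher than a
# brown-energy schedule that misses a deadline.
good = reward(renewable_kwh=8, total_kwh=10, deadline_violations=0,
              cost=2.0, makespan=100)
bad = reward(renewable_kwh=1, total_kwh=10, deadline_violations=1,
             cost=2.0, makespan=100)
print(good > bad)   # True
```

The `w_*` parameters correspond to the user-configurable hyper-parameters the abstract mentions: raising `w_cost`, for example, shifts the learned policy toward cheaper placements at the expense of greenness.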
Performance Evaluation of Asynchronous FaaS
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00028
David Balla, M. Maliosz, Csaba Simon
Abstract: Function as a Service (FaaS) is a novel, dynamically emerging field of cloud computing. The majority of the leading cloud service providers have their own FaaS platforms; the open-source community has also embraced this technology, so an increasing number of FaaS alternatives can be deployed for on-premise use cases. FaaS systems support both synchronous and asynchronous function invocations. In this paper we examine the differences in performance and billing between the two invocation types in OpenFaaS, Kubeless, Fission, and Knative, using a simple function chain containing echo functions and a more complex image- and natural-language-processing chain, implemented in Python 3. We also present our own implementation of asynchronous function invocations using the Redis key-value store. Finally, we show how asynchronous function invocations avoid the negative billing effects of function cold starts.
Pages: 147-156
Citations: 4
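The Redis-based asynchronous invocation described above follows a standard queue pattern: the gateway enqueues the request and acknowledges immediately (so the caller never waits on a cold start), and a worker drains the queue and runs the function later. The sketch below substitutes an in-memory deque for the Redis list (real code would use LPUSH/BRPOP via a Redis client); the message format and function registry are invented.

```python
# Asynchronous FaaS invocation sketch with an in-memory stand-in queue.
from collections import deque
import json

queue = deque()                      # stand-in for a Redis list

def invoke_async(func_name, payload):
    # Gateway side: enqueue the request and acknowledge immediately.
    queue.appendleft(json.dumps({"func": func_name, "payload": payload}))
    return "202 Accepted"

def worker_step(functions):
    # Worker side: pop the oldest request and execute the function.
    msg = json.loads(queue.pop())
    return functions[msg["func"]](msg["payload"])

functions = {"echo": lambda p: p}
ack = invoke_async("echo", "hello")  # returns before the function runs
result = worker_step(functions)      # executed later by the worker
print(ack, result)
```

Decoupling acknowledgement from execution is also what improves billing under cold starts: the caller is not billed for (or blocked on) container start-up time.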
Run Wild: Resource Management System with Generalized Modeling for Microservices on Cloud
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00079
Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Marcelo Amaral
Abstract: The microservice architecture competes with the traditional monolithic design by offering agility, flexibility, reusability, resilience, and ease of use. Nevertheless, due to the increased complexity of internal communication, resource-usage scaling must work in harmony with placement scheduling and request balancing to prevent cascading performance degradation across microservices. We prototype Run Wild, a resource management system that controls all mechanisms in the microservice-deployment process, covering scaling, scheduling, and balancing, to optimize performance on the dynamic cloud, driven by an automatic, unified, and consistent deployment plan. In this paper, we also highlight the significance of co-location-aware metrics for predicting resource usage and computing the deployment plan. We conducted experiments with an actual cluster on the IBM Cloud platform. Run Wild reduced the 90th-percentile response time by 11% and increased average throughput by 10%, with more than 30% lower resource usage, for widely used autoscaling benchmarks on Kubernetes clusters.
Pages: 609-618
Citations: 1
AI-Assisted Security Controls Mapping for Clouds Built for Regulated Workloads
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/CLOUD53861.2021.00027
Vikas Agarwal, Roy Bar-Haim, Lilach Eden, Nisha Gupta, Yoav Kantor, Arun Kumar
Abstract: Data privacy, security, and compliance concerns prevent many enterprises from migrating their critical applications to public cloud infrastructure. To address this, cloud providers offer specialized clouds for heavily regulated industries, which implement prescribed security standards. A critical step in the migration process is ensuring that the customer's security requirements are fully met by the cloud provider. With a few hundred services in a typical cloud provider's infrastructure, this becomes a non-trivial task. A few tens to hundreds of security checks exposed by each applicable service need to be matched against several hundred to thousands of security controls from the customer. Mapping the customer's controls to the cloud provider's control set is done manually by experts, a process that often takes months to complete and needs to be repeated with every new customer. Moreover, these mappings have to be re-evaluated following regulatory or business changes, as well as cloud infrastructure upgrades. We present an AI-assisted system for mapping security controls that drastically reduces the number of candidates a human expert needs to consider, allowing a substantial speed-up of the mapping process. We empirically compare several controls-mapping models and show that hierarchical classification using fine-tuned Transformer networks works best. Overall, our empirical results demonstrate that the system performs well on real-world data.
Pages: 136-146
Citations: 4
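One simple way to "drastically reduce the number of candidates" before an expert (or a classifier) reviews them is to shortlist provider checks by text similarity to the customer control. The cosine-similarity sketch below is purely illustrative, with invented control texts; the paper's system uses hierarchical classification with fine-tuned Transformer networks, not bag-of-words matching.

```python
# Shortlist provider security checks for one customer control by
# cosine similarity over word counts. Texts are made up.
from collections import Counter
from math import sqrt

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

provider_checks = [
    "encrypt data at rest using managed keys",
    "rotate access keys every 90 days",
    "enable audit logging for all api calls",
]
customer_control = "all stored data must be encrypted at rest"

# Rank candidates; the expert now reviews only the top few.
ranked = sorted(provider_checks,
                key=lambda c: cosine(customer_control, c), reverse=True)
print(ranked[0])
```

Even this crude ranking cuts the review set from "every check of every service" to a handful per control, which is the shape of the speed-up the abstract claims, with the Transformer model supplying far better rankings in practice.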
2021 IEEE 14th International Conference on Cloud Computing
IEEE Cloud Computing Pub Date : 2021-09-01 DOI: 10.1109/cloud18303.2011
Citations: 1