{"title":"Test-Based Least Privilege Discovery on Cloud Infrastructure as Code","authors":"Ryo Shimizu, Hideyuki Kanuka","doi":"10.1109/CloudCom49646.2020.00007","DOIUrl":"https://doi.org/10.1109/CloudCom49646.2020.00007","url":null,"abstract":"Infrastructure as Code (IaC) for cloud is an important practice due to its efficient and reproducible provisioning of cloud environments. In a cloud IaC definition (template), developers need to manage permissions for each cloud service as well as the desired cloud environment. To minimize the risk of cyber-attacks, retaining least privilege, i.e., granting a minimum set of permissions, in IaC templates is important and widely regarded as best practice. However, discovering least privilege for a target IaC template in a single pass is an error-prone and burdensome task for developers. One reason is that some actions of a cloud service implicitly use other services and require corresponding permissions, which are hard to recognize without actual executions on the cloud; this burdens the development process with iterations of permission setting and checking of provisioned results. In this paper, we present a technique to automatically discover least privilege. Our method incrementally finds the least privilege through iterations of testing on the cloud and (re)configuring permissions on the basis of test results. We conducted case studies and found that our approach can identify least privilege on Amazon Web Services within a practical time. Our experiments also show that the proposed algorithm can reduce the number of test executions, which directly affects the time and cost on the cloud to determine least privilege, by 69.3% and 39.8% on average compared with random and heuristic methods, respectively.","PeriodicalId":401135,"journal":{"name":"2020 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121414721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FussyCache: A Caching Mechanism for Emerging Storage Hierarchies","authors":"Jit Gupta, K. Kant, Ayman Abouelwafa","doi":"10.1109/CloudCom49646.2020.00010","DOIUrl":"https://doi.org/10.1109/CloudCom49646.2020.00010","url":null,"abstract":"In this paper, we propose a novel caching mechanism, called FussyCache, that differs from traditional DRAM caching mechanisms that automatically cache a data block when it is requested. Instead, FussyCache evaluates each requested data block for its caching eligibility, and reads ineligible blocks directly from the device each time. We show that FussyCache performs substantially better than traditional caching algorithms and that its performance advantage increases with the storage device speed. In particular, for first-generation Intel Optane based storage, FussyCache provides a 25–30% reduction in the average access latency compared with a native caching mechanism such as plain LRU. We also observe close to a 15–20% improvement in performance even for a mainstream TLC SSD. Furthermore, the FussyCache design includes two mechanisms that allow for its easy deployment in any environment: (a) a self-monitoring stage that reverts it to normal LRU when partial caching is not beneficial, and (b) a training phase that automatically tunes the configurable parameters for the deployment environment.","PeriodicalId":401135,"journal":{"name":"2020 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114336929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Experimental Evaluation of the Kubernetes Cluster Autoscaler in the Cloud","authors":"Mulugeta Ayalew Tamiru, Johan Tordsson, E. Elmroth, G. Pierre","doi":"10.1109/CloudCom49646.2020.00002","DOIUrl":"https://doi.org/10.1109/CloudCom49646.2020.00002","url":null,"abstract":"Despite the abundant research in cloud autoscaling, autoscaling in Kubernetes, arguably the most popular cloud platform today, is largely unexplored. Kubernetes' Cluster Autoscaler can be configured to select nodes either from a single node pool (CA) or from multiple node pools (CA-NAP). We evaluate and compare these configurations using two representative applications and workloads on Google Kubernetes Engine (GKE). We report our results using monetary cost and standard autoscaling performance metrics (under- and over-provisioning accuracy, under- and over-provisioning timeshare, instability of elasticity and deviation from the theoretical optimal autoscaler) endorsed by the SPEC Cloud Group. We show that, overall, CA-NAP outperforms CA and that autoscaling performance depends mainly on the composition of the workload. We compare our results with those of the related work and point out further configuration tuning opportunities to improve performance and cost-saving.","PeriodicalId":401135,"journal":{"name":"2020 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115404580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ReLeaSER: A Reinforcement Learning Strategy for Optimizing Utilization Of Ephemeral Cloud Resources","authors":"Mohamed Handaoui, Jean-Emile Dartois, Jalil Boukhobza, Olivier Barais, Laurent d'Orazio","doi":"10.1109/CloudCom49646.2020.00009","DOIUrl":"https://doi.org/10.1109/CloudCom49646.2020.00009","url":null,"abstract":"Cloud data center capacities are over-provisioned to handle demand peaks and hardware failures, which leads to low resource utilization. One way to improve resource utilization, and thus reduce the total cost of ownership, is to offer unused resources (referred to as ephemeral resources) at a lower price. However, reselling resources must meet customers' expectations in terms of Quality of Service. The goal is thus to maximize the amount of reclaimed resources while avoiding SLA penalties. To achieve that, cloud providers have to estimate their future utilization to provide availability guarantees. The prediction should include a safety margin of resources to react to unpredictable workloads. The challenge is to find the safety margin that provides the best trade-off between the amount of resources to reclaim and the risk of SLA violations. Most state-of-the-art solutions consider a fixed safety margin for all types of metrics (e.g., CPU, RAM). However, a single fixed margin does not account for workload variations over time, which may lead to SLA violations and/or poor utilization. To tackle these challenges, we propose ReLeaSER, a Reinforcement Learning strategy for optimizing ephemeral resource utilization in the cloud. ReLeaSER dynamically tunes the safety margin at the host level for each resource metric. The strategy learns from past prediction errors (those that caused SLA violations). Our solution significantly reduces SLA violation penalties, on average by 2.7× and up to 3.4×. It also considerably improves the cloud providers' (CPs') potential savings, by 27.6% on average and up to 43.6%.","PeriodicalId":401135,"journal":{"name":"2020 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133965814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}