{"title":"Kubitect - a Solution for On-premise Cluster Deployment","authors":"Din Music, C. Fortuna","doi":"10.1109/UCC56403.2022.00049","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00049","url":null,"abstract":"Deploying compute clusters is easy and user-friendly when using modern public cloud solutions, however alternative open source solutions for smaller, non-enterprise setups, such as hobby projects, home labs, or small and micro enterprises are currently missing. In this paper, we identify challenges for on-premise environments and propose Kubitect- a solution that allows anyone to set up their own on-premises private cloud that can run containerized applications and is easy to maintain. Unlike the existing state of the art, the proposed solution enables the creation of clusters on multiple physical hosts using a single declarative configuration file that can be easily understood and versioned. The paper describes the user and automated workflows of Kubitect, elaborates on the design choices for implementation and provides a qualitative and quantitative evaluation. The high-availability of on on-premises cluster deployment is also validated.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123449424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the CloudAM Workshop Chairs","authors":"","doi":"10.1109/ucc56403.2022.00076","DOIUrl":"https://doi.org/10.1109/ucc56403.2022.00076","url":null,"abstract":"","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127362181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Microquanta Process Scheduling for Cloud Applications","authors":"Erfan Sharafzadeh, Alireza Sanaee, Peng Huang, G. Antichi, S. Ghorbani","doi":"10.1109/UCC56403.2022.00050","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00050","url":null,"abstract":"Process schedulers are responsible for arbitrating CPU resources among services. Unfortunately, traditional sched-ulers, working at millisecond scale and characterized by strict priority schemes, are no longer suitable to meet increasingly stringent and diverse requirements imposed by many workloads. Recognizing this aspect, the research community has recently proposed new schedulers operating at microsecond granularity. This work studies microsecond-scale schedulers and policies for data center applications from the configuration versus perfor-mance standpoint. We demonstrate that for the best performance, workload-dependent parameter tuning is fundamental. Specifically, even a slight misconfiguration can also lead to 110% higher tail latency with respect to its best-case scenario. Our results call for a new set of process scheduling schemes that are workload-aware.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125455892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of multilevel resource clustering for service placement in fog computing environments","authors":"Helberth Borelli, F. Costa, Sérgio T. Carvalho","doi":"10.1109/UCC56403.2022.00063","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00063","url":null,"abstract":"The fog is part of the infrastructure that makes up the Internet and can be considered an extension of the cloud. It is a layer between the cloud and the edge, and its topology comprises mostly heterogeneous and resource-limited computing nodes. Because of its proximity to edge devices, the fog is often mentioned as a solution to deploy services with stringent latency requirements. Moreover, the characteristics of the fog are appropriate for the deployment of services with lower computational demands, meaning that lean service models, such as microservices, are an appropriate match. Regarding the placement of services in the fog, an effective technique refers to the use of resource clustering, which organizes resources according to their properties. A common approach has been the use of the geographical location of resources to build clusters that favor the placement of services closer to clients. In this paper, we refine this approach by proposing multilevel clustering, adding to geolocation-based clustering the use of feature-based resource clustering. Thus, besides increasing proximity, we can also improve the matching between service requirements and the characteristics of resources, further improving the performance of applications. We evaluate the approach using iFogSim 2, and the results show performance improvements in excess of 20% in both application flow time and service placement time when compared to pure geolocation-based clustering.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114771704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Ensemble Neural Model for Classification of LADA Diabetes Case, Control and Variable Importance","authors":"A. Miller, John Panneerselvam, Lu Liu, N. Antonopoulos","doi":"10.1109/UCC56403.2022.00041","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00041","url":null,"abstract":"LADA Diabetes is a complex disease, but often dismissed as a potential individual disease within its own right. A comprehensive understanding of previously unknown aspects of LADA diabetes has the potential to not only ascertain a greater comprehension of LADA but also can assist the classification of Type 1 and Type 2 diabetes, as LADA characterises the attributes of both Type 1 and Type 2 diabetes. This paper proposes a novel heterogeneous ensemble model comprising of Neural network with Feature Extraction, Neural network alongside Multilayer Perceptron with Multiple Layers with the intention of classifying LADA diabetes along with weighting the importance of conventional variables including family history, age, gender, BMI, cholesterol level, and waist size in the classification. These conventional variables are analysed based on the aforementioned three-algorithm ensemble stack, and the entire architecture is tuned for optimal classification performance. The proposed novel ensemble stack delivers a reliable prediction accuracy in the identification of case, control, and variable importance. Performance evaluation of the proposed ensemble model based on statistics such as ROC/AUC curve, precision and recall demonstrated a higher predictive accuracy of 92.00%, sensitivity of 91.77%, and specificity of 92.23% alongside a precision of 92.23%, recall at 91.79% and an F1 score of 92.02%, ultimately outperforming well-known classical classification models. Further analysis has determined waist as an important and influential variable in the classification process, whereby a 100% association of LADA diabetes with waist is exhibited.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"360 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126695906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arktos: A Hyperscale Cloud Infrastructure for Building Distributed Cloud","authors":"Ying Huang, Yunwen Bai, FengE Li, Xiaoning Ding, Qian Chen, D. Vij, Peng Du, Ying Xiong","doi":"10.1109/UCC56403.2022.00022","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00022","url":null,"abstract":"Scalability and management cost in cloud computing are few of the top challenges for the cloud providers and large enterprises. In this paper, we present Arktos, a cloud infrastructure platform for managing large-scale compute clusters and running millions of application instances as containers and/or virtual machines (VM). Arktos is envisioned as a stepping-stone from current “ single-region” focused cloud infrastructure towards next generation distributed infrastructure in the public and/or private cloud environments. We present details related to the Arktos system architecture and features, important design decisions, and the results and analysis of the performance benchmark testing. Arktos achieves high scalability by partitioning its architecture into two independent components, the resource partition (RP) and the tenant workload partition (TP), with each component scaling independently. Our performance testing using a benchmark tool demonstrates that Arktos with just two RPs and two TPs system setting can already manage a cluster of 50K compute nodes and is able to run 1.5 million workload containers with 5 times system throughput (QPS)1 compared with an existing container management system. Three key characteristics differentiate Arktos from other open source cloud platforms such as OpenStack and Kubernetes. Firstly, Arktos architecture is a truly scalable architecture that supports a very large cluster by scaling to more RPs and TPs in the system, Secondly, it unifies the runtime infrastructure to run and manage both VM and container applications natively, therefore eliminating the cost of managing separate technology stacks for VMs and containers. Lastly, Arktos has a unique “ virtual cluster” style multi-tenancy design that provides both strong tenancy isolation, including network isolation and transparent resource view.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130902115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microservices vs Serverless Deployment in AWS: A Case Study with an Image Processing Application","authors":"Raju Shrestha, Beebu Nisha","doi":"10.1109/ucc56403.2022.00033","DOIUrl":"https://doi.org/10.1109/ucc56403.2022.00033","url":null,"abstract":"Microservices and serverless are two major architectures used today for deploying cloud-native applications. There are ongoing debates concerning which of these two architectures to use for deploying a given application. In this paper, we have done a case study with an image processing application deployed in Amazon AWS. The study showed that serverless perform better in terms of performance, stability, cost, ease of deployment, and security, whereas microservices show superiority with lower memory use, and better controllability and visibility.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133852450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Cognitive Self-Management of IoT-Edge-Cloud Continuum based on User Intents","authors":"Hui Song, A. Soylu, D. Roman","doi":"10.1109/UCC56403.2022.00055","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00055","url":null,"abstract":"Elasticity of the computing continuum with on demand availability allows for automated provisioning and release of computing resources as needed; however, this self management capability is severely limited due to the lack of knowledge on historical and timely resource utilisation and means for stakeholders to express their needs in a high-level manner. In this paper, we introduce and discuss a new concept – intent-based cognitive continuum for sustainable elasticity.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124470755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robustness via Elasticity Accelerators for the IoT-Edge-Cloud Continuum","authors":"Hong Linh Truong, K. Magoutis","doi":"10.1109/UCC56403.2022.00052","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00052","url":null,"abstract":"This paper presents a novel framework to enhance programmability of the IoT-edge-cloud continuum and to accommodate rapid change through software composition and elastic adaptation using appropriate reusable runtime components, techniques, and languages (which we collectively term Accelerators) to optimize resources and services. We particularly pinpoint elasticity as a driver for increased robustness, an ability we abbreviate as $mathrm{R}_{mathrm{V}}$E. $mathrm{R}_{mathrm{V}}$E Accelerators aim at enabling end-to-end programming, configuration, monitoring, and optimization in the IoT-edge-cloud continuum for emerging swarm applications and services, with the ability to coordinate domain-specific elasticity capabilities in a cross-layered manner. $mathrm{R}_{mathrm{V}}mathrm{E}$ Accelerators will enable developers to easily program adaptive edge-cloud functionality through different layers, while cloud and edge solution providers can coordinate policies across layers to support robust, resilient systems at low effort and cost for their customers.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125508907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting the Potential of the Edge-Cloud Continuum with Self-distributing Systems","authors":"Roberto Rodrigues Filho, L. Bittencourt, F. Costa","doi":"10.1109/UCC56403.2022.00046","DOIUrl":"https://doi.org/10.1109/UCC56403.2022.00046","url":null,"abstract":"The Edge-Cloud Continuum offers a wide range of adaptive deployment settings for modern applications. However, in order to exploit the full potential of the edge-cloud infrastructure and platforms, applications have to be carefully crafted to be stateless and self-contained in small services or functions, i.e., the opposite of the classic stateful monolithic applications. In this paper, we explore an alternative approach that allows stateful single applications to also exploit the full potential of the edge-cloud continuum. We explore the concept of Self-distributing Systems(SDS) as a general approach for code offloading and as an elastic application-level mechanism for performance scale-out on the edge-cloud continuum. Our preliminary results indicate that SDS enables enough flexibility for applications to fully explore the edge-cloud resource mixture. Particularly, we describe our state management strategies for stateful code mobility; explore SDS as a general mechanism to exploit horizontal scaling on the cloud; and examine SDS as a general code offloading mechanism to move code from edge to cloud, showing the scenarios where our approach enables applications to positively exploit the edgecloud continuum for better performance.","PeriodicalId":203244,"journal":{"name":"2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121147790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}