Title: Cloud Live Streaming System Based on Auto-adaptive Overlay for Cyber Physical Infrastructure
Authors: Bogdan-Costel Mocanu, Vlad Mureean, M. Mocanu, V. Cristea
DOI: https://doi.org/10.1145/2962564.2962571
Abstract: According to Mark Zuckerberg's speech at the Samsung S7 launch in February 2016, the age of massive data streaming, and especially massive video streaming, is here. Around the year 2000, most people shared and searched almost exclusively text. They then became interested in sharing and searching images and, in later years, videos in different formats and resolutions. One of the emerging technologies of this decade is Virtual Reality (VR), through which people can share resources in a more exciting and enjoyable manner. A decade ago people only read about how to do things; now they watch it being done in videos, and the expectation is that, in the not so distant future, they will be able to visualize and experience it through the power of VR. New technologies, however, come with new challenges. Video streaming, especially for VR, generates a great amount of data, and the techniques used until now need to evolve or make way for better ones. Centralized approaches to big data live streaming are no longer appropriate; Peer-to-Peer networks are more suitable due to their decentralized nature and auto-adaptive properties. The aim of this paper is to analyze and evaluate the performance of the SPIDER Peer-to-Peer overlay in the context of live video streaming for two Cloud use cases. The first scenario, CyberWater, aims to create an e-platform for sustainable water resources with a strong focus on pollution phenomena. The second scenario, ClueFarm, is a Cloud service-based system for quality business development in the farming sector. The experimental results presented in this paper focus on the amount of bandwidth needed in both test scenarios and emphasize the advantages of the SPIDER Peer-to-Peer overlay.

Title: Cloud Elasticity: Going Beyond Demand as User Load
Authors: C. Chilipirea, Alexandru Constantin, D. Popa, Octavian Crintea, C. Dobre
DOI: https://doi.org/10.1145/2962564.2962570
Abstract: Cloud computing systems have become not only popular but extensively used; they are supported and exploited by both industry and academia. Cloud providers have diversified, and so has the software offered by their systems. Infrastructure as a Service (IaaS) clouds now cover use cases ranging from a single virtual machine, such as a personal server, to specialized high-performance or machine-learning engines. This popularity stems from the low-cost, low-risk option of renting computing resources instead of buying them in a large, one-time investment. Furthermore, clouds offer their clients elasticity, the most relevant feature of cloud computing: the ability to easily change the number of rented resources in a live environment, which allows the entire system to handle differences in load. Most cloud clients serve web applications or services to third parties. In these cases, load differences can be correlated with the number of users of the service, and elasticity is used to handle differences in what is called user load. Most of the scientific literature approaches elasticity by looking solely at user load. To put it concretely, the majority of cloud frameworks in use today work as follows: they start a number of worker nodes and assign tasks to them for execution; the number of workers is adjusted only when the user load changes, if at all. In this paper, we propose an alternative approach in which the number of workers depends on the actual requirements coming from the different execution steps of an application. We show that such an approach is feasible for several workflows from different fields and that it can bring significant benefits in execution time and cost.
{"title":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","authors":"Florin Pop, R. Prodan","doi":"10.1145/2962564","DOIUrl":"https://doi.org/10.1145/2962564","url":null,"abstract":"","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121908985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Gossip-Based Dynamic Virtual Machine Consolidation Strategy for Large-Scale Cloud Data Centers","authors":"S. Masoumzadeh, H. Hlavacs","doi":"10.1145/2962564.2962565","DOIUrl":"https://doi.org/10.1145/2962564.2962565","url":null,"abstract":"Dynamic virtual machine consolidation strategies refer to a number of resource management algorithms aiming at finding the right balance between energy consumption and SLA violations by using live migration techniques in virtualized cloud data centers. Most strategies found in the literature typically focus on centralized approaches, with a single management node responsible for VM placement. These approaches suffer from poor scalability, as the management node may become a performance bottleneck if the number of physical and virtual machines grows. In this paper we propose a fully decentralized dynamic virtual machine consolidation strategy on top of an unstructured P2P network of physical host nodes and investigate the performance of the strategy in terms of energy consumption, average CPU utilization, performance degradation due to overloading, performance degradation due to migration and total number of sleep nodes inside data center. The experimental results show that the proposed P2P strategy can achieve a global efficiency in terms of energy and performance very close to a centralized approach while assuring scalability due to increasing the number of hosts in data center.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129846156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of Approximate Causal Consistency for Partially Replicated Systems","authors":"T. Hsu, A. Kshemkalyani","doi":"10.1145/2962564.2962572","DOIUrl":"https://doi.org/10.1145/2962564.2962572","url":null,"abstract":"Causal consistency is one of the widely used consistency models in wide-area replicated systems due to highly scalable semantics. Partial replication is a replication mechanism that emphasizes a better network capacity utilization. However, it has a challenge of higher meta-data overhead and processing complexity in communication. Algorithm Approx-Opt-Track has been proposed to reduce meta-data size, however, by risking that causal consistency might be violated. In an effort to bridge this gap and reconcile the trade-off between them, we present the analytic data to show the performance of Approx-Opt-Track. We also give simulation results to show the potential benefits of Approx-Opt-Track, under almost the same guarantees as causal consistency, at a smaller cost. The results indicate that partial replication is a potentially viable alternative to full replication for implementing causal consistency.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"78 s346","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120835236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computation Offloading from Mobile Devices: Can Edge Devices Perform Better Than the Cloud?","authors":"A. Bhattacharya, Pradipta De","doi":"10.1145/2962564.2962569","DOIUrl":"https://doi.org/10.1145/2962564.2962569","url":null,"abstract":"Mobile devices like smartphones can augment their low-power processors by offloading portions of mobile applications to cloud servers. However, offloading to cloud data centers has a high network latency. To mitigate the problem of network latency, recently offloading to computing resources lying within the user's premises, such as network routers, tablets or laptop has been proposed. In this paper, we determine the devices whose processors have sufficient power to act as servers for computation offloading. We perform trace-driven simulation of SPECjvm2008 benchmarks to study the performance using different hardware. Our simulation shows that offloading to current state-of-the-art processors of user devices can improve performance of mobile applications. We find that offloading to user's own laptop reduces finish time of benchmark applications by 10%, compared to offloading to a commercial cloud server.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134549189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I2oT: Inexactness in IoT","authors":"A. Banerjee, H. Paul, A. Mukherjee","doi":"10.1145/2962564.2962567","DOIUrl":"https://doi.org/10.1145/2962564.2962567","url":null,"abstract":"Recent research on inexact computing shows promising results for improved energy utilization for resource hungry applications across different layers of the execution stack. The general philosophy of inexact computing is to trade-off correctness within acceptable limits with the premise of improved energy utilization. In this paper, we explore this philosophy in the context of a heterogeneous Internet-of-Things (IoT) architecture for application execution. We consider an application workflow, comprising of a set of methods with their possible inexact lightweight variants, a deadline for completion, and a multi-tiered IoT compute architecture (e.g. mobile device, gateway, cloud, etc.). Our methodology produces a time-optimized execution solution that assigns each method, with an appropriate variant (the exact one or any of its inexact realizations), to an appropriate computing layer such that the deadline is met with quality as best as possible. We present experimental results to demonstrate the efficacy of our proposal on two real-life case studies.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129374738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact on the Performance of Co-running Virtual Machines in a Virtualized Environment","authors":"G. Torres, Chen Liu","doi":"10.1145/2962564.2962573","DOIUrl":"https://doi.org/10.1145/2962564.2962573","url":null,"abstract":"The success of cloud computing technologies heavily depends on the underlying hardware as well as the system software support for virtualization. As hardware resources become more abundant with each technology generation, the complexity of managing the resources of computing systems has increased dramatically. Past research has demonstrated that contention for shared resources in modern multi-core multithreaded microprocessors (MMMP) can lead to poor and unpredictable performance. In this paper we conduct a performance degradation study targeting virtualized environment. Firstly, we present our findings of the possible impact on the performance of virtual machines (VMs) when managed by the default Linux scheduler as regular host processes. Secondly, we study how the performance of virtual machines can be affected by different ways of co-scheduling at the host level. Finally, we conduct a correlation study in which we strive to determine which hardware event(s) can be used to identify performance degradation of the VMs and the applications running within. Our experimental results show that if not managed carefully, the performance degradation of individual VMs can be as high as 135%. We believe that low-level hardware information collected at runtime can be used to assist the host scheduler in managing co-running virtual machines in order to alleviate contention for resources, therefore reducing performance degradation of individual VMs as well as improving the overall system throughput.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"56 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132640641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Internet of Things Data Management in the Cloud for Bluetooth Low Energy (BLE) Devices
Authors: Theodoros Soultanopoulos, Stelios Sotiriadis, E. Petrakis, C. Amza
DOI: https://doi.org/10.1145/2962564.2962568
Abstract: The use of wearable sensors and their connectivity to the Internet offers significant benefits for storing sensing data that can be utilized intelligently in multi-purpose applications, such as monitoring in the healthcare domain. This work presents an Internet of Things (IoT) gateway service that takes advantage of modern mobile devices and their capability to communicate with wearable Bluetooth Low Energy (BLE) sensors, so that data can be forwarded to the cloud on the fly and in real time. The service transforms a mobile platform (such as a smartphone) into a gateway, allowing continuous and fast communication of data that is forwarded from the device to the cloud on demand or automatically for automated decision making. Its features include (a) an internal processing mechanism for the BLE sensor signals that defines the way data is sent to the cloud, (b) dynamic behavior, with the ability to recognize the properties of new BLE sensors by easily adapting the data model according to a dynamic schema, and (c) universal support for BLE devices, which are registered automatically and monitored on the fly, while historical data is kept and can be integrated into meaningful business intelligence. Building upon principles of service-oriented design, the service takes full advantage of cloud services for processing the potentially big data streams produced by an ever-increasing number of users and sensors. The contribution of this work lies in the IoT data transmission time, which averages 128 milliseconds; in the experimental section we discuss why this is sufficiently low for real-time data.
{"title":"Modelling the Scalability of Real-Time Online Interactive Applications on Clouds","authors":"Dominik Meiländer, S. Gorlatch","doi":"10.1145/2962564.2962566","DOIUrl":"https://doi.org/10.1145/2962564.2962566","url":null,"abstract":"We address the scalability of Real-Time Online Interactive Applications (ROIA) on Clouds. Popular examples of ROIA include, e.g., multi-player online computer games, simulation-based e-learning, and training in real-time virtual environments. Cloud computing allows to combine ROIA's high demands on QoE (Quality of Experience) with the requirement of efficient and economic utilization of computation and network resources. We propose a generic scalability model for ROIA on Clouds that monitors the application performance at runtime and predicts the load-balancing decisions: by weighting the potential benefits of particular load-balancing actions against the time and resources overhead of them, our model recommends, whether and how often to redistribute workload or add/remove Cloud resources when the number of users changes. We describe how the scalability is modelled w.r.t. to two kinds of resources -- computation (CPU) and communication (network) -- and how we combine these models together. We experimentally evaluate the quality of our combined model using a challenging multi-player shooter game as a use case.","PeriodicalId":235870,"journal":{"name":"Proceedings of the Third International Workshop on Adaptive Resource Management and Scheduling for Cloud Computing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121636635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}