Edge-Enhanced QoS Aware Compression Learning for Sustainable Data Stream Analytics
Maryleen Uluaku Amaizu; Muhammad K. Ali; Ashiq Anjum; Lu Liu; Antonio Liotta; Omer Rana
IEEE Transactions on Sustainable Computing, vol. 8, no. 3, pp. 448-464, 2023. DOI: 10.1109/TSUSC.2023.3252039
Abstract
Existing cloud systems involve large volumes of data streams being sent to a centralised data centre for monitoring, storage and analytics. However, migrating all the data to the cloud is often not feasible due to cost, privacy and performance concerns. At the same time, Machine Learning (ML) algorithms typically require significant computational resources and therefore cannot be deployed directly on resource-constrained edge devices for learning and analytics. Edge-enhanced compressive offloading offers a sustainable alternative: data is compressed at the edge and offloaded to the cloud for further analysis, reducing bandwidth consumption and communication latency. This paper describes the design and implementation of a learning method for discovering the compression techniques that offer the best Quality of Service (QoS) for an application. The approach uses a novel modularisation strategy that maps features to models and classifies them across a range of QoS features. An automated QoS-aware orchestrator selects the best autoencoder model in real time for compressive offloading in edge-enhanced clouds as QoS requirements change, and includes diagnostic capabilities to search for the parameters that yield the best compression. A key novelty of this work is harnessing the capabilities of autoencoders for edge-enhanced compressive offloading through portable encodings, latent space splitting and fine-tuning of network weights. Because different combinations of features lead to different QoS models, the system can process a large number of user requests in a given time. The proposed hyperparameter search strategy (over the neural architecture space) reduces the computational cost of searching the entire space by up to 89%. When deployed on an edge-enhanced cloud using an Azure IoT testbed, the approach reduces data transfer costs by up to 70% and job completion time by 32%. It also eliminates the additional computational cost of decompression, thereby reducing processing cost by up to 30%.
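
To make the orchestration idea concrete, the following is a minimal, illustrative Python sketch of a QoS-aware selector that chooses among pre-trained autoencoder encoders based on latency and reconstruction-error constraints and offloads the resulting latent code. This is not the authors' implementation: the model names, profile numbers, QoS fields and toy linear encoders are all hypothetical, invented only to show the selection logic described in the abstract.

```python
# Illustrative sketch only: a hypothetical QoS-aware orchestrator that picks a
# pre-trained encoder for edge-side compressive offloading. All names and
# numbers below are assumptions for demonstration, not values from the paper.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np


@dataclass
class EncoderProfile:
    name: str
    compression_ratio: float      # input size / latent size
    encode_latency_ms: float      # measured on the target edge device
    reconstruction_error: float   # validation error of the paired decoder
    encode: Callable[[np.ndarray], np.ndarray]


@dataclass
class QoSRequirement:
    max_latency_ms: float
    max_reconstruction_error: float


def select_encoder(profiles: List[EncoderProfile],
                   qos: QoSRequirement) -> EncoderProfile:
    """Pick the feasible encoder with the highest compression ratio."""
    feasible = [p for p in profiles
                if p.encode_latency_ms <= qos.max_latency_ms
                and p.reconstruction_error <= qos.max_reconstruction_error]
    if not feasible:
        raise RuntimeError("No encoder satisfies the requested QoS")
    return max(feasible, key=lambda p: p.compression_ratio)


def offload(window: np.ndarray, profile: EncoderProfile) -> bytes:
    """Compress a data window at the edge and serialise the latent code."""
    latent = profile.encode(window)
    return latent.astype(np.float32).tobytes()


if __name__ == "__main__":
    # Toy linear "encoders" standing in for trained autoencoder encoders.
    rng = np.random.default_rng(0)
    profiles = [
        EncoderProfile("ae-small", 4.0, 2.0, 0.05,
                       encode=lambda x, W=rng.normal(size=(64, 16)): x @ W),
        EncoderProfile("ae-large", 8.0, 6.0, 0.02,
                       encode=lambda x, W=rng.normal(size=(64, 8)): x @ W),
    ]
    qos = QoSRequirement(max_latency_ms=5.0, max_reconstruction_error=0.1)
    chosen = select_encoder(profiles, qos)
    payload = offload(rng.normal(size=(1, 64)), chosen)
    print(chosen.name, len(payload), "bytes offloaded")
```

In this sketch the tighter latency bound rules out the larger encoder, so the orchestrator falls back to the smaller model, mirroring the kind of trade-off between compression quality and real-time QoS constraints that the paper's orchestrator is described as managing.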