{"title":"PPEC: A Privacy-Preserving, Cost-Effective Incremental Density Peak Clustering Analysis on Encrypted Outsourced Data","authors":"Haomiao Yang;ZiKang Ding;Ruiheng Lu;Kunlan Xiang;Hongwei Li;Dakui Wu","doi":"10.1109/TCC.2025.3541749","DOIUrl":"https://doi.org/10.1109/TCC.2025.3541749","url":null,"abstract":"Call detail records (CDRs) provide valuable insights into user behavior, which are instrumental for telecom companies in optimizing network coverage and service quality. However, while cloud computing facilitates clustering analysis on a vast scale of CDR data, it introduces privacy risks. The challenge lies in striking a balance between efficiency, security, and cost-effectiveness in privacy-preserving algorithms. To tackle this issue, we propose a privacy-preserving and cost-effective incremental density peak clustering scheme. Our approach leverages homomorphic encryption and order-preserving encryption to enable direct computations and clustering on encrypted data. Moreover, it employs reaching definition analysis to optimize the execution flow of static tasks, pinpointing the optimal junctures for transitioning between the two types of encryption to reduce communication overhead. Furthermore, our scheme utilizes a game theory-based verification strategy to ascertain the accuracy of the results. This methodology can be effectively deployed on the Ethereum blockchain via smart contracts. A comprehensive security analysis confirms that our scheme upholds both privacy and data integrity. 
Experimental evaluations substantiate the clustering accuracy, communication load, and computational efficiency of our scheme, thereby validating its viability in real-world applications.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"485-497"},"PeriodicalIF":5.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GHPFL: Advancing Personalized Edge-Based Learning Through Optimized Bandwidth Utilization","authors":"Kaiwei Mo;Wei Lin;Jiaxun Lu;Chun Jason Xue;Yunfeng Shao;Hong Xu","doi":"10.1109/TCC.2025.3540023","DOIUrl":"https://doi.org/10.1109/TCC.2025.3540023","url":null,"abstract":"Federated learning (FL) is increasingly adopted to combine knowledge from clients in training without revealing their private data. In order to improve the performance of different participants, personalized FL has recently been proposed. However, considering the non-independent and identically distributed (non-IID) data and limited bandwidth at clients, the model performance could be compromised. In reality, clients near each other often tend to have similar data distributions. In this work, we train the personalized edge-based model in the client-edge-server FL. While considering the differences in data distribution, we fully utilize the limited bandwidth resources. To make training efficient and accurate at the same time, An intuitive idea is to learn as much useful knowledge as possible from other edges and reduce the accuracy loss incurred by non-IID data. Therefore, we devise Grouping Hierarchical Personalized Federated Learning (GHPFL). In this framework, each edge establishes physical connections with multiple clients, while the server physically connects with edges. It clusters edges into groups and establishes client-edge logical connections for synchronization. This is based on data similarities that the nodes actively identify, as well as the underlying physical topology. 
We perform a large-scale evaluation to demonstrate GHPFL’s benefits over other schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"473-484"},"PeriodicalIF":5.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144232043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cache Allocation in Multi-Tenant Edge Computing: An Online Model-Based Reinforcement Learning Approach","authors":"Ayoub Ben-Ameur;Andrea Araldo;Tijani Chahed;György Dán","doi":"10.1109/TCC.2025.3538158","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538158","url":null,"abstract":"We consider a Network Operator (NO) that owns Edge Computing (EC) resources, virtualizes them and lets third party Service Providers (SPs) run their services, using the allocated slice of resources. We focus on one specific resource, i.e., cache space, and on the problem of how to allocate it among several SPs in order to minimize the backhaul traffic. Due to confidentiality guarantees, the NO cannot observe the nature of the traffic of SPs, which is encrypted. Allocation decisions are thus challenging, since they must be taken solely based on observed monitoring information. Another challenge is that not all the traffic is cacheable. We propose a data-driven cache allocation strategy, based on Reinforcement Learning (RL). Unlike most RL applications, in which the decision policy is learned offline on a simulator, we assume no previous knowledge is available to build such a simulator. We thus apply RL in an <italic>online</i> fashion, i.e., the model and the policy are learned by directly perturbing and monitoring the actual system. Since perturbations generate spurious traffic, we thus need to limit perturbations. This requires learning to be extremely efficient. To this aim, we devise a strategy that learns an approximation of the cost function, while interacting with the system. We then use such an approximation in a Model-Based RL (MB-RL) to speed up convergence. We prove analytically that our strategy brings cache allocation boundedly close to the optimum and stably remains in such an allocation. We show in simulations that such convergence is obtained within few minutes. 
We also study its fairness, its sensitivity to several scenario characteristics and compare it with a method from the state-of-the-art.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 2","pages":"459-472"},"PeriodicalIF":5.3,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cost-Aware Operator Migration Approach for Distributed Stream Processing System","authors":"Jiawei Tan;Zhuo Tang;Wentong Cai;Wen Jun Tan;Xiong Xiao;Jiapeng Zhang;Yi Gao;Kenli Li","doi":"10.1109/TCC.2025.3538512","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538512","url":null,"abstract":"Stream processing is integral to edge computing due to its low-latency attributes. Nevertheless, variability in user group sizes and disparate computing capabilities of edge devices necessitate frequent operator migrations within the stream. Moreover, intricate dependencies among stream operators often obscure the detection of potential bottleneck operators until an identified bottleneck is migrated in the stream. To address this, we propose a Cost-Aware Operator Migration (CAOM) scheme. The CAOM scheme incorporates a bottleneck operator detection mechanism that directly identifies all bottleneck operators based on task running metrics. This approach avoids multiple consecutive operator migrations in complex tasks, reducing the number of task interruptions caused by operator migration. Moreover, CAOM takes into account the temporal variance in operator migration costs. By factoring in the fluctuating data generation rate from data sources at different time intervals, CAOM selects the optimal start time for operator migration to minimize the amount of accumulated data during task interruptions. Finally, we implemented CAOM on Apache Flink and evaluated its performance using the WordCount and Nexmark applications. 
Our experiments show that CAOM effectively reduces the number of necessary operator migrations in tasks with complex topologies and decreases the latency overhead associated with operator migration compared to state-of-the-art schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"441-454"},"PeriodicalIF":5.3,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Computation Offloading and Resource Allocation in Mobile-Edge Cloud Computing: A Two-Layer Game Approach","authors":"Zhenli He;Ying Guo;Xiaolong Zhai;Mingxiong Zhao;Wei Zhou;Keqin Li","doi":"10.1109/TCC.2025.3538090","DOIUrl":"https://doi.org/10.1109/TCC.2025.3538090","url":null,"abstract":"Mobile-Edge Cloud Computing (MECC) plays a crucial role in balancing low-latency services at the edge with the computational capabilities of cloud data centers (DCs). However, many existing studies focus on single-provider settings or limit their analysis to interactions between mobile devices (MDs) and edge servers (ESs), often overlooking the competition that occurs among ESs from different providers. This article introduces an innovative two-layer game framework that captures independent self-interested competition among MDs and ESs, providing a more accurate reflection of multi-vendor environments. Additionally, the framework explores the influence of cloud-edge collaboration on ES competition, offering new insights into these dynamics. The proposed model extends previous research by developing algorithms that optimize task offloading and resource allocation strategies for both MDs and ESs, ensuring the convergence to Nash equilibrium in both layers. 
Simulation results demonstrate the potential of the framework to improve resource efficiency and system responsiveness in multi-provider MECC environments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"411-428"},"PeriodicalIF":5.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developments on the “Machine Learning as a Service for High Energy Physics” Framework and Related Cloud Native Solution","authors":"Luca Giommi;Daniele Spiga;Mattia Paladino;Valentin Kuznetsov;Daniele Bonacorsi","doi":"10.1109/TCC.2025.3535793","DOIUrl":"https://doi.org/10.1109/TCC.2025.3535793","url":null,"abstract":"Machine Learning (ML) techniques have been successfully used in many areas of High Energy Physics (HEP) and will play a significant role in the success of upcoming High-Luminosity Large Hadron Collider (HL-LHC) program at CERN. An unprecedented amount of data at the exascale will be collected by LHC experiments in the next decade, and this effort will require novel approaches to train and use ML models. The work presented in this paper is focused on the developments of a ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTPs calls. These pipelines are executed by using MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. In particular, new features implemented on the framework will be presented as well as updates on the architecture of an existing prototype of the MLaaS4HEP cloud service will be provided. 
This solution includes two OAuth2 proxy servers as authentication/authorization layer, a MLaaS4HEP server, an XRootD proxy server for enabling access to remote ROOT data, and the TensorFlow as a Service (TFaaS) service in charge of the inference phase.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"429-440"},"PeriodicalIF":5.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verifiable Encrypted Image Retrieval With Reversible Data Hiding in Cloud Environment","authors":"Mingyue Li;Yuting Zhu;Ruizhong Du;Chunfu Jia","doi":"10.1109/TCC.2025.3535937","DOIUrl":"https://doi.org/10.1109/TCC.2025.3535937","url":null,"abstract":"With growing numbers of users outsourcing images to cloud servers, privacy-preserving content-based image retrieval (CBIR) is widely studied. However, existing privacy-preserving CBIR schemes have limitations in terms of low search accuracy and efficiency due to the use of unreasonable index structures or retrieval methods. Meanwhile, existing result verification schemes do not consider the privacy of verification information. To address these problems, we propose a new secure verification encrypted image retrieval scheme. Specifically, we design an additional homomorphic bitmap index structure by using a pre-trained CNN model with modified fully connected layers to extract image feature vectors and organize them into a bitmap. It makes the extracted features more representative and robust compared to manually designed features, and only performs vector addition during the search process, improving search efficiency and accuracy. Moreover, we design a reversible data hiding (RDH) technique with color images, which embeds the verification information into the least significant bits of the encrypted image pixels to improve the security of the verification information. 
Finally, we analyze the security of our scheme against chosen-plaintext attacks (CPA) in the security analysis and demonstrate the effectiveness of our scheme on two real-world datasets (i.e., COCO and Flickr-25 k) through experiments.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"397-410"},"PeriodicalIF":5.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PiCoP: Service Mesh for Sharing Microservices in Multiple Environments Using Protocol-Independent Context Propagation","authors":"Hiroya Onoe;Daisuke Kotani;Yasuo Okabe","doi":"10.1109/TCC.2025.3531954","DOIUrl":"https://doi.org/10.1109/TCC.2025.3531954","url":null,"abstract":"Continuous integration and continuous delivery require many production-like environments in a cluster for testing, staging, debugging, and previewing. In applications built on microservice architecture, sharing common microservices in multiple environments is an effective way to reduce resource consumption. Previous methods extend application layer protocols like HTTP and gRPC to propagate contexts including environment identifiers and to route requests. However, microservices also use other protocols such as MySQL, Redis, Memcached, and AMQP, and extending each protocol requires lots of effort to implement the extensions. This paper proposes PiCoP, a framework to share microservices in multiple environments by propagating contexts and routing requests independently of application layer protocols. PiCoP provides a protocol that propagates contexts by appending them to the front of each TCP byte stream and constructs a service mesh that uses the protocol to route requests. We design the protocol to make it easy to instrument into a system. 
We demonstrate that PiCoP can reduce resource usage and that it applies to a real-world application, enabling the sharing of microservices in multiple environments using any application layer protocol.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"383-396"},"PeriodicalIF":5.3,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Adaptive Aggregation and Resource Allocation for Hierarchical Federated Learning Systems Based on Edge-Cloud Collaboration","authors":"Yi Su;Wenhao Fan;Qingcheng Meng;Penghui Chen;Yuan'an Liu","doi":"10.1109/TCC.2025.3530681","DOIUrl":"https://doi.org/10.1109/TCC.2025.3530681","url":null,"abstract":"Hierarchical federated learning shows excellent potential for communication-computation trade-offs and reliable data privacy protection by introducing edge-cloud collaboration. Considering non-independent and identically distributed data distribution among devices and edges, this article aims to minimize the final loss function under time and energy budget constraints by optimizing the aggregation frequency and resource allocation jointly. Although there is no closed-form expression relating the final loss function to optimization variables, we divide the hierarchical federated learning process into multiple cloud intervals and analyze the convergence bound for each cloud interval. Then, we transform the initial problem into one that can be adaptively optimized in each cloud interval. We propose an adaptive hierarchical federated learning process, termed as AHFLP, where we determine edge and cloud aggregation frequency for each cloud interval based on estimated parameters, and then the CPU frequency of devices and wireless channel bandwidth allocation can be optimized in each edge. 
Simulations are conducted under different models, datasets and data distributions, and the results demonstrate the superiority of our proposed AHFLP compared with existing schemes.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"369-382"},"PeriodicalIF":5.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy-Aware Offloading of Containerized Tasks in Cloud Native V2X Networks","authors":"Estela Carmona-Cejudo;Francesco Iadanza","doi":"10.1109/TCC.2025.3529245","DOIUrl":"https://doi.org/10.1109/TCC.2025.3529245","url":null,"abstract":"In cloud-native environments, executing vehicle-to-everything (V2X) tasks in edge nodes close to users significantly reduces service end-to-end latency. Containerization further reduces resource and time consumption, and, subsequently, application latency. Since edge nodes are typically resource and energy-constrained, optimizing offloading decisions and managing edge energy consumption is crucial. However, the offloading of containerized tasks has not been thoroughly explored from a practical implementation perspective. This paper proposes an optimization framework for energy-aware offloading of V2X tasks implemented as Kubernetes pods. A weighted utility function is derived based on cumulative pod response time, and an edge-to-cloud offloading decision algorithm (ECODA) is proposed. The system's energy cost model is derived, and a closed-loop repeated reward-based mechanism for CPU adjustment is presented. An energy-aware (EA)-ECODA is proposed to solve the offloading optimization problem while adjusting CPU usage according to energy considerations. Simulations show that ECODA and EA-ECODA outperform first-in, first-served (FIFS) and EA-FIFS in terms of utility, average pod response time, and resource usage, with low computational complexity. Additionally, a real testbed evaluation of a vulnerable road user application demonstrates that ECODA outperforms Kubernetes vertical scaling in terms of service-level delay. 
Moreover, EA-ECODA significantly improves energy usage utility.","PeriodicalId":13202,"journal":{"name":"IEEE Transactions on Cloud Computing","volume":"13 1","pages":"336-350"},"PeriodicalIF":5.3,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}