{"title":"Intelligent Transaction Generation Control for Permissioned Blockchain-Based Services","authors":"Dongsun Kim;Sinwoong Yun;Sungho Lee;Jemin Lee;Dusit Niyato","doi":"10.1109/TSC.2025.3528318","DOIUrl":"10.1109/TSC.2025.3528318","url":null,"abstract":"Although permissioned blockchain technology has been proposed to ensure data integrity in distributed systems, its low throughput and high latency are recognized as major issues. In some applications, data that becomes available later than the allowed time can be useless, so we consider the effective throughput, defined as the average number of transactions per second committed within the required latencies. To maximize the effective throughput, we propose a novel intelligent transaction generation control (i-TGC) method that determines the transaction generation for each client. To improve performance in the dynamic environment of blockchain services based on real-time information, we employ reinforcement learning (RL) in the i-TGC algorithm. Our experimental results show that i-TGC outperforms the probabilistic transaction generation control (p-TGC), which generates transactions randomly with the optimal probability that maximizes the effective throughput. We also verify the performance of i-TGC in various environments with different block sizes, block generation timeouts, traffic patterns, and numbers of clients. i-TGC can thus help accelerate the adoption of the permissioned blockchain for latency-sensitive services.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"828-838"},"PeriodicalIF":5.5,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
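The metric at the heart of the abstract above is simple enough to sketch. A minimal illustration under stated assumptions (hypothetical function name and data, not the paper's implementation): effective throughput counts only the transactions committed within the required latency, divided by the observation window.

```python
# Hypothetical sketch of the "effective throughput" metric defined in the abstract:
# the average number of transactions per second that were committed within their
# required latency. Function name and data are illustrative, not from the paper.

def effective_throughput(commit_latencies, deadline, duration_s):
    """Count only transactions whose commit latency met the deadline,
    then average over the observation window."""
    committed_in_time = sum(1 for lat in commit_latencies if lat <= deadline)
    return committed_in_time / duration_s

# e.g. 5 transactions observed over 2 s, with a 1.0 s latency requirement:
# only 3 commits (0.4, 0.9, 0.7) meet the deadline -> 3 / 2 s
lats = [0.4, 0.9, 1.5, 0.7, 2.0]
print(effective_throughput(lats, deadline=1.0, duration_s=2.0))  # 1.5
```

Raw throughput would count all 5 commits (2.5 tx/s); the effective metric discards the two late ones, which is exactly what makes it the right objective for latency-sensitive services.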
{"title":"Federated Contrastive Learning for Cross-Domain Recommendation","authors":"Qingren Wang;Yuchuan Zhao;Yi Zhang;Yiwen Zhang;Shuiguang Deng;Yun Yang","doi":"10.1109/TSC.2025.3528325","DOIUrl":"10.1109/TSC.2025.3528325","url":null,"abstract":"Conventional cross-domain recommendation models, which centrally collect varieties of original data from users, often face the challenge that users are reluctant to provide their real feedback because of privacy concerns. Thus, federated learning has been incorporated into cross-domain recommendation: instead of collecting data centrally, it aggregates the parameters of local models trained on the user side to train a global recommendation model. However, the deviations between the global model and local ones, caused by users’ non-independent and identically distributed data, significantly challenge existing federated learning-based models in terms of alleviating data sparsity and cold-start problems. This article proposes a novel end-to-end federated contrastive learning-based model for cross-domain recommendation, namely <inline-formula><tex-math>${{Fed-CLR}}$</tex-math></inline-formula>. It first uses an inference model to characterize the interaction distributions of users in the source domain(s), then reconstructs the historical interactions of users in the target domain(s) with a generative model, and finally performs federated contrastive learning at the model level (both inner-model and inter-model) to reduce deviations between the global model and local ones. In particular, a constraint mechanism, namely <inline-formula><tex-math>${{Con-Mec}}$</tex-math></inline-formula>, is proposed to reinforce consistency at both the inner-model and inter-model levels. 
The experimental results on three real-world datasets not only show that <inline-formula><tex-math>${{Fed-CLR}}$</tex-math></inline-formula> outperforms its state-of-the-art peers, but also demonstrate that <inline-formula><tex-math>${{Fed-CLR}}$</tex-math></inline-formula> achieves a faster convergence speed than classical federated learning-based models.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"812-827"},"PeriodicalIF":5.5,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142986193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Availability and Reliability of Core Networks (4G/5G) From a Deployment Standpoint","authors":"Priyatosh Mandal","doi":"10.1109/TSC.2025.3528332","DOIUrl":"10.1109/TSC.2025.3528332","url":null,"abstract":"4G LTE core networks and service-based 5G core networks may be created as a set of virtual network functions (VNFs), i.e., as a network service (NS). The NS can be deployed on a cloud computing platform. A VNF is a virtual machine (VM) running specialized software. In this work, via mathematical modelling, we derive the NS availability considering the placement of core network nodes in a single virtual machine (SVM) as well as in multiple virtual machines (MVM). The availability analysis accounts for failures of the host node, the VMs, and the core network nodes. We also analyze NS reliability in terms of the placement of the VNFs of an NS in an SVM as well as in MVM. We then compare the availability and the reliability of SVM-based and MVM-based NS deployments. Comparison results show that an SVM based NS deployment can lead to a more than 14% gain in availability and a more than 100% gain in reliability with respect to an MVM based NS deployment.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"647-659"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
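The availability comparison in the abstract above rests on standard series-system reliability: a service chain is up only when every element is up, so element availabilities multiply and longer chains fare worse. A hedged sketch with illustrative numbers (the component availabilities are hypothetical, not taken from the paper):

```python
# Standard series-availability formula: A = product of component availabilities.
# The component counts and numbers below are illustrative only; the paper's model
# additionally distinguishes host, VM, and core-network-node failures.
from math import prod

def series_availability(component_availabilities):
    """Availability of a chain that is up only when all components are up."""
    return prod(component_availabilities)

# SVM-style: host, one VM, and the core-network software in a single chain.
a_svm = series_availability([0.999, 0.995, 0.99])
# MVM-style: the same host, but each of three VNFs on its own VM,
# so extra VM/software pairs enter the serial chain.
a_mvm = series_availability([0.999, 0.995, 0.99, 0.995, 0.99, 0.995, 0.99])
print(a_svm > a_mvm)  # True: fewer serial elements -> higher availability
```

This is why, qualitatively, an SVM deployment can out-score an MVM one: every additional VM in the chain contributes another factor below 1.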
{"title":"A Customized Genetic Algorithm for SLA-Aware Service Provisioning in Infrastructure-Less Vehicular Cloud Networks","authors":"Farhoud Jafari Kaleibar;Marc St-Hilaire;Masoud Barati","doi":"10.1109/TSC.2025.3528317","DOIUrl":"10.1109/TSC.2025.3528317","url":null,"abstract":"Vehicular Ad-hoc Networks (VANETs) and in-vehicle networks offer complementary perspectives on Intelligent Transportation Systems (ITS), enabling communication between vehicles and within individual vehicles, respectively. While VANETs focus on vehicle-to-vehicle communication, the growing demand for dynamic resource sharing and data processing across a fleet of vehicles highlights the need for Vehicular Cloud Networks (VCNs). VCNs, despite their lack of fixed infrastructure and the continuous mobility of vehicles, provide a promising solution for improving resource management and data sharing, making them critical for achieving efficient Service Level Agreements (SLAs) in infrastructure-less environments. This article addresses these challenges by employing a hierarchical clustering technique and proposing a novel mathematical formulation for resource provisioning in infrastructure-less vehicular clouds. The formulation considers diverse criteria, including provider and requester mobility, data volume, and service delay tolerance, to ensure SLA adherence. A customized genetic algorithm is used to solve the maximization problem, incorporating a grouping mechanism for efficient problem solving. 
Simulations using the NS2 network simulator and the IBM CPLEX optimization tool validate the feasibility of the proposed approach and demonstrate its superior performance compared to existing methods.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"499-512"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LogNotion: Highlighting Massive Logs to Assist Human Reading and Decision Making","authors":"Guojun Chu;Jingyu Wang;Tao Sun;Qi Qi;Haifeng Sun;Zirui Zhuang;Jianxin Liao","doi":"10.1109/TSC.2025.3528327","DOIUrl":"10.1109/TSC.2025.3528327","url":null,"abstract":"Massive logs contain crucial information about the working status of software systems, which contributes to anomaly detection and troubleshooting. For engineers, manually inspecting raw logs to determine the system's running status is a laborious task, so an automated log summarization tool can be helpful. However, due to the specificity of logs in terms of grammar, vocabulary, and semantics, existing natural language-based methods do not perform well on log analysis. To address these issues, we propose LogNotion, a general log summarization framework that highlights log messages to assist human reading and decision making. We first explore the role played by triplets in log analysis and propose a triplet extraction method based on sequence tagging and component alignment, which fully accounts for the specificity of logs. Then, we propose an unsupervised log summarization method to extract both regular and noteworthy information based on triplets. Comprehensive experiments on seven real-world log datasets show that LogNotion improves the average ROUGE-1 by 0.26, recall by 0.12, and compression ratio by 2.13% compared to state-of-the-art log summarization tools. 
Its helpfulness, readability, and generalizability are also verified through human evaluation and cross-dataset tests.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"940-953"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
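ROUGE-1, the headline metric in the LogNotion abstract above, is a standard unigram-overlap score. A minimal sketch of ROUGE-1 recall (the generic metric definition with a hypothetical log example, not LogNotion's code):

```python
# Generic ROUGE-1 recall: overlapping unigrams between a candidate summary and a
# reference, divided by the reference's unigram count. The log message below is
# a made-up example, not drawn from the paper's datasets.
from collections import Counter

def rouge1_recall(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each reference word counts at most as often as it
    # appears in the candidate.
    overlap = sum(min(cnt, cand[w]) for w, cnt in ref.items())
    return overlap / sum(ref.values())

reference = "disk quota exceeded on node a3"
summary = "disk quota exceeded"
print(rouge1_recall(summary, reference))  # 3 of 6 reference unigrams -> 0.5
```

An absolute ROUGE-1 gain of 0.26, as reported, is therefore substantial: it means roughly a quarter more of the reference summary's vocabulary is recovered.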
{"title":"Deep Reinforcement Learning for Mobility-Aware Digital Twin Migrations in Edge Computing","authors":"Yuncan Zhang;Luying Wang;Weifa Liang","doi":"10.1109/TSC.2025.3528331","DOIUrl":"10.1109/TSC.2025.3528331","url":null,"abstract":"The past decade witnessed explosive growth in the number of IoT devices (objects/suppliers), including portable mobile devices, autonomous vehicles, sensors, and intelligent appliances. To realize digital representations of objects, Digital Twins (DTs) are key enablers that provide real-time monitoring, behavior simulation, and predictive decisions for objects. Meanwhile, Mobile Edge Computing (MEC) has been envisioned as a promising paradigm to provide delay-sensitive services for mobile users (consumers) at the network edge, e.g., real-time healthcare, AR/VR, online gaming, and smart cities. In this paper, we study a novel DT migration problem for high-quality service provisioning in an MEC network with mobility of both suppliers and consumers over a finite time horizon, aiming to minimize the sum of the accumulative DT synchronization cost of all suppliers and the total service cost of all consumers requesting different DT services. To this end, we first show that the problem is NP-hard and formulate an integer linear programming solution to its offline version. We then develop a Deep Reinforcement Learning (DRL) algorithm for the DT migration problem that considers the system dynamics, the heterogeneity of resource consumption, the mobility traces of both suppliers and consumers, and the workloads of cloudlets. We finally evaluate the performance of the proposed algorithms through experimental simulations. 
Simulation results demonstrate that the proposed algorithms are promising.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"704-717"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Optimization Framework for Age of Information Minimization in UAV-Assisted MCS","authors":"Yuxin Liu;Qingyong Deng;Zhiwen Zeng;Anfeng Liu;Zhetao Li","doi":"10.1109/TSC.2025.3528339","DOIUrl":"10.1109/TSC.2025.3528339","url":null,"abstract":"UAV-enabled Mobile Crowdsensing (UMCS) has gained considerable attention recently, but it is challenging to meet the data collection needs of an entire city using only energy-limited UAVs. Furthermore, how to effectively minimize the Age-of-Information (AoI) while ensuring data quality has not been well addressed in previous studies. This paper therefore proposes a hybrid optimization framework for AoI minimization that recruits massive distributed workers as the main force for data collection, while the UAV acts as a collaborator that preferentially flies to the sensor nodes (SNs) that cannot establish connections with workers. To mitigate the potential security threats posed by dishonest workers in the MCS system, we first provide a Greedy-based Multi-worker Task Assignment (GMTA) strategy, aiming to assign more urgent data collection tasks to reliable workers under workload constraints. Then, we propose a Deep-Reinforcement-Learning-based Global AoI Minimization (DRL-GAM) strategy for UAV path planning that finds a set of optimal actions to minimize the global AoI. 
Based on a real dataset, our simulation experiments show that, compared with traditional strategies, our DRL-GAM strategy reduces the global AoI by an average of 6.49%<inline-formula><tex-math>$\sim$</tex-math></inline-formula>68.21% across various network sizes, and is more stable, with an average standard deviation only 51.75% of that of the other strategies.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"527-542"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
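The Age-of-Information objective in the abstract above follows the standard definition: a node's age at time t is t minus the generation time of its freshest collected update, and a global AoI averages (or sums) this over all nodes. A small illustrative sketch (hypothetical values and averaging choice, not the paper's simulation):

```python
# Standard AoI definition, sketched for illustration: each sensor node's age is
# the current time minus the generation time of its freshest collected update.
# Averaging over nodes as the "global AoI" is an assumption for this example.

def global_aoi(t_now, freshest_update_times):
    """Mean age across sensor nodes, given each node's freshest update time."""
    ages = [t_now - g for g in freshest_update_times]
    return sum(ages) / len(ages)

# Three SNs whose freshest updates were generated at t = 9.0, 6.0, and 8.0;
# at t = 10.0 their ages are 1.0, 4.0, and 2.0, so the mean age is 7/3.
print(global_aoi(10.0, [9.0, 6.0, 8.0]))  # ≈ 2.33
```

The framework's division of labor follows directly from this metric: whoever (worker or UAV) can visit a node sooner resets that node's age sooner, so routing the UAV to worker-unreachable nodes attacks exactly the terms that would otherwise grow without bound.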
{"title":"TF-DDRL: A Transformer-Enhanced Distributed DRL Technique for Scheduling IoT Applications in Edge and Cloud Computing Environments","authors":"Zhiyu Wang;Mohammad Goudarzi;Rajkumar Buyya","doi":"10.1109/TSC.2025.3528346","DOIUrl":"10.1109/TSC.2025.3528346","url":null,"abstract":"With the continuous increase of IoT applications, their effective scheduling in edge and cloud computing has become a critical challenge. The inherent dynamism and stochastic characteristics of edge and cloud computing, along with IoT applications, necessitate solutions that are highly adaptive. Currently, several centralized Deep Reinforcement Learning (DRL) techniques are adapted to address the scheduling problem. However, they require a large amount of experience and training time to reach a suitable solution. Moreover, many IoT applications contain multiple interdependent tasks, imposing additional constraints on the scheduling problem. To overcome these challenges, we propose a Transformer-enhanced Distributed DRL scheduling technique, called TF-DDRL, to adaptively schedule heterogeneous IoT applications. This technique follows the Actor-Critic architecture, scales efficiently to multiple distributed servers, and employs an off-policy correction method to stabilize the training process. In addition, Prioritized Experience Replay (PER) and Transformer techniques are introduced to reduce exploration costs and capture long-term dependencies for faster convergence. 
Extensive practical experiments show that TF-DDRL, compared to its counterparts, significantly reduces response time, energy consumption, monetary cost, and weighted cost by up to 60%, 51%, 56%, and 58%, respectively.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"1039-1053"},"PeriodicalIF":5.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142961594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MedShield: A Fast Cryptographic Framework for Private Multi-Service Medical Diagnosis","authors":"Fuyi Wang;Jinzhi Ouyang;Xiaoning Liu;Lei Pan;Leo Yu Zhang;Robin Doss","doi":"10.1109/TSC.2025.3526369","DOIUrl":"10.1109/TSC.2025.3526369","url":null,"abstract":"Substantial progress in privacy-preserving machine learning (PPML) has facilitated outsourced medical computer-aided diagnosis (MedCADx) services. However, existing PPML frameworks primarily concentrate on enhancing the efficiency of prediction services, without exploring diverse medical services such as medical segmentation. In this article, we propose <monospace>MedShield</monospace>, a pioneering cryptographic framework for diverse MedCADx services (i.e., multi-service, including medical imaging prediction and segmentation). Based on a client-server (two-party) setting, <monospace>MedShield</monospace> efficiently protects medical records and neural network models without full outsourcing. To execute multiple services securely and efficiently, our technical contributions include: 1) reducing the computational complexity of matrix multiplications in linear layers at the cost of only free additions/subtractions; 2) introducing a secure most-significant-bit protocol with crypto-friendly activations to improve the efficiency of non-linear layers; and 3) presenting a novel layer for upscaling low-resolution feature maps to support multi-service scenarios in practical MedCADx. We conduct a rigorous security analysis and extensive evaluations on benchmarks (MNIST and CIFAR-10) and real medical records (breast cancer, liver disease, COVID-19, and bladder cancer) for various services. 
Experimental results demonstrate that <monospace>MedShield</monospace> achieves up to <inline-formula><tex-math>$2.4\times$</tex-math></inline-formula>, <inline-formula><tex-math>$4.3\times$</tex-math></inline-formula>, and <inline-formula><tex-math>$2\times$</tex-math></inline-formula> speedups on MNIST, CIFAR-10, and the medical datasets, respectively, compared with prior work on prediction services. For segmentation services, <monospace>MedShield</monospace> preserves the precision of the unprotected version, showing a 1.23% accuracy improvement.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"954-968"},"PeriodicalIF":5.5,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142936236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Reinforcement Learning for Scheduling Applications in Serverless and Serverful Hybrid Computing Environments","authors":"Anupama Mampage;Shanika Karunasekera;Rajkumar Buyya","doi":"10.1109/TSC.2024.3520864","DOIUrl":"10.1109/TSC.2024.3520864","url":null,"abstract":"Serverless computing has recently gained popularity as a novel cloud execution model for applications. Businesses constantly try to leverage this new paradigm to add value to their revenue streams. The serverless ecosystem successfully accommodates many application domains. However, inherent properties such as cold-start delays and relatively high per-unit charges are a shortcoming for certain application workloads when compared to a traditional Virtual Machine (VM) based execution scenario. A few research works study how serverless computing could be used to mitigate the challenges of a VM-based cluster environment for certain applications. In contrast, this work proposes a generalized framework for determining which workloads are best suited to reap the benefits of a serverless computing environment. In essence, we present a hybrid scheduling solution that exploits the benefits of both serverless and VM-based serverful computing environments. Our framework leverages an actor-critic deep reinforcement learning architecture coupled with the proximal policy optimization technique to determine the best scheduling decision for workload executions. 
Extensive experiments conducted demonstrate the effectiveness of such a solution, in terms of user cost and application performance, with improvements of up to 44% and 11% respectively.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 2","pages":"718-728"},"PeriodicalIF":5.5,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142936237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}