Business Process Remaining Time Prediction Based on Incremental Event Logs
Na Guo; Cong Liu; Qi Mo; Jian Cao; Chun Ouyang; Xixi Lu; Qingtian Zeng
DOI: 10.1109/TSC.2025.3562338
IEEE Transactions on Services Computing, vol. 18, no. 3, pp. 1308-1320. Published 2025-04-28.
Abstract: Predictive Process Monitoring (PPM) aims to predict the future state of running process instances so that timely interventions can mitigate potential risks. As one of the most fundamental tasks in PPM, process remaining time prediction focuses on preventing timeouts. While various deep learning-based approaches have been developed for this purpose, they often rely on pre-established static prediction models and struggle to maintain accurate predictions when the process undergoes dynamic changes, such as expanding sales channels. To tackle this challenge, this paper proposes an incremental process remaining time prediction framework that continuously updates the prediction model based on an incremental event log. Specifically, a feature selection strategy is first introduced to extract effective features from event logs; leveraging these features significantly improves prediction quality by capturing changes in process information. Then, three incremental log-based updating mechanisms (period-based, quantity-based, and concept-drift-based updating), together with a reconstruction strategy, are proposed to dynamically adjust the prediction model in response to business changes. Finally, LSTM, Transformer, and Auto-encoder models are adapted and integrated into the proposed framework. The approach has been implemented and publicly released. An experimental evaluation on nine real-life event logs demonstrates that the proposed framework and its three instantiations (LSTM-based, Transformer-based, and Auto-encoder-based) outperform state-of-the-art techniques in prediction accuracy.
Electricity Cost Minimization for Multi-Workflow Allocation in Geo-Distributed Data Centers
Shuang Wang; He Zhang; Tianxing Wu; Yueyou Zhang; Wei Emma Zhang; Quan Z. Sheng
DOI: 10.1109/TSC.2025.3562325
IEEE Transactions on Services Computing, vol. 18, no. 3, pp. 1397-1411. Published 2025-04-22.
Abstract: Worldwide, Geo-distributed Data Centers (GDCs) provide computing and storage services for massive workflow applications, resulting in high electricity costs that vary with geographical location and time. Reducing electricity costs while satisfying the deadline constraints of workflow applications is therefore an important problem in GDCs; the cost is determined by server execution time, power, and electricity price. Determining the completion time of workflows under different server frequencies can be challenging, especially with heterogeneous computing resources in GDCs. Moreover, electricity prices differ across geographical locations and may change dynamically. To address these challenges, we develop a geo-distributed system architecture and propose an Electricity Cost aware Multiple Workflows Scheduling algorithm (ECMWS) for GDC servers with fixed frequency and power. ECMWS comprises four stages, namely workflow sequencing, deadline partitioning, task sequencing, and resource allocation, in which two graph embedding models and a policy network are constructed to solve the underlying Markov Decision Process (MDP). After statistically calibrating parameters and algorithm components over a comprehensive set of workflow instances, the proposed algorithm is compared with state-of-the-art methods over two types of workflow instances. The experimental results demonstrate that it significantly outperforms the other algorithms, achieving an improvement of over 15% while maintaining acceptable computational time.
{"title":"An End-to-End Deep Learning QoS Prediction Model Based on Temporal Context and Feature Fusion","authors":"Peiyun Zhang;Jiajun Fan;Yutong Chen;Wenjun Huang;Haibin Zhu;Qinglin Zhao","doi":"10.1109/TSC.2025.3562324","DOIUrl":"10.1109/TSC.2025.3562324","url":null,"abstract":"Existing end-to-end quality of service (QoS) prediction methods based on deep learning often use one-hot encodings as features, which are input into neural networks. It is difficult for the networks to learn the information that is conducive to prediction. Aiming at the above problem, an end-to-end deep learning QoS prediction model based on a temporal context and feature fusion is proposed. In the proposed model, three blocks are designed for QoS prediction. Firstly, a user-service encoding conversion block is designed to convert the one-hot encodings of users and services into the latent features of users and services, which can make full use of the data in sparse matrices. Then a time feature extraction block is designed to extract time features based on the time-varying characteristics of QoS values. Finally, the time features are fused with the latent features of users and services to predict QoS values. The experimental results show that on existing datasets, the proposed model has better prediction accuracy than other advanced methods in response time and throughput.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 3","pages":"1232-1244"},"PeriodicalIF":5.5,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143858030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HGDRec:Next POI Recommendation Based on Hypergraph Neural Network and Diffusion Model","authors":"Yinchen Pan;Jun Zeng;Ziwei Wang;Haoran Tang;Junhao Wen;Min Gao","doi":"10.1109/TSC.2025.3562352","DOIUrl":"10.1109/TSC.2025.3562352","url":null,"abstract":"In recent years, next Point-of-Interest (POI) recommendation is essential for many location-based services, aiming to predict the most likely POI a user will visit next. Current research employs graph-based and sequential methods, which have significantly improved performance. However, there are still limitations: numerous methods overlook the fact that user intent is constantly changing and complex. Furthermore, prior studies have seldom addressed spatiotemporal correlations while considering differences in user behavior patterns. Additionally, implicit feedback contains noise. To address these issues, we propose a recommender model named HGDRec for the next POI recommendation. Specifically, we introduce an approach for extracting trajectory intent by integrating multi-dimensional trajectory representations to achieve a multi-level understanding of user trajectories. Then, by analyzing users’ long trajectories, we construct global hypergraph structures across spatiotemporal regions to comprehensively capture user behavior patterns. Additionally, to further optimize trajectory intent representation, we employ a feature optimization method based on the improved diffusion model. Extensive experiments on three real-world datasets validate the superiority of HGDRec over the state-of-the-art methods.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 3","pages":"1445-1458"},"PeriodicalIF":5.5,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Battery Swapping Tour Optimization Problem in Dockless Electric Bike Sharing Service Systems With Distance-Aware User Incentives","authors":"Chun-An Yang, Shih-Chieh Chen, Jian-Jhih Kuo, Yi-Hsuan Peng, Yu-Wen Chen, Ming-Jer Tsai","doi":"10.1109/tsc.2025.3562337","DOIUrl":"https://doi.org/10.1109/tsc.2025.3562337","url":null,"abstract":"","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"28 1","pages":""},"PeriodicalIF":8.1,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Hybrid Architectures for Big Data Analytics: Insights From Spark-MPI Integration
Mengbing Zhou; Qiuyan Li; Mingyuan Cai; Chengzhong Xu; Yang Wang
DOI: 10.1109/TSC.2025.3562342
IEEE Transactions on Services Computing, vol. 18, no. 3, pp. 1852-1868. Published 2025-04-18.
Abstract: High-Performance Data Analytics (HPDA) combines high-performance computing (HPC) with data analytics to uncover patterns and insights in dual-intensive applications that are both data-intensive and compute-intensive. Traditional Big Data frameworks and HPC technologies often struggle to address these demands independently, prompting researchers to explore their integration. Spark, known for its efficient in-memory computing with RDDs, and MPI, a foundational standard in HPC, are prominent candidates for such integration. This survey explores the integration of Spark and MPI for HPDA, highlighting their potential for unified data processing and computation. We first classify application workloads and review the characteristics and limitations of traditional frameworks. Then, we analyze the challenges and requirements of integrated architectures, focusing on the specific implementations of typical middleware-level architectures. Through comparative analysis, we highlight their advantages and limitations. Finally, we present application examples, outline key challenges and future research directions, and briefly discuss progress in integration approaches for other technology combinations.
{"title":"Integrating Deep Spiking Q-Network Into Hypergame-Theoretic Deceptive Defense for Mitigating Malware Propagation in Edge Intelligence-Enabled IoT Systems","authors":"Yizhou Shen;Carlton Shepherd;Chuadhry Mujeeb Ahmed;Shigen Shen;Shui Yu","doi":"10.1109/TSC.2025.3562355","DOIUrl":"10.1109/TSC.2025.3562355","url":null,"abstract":"Internet of Things (IoT) systems are susceptible to compromise due to malware propagation, leading to the data breach and information theft. In this paper, we propose a proactive deception-oriented hypergame-theoretic malware propagation-mitigation (DHMPM) model between IoT nodes and edge devices under asymmetric information in edge intelligence (EI)-enabled IoT systems. We then explore malware-propagated deceptive defense strategies based on deep reinforcement learning. Specifically, IoT nodes and edge devices continually adjust their strategies based on obtained utilities under beliefs perceived by uncertainties from the game environment and system dynamics. Built upon the proposed game DHMPM, we next apply spiking neural networks (SNNs) into deep Q-network to form hypergame-theoretic deep spiking Q-network (HGDSQN), practically converging to the optimal malware-propagated deceptive defense strategy in EI-enabled IoT systems. Such SNNs can simulate biological brains with the pulse communication mechanism and break through the bottleneck of temporal processing in traditional models with deep neural networks, realizing intelligent decision-making and real-time malware defense. We eventually perform experimental simulations that assess the effect of attack arrival probability and learning rate on the optimal learning strategy selection, demonstrating the effectiveness of the proposed HGDSQN algorithm.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 3","pages":"1487-1499"},"PeriodicalIF":5.5,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implicit Supervision-Assisted Graph Collaborative Filtering for Third-Party Library Recommendation
Lianrong Chen; Mingdong Tang; Naidan Mei; Fenfang Xie; Guo Zhong; Qiang He
DOI: 10.1109/TSC.2025.3562349
IEEE Transactions on Services Computing, vol. 18, no. 3, pp. 1459-1471. Published 2025-04-18.
Abstract: Third-party libraries (TPLs) play a crucial role in software development, and TPL recommender systems help developers find useful TPLs promptly. A number of TPL recommendation approaches have been proposed, among which graph neural network (GNN)-based recommendation attracts the most attention. However, GNN-based approaches generate node representations through multiple convolutional aggregations, which is prone to introducing noise and leads to the over-smoothing issue. In addition, because labelled data are highly sparse, node representations may be biased in real-world scenarios. To address these issues, this paper presents a TPL recommendation method named Implicit Supervision-assisted Graph Collaborative Filtering (ISGCF). It takes App-TPL interaction relationships as input and employs a popularity-debiasing method to generate denoised App and TPL graphs, which reduces the noise introduced during graph convolution and alleviates over-smoothing. It also employs a novel implicitly supervised loss function that exploits the labelled data to learn enhanced node representations. Extensive experiments on a large-scale real-world dataset demonstrate that ISGCF significantly outperforms other state-of-the-art TPL recommendation methods in Recall, NDCG, and MAP, and confirm its superiority in mitigating the over-smoothing problem.
{"title":"LEAGAN: A Decentralized Version-Control Framework for Upgradeable Smart Contracts","authors":"Gulshan Kumar;Rahul Saha;Mauro Conti;William Johnston Buchanan","doi":"10.1109/TSC.2025.3562323","DOIUrl":"10.1109/TSC.2025.3562323","url":null,"abstract":"Smart contracts are integral to decentralized systems like blockchains and enable the automation of processes through programmable conditions. However, their immutability, once deployed, poses challenges when addressing errors or bugs. Existing solutions, such as proxy contracts, facilitate upgrades while preserving application integrity. Yet, proxy contracts bring issues such as storage constraints and proxy selector clashes - along with complex inheritance management. This article introduces a novel upgradeable smart contract framework with version control, named ”decentraLized vErsion control and updAte manaGement in upgrAdeable smart coNtracts (LEAGAN).” LEAGAN is the first decentralized updatable smart contract framework that employs data separation with Incremental Hash (IH) and Revision Control System (RCS). It updates multiple contract versions without starting anew for each update, and reduces time complexity, and where RCS optimizes space utilization through differentiated version control. LEAGAN also introduces the first status contract in upgradeable smart contracts, and which reduces overhead while maintaining immutability. In Ethereum Virtual Machine (EVM) experiments, LEAGAN shows 40% better space utilization, 30% improved time complexity, and 25% lower gas consumption compared to state-of-the-art models. It thus stands as a promising solution for enhancing blockchain system efficiency.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 3","pages":"1529-1542"},"PeriodicalIF":5.5,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Cross-Chain Hierarchical Federated Learning Framework for Enhancing Service Security and Communication Efficiency","authors":"Li Duan;He Huang;Chao Li;Wei Ni;Bo Cheng","doi":"10.1109/TSC.2025.3562329","DOIUrl":"10.1109/TSC.2025.3562329","url":null,"abstract":"Traditional federated learning (FL) uploads local models to a central server for model aggregation and suffers from server centralization. While blockchain-based FL addresses the issue of centralization, new challenges arise, including limited scalability of a single chain, expensive overhead of blockchain consensus, and inconsistent quality of uploaded models. This article proposes a new cross-chain-based FL (CBFL) framework. Specifically, we propose a three-layer cross-chain FL architecture consisting of a task-releasing chain, a relay chain, and local model uploading chains. The task-releasing chain is used for task issuers to release FL tasks and global model aggregation. The local model uploading chain manages local devices, stores local models and aggregates these local models. To verify the quality of local models, we propose a dual-criteria model quality inspection method based on cross entropy and cosine similarity to exclude substandard local models. We also propose hierarchical FL before global model aggregation to further reduce the communication overhead. Moreover, multi-signature is used to ensure the consistent transmission of models in the cross-chain process. Experiments corroborate that the proposed CBFL improves performance by about 50% compared to the existing BFL framework. Moreover, the proposed dual-criteria model quality inspection method has better robustness than Krum and Trimmed Mean.","PeriodicalId":13255,"journal":{"name":"IEEE Transactions on Services Computing","volume":"18 3","pages":"1199-1212"},"PeriodicalIF":5.5,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143849650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}