{"title":"Strategy-proof mechanism based on dwarf mongoose optimization for task offloading in vehicle computing","authors":"Xi Liu , Jun Liu","doi":"10.1016/j.future.2025.108027","DOIUrl":"10.1016/j.future.2025.108027","url":null,"abstract":"<div><div>With the development of intelligent vehicles (IVs), such vehicles can serve as mobile computing platforms that provide users with various services. The aim of this paper is to design an efficient task offloading mechanism to maximize group efficiency in vehicle computing. Considering that sensing data inherently support multi-user sharing, we introduce a resource-sharing model in which multiple users share sensing resources. To provide a scalable service, we propose auction-based dynamic pricing. To achieve a balance between quality and efficiency, the efficient task offloading mechanism proposed in this study is based on dwarf mongoose optimization. The initialization algorithm generates random, best-fit, and greedy allocations based on probability. Convergence characteristics are improved using a new scouting algorithm and a new babysitter algorithm, both of which also contribute to maintaining population diversity. We demonstrate that the proposed mechanism achieves strategy-proofness, group strategy-proofness, individual rationality, budget balance, and consumer sovereignty. The novelty lies in showing how to design a strategy-proof mechanism based on swarm optimization. Furthermore, the approximation ratio of the proposed mechanism is analyzed.
Experiments verify that the proposed mechanism performs well in different environments.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108027"},"PeriodicalIF":6.2,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144711237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
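The probabilistic initialization described in the abstract above (random, best-fit, and greedy allocations mixed by probability) can be sketched as follows. This is an illustrative reading, not the paper's code: the server fields (`cap`, `speed`), the mixing probabilities, and all names are assumptions.

```python
import random

def init_population(tasks, servers, n, p_random=0.4, p_bestfit=0.3, seed=0):
    """Build n candidate task-to-server allocations, choosing per task
    among three strategies: random, best-fit (tightest remaining
    capacity), and greedy (fastest server). Hypothetical sketch."""
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        alloc, load = {}, {s: 0.0 for s in servers}
        for t, demand in tasks.items():
            r = rng.random()
            # Prefer servers whose remaining capacity still fits the task.
            fitting = [s for s in servers if servers[s]["cap"] - load[s] >= demand]
            pool = fitting or list(servers)
            if r < p_random:                   # random allocation
                s = rng.choice(pool)
            elif r < p_random + p_bestfit:     # best-fit allocation
                s = min(pool, key=lambda x: servers[x]["cap"] - load[x])
            else:                              # greedy allocation
                s = max(pool, key=lambda x: servers[x]["speed"])
            load[s] += demand
            alloc[t] = s
        pop.append(alloc)
    return pop

# Two hypothetical vehicle servers; five candidate allocations.
servers = {"v1": {"cap": 10, "speed": 2.0}, "v2": {"cap": 6, "speed": 3.0}}
pop = init_population({"t1": 4, "t2": 3}, servers, n=5)
```

Mixing strategies this way seeds the swarm with both diverse (random) and high-quality (best-fit, greedy) starting points.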
{"title":"Hybrid deep reinforcement learning-based workload migrating and resource allocation policies for weighted cost minimization in edge collaboration networks","authors":"Hongchang Ke , Jia Zhao , Yan Ding , Lin Pan","doi":"10.1016/j.future.2025.108002","DOIUrl":"10.1016/j.future.2025.108002","url":null,"abstract":"<div><div>In the context of efficient collaboration of computation and communication in 5G heterogeneous networks, mobile edge computing (MEC) and cloud–edge collaboration encounter several issues. These include multi-agent cooperation in deep reinforcement learning (DRL) for multi-task processing, the efficiency of action decisions by the mobile edge computing server (MECS), and the ineffectiveness of traditional DRL algorithms in resource allocation. To address these challenges, a framework consisting of multiple wireless mobile terminals (WMTs) enabled by unmanned aerial vehicles (UAVs) and multiple MEC servers is constructed, considering the diversity and priorities of the workloads generated by WMTs. Furthermore, to optimize edge collaborative workload offloading, migrating, and resource allocation decisions for minimizing the weighted cost of workload processing, we propose a hybrid DRL approach combining the dueling double deep Q-network (D3QN) and deep deterministic policy gradient (DDPG) algorithms, named OMRA-DRL. Within OMRA-DRL, a K-Means-based clustering algorithm groups similar workloads to simplify optimization. Additionally, a mixture-of-experts (MoE) system enables efficient action selection. Building on D3QN for better MECS selection, a maximum-advantage selection strategy based on the advantage function (MMSS-D3QN) is formulated to migrate workload clusters to multiple MECSs, achieving multi-edge cooperation. Comprehensive simulation experiments confirm the convergence of OMRA-DRL under various parameters.
Moreover, it outperforms the five benchmark algorithms in terms of average cumulative reward and unfinished workload ratio, achieving an increase of over 15% in average cumulative reward and a decrease of about 5% in unfinished workload ratio, which demonstrates its effectiveness in achieving weighted cost minimization in edge collaborative networks.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108002"},"PeriodicalIF":6.2,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144703536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
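The K-Means-based clustering that OMRA-DRL uses to group similar workloads can be illustrated with a minimal sketch. The 2-D features (e.g. workload size and deadline) and the data are hypothetical, not from the paper.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over 2-D workload features (e.g. size, deadline).
    Returns final centers and the clusters of points assigned to them."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean).
            i = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as its cluster mean; keep it if empty.
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated hypothetical workload groups.
points = [(1, 1), (1, 2), (2, 1), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, k=2)
```

Grouping workloads this way lets the offloading policy act once per cluster rather than once per workload, which is what "simplify optimization" refers to above.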
{"title":"CAGO-ECIL: Cloud-Assisted Genetic Optimization for Edge-Class Incremental Learning with training acceleration","authors":"Huayue Zeng , Wangbo Shen , Haijie Wu , Min Dong , Weiwei Lin , C.L. Philip Chen","doi":"10.1016/j.future.2025.108021","DOIUrl":"10.1016/j.future.2025.108021","url":null,"abstract":"<div><div>The integration of edge computing and deep learning has significantly advanced edge intelligence. However, implementing incremental learning directly on resource-constrained edge devices remains challenging. Most existing approaches rely on cloud-based training, leading to slow model updates and difficulties in meeting rapidly changing demands, such as in robotics and autonomous driving. To address this, we propose CAGO-ECIL, a Cloud-Assisted Genetic Optimization for Edge-Class Incremental Learning approach. CAGO-ECIL accelerates learning by formulating a learning optimization problem based on quantitative efficiency metrics and using a cloud-assisted genetic algorithm to determine the optimal ratio of new to old samples. This guides edge-based incremental learning to adapt more swiftly while maintaining high performance. Experimental results show that CAGO-ECIL improves accuracy by at least 4.66% and reduces training epoch time by up to 90% compared to state-of-the-art methods. It also achieves competitive average accuracy and average forgetting measures relative to cutting-edge approaches. 
Supported by a convergence analysis, CAGO-ECIL effectively addresses the unique challenges of incremental learning in edge intelligence.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108021"},"PeriodicalIF":6.2,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
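The cloud-assisted genetic search for the optimal new-to-old sample ratio can be sketched as follows. The fitness function here is a hypothetical stand-in for the paper's quantitative efficiency metrics, which the cloud would evaluate via short incremental-training trials; all parameters are assumptions.

```python
import random

def evolve_ratio(fitness, pop_size=20, gens=40, seed=1):
    """Evolve a scalar mixing ratio in [0, 1] that maximizes `fitness`,
    using elitist selection, averaging crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.05)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical fitness peaking at a 0.7 new-to-old ratio (assumption).
best = evolve_ratio(lambda r: -(r - 0.7) ** 2)
```

The edge device then mixes new and old samples at the returned ratio, which is how the cloud-side search guides edge-side incremental learning.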
{"title":"Truthful mechanism for service utility maximization in edge-enabled metaverse based on NUMA","authors":"Jia Xu , Hao Wu , Jixian Zhang","doi":"10.1016/j.future.2025.108015","DOIUrl":"10.1016/j.future.2025.108015","url":null,"abstract":"<div><div>The high-quality operation of the edge metaverse depends heavily on the efficient allocation and pricing of computational resources. Non-Uniform Memory Access (NUMA) architecture divides systems into multiple computing nodes with local processors and memory. These nodes enable independent computing and collaborative work, making them well suited to metaverse service demands, and such architectures are becoming increasingly prevalent. Despite the widespread use of incentive-based mechanism design in metaverse resource allocation, current studies often overlook the unique challenges posed by NUMA architecture, especially changes in resource topology and deployment rules. To address this gap, we propose a monotone heuristic algorithm for resource allocation that considers deployment constraints and resource dominance density. In addition, we design a pricing algorithm based on critical values, utilizing binary search to ensure the truthfulness of the mechanism. Simulation experiments demonstrate that our proposed mechanism achieves favorable outcomes in terms of system utility, final revenue, and resource utilization. The mechanism effectively balances the interests of resource demanders and edge service providers, ensuring fair outcomes.
Our results highlight the feasibility and effectiveness of integrating NUMA architecture into metaverse resource allocation and pricing strategies.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108015"},"PeriodicalIF":6.2,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144711236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
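Critical-value pricing with binary search, as described in the abstract above, hinges on the allocation rule being monotone: if a bid wins, any higher bid also wins. A minimal sketch, with a hypothetical allocation predicate standing in for the paper's heuristic:

```python
def critical_value(wins, lo=0.0, hi=100.0, eps=1e-6):
    """Binary-search the smallest winning bid under a monotone
    allocation rule `wins(bid)`. Charging this critical value (rather
    than the reported bid) is the classic route to truthfulness."""
    if not wins(hi):
        return None            # even the maximum bid loses
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid and wins(mid):  # winning region lies above the critical value
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical rule: a bid wins iff it beats a competing valuation of 12.5.
price = critical_value(lambda b: b >= 12.5)
```

Because the payment does not depend on the winner's own report, overbidding or underbidding cannot lower the price a winner pays, which is the truthfulness property the mechanism targets.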
{"title":"SecDAF: An efficient secure multi-source data analysis framework","authors":"Wenjia Zhao, Saiyu Qi, Yong Qi","doi":"10.1016/j.future.2025.108020","DOIUrl":"10.1016/j.future.2025.108020","url":null,"abstract":"<div><div>Multi-source data analysis promises valuable insights but encounters challenges in preserving data privacy. While cryptography facilitates secure multi-party computation, its performance overhead hinders practicality. Recent advancements in trusted execution environments, such as Intel Software Guard Extensions (SGX), present a promising alternative due to their efficiency. However, existing SGX-based methods exhibit limitations: (1) Unrealistic assumption of code security. They presume the data analysis code itself is secure, which is often not guaranteed. (2) Performance bottlenecks for large datasets. Heavy reliance on data encryption/decryption significantly impacts performance. (3) Steep learning curve for data analysts. Analysts need prior knowledge of SGX to develop secure programs. To overcome these limitations, this paper presents SecDAF, a secure and efficient framework for multi-source data analysis. SecDAF introduces ReE-Fuse, a novel mechanism that combines reusable enclaves with a fuse-threshold security policy, enabling secure execution across diverse analysis tasks without requiring repeated code audits. By integrating this mechanism with homomorphic encryption via a lightweight protocol, SecDAF ensures strong privacy guarantees while significantly reducing cryptographic overhead. Additionally, SecDAF provides Python APIs that allow analysts to implement secure computations without prior knowledge of SGX internals.
Experimental results show that SecDAF achieves a more than 2× performance improvement compared to a state-of-the-art secure multi-party computation approach, while also enhancing usability and security assurance.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108020"},"PeriodicalIF":6.2,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144711238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
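For scale, the secure multi-party computation baseline that SecDAF is compared against can be illustrated by additive secret sharing, whose per-value randomization is one source of the cryptographic overhead SecDAF aims to avoid. This is a generic MPC sketch, not SecDAF's SGX pipeline; all names are illustrative.

```python
import random

MODULUS = 2 ** 61 - 1  # a Mersenne prime; arithmetic is done mod this

def share(value, n, seed=None):
    """Split `value` into n additive shares mod a prime. Any n-1 shares
    are uniformly random, so they reveal nothing about the value."""
    rng = random.Random(seed)
    parts = [rng.randrange(MODULUS) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % MODULUS)
    return parts

def reconstruct(all_shares):
    """Sum every share from every source: shares cancel pairwise,
    leaving only the aggregate of the private values."""
    return sum(s for shares in all_shares for s in shares) % MODULUS

# Three hypothetical data sources each secret-share a private count;
# the analyst learns only the aggregate, never the individual inputs.
shares = [share(v, 3, seed=i) for i, v in enumerate([10, 20, 30])]
total = reconstruct(shares)
```

Every input value must be split, transmitted, and recombined, which hints at why cryptography-only pipelines pay the overhead the abstract describes.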
{"title":"Adaptive multi-objective swarm intelligence for containerized microservice deployment","authors":"Jiaxian Zhu , Weihua Bai , Huibing Zhang , Weiwei Lin , Teng Zhou , Keqin Li","doi":"10.1016/j.future.2025.108012","DOIUrl":"10.1016/j.future.2025.108012","url":null,"abstract":"<div><div>Container-based microservice architecture is essential for modern applications. However, optimizing deployment remains critically challenging due to complex interdependencies among microservices. In this paper, we propose a formalized deployment model by systematically analyzing the interdependencies within Service Function Chains (SFCs). To achieve this, we design a novel swarm intelligence optimization algorithm, named Multi-objective Sand Cat Swarm Optimization with Hybrid Strategies (MSCSO-HS), for multi-objective optimization in microservice deployment. Our algorithm effectively optimizes inter-microservice communication costs and enhances container aggregation density to improve application reliability and maximize resource utilization. Extensive experiments demonstrate that MSCSO-HS outperforms state-of-the-art algorithms on all optimization metrics. Our model achieves improvements of 23.76% in communication latency, 47.51% in deployment density, 38.70% in failure rate, 58.50% in CPU utilization, and 53.81% in RAM usage.
The MSCSO-HS framework not only enhances microservice performance and reliability but also provides a robust resource-scheduling solution for microservice deployment in cloud environments.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108012"},"PeriodicalIF":6.2,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144670898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
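Multi-objective optimizers like the one described above ultimately compare candidate deployment plans by Pareto dominance. A minimal sketch with hypothetical (latency, failure-rate) objectives, both minimized:

```python
def pareto_front(solutions):
    """Return labels of non-dominated solutions under minimization.
    Each solution is (label, objectives), e.g. ("plan", (latency, fail_rate)).
    a dominates b if a is no worse in every objective and strictly
    better in at least one."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [name for name, obj in solutions
            if not any(dominates(other, obj) for _, other in solutions)]

# Hypothetical deployment plans: (latency ms, failure rate).
plans = [("A", (10, 0.1)), ("B", (8, 0.3)), ("C", (12, 0.4)), ("D", (9, 0.2))]
front = pareto_front(plans)  # C is dominated by A; A, B, D trade off
```

The swarm keeps and refines such a front rather than collapsing the objectives into one score, which is what distinguishes multi-objective from single-objective search.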
{"title":"DETR-BAL: Decentralized mobile sensing intrusion detection via latent mining and Bayesian local optimization","authors":"Chen Zhang , Zhuotao Lian , Weiyu Wang , Huakun Huang , Chunhua Su","doi":"10.1016/j.future.2025.108014","DOIUrl":"10.1016/j.future.2025.108014","url":null,"abstract":"<div><div>With the rapid proliferation of mobile sensing in fields such as personal health monitoring, security and trust challenges in data processing are becoming more prominent. This paper introduces a decentralized DETR framework inspired by blockchain proof-of-work consensus. The framework trains models locally on each device and evaluates the device’s reputation based on its historical performance. Only devices meeting predefined criteria are admitted to the update committee, which enhances security. This mechanism reduces reliance on centralized servers and minimizes infrastructure costs, while a supervisory operator ensures the smooth operation of the system. To further enhance trust, we propose a credibility assessment method that integrates risk metrics with data quality scores via a non-cooperative game-theoretic model. By achieving Nash equilibrium, this method not only guarantees local optimality but also prioritizes users who provide high-quality, low-risk data, thereby promoting timely committee updates to achieve global optimality. As a complement to DETR, we propose BAL-IDS, an advanced intrusion detection system (IDS) that extracts latent features using autoencoders and dynamically fine-tunes the hyperparameters of OCSVM using a Bayesian joint local agent optimization strategy. This dual approach enhances the system’s resilience to complex threats, especially those that exploit requester feedback mechanisms.
Experiments show that our approach outperforms traditional schemes.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108014"},"PeriodicalIF":6.2,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144664873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
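The reputation-gated update committee from the abstract above can be sketched as follows. The threshold, committee size, and reputation scores are illustrative assumptions; the paper additionally folds in risk and data-quality scores via a game-theoretic model before admission.

```python
def update_committee(devices, threshold=0.8, size=3):
    """Admit only devices whose historical reputation meets a threshold,
    then keep the top-`size` by reputation as the update committee.
    Illustrative sketch of reputation gating, not the full mechanism."""
    eligible = [(d, r) for d, r in devices.items() if r >= threshold]
    eligible.sort(key=lambda dr: dr[1], reverse=True)
    return [d for d, _ in eligible[:size]]

# Hypothetical device reputations; dev2 falls below the 0.8 threshold.
committee = update_committee({"dev1": 0.95, "dev2": 0.7, "dev3": 0.85, "dev4": 0.9})
```

Because model updates come only from devices with a proven history, a freshly joined or misbehaving device cannot immediately poison the shared model.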
{"title":"A novel host-based intrusion detection approach leveraging audit logs","authors":"Jiaqing Jiang, Hongyang Chu, Donghai Tian","doi":"10.1016/j.future.2025.107995","DOIUrl":"10.1016/j.future.2025.107995","url":null,"abstract":"<div><div>Host-based intrusion detection systems (HIDS) struggle to detect advanced cyber attacks (e.g., APT, LoTL) due to their stealthy nature and reliance on either structural or semantic features alone. We hypothesize that integrating semantic audit log analysis with structural provenance graph learning improves detection accuracy and adaptability. To validate this, we propose MalSnif, a novel framework that (1) parses audit logs to construct provenance graphs enriched with process/event relationships, (2) simplifies graphs by pruning peripheral nodes while retaining critical attack trajectories, and (3) employs NLP techniques (word2vec, GRU, BiLSTM) to extract semantic features, combined with a graph convolutional network (GCN) for detection. Implemented using PyTorch and ETW, MalSnif addresses data imbalance via strategic downsampling during training. Evaluations show that our approach can effectively detect different kinds of cyber attacks and outperforms recent methods. 
In addition, our methods for simplifying process event sequences and provenance graphs also yield effective and explainable results.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 107995"},"PeriodicalIF":6.2,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144664917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
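The provenance-graph simplification described above (pruning peripheral nodes while retaining attack trajectories) can be approximated by iteratively removing degree-1 nodes that are not flagged as critical. The edge list and node names below are hypothetical, not from MalSnif.

```python
from collections import defaultdict

def prune_peripheral(edges, keep):
    """Iteratively drop degree-1 nodes not in `keep`, so side branches
    of the provenance graph vanish while flagged trajectories survive."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        peripheral = {n for n, d in deg.items() if d == 1 and n not in keep}
        pruned = {(u, v) for u, v in edges
                  if u not in peripheral and v not in peripheral}
        if pruned != edges:
            edges, changed = pruned, True
    return edges

# Hypothetical audit-log provenance; "mal.exe" lies on a flagged trajectory,
# while "log.txt" is a peripheral artifact that can be pruned.
g = [("init", "shell"), ("shell", "mal.exe"), ("shell", "log.txt")]
pruned = prune_peripheral(g, keep={"init", "shell", "mal.exe"})
```

Shrinking the graph this way reduces the input to the GCN without discarding the paths a detector actually needs.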
{"title":"Cost-aware routing for computation offloading in knowledge-defined AIoT","authors":"Peichen Li , Xingwei Wang , Bo Yi , Tingting Yuan , Jiahao Chen , Jiaxin Zhang , Min Huang","doi":"10.1016/j.future.2025.108013","DOIUrl":"10.1016/j.future.2025.108013","url":null,"abstract":"<div><div>Edge computing plays a crucial role in supporting high-bandwidth and latency-sensitive applications in the Artificial Intelligence of Things (AIoT). These applications often demand both computing and network resources within strict time constraints, yet existing approaches often fall short in jointly considering dynamic destination-path combinations, pricing incentives, and differentiated computation costs. In this paper, we propose a Knowledge-Defined AIoT-based framework that incorporates a cost-aware routing algorithm called <span>CompuRoute</span> for computation offloading. This framework enables collaborative data collection and centralized data aggregation and analysis, supporting efficient cost estimation. Based on the estimated cost, <span>CompuRoute</span> integrates a reverse auction mechanism for selecting candidate edge servers. Next, <span>CompuRoute</span> considers networking states and introduces a multipath routing algorithm based on network flow theory to determine the destination edge servers and routing paths. 
Experimental results demonstrate that <span>CompuRoute</span> can improve the task success rate and reduce task completion time compared to baseline algorithms, exhibiting scalability across various network topologies.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108013"},"PeriodicalIF":6.2,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144664871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
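The reverse-auction step for selecting candidate edge servers can be sketched in its simplest sealed-bid form, where the lowest ask wins and is paid the second-lowest ask so that truthful cost reporting is a dominant strategy. This is an assumption about the mechanism's general flavor, not CompuRoute's exact rule.

```python
def reverse_auction(asks):
    """Select the cheapest edge server from sealed asks (server -> cost).
    The winner is paid the runner-up's ask: understating its cost risks
    an unprofitable win, overstating risks losing, so honesty is best."""
    ranked = sorted(asks.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, payment

# Hypothetical per-task cost estimates from three candidate edge servers.
winner, payment = reverse_auction({"edge1": 5.0, "edge2": 3.0, "edge3": 4.0})
```

The routing stage described above would then compute paths only toward the servers that survive this selection.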
{"title":"Validating the performance of GPU ports using differential performance models","authors":"Alexander Geiß , Téodora Hovi , Alexandru Calotoiu , Felix Wolf","doi":"10.1016/j.future.2025.108018","DOIUrl":"10.1016/j.future.2025.108018","url":null,"abstract":"<div><div>Offloading computation to the GPU is crucial to leverage many of today’s supercomputers. We expect the GPU port of an application to outperform the pure CPU implementation, but is this always true? Simple benchmarking only allows us to take a limited number of samples from a vast space of execution configurations and can, therefore, deliver only a fragmented answer. To answer the question systematically, even for individual application kernels, we propose a semi-automatic toolchain based on differential performance modeling and intuitive visualizations. We combine empirical performance models based on unified CPU–GPU profiles with hardware characteristics to derive differential performance models that can be easily compared across device types. In four case studies, we demonstrate how our toolchain pinpoints scaling issues in GPU ports, guides performance improvements, and identifies execution configurations with superior performance.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"174 ","pages":"Article 108018"},"PeriodicalIF":6.2,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144664876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
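Differential performance modeling in miniature: fit a simple empirical runtime model per device type, then compare the models to find where the GPU port starts to pay off. The linear model and the profile numbers are illustrative; the actual toolchain fits richer empirical models from unified CPU–GPU profiles.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def crossover(cpu_model, gpu_model):
    """Problem size beyond which the GPU model beats the CPU model:
    the GPU's fixed offload overhead (larger a) is amortized once its
    smaller growth rate (b) takes over."""
    (a1, b1), (a2, b2) = cpu_model, gpu_model
    if b1 == b2:
        return None
    return (a2 - a1) / (b1 - b2)

# Hypothetical kernel timings (seconds) at four problem sizes.
sizes = [100, 200, 400, 800]
cpu = fit_linear(sizes, [1.2, 2.2, 4.2, 8.2])  # ~0.2 + 0.010 * n
gpu = fit_linear(sizes, [2.5, 3.0, 4.0, 6.0])  # ~2.0 + 0.005 * n
n_star = crossover(cpu, gpu)                   # GPU wins beyond this size
```

Comparing fitted models rather than raw samples is what lets the comparison extrapolate across the whole configuration space instead of only the measured points.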