{"title":"DPS-IIoT: Non-interactive zero-knowledge proof-inspired access control towards information-centric Industrial Internet of Things","authors":"Dun Li , Noel Crespi , Roberto Minerva , Wei Liang , Kuan-Ching Li , Joanna Kołodziej","doi":"10.1016/j.comcom.2025.108065","DOIUrl":"10.1016/j.comcom.2025.108065","url":null,"abstract":"<div><div>The advancements in 5G/6G communication technologies have enabled the rapid development and expanded application of the Industrial Internet of Things (IIoT). However, the limitations of traditional host-centric networks are becoming increasingly evident, especially in meeting the growing demands of the IIoT for higher data speeds, enhanced privacy protections, and improved resilience to disruptions. In this work, we present the ZK-CP-ABE algorithm, a novel security framework designed to enhance security and efficiency in distributing content within the IIoT. By integrating a non-interactive zero-knowledge proof (ZKP) protocol for user authentication and data validation into the existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE), the ZK-CP-ABE algorithm substantially improves privacy protections while efficiently managing bandwidth usage. Furthermore, we propose the Distributed Publish-Subscribe Industrial Internet of Things (DPS-IIoT) system, which uses Hyperledger Fabric blockchain technology to deploy access policies and protect the ZKP against tampering and cyber-attacks, thus enhancing the security and reliability of IIoT networks. To validate the effectiveness of our approach, extensive experiments were conducted, demonstrating that the proposed ZK-CP-ABE algorithm significantly reduces bandwidth consumption while maintaining robust security against unauthorized access.
Experimental evaluation shows that the ZK-CP-ABE algorithm and DPS-IIoT system significantly enhance bandwidth efficiency and overall throughput in IIoT environments.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"233 ","pages":"Article 108065"},"PeriodicalIF":4.5,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143132337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
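The abstract above does not reproduce the ZK-CP-ABE construction. As a purely illustrative sketch of the non-interactive ZKP building block such a scheme relies on, the following toy Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat–Shamir heuristic, shows a prove/verify round trip. The group parameters, function names, and the choice of Schnorr are assumptions for illustration, not the paper's protocol:

```python
import hashlib
import secrets

# Toy subgroup parameters for illustration only; real systems use
# standardized, cryptographically sized groups.
P = 1019   # safe prime, P = 2*Q + 1
Q = 509    # prime order of the subgroup of squares mod P
G = 4      # generator of that subgroup

def _challenge(y: int, t: int) -> int:
    # Fiat-Shamir: the challenge is a hash of the public values.
    data = f"{G}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x: int) -> tuple:
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q - 1) + 1   # fresh nonce
    t = pow(G, r, P)                   # commitment
    c = _challenge(y, t)
    s = (r + c * x) % Q                # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # Check G^s == t * y^c (mod P) for the recomputed challenge c.
    c = _challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

A real deployment would additionally bind the statement being proven (e.g., the attribute claim) into the challenge hash.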
{"title":"Just a little human intelligence feedback! Unsupervised learning assisted supervised learning data poisoning based backdoor removal","authors":"Ting Luo , Huaibing Peng , Anmin Fu , Wei Yang , Lihui Pang , Said F. Al-Sarawi , Derek Abbott , Yansong Gao","doi":"10.1016/j.comcom.2025.108052","DOIUrl":"10.1016/j.comcom.2025.108052","url":null,"abstract":"<div><div>Backdoor attacks on deep learning (DL) models are recognized as one of the most alarming security threats, particularly in security-critical applications. A primary source of backdoor introduction is data outsourcing, such as when data is aggregated from third parties or end Internet of Things (IoT) devices, which are susceptible to various attacks. Significant efforts have been made to counteract backdoor attacks through defensive measures. However, most of them are ineffective against evolving trigger types or backdoor types. This study proposes a poisoned data detection method, termed <span>LABOR</span> (unsupervised <strong>L</strong>earning <strong>A</strong>ssisted supervised learning data poisoning based <strong>B</strong>ackd<strong>O</strong>or <strong>R</strong>emoval), by incorporating a little human intelligence feedback. <span>LABOR</span> is specifically devised to counter backdoors induced by dirty-label data poisoning on the most common classification tasks. The key insight is that regardless of the underlying trigger types (e.g., patch or imperceptible triggers) and intended backdoor types (e.g., universal or partial backdoor), the poisoned samples still preserve the semantic features of their original classes. By clustering these poisoned samples based on their original categories through unsupervised learning, with category identification assisted by human intelligence, <span>LABOR</span> can detect and remove poisoned samples by identifying discrepancies between cluster categories and classification model predictions.
Extensive experiments on eight benchmark datasets, including an intrusion detection dataset relevant to IoT device protection, validate <span>LABOR</span>’s effectiveness in combating dirty-label poisoning-based backdoor attacks. <span>LABOR</span>’s robustness is further demonstrated across various trigger and backdoor types, as well as diverse data modalities, including image, audio and text.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"233 ","pages":"Article 108052"},"PeriodicalIF":4.5,"publicationDate":"2025-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143132339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
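The LABOR-style key step, clustering samples by semantic features and flagging those whose assigned label disagrees with the class a human attaches to their cluster, can be sketched as follows. The tiny deterministic k-means and the externally supplied cluster-to-class mapping (standing in for human feedback) are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])   # next center: farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        assign = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign

def flag_poisoned(assign, cluster_to_class, predicted):
    """Flag samples whose training label disagrees with the class a human
    identified for their feature cluster (the discrepancy LABOR exploits)."""
    cluster_class = np.array([cluster_to_class[c] for c in assign])
    return np.flatnonzero(cluster_class != np.asarray(predicted))
```

On two well-separated blobs with a couple of flipped (dirty) labels, the flipped indices are exactly the flagged ones.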
{"title":"COIChain: Blockchain scheme for privacy data authentication in cross-organizational identification","authors":"Zhexuan Yang , Xiao Qu , Zeng Chen , Guozi Sun","doi":"10.1016/j.comcom.2025.108054","DOIUrl":"10.1016/j.comcom.2025.108054","url":null,"abstract":"<div><div>In cross-institutional user authentication, users’ personal privacy information is often exposed to the risk of disclosure and abuse. Users should have the right to decide on their own data, and others should not be able to use user data without users’ permission. In this study, we adopted a user-centered framework, so that users can obtain authorization among different resource owners through qualification proof, avoiding the dissemination of users’ personal privacy data. We have developed a blockchain-based cross-institutional authorization architecture where users can obtain identity authentication between different entities by structuring transactions. Through the selective disclosure algorithm, the user’s private information is hidden during user identity authentication, and the authenticity of the user’s private information is verified by disclosing the user’s non-private information and authentication credentials. The architecture supports the generation of identity credentials of constant size based on atomic properties. We prototype the system on Ethereum and evaluate it experimentally. The experiments show that the combined user-information processing and verification time is about 80 ms, with very little fluctuation in processing time.
The results show that our data-flow scheme can effectively avoid privacy leakage in cross-agency user authentication at a small cost.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"233 ","pages":"Article 108054"},"PeriodicalIF":4.5,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143132256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
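The selective-disclosure idea described above can be illustrated with salted hash commitments: a credential binds commitments to all attributes, and the user reveals value-and-salt pairs only for the attributes they choose to disclose. This is a generic sketch under assumed primitives, not COIChain's constant-size atomic credential scheme:

```python
import hashlib
import secrets

def commit(value: str, salt: bytes) -> bytes:
    # Salted hash commitment: hides the value, binds the committer to it.
    return hashlib.sha256(salt + value.encode()).digest()

def issue_credential(attrs: dict):
    """Issuer commits to every attribute; the credential binds them all."""
    salts = {k: secrets.token_bytes(16) for k in attrs}
    commits = {k: commit(v, salts[k]) for k, v in attrs.items()}
    credential = hashlib.sha256(
        b"".join(commits[k] for k in sorted(commits))).digest()
    return credential, commits, salts

def verify_disclosure(credential, commits, disclosed) -> bool:
    """`disclosed` maps attribute name -> (value, salt) for revealed attrs only;
    hidden attributes stay behind their commitments."""
    for k, (v, salt) in disclosed.items():
        if commit(v, salt) != commits[k]:
            return False
    expected = hashlib.sha256(
        b"".join(commits[k] for k in sorted(commits))).digest()
    return expected == credential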
{"title":"Reinforcement learning based offloading and resource allocation for multi-intelligent vehicles in green edge-cloud computing","authors":"Liying Li , Yifei Gao , Peiwen Xia , Sijie Lin , Peijin Cong , Junlong Zhou","doi":"10.1016/j.comcom.2025.108051","DOIUrl":"10.1016/j.comcom.2025.108051","url":null,"abstract":"<div><div>Green edge-cloud computing (GECC) collaborative service architecture has become one of the mainstream frameworks for real-time intensive multi-intelligent vehicle applications in intelligent transportation systems (ITS). In GECC systems, effective task offloading and resource allocation are critical to system performance and efficiency. Existing works on task offloading and resource allocation for multi-intelligent vehicles in GECC systems focus on designing static methods, which offload tasks once or a fixed number of times. This offloading manner may lead to low resource utilization due to congestion on edge servers and is not suitable for ITS with dynamically changing parameters such as bandwidth. To solve the above problems, we present a dynamic task offloading and resource allocation method, which allows tasks to be offloaded an arbitrary number of times under time and resource constraints. Specifically, we consider the characteristics of tasks and propose a remaining model to obtain the states of vehicles and tasks in real-time. Then we present a task offloading and resource allocation method considering both time and energy based on a designed real-time multi-agent deep deterministic policy gradient (RT-MADDPG) model. Our approach can offload tasks an arbitrary number of times under resource and time constraints, and can dynamically adjust the task offloading and resource allocation solutions according to changing system states to maximize system utility, which considers both task processing time and energy.
Extensive simulation results indicate that the proposed RT-MADDPG method can effectively improve the utility of ITS compared to two benchmark methods.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108051"},"PeriodicalIF":4.5,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143160900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
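A minimal cost model conveys the time-plus-energy trade-off such a utility optimizes: each offloading option is scored by a weighted sum of latency and device-side energy, and the cheapest option wins. The parameters, weights, and energy accounting here are illustrative assumptions, not the paper's formulation:

```python
# Illustrative cost model, not the paper's: a task is (cpu cycles, bits to
# ship); an option is (cpu_hz, link_rate_bps, tx_power_w, device_cpu_power_w).
def cost(task_cycles, data_bits, cpu_hz, rate_bps, p_tx, p_cpu,
         alpha=0.6, beta=0.4):
    tx_time = data_bits / rate_bps
    run_time = task_cycles / cpu_hz
    latency = tx_time + run_time
    # Device energy: radio while transmitting; CPU only when run locally
    # (local execution is modeled with an infinite link rate, i.e. no tx).
    energy = p_tx * tx_time + (p_cpu * run_time
                               if rate_bps == float("inf") else 0.0)
    return alpha * latency + beta * energy

def best_offload(task_cycles, data_bits, options, **kw):
    """Pick the option with the lowest weighted time+energy cost."""
    return min(options, key=lambda name: cost(task_cycles, data_bits,
                                              *options[name], **kw))
```

For a 10^9-cycle task shipping 10^6 bits, a fast edge server beats local execution under these toy numbers.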
{"title":"GNNetSlice: A GNN-based performance model to support network slicing in B5G networks","authors":"Miquel Farreras , Jordi Paillissé , Lluís Fàbrega , Pere Vilà","doi":"10.1016/j.comcom.2025.108044","DOIUrl":"10.1016/j.comcom.2025.108044","url":null,"abstract":"<div><div>Network slicing is gaining traction in Fifth Generation (5G) deployments and Beyond 5G (B5G) designs. In a nutshell, network slicing virtualizes a single physical network into multiple virtual networks or slices, so that each slice provides a desired network performance to the set of traffic flows (source–destination pairs) mapped to it. The network performance, defined by specific Quality of Service (QoS) parameters (latency, jitter and losses), is tailored to different use cases, such as manufacturing, automotive or smart cities. A network controller determines whether a new slice request can be safely granted without degrading the performance of existing slices, and therefore fast and accurate models are needed to efficiently allocate network resources to slices. Although there is a large body of work on network slicing modeling and resource allocation in the Radio Access Network (RAN), there are few works that deal with the implementation and modeling of network slicing in the core and transport network.</div><div>In this paper, we present GNNetSlice, a model that predicts the performance of a given configuration of network slices and traffic requirements in the core and transport network. The model is built leveraging Graph Neural Networks (GNNs), a kind of Neural Network specifically designed to deal with data structured as graphs. We have chosen a data-driven approach instead of classical modeling techniques, such as Queuing Theory or packet-level simulations, due to its balance between prediction speed and accuracy.
We detail the structure of GNNetSlice, the dataset used for training, and show how our model can accurately predict the delay, jitter and losses of a wide range of scenarios, achieving a Symmetric Mean Absolute Percentage Error (SMAPE) of 5.22%, 1.95% and 2.04%, respectively.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108044"},"PeriodicalIF":4.5,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143161690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
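For reference, SMAPE as reported above is commonly computed as the mean absolute error normalized by the mean of the absolute values; variants exist, and the paper's exact definition may differ:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error, in percent:
    100/n * sum(|y - yhat| / ((|y| + |yhat|) / 2))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)
```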
{"title":"AI-based malware detection in IoT networks within smart cities: A survey","authors":"Mustafa J.M. Alhamdi , Jose Manuel Lopez-Guede , Jafar AlQaryouti , Javad Rahebi , Ekaitz Zulueta , Unai Fernandez-Gamiz","doi":"10.1016/j.comcom.2025.108055","DOIUrl":"10.1016/j.comcom.2025.108055","url":null,"abstract":"<div><div>The exponential expansion of Internet of Things (IoT) applications in smart cities has significantly pushed smart city development forward. Intelligent applications have the potential to enhance systems' efficiency, service quality, and overall performance. Smart cities, intelligent transportation networks, and other influential infrastructure are the main targets of cyberattacks. These attacks have the potential to undercut the security of important government, commercial, and personal information, placing privacy and confidentiality at risk. Multiple scientific studies indicate that smart city cyberattacks can result in millions of euros in financial losses due to data compromise and loss. The importance of anomaly detection lies in its ability to identify and analyze illegitimate activity within IoT data. Unprotected, infected, or suspicious devices are vulnerable to intrusion attacks, which can spread across several machines within a network, undermining the privacy and safety of the services the network provides. The objective of this study is to assess procedures for detecting malware in the IoT using artificial intelligence (AI) approaches. To identify and prevent threats and malicious programs, current methodologies use AI algorithms such as support vector machines, decision trees, and deep neural networks. We explore existing studies that propose several methods to address malware in IoT using AI approaches.
Finally, the survey highlights open issues in this context, including detection accuracy and the cost of security in terms of detection performance and energy consumption.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"233 ","pages":"Article 108055"},"PeriodicalIF":4.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143132257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A two-stage federated learning method for personalization via selective collaboration","authors":"Jiuyun Xu , Liang Zhou , Yingzhi Zhao , Xiaowen Li , Kongshang Zhu , Xiangrui Xu , Qiang Duan , RuRu Zhang","doi":"10.1016/j.comcom.2025.108053","DOIUrl":"10.1016/j.comcom.2025.108053","url":null,"abstract":"<div><div>As an emerging distributed learning method, federated learning has received much attention recently. Traditional federated learning aims to train a global model on a decentralized dataset, but in the case of uneven data distribution, a single global model may not be well adapted to each client, and even the local training performance of some clients may be superior to the global model. Against this background, clustering similar clients into the same group is a common approach. However, there is still some heterogeneity of clients within the same group, and general clustering methods usually assume that clients belong to a specific class only, but in real-world scenarios, it is difficult to accurately categorize clients into one class due to the complexity of data distribution. To solve these problems, we propose a two-stage <strong>fed</strong>erated learning method for personalization via <strong>s</strong>elective <strong>c</strong>ollaboration (FedSC). Different from previous clustering methods, we focus on independently excluding, for each client, other clients with significant distributional differences, and we remove the restriction that clients can only belong to one category. We select, for each client, collaborators that are more conducive to achieving its local goals and build a collaborative group for it independently; every client then engages in federated learning only with its group members, avoiding negative knowledge transfer.
Furthermore, FedSC performs finer-grained processing within each group, using an adaptive hierarchical fusion strategy that blends group and local models instead of the traditional scheme of directly overwriting local models. Extensive experiments show that our proposed method considerably increases model performance under different heterogeneity scenarios.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108053"},"PeriodicalIF":4.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143161691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
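The two stages described above, excluding dissimilar clients and then fusing group and local models, can be sketched as follows. Cosine similarity on model updates and a scalar blend weight are illustrative assumptions, not FedSC's exact selection criterion or fusion rule:

```python
import numpy as np

def select_collaborators(updates, me, threshold=0.0):
    """Keep only clients whose update direction is similar to ours
    (cosine similarity above `threshold`); the rest are excluded."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return [i for i, u in enumerate(updates)
            if i != me and cos(updates[me], u) > threshold]

def fuse(local_params, group_params, w):
    """Per-layer convex combination of the group model and the local model,
    rather than overwriting local parameters outright."""
    return [w * g + (1.0 - w) * l for l, g in zip(local_params, group_params)]
```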
{"title":"Resource allocation and UAV deployment for a UAV-assisted URLLC system","authors":"Xinyue Gu, Hong Jiang, Hao Yang","doi":"10.1016/j.comcom.2025.108049","DOIUrl":"10.1016/j.comcom.2025.108049","url":null,"abstract":"<div><div>The unmanned aerial vehicle (UAV)-assisted transmission in ultra-reliable low-latency communication (URLLC) can achieve precise control in environments where communication infrastructures are unavailable, with enormous benefits in military and commercial applications. This paper investigates a three-hop decode-and-forward UAV-assisted system to guarantee the stringent quality-of-service requirements in long-distance URLLC. First, the block error rate (BLER) is derived for air-to-ground and air-to-air channels. Then, the transmit power, blocklength, and UAV deployment in three-dimensional space are optimized together to jointly minimize the overall BLER and UAV communication energy consumption. The formulated non-convex problem is divided into subproblems and an iterative algorithm is proposed to tackle it by utilizing block coordinate descent. Different search techniques and the block successive convex approximation approach are used to solve the subproblems. Finally, simulations are conducted to demonstrate the system performance and the effectiveness of the proposed algorithm.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108049"},"PeriodicalIF":4.5,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143160901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
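The abstract does not reproduce the derived BLER expressions. As a generic sketch, finite-blocklength BLER analyses of this kind typically build on the normal approximation, where the error probability is a Q-function of the gap between channel capacity and coding rate, scaled by the channel dispersion (the AWGN forms below are the standard textbook ones, not the paper's channel-specific derivations):

```python
import math

def q_func(x: float) -> float:
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bler(snr: float, blocklength: int, info_bits: int) -> float:
    """Normal-approximation block error rate at finite blocklength n:
    BLER ~= Q((C - R) * sqrt(n / V)), with Shannon capacity C,
    channel dispersion V, and coding rate R = k / n (AWGN forms)."""
    c = math.log2(1.0 + snr)
    v = (1.0 - (1.0 + snr) ** -2) * (math.log2(math.e) ** 2)
    r = info_bits / blocklength
    return q_func((c - r) * math.sqrt(blocklength / v))
```

As expected, the BLER falls with higher SNR and, at fixed rate, with longer blocklength.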
{"title":"Graph convolutional networks and deep reinforcement learning for intelligent edge routing in IoT environment","authors":"Zhi Wang , Bo Yi , Saru Kumari , Chien Ming Chen , Mohammed J.F. Alenazi","doi":"10.1016/j.comcom.2025.108050","DOIUrl":"10.1016/j.comcom.2025.108050","url":null,"abstract":"<div><div>The rapid growth of the Internet of Things (IoT) has increased the demand for Quality of Service (QoS) in various applications. Intelligent routing algorithms have emerged to meet these high QoS requirements. However, existing algorithms face challenges such as long training time, limited generalization capabilities, and difficulties in handling high-dimensional continuous action spaces, which hinder their ability to achieve optimal routing solutions. To address these challenges, this paper proposes a novel intelligent edge routing optimization (RO) algorithm that integrates node classification (NC) using a graph convolutional network (GCN) with path selection (PS) based on deep reinforcement learning (DRL). This approach aims to intelligently select optimal paths while meeting high QoS requirements in complex, dynamically changing IoT Edge Network Environments (IENEs). The NC module reduces the computational complexity and enhances the generalization capability of the RO algorithm by transforming network topology and link state information into node features, effectively filtering out low-performing nodes. To cope with high-dimensional continuous action spaces and meet QoS requirements, the PS module utilizes the refined network topology and state information from NC to determine optimal routing paths. Simulation results show that the proposed algorithm outperforms state-of-the-art methods in key performance metrics such as average network delay, packet loss rate, and throughput. 
In addition, it shows significant improvements in convergence speed and generalization ability.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108050"},"PeriodicalIF":4.5,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143160894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
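The NC module's core primitive, a graph convolution over the network topology, can be sketched as one symmetrically normalized propagation step; the ReLU activation and dense matrices are illustrative simplifications of a typical GCN layer, not the paper's exact architecture:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    mixing each node's features with its neighbors' before the linear map."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(1)))
    return np.maximum(0.0, d_inv_sqrt @ a_hat @ d_inv_sqrt
                      @ features @ weight)
```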
{"title":"DBVA: Double-layered blockchain architecture for enhanced security in VANET vehicular authentication","authors":"Samuel Akwasi Frimpong , Mu Han , Usman Ahmad , Otu Larbi-Siaw , Joseph Kwame Adjei","doi":"10.1016/j.comcom.2025.108048","DOIUrl":"10.1016/j.comcom.2025.108048","url":null,"abstract":"<div><div>Vehicular ad-hoc networks (VANET) are crucial for improving road safety and traffic management in Intelligent Transportation Systems (ITS). However, these networks face significant security and privacy challenges due to their dynamic and decentralized nature. Traditional authentication methods, such as Public Key Infrastructure (PKI) and centralized systems, struggle with scalability, single points of failure, and privacy issues. To address these issues, this paper introduces DBVA, a Double-Layered Blockchain Architecture that integrates private and consortium blockchains to create a robust and scalable authentication framework for VANET. The DBVA framework segregates public transactions, such as traffic data, from private transactions, such as identity and location information, into separate blockchain layers, preserving privacy and enhancing security. Additionally, DBVA introduces strict access control smart contracts for the decentralized revocation of unauthorized vehicle privileges, minimizing communication risks and enhancing system resilience. A dynamic pseudonym identity generation mechanism with periodic updates further strengthens privacy by segregating real and pseudonymous identities into separate blockchain layers. Comprehensive performance evaluations reveal that DBVA significantly enhances computational efficiency, reducing the computational cost to 18.34 ms, lowering communication overhead to 992 bits per message, and minimizing storage requirements to just 50 units, making it competitive among contemporary schemes. 
Extensive security analysis and formal proof confirm that DBVA effectively meets all essential privacy and security requirements, making it a robust, reliable, and scalable solution for enhancing the security, privacy, and resilience of VANET.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"232 ","pages":"Article 108048"},"PeriodicalIF":4.5,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143160902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
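The dynamic pseudonym mechanism described above can be illustrated with a keyed hash that rotates every epoch: pseudonyms are unlinkable across epochs to outsiders, while the key holder can recompute the mapping back to the real identity. This HMAC construction is an assumed stand-in for illustration, not DBVA's exact generation scheme:

```python
import hashlib
import hmac

def pseudonym(real_id: str, issuer_key: bytes, epoch: int) -> str:
    """Per-epoch pseudonym: deterministic for the key holder, but
    unlinkable across epochs for anyone without `issuer_key`."""
    msg = f"{real_id}|{epoch}".encode()
    return hmac.new(issuer_key, msg, hashlib.sha256).hexdigest()
```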