{"title":"Proactive handover for task offloading in UAVs","authors":"Mohammed Riyadh Abdmeziem , Amina Ahmed Nacer , Soumeya Demil","doi":"10.1016/j.comcom.2025.108282","DOIUrl":"10.1016/j.comcom.2025.108282","url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs) are usually deployed alongside Internet of Things (IoT) devices in smart city applications, particularly for critical tasks such as disaster management that require continuous service. UAVs often handle resource-intensive and sensitive tasks through offloading, but unexpected task interruptions due to UAV dropouts can generate safety risks and increase costs. Although existing approaches in the literature have already addressed proactive handovers to mitigate such disruptions, their primary focus is on communication issues arising from UAV movement, leaving them unable to handle offloading-related issues. In this paper, we include in our model, in addition to communication, factors such as energy, computation requirements, and dynamic environmental conditions (e.g., wind speed and incentive), pushing toward a comprehensive solution for UAV task offloading and resource allocation. In fact, we formulate our problem as a Markov game, which we solve using a Multi-Agent Deep Q-Network (MADQN). In our experiments, we assessed our approach using a federated learning scenario to illustrate its effectiveness in a realistic distributed application setting against several baselines from the state of the art. Results showed that our approach outperforms its peers in terms of system utility and the tradeoff between cost and dropout rates, leading to improved handover management of computational and energy resources in UAV-IoT based systems. 
In fact, it reduces the dropout rate by approximately 45% compared to the second-best baseline, leading to a 2% improvement in model accuracy and a 50% reduction in deployment costs.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"242 ","pages":"Article 108282"},"PeriodicalIF":4.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
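The abstract above frames handover as a Markov game solved with a Multi-Agent Deep Q-Network (MADQN). As a rough illustration only, the sketch below substitutes a tabular multi-agent Q-learning loop for the deep network; the state space, action set, and reward shape are invented stand-ins, not the authors' formulation.

```python
import random

# Hedged sketch: tabular multi-agent Q-learning as a stand-in for MADQN.
# States, actions, and rewards are illustrative assumptions.
N_AGENTS, N_STATES, N_ACTIONS = 2, 4, 3  # actions: keep task / handover / drop
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = [[[0.0] * N_ACTIONS for _ in range(N_STATES)] for _ in range(N_AGENTS)]

def choose(agent, state):
    """Epsilon-greedy action selection for one UAV agent."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    row = Q[agent][state]
    return row.index(max(row))

def update(agent, s, a, r, s_next):
    """Standard Q-learning backup; MADQN would fit a neural net instead."""
    best_next = max(Q[agent][s_next])
    Q[agent][s][a] += ALPHA * (r + GAMMA * best_next - Q[agent][s][a])

random.seed(0)
for _ in range(2000):
    for ag in range(N_AGENTS):
        s = random.randrange(N_STATES)
        a = choose(ag, s)
        # Toy reward: handover (action 1) pays off in "low battery" states (s >= 2),
        # keeping the task (action 0) pays off otherwise.
        r = 1.0 if (s >= 2 and a == 1) or (s < 2 and a == 0) else -0.1
        update(ag, s, a, r, random.randrange(N_STATES))

print(Q[0][3].index(max(Q[0][3])))  # learned action in a low-battery state
```

Under this toy reward, the agents learn to trigger a proactive handover exactly when staying on the current UAV would risk a dropout.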
{"title":"Physical layer security in FAS-aided wireless powered NOMA systems","authors":"Farshad Rostami Ghadi , Masoud Kaveh , Kai-Kit Wong , Diego Martín , Riku Jäntti , Zheng Yan","doi":"10.1016/j.comcom.2025.108274","DOIUrl":"10.1016/j.comcom.2025.108274","url":null,"abstract":"<div><div>The rapid evolution of communication technologies and the emergence of sixth-generation (6G) networks have introduced unprecedented opportunities for ultra-reliable, low-latency, and energy-efficient communication. Integrating technologies like non-orthogonal multiple access (NOMA) and wireless powered communication networks (WPCNs) brings new challenges. These include energy constraints and increased security vulnerabilities. Traditional antenna systems and orthogonal multiple access schemes struggle to meet the increasing demands for performance and security in such environments. To address this gap, this paper investigates the impact of emerging fluid antenna systems (FAS) on the performance of physical layer security (PLS) in WPCNs. Specifically, we consider a scenario in which a transmitter, powered by a power beacon via an energy link, transmits confidential messages to legitimate FAS-aided users over information links while an external eavesdropper attempts to decode the transmitted signals. Additionally, users leverage the NOMA scheme, where the far user may also act as an internal eavesdropper. For the proposed model, we first derive the distributions of the equivalent channels at each node and subsequently obtain compact expressions for the secrecy outage probability (SOP) and average secrecy capacity (ASC), using the Gaussian quadrature methods. 
Our results reveal that incorporating the FAS for NOMA users, instead of traditional antenna systems (TAS), enhances the performance of the proposed secure WPCN.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"242 ","pages":"Article 108274"},"PeriodicalIF":4.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144694386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
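For readers unfamiliar with the secrecy metrics named in the abstract, the standard physical-layer-security definitions of the secrecy capacity, secrecy outage probability (SOP), and average secrecy capacity (ASC) are given below, with $\gamma_B$ and $\gamma_E$ the legitimate and eavesdropper SNRs and $R_s$ the target secrecy rate; the paper's compact Gaussian-quadrature expressions for the FAS-aided channels are not reproduced here.

```latex
C_s = \left[\log_2(1+\gamma_B) - \log_2(1+\gamma_E)\right]^{+}, \qquad
\mathrm{SOP} = \Pr\{C_s < R_s\}, \qquad
\mathrm{ASC} = \mathbb{E}[C_s]
```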
{"title":"Exploring traffic pattern variability in vehicular federated learning","authors":"Giuliano Fittipaldi , Rodrigo S. Couto , Luís H.M.K. Costa","doi":"10.1016/j.comcom.2025.108279","DOIUrl":"10.1016/j.comcom.2025.108279","url":null,"abstract":"<div><div>The emergence of software-defined vehicles has brought machine learning into the vehicular domain. To support these data-driven applications, techniques to incentivize users to share their vehicle data are crucial. Federated learning trains machine learning models in a distributed manner, leveraging client data without compromising its privacy. Nonetheless, in vehicular networks, the dynamic behavior of nodes affects client availability and the global model’s performance. Accordingly, this paper evaluates federated learning (FL) in a realistic vehicular network topology, accounting for real vehicle traffic in two Brazilian urban areas. The network simulation covers <span><math><mrow><mn>3</mn><mo>.</mo><mn>7</mn><mspace></mspace><msup><mrow><mi>km</mi></mrow><mrow><mn>2</mn></mrow></msup></mrow></math></span> with 1290 vehicles per hour and road speeds based on real data. Our paper provides a comprehensive analysis of the impact that different traffic behaviors have on the training phase of a federated learning model. We observe that there is a performance decay in urban areas with longer vehicle permanence. Interestingly, longer vehicle participation in FL training leads to a biased final model with reduced generalization. We propose a novel approach to verify vehicle variability over time, by using the Dice-Sørensen coefficient to compare the set of clients participating in different rounds of training. By maintaining the vehicle variability over the rounds, we can reduce the effect of the bias on the model, and – with a 47% reduction of the communication overhead – achieve faster learning, higher convergence in the first 15 rounds, and an equivalent final accuracy. 
Additionally, we extend our analysis by conducting simulations under more extreme traffic scenarios across multiple datasets, using a MobileNetV3. The results confirm that sustaining high vehicle variability – in scenarios with a brief participation of vehicles in the training – yields comparable model performance while saving up to 83.5 GB in communication costs.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"242 ","pages":"Article 108279"},"PeriodicalIF":4.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144687420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
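The Dice-Sørensen coefficient the abstract uses to compare client sets across training rounds is straightforward to compute; the sketch below uses hypothetical vehicle IDs.

```python
def dice_sorensen(a, b):
    """Dice-Sorensen coefficient between two client sets:
    DSC = 2*|A & B| / (|A| + |B|), in [0, 1]; lower means more variability."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty rounds are trivially identical
    return 2 * len(a & b) / (len(a) + len(b))

# Vehicles selected in two consecutive FL rounds (hypothetical IDs).
round_t = {"v1", "v2", "v3", "v4"}
round_t1 = {"v3", "v4", "v5", "v6"}
overlap = dice_sorensen(round_t, round_t1)
print(overlap)  # 2*2 / (4+4) = 0.5
```

A server could track this value round over round and re-sample clients whenever the overlap drifts too high, which is the variability-maintenance idea the paper exploits.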
{"title":"Maritime monitoring through LoRaWAN: Resilient decentralised mesh networks for enhanced data transmission","authors":"Salah Eddine Elgharbi , Mauricio Iturralde , Yohan Dupuis , Alain Gaugue","doi":"10.1016/j.comcom.2025.108276","DOIUrl":"10.1016/j.comcom.2025.108276","url":null,"abstract":"<div><div>Resilient communication networks from ocean-deployed buoys are crucial for maritime applications. However, wireless data transmission in these environments faces significant challenges due to limited buoy battery capacity, harsh weather conditions, and potential interference from maritime vessels. LoRaWAN technology, known for its low power consumption and long-range communication capabilities, presents a promising solution. Nevertheless, the standard LoRaWAN framework lacks native support for multi-hop routing, which is essential for enhancing network efficiency by relaying data between buoys. This paper introduces two novel multi-hop routing protocols designed for resilient LoRaWAN mesh networks in maritime environments. The first, Opportunistic Smart Routing over a Decentralised LoRaWAN Mesh (OSR-DLM), employs a cross-layer design with a hybrid routing strategy and balanced metric selection. The second, Beacon-Forwarding LoRaWAN with Channel-Aware Path Selection (BF-LoRaCAPS), maintains continuous device awareness using a scheduling mechanism and integrates the OSR-DLM strategy for further optimisation. We evaluate these protocols through extensive simulations that model the detrimental effects of severe weather on data transmission, validated by analysing varied parameter settings in massive Maritime of Things (MoT) scenarios. Key performance metrics, including packet delivery ratio, end-to-end latency, throughput, and traffic intensity for each hop-ratio, are analysed. The results show the superiority of both OSR-DLM and BF-LoRaCAPS over conventional Geographic Routing Protocol (GRP) variants under realistic marine channel conditions. 
Notably, BF-LoRaCAPS exhibits superior network coverage and resilience, outperforming both OSR-DLM and GRP variants, albeit with slightly increased latency.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"241 ","pages":"Article 108276"},"PeriodicalIF":4.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144670419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
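The abstract describes OSR-DLM as using a "balanced metric selection" to pick relays. As a loose illustration of that idea only, the sketch below scores candidate next-hop buoys by a weighted mix of link quality, residual battery, and hop distance; the feature set, weights, and normalization are invented assumptions, not the published OSR-DLM metric.

```python
def next_hop(neighbors, w_rssi=0.5, w_batt=0.3, w_hops=0.2):
    """Pick a relay buoy by a weighted, normalized score over link RSSI,
    residual battery fraction, and hop count to the gateway.
    The metric mix is illustrative, not the published OSR-DLM design."""
    def score(n):
        rssi_norm = (n["rssi"] + 120) / 60  # map [-120, -60] dBm onto [0, 1]
        hops_norm = 1 / (1 + n["hops"])     # fewer hops is better
        return w_rssi * rssi_norm + w_batt * n["battery"] + w_hops * hops_norm
    return max(neighbors, key=score)

buoys = [
    {"id": "B1", "rssi": -70, "battery": 0.9, "hops": 3},
    {"id": "B2", "rssi": -95, "battery": 0.8, "hops": 1},
    {"id": "B3", "rssi": -80, "battery": 0.2, "hops": 2},
]
print(next_hop(buoys)["id"])  # B1: strong link and healthy battery win
```

In a marine setting the RSSI term would fluctuate with weather, which is why a balanced metric (rather than pure signal strength) helps resilience.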
{"title":"A novel fuzzy-logic-based adaptive gate-controlled scheduling algorithm for time-aware shaper in TSN","authors":"Daqian Liu, Zhewei Zhang, Yuntao Shi, Yingying Wang, Zhenwu Lei","doi":"10.1016/j.comcom.2025.108268","DOIUrl":"10.1016/j.comcom.2025.108268","url":null,"abstract":"<div><div>Time-sensitive networking (TSN) is critical for real-time, industrial, and mission-critical applications that require deterministic communication. Scheduling time-triggered flows in TSN’s time-aware shaper (TAS) mechanism constitutes an NP-hard problem, where the inherent trade-off between computational complexity and scheduling optimality persists. Exact algorithms achieve precision via exhaustive search mechanisms at prohibitive costs, while heuristic algorithms sacrifice fidelity to accelerate execution under complex network scenarios. This paper addresses these challenges through a novel rule-based framework that employs a fuzzy logic system to dynamically select algorithms, ensuring adaptation to complex requirements in diverse scenarios. In addition, a dynamic switching algorithm is proposed to intelligently select the most suitable scheduling method based on real-time network conditions and task requirements. Compared with traditional exact algorithms, our approach reduces computation time by over 35% in large-scale networks while meeting time constraints. In small-scale networks, it increases the scheduling success ratio by 20% compared to heuristic methods, particularly when higher accuracy is required. 
The proposed framework establishes an innovative analytical perspective for TAS traffic scheduling challenges by enabling self-adaptive algorithm matching across varying scheduling demands, rather than constraining specific algorithms to predefined operational scenarios.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"241 ","pages":"Article 108268"},"PeriodicalIF":4.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144670416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
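To make the fuzzy selection idea concrete, the sketch below evaluates two Mamdani-style rules over network scale and scheduling deadline to choose between an exact and a heuristic scheduler. The membership breakpoints and the rule base are illustrative assumptions, not the paper's fuzzy system.

```python
def mu_small(n_flows):
    """Triangular membership for a 'small network': fully small below 20
    flows, not small above 100. Breakpoints are illustrative assumptions."""
    if n_flows <= 20:
        return 1.0
    if n_flows >= 100:
        return 0.0
    return (100 - n_flows) / 80

def mu_tight_deadline(ms):
    """Membership for a 'tight scheduling deadline' in milliseconds."""
    if ms <= 50:
        return 1.0
    if ms >= 500:
        return 0.0
    return (500 - ms) / 450

def pick_scheduler(n_flows, deadline_ms):
    """Rule evaluation with min as AND and max as aggregation:
      R1: small network AND loose deadline -> exact algorithm
      R2: large network OR  tight deadline -> heuristic algorithm"""
    small, tight = mu_small(n_flows), mu_tight_deadline(deadline_ms)
    exact = min(small, 1.0 - tight)
    heuristic = max(1.0 - small, tight)
    return "exact" if exact >= heuristic else "heuristic"

print(pick_scheduler(n_flows=30, deadline_ms=400))  # small net, loose deadline
print(pick_scheduler(n_flows=90, deadline_ms=60))   # large net, tight deadline
```

The first call favors accuracy (exact search is affordable), the second favors speed, mirroring the adaptive switching the abstract describes.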
{"title":"Dynamic Split Federated Learning for resource-constrained IoT systems","authors":"Mohamad Wazzeh , Ahmad Hammoud , Azzam Mourad , Hadi Otrok , Chamseddine Talhi , Zbigniew Dziong , Chang-Dong Wang , Mohsen Guizani","doi":"10.1016/j.comcom.2025.108275","DOIUrl":"10.1016/j.comcom.2025.108275","url":null,"abstract":"<div><div>Efficient resource utilization in Internet of Things (IoT) systems is challenging due to device limitations. These limitations restrict on-device machine learning (ML) model training, leading to longer processing times and inefficient metadata analysis. Additionally, conventional centralized data collection poses privacy concerns, as raw data has to leave the device for server-side processing. Combining Federated Learning (FL) and Split Learning (SL) offers a promising solution by enabling effective machine learning on resource-constrained devices while preserving user privacy. However, the dynamic nature of IoT resources and device heterogeneity can complicate the application of these solutions, as some IoT devices cannot complete the training task on time. To address these concerns, we have developed a Dynamic Split Federated Learning (DSFL) architecture that dynamically adjusts to the real-time resource availability of individual clients, combining real-time split-point selection with a Genetic Algorithm (GA) for client selection tailored to heterogeneous, resource-constrained IoT devices. DSFL ensures optimal utilization of resources and efficient training across heterogeneous IoT devices and servers. Our architecture detects each IoT device’s training capabilities by identifying the number of layers it can train. Moreover, an effective GA process strategically selects the clients required to complete the split federated learning round. Cooperatively, the clients and servers train their parts of the model, aggregate them, and then combine the results before moving to the next round. 
Our proposed architecture enables collaborative model training across devices while preserving data privacy by combining FL’s parallelism with SL’s dynamic modeling. We evaluated our architecture on sensory and image-based datasets, showing improved accuracy and reduced overhead compared to baseline methods.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"242 ","pages":"Article 108275"},"PeriodicalIF":4.3,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144842664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
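The split-point idea above (each device trains only as many leading layers as it can afford) can be sketched as a simple prefix-sum check. The per-layer costs, budget units, and greedy rule are illustrative assumptions, not the DSFL selection procedure itself.

```python
def pick_split_point(layer_costs, device_budget):
    """Choose how many leading layers a device trains on-device: the largest
    prefix of per-layer training costs whose sum fits the device's budget.
    Costs and budgets are in arbitrary illustrative units."""
    total, split = 0.0, 0
    for cost in layer_costs:
        if total + cost > device_budget:
            break
        total += cost
        split += 1
    return split  # layers [0, split) train on-device; the rest on the server

model = [4, 4, 8, 16, 32]  # hypothetical per-layer training cost
print(pick_split_point(model, device_budget=20))  # weak device -> 3 layers
print(pick_split_point(model, device_budget=70))  # strong device -> all 5
```

Re-running this check each round with fresh budget measurements is what makes the split "dynamic": a device whose load spikes hands more layers back to the server instead of missing the round.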
{"title":"UAV hovering location optimization for maximizing the throughput of IPv6 packet broadcast in Wireless Powered Sensor Network","authors":"Shuwei Qiu , Haiyan Shi , Mahammad Humayoo , Bin Qiu , Jianzhong Li , Xiaoqing Dong , Yinghui Zhu , Wei Huang","doi":"10.1016/j.comcom.2025.108256","DOIUrl":"10.1016/j.comcom.2025.108256","url":null,"abstract":"<div><div>In a Wireless Powered Sensor Network (WPSN) assisted by Unmanned Aerial Vehicles (UAVs), the UAVs provide wireless charging to the nodes in order to maintain uninterrupted functionality. Effective dissemination of IPv6 packets is essential in WPSN for many Internet of Things (IoT) applications, such as smart agriculture. Improving the efficiency of wireless charging to increase the transmission speed of IPv6 packets in WPSN is a critical problem. In this paper, we propose a Particle Swarm Optimization (PSO)-based approach that optimizes the locations where UAVs hover, enhancing node charging efficiency and ultimately boosting network throughput. We first create a method for broadcasting IPv6 packets in a WPSN that utilizes the combined power supply from several UAVs and employs network coding technologies to improve the reliability of packet broadcasting. In addition, we transform the problem of IPv6 broadcasting into a unicast equivalent, resulting in a derived equation for throughput. Consequently, we establish an optimization problem where the positions of UAVs serve as variables, with network throughput as the objective function to maximize. A PSO-based algorithm is developed to solve this optimization problem. 
The simulation results demonstrate that our method provides a throughput performance enhancement ranging from 10.39% to 70.46% compared to IFA (Improved Firefly Algorithm), Fixed, and Random solutions under various parameter configurations.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"241 ","pages":"Article 108256"},"PeriodicalIF":4.5,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144656297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
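A minimal PSO loop for the hovering-location problem can be sketched as follows. The objective here is a toy inverse-square-distance "charging efficiency" sum standing in for the paper's derived throughput equation; the swarm parameters and node layout are invented for illustration.

```python
import random

def throughput(pos, nodes):
    """Toy objective: sum of inverse-square-distance charging efficiency from
    one UAV hover point to each sensor node (a stand-in for the paper's
    derived throughput equation)."""
    x, y = pos
    return sum(1.0 / (1.0 + (x - nx) ** 2 + (y - ny) ** 2) for nx, ny in nodes)

def pso(nodes, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO over a 2D hover position: inertia w, cognitive pull c1
    toward each particle's best, social pull c2 toward the global best."""
    random.seed(1)
    parts = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(n_particles)]
    vels = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    gbest = max(pbest, key=lambda p: throughput(p, nodes))[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                vels[i][d] = (w * vels[i][d]
                              + c1 * random.random() * (pbest[i][d] - p[d])
                              + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if throughput(p, nodes) > throughput(pbest[i], nodes):
                pbest[i] = p[:]
                if throughput(p, nodes) > throughput(gbest, nodes):
                    gbest = p[:]
    return gbest

sensors = [(2, 2), (2, 4), (4, 2), (4, 4)]  # hypothetical node layout
best = pso(sensors)
print(best)  # a high-efficiency hover point near the sensor cluster
```

The paper's actual formulation optimizes several UAV positions jointly against the unicast-equivalent throughput expression; the single-UAV toy above only shows the swarm mechanics.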
{"title":"Mitigating IoT botnet attacks: An early-stage explainable network-based anomaly detection approach","authors":"Abdelaziz Amara Korba , Alaeddine Diaf , Mouhamed Amine Bouchiha , Yacine Ghamri-Doudane","doi":"10.1016/j.comcom.2025.108270","DOIUrl":"10.1016/j.comcom.2025.108270","url":null,"abstract":"<div><div>As the Internet of Things (IoT) continues to expand, botnet-driven threats pose a growing and severe risk to the security of IoT-enabled infrastructures. These threats exploit large numbers of compromised devices to establish covert control channels and, eventually, launch large-scale cyberattacks such as Distributed Denial of Service (DDoS), capable of severely disrupting critical services and causing substantial economic damage. This paper highlights the urgent need for detecting botnets at an early stage, particularly by identifying stealthy command and control (C&C) traffic that precedes the execution of such attacks. We propose an anomaly-based detection framework that combines semi-supervised learning with explainable Artificial Intelligence (XAI). Unlike most existing approaches, our method requires only benign traffic for training, thereby enabling the detection of previously unseen or evolving botnet threats without relying on labeled malicious data. The framework supports multiple traffic representations, including raw bytes, packet-level data, and unidirectional or bidirectional flows, enriched with diverse network features to enhance detection coverage and adaptability. Experimental evaluations using the IoT-23 dataset demonstrate a 99.51% detection rate and a 1.09% false positive rate for stealthy C&C communications, underscoring the method’s effectiveness and robustness. 
The integration of XAI enhances transparency and interpretability, enabling security professionals to better understand model decisions and refine detection strategies.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"241 ","pages":"Article 108270"},"PeriodicalIF":4.5,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144656401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
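The key property of the detection framework above is that it trains on benign traffic only. A minimal caricature of that semi-supervised setting is a profile-and-threshold detector: fit feature statistics on benign flows, then flag anything too far from the profile. The features, threshold, and z-score rule below are invented stand-ins for the authors' model.

```python
import math

class BenignOnlyDetector:
    """Semi-supervised anomaly detection in the spirit of the paper's setting:
    fit on benign flow features only, then flag flows whose per-feature
    z-score exceeds a threshold. Features (packets/s, bytes/s) are made up."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, benign):
        n, dims = len(benign), len(benign[0])
        self.mean = [sum(x[d] for x in benign) / n for d in range(dims)]
        self.std = [
            max(1e-9, math.sqrt(sum((x[d] - self.mean[d]) ** 2 for x in benign) / n))
            for d in range(dims)
        ]
        return self

    def is_anomalous(self, x):
        z = max(abs(x[d] - self.mean[d]) / self.std[d] for d in range(len(x)))
        return z > self.threshold

# Training uses benign traffic only: no labeled C&C samples required.
benign_flows = [(10 + i % 5, 1000 + 40 * (i % 7)) for i in range(100)]
det = BenignOnlyDetector().fit(benign_flows)
print(det.is_anomalous((12, 1100)))    # benign-looking flow
print(det.is_anomalous((400, 90000)))  # burst resembling C&C beaconing
```

Because nothing malicious is needed at training time, the same scheme generalizes to previously unseen botnet behavior, which is the property the abstract emphasizes.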
{"title":"Dynamic data partitioning strategy for distributed learning on heterogeneous edge system","authors":"Kun Yu, Weiwen Zhang","doi":"10.1016/j.comcom.2025.108262","DOIUrl":"10.1016/j.comcom.2025.108262","url":null,"abstract":"<div><div>Distributed machine learning on edge systems has attracted attention due to the development of artificial intelligence and edge computing. One challenge is the straggler problem in synchronous updates, where edge nodes that complete training first have to wait for the nodes that finish later. This results in long waiting times and degrades the performance of distributed learning. In this paper, we investigate dynamic data partitioning for load balancing among heterogeneous edge nodes. We propose experience-driven algorithms based on actor–critic deep reinforcement learning to optimize model training in distributed edge systems. These algorithms learn the network environment and the computing capabilities of edge nodes, and thus strategically allocate training data to edge nodes. We conduct experiments on two commonly used datasets, i.e., MNIST and CIFAR-10, to evaluate the performance of the proposed method. The results show that the proposed dynamic data partitioning strategy (DDPS) can significantly reduce training latency compared to the random, even, greedy, and A2C partition strategies.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"241 ","pages":"Article 108262"},"PeriodicalIF":4.5,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144656400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
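The load-balancing intuition behind the abstract above (give fast nodes more data so everyone finishes a round together) can be shown with a simple static baseline: partition samples proportionally to measured throughput. The RL-driven strategy in the paper adapts this allocation online; the throughput figures below are hypothetical.

```python
def partition_sizes(total_samples, throughputs):
    """Split a training set across edge nodes proportionally to measured
    samples-per-second throughput, so all nodes finish a synchronous round
    at roughly the same time. A static baseline, not the paper's RL policy."""
    total_tp = sum(throughputs)
    sizes = [int(total_samples * tp / total_tp) for tp in throughputs]
    sizes[-1] += total_samples - sum(sizes)  # hand rounding remainder to last node
    return sizes

# Hypothetical nodes: a fast server-class box, a mid device, a slow one.
sizes = partition_sizes(60000, throughputs=[300.0, 150.0, 50.0])
print(sizes)  # -> [36000, 18000, 6000]
```

With an even split each node would get 20000 samples and the 50-samples/s node would straggle for minutes; the proportional split equalizes per-round wall-clock time, which is exactly the waiting time the straggler problem wastes.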
{"title":"Efficient blockchain synchronization mechanism over NDN based on directed Interest forwarding","authors":"Dehao Zhang, Jiapeng Xiu, Zhengqiu Yang, Huixin Liu, Shaoyong Guo","doi":"10.1016/j.comcom.2025.108258","DOIUrl":"10.1016/j.comcom.2025.108258","url":null,"abstract":"<div><div>Blockchain technology, as a decentralized technology, has been applied across various industries due to its immutability and information security features. With the increasing adoption of blockchain technology, network scale and transaction volumes have increased rapidly. The growing data transmission demands have exposed network performance issues in blockchain systems, creating a bottleneck for further improvements. While Named Data Networking (NDN) offers strong support for blockchain networks, some existing designs lack efficient synchronization methods, resulting in redundancies and limiting the full potential of NDN in blockchain networks. To address this issue, this paper proposes a directed Interest forwarding-based synchronization mechanism for NDN-based blockchain networks. In this mechanism, we design a Block Synchronous Forward Table (BSFT) to record the synchronization status of upstream and downstream nodes. Through the structure of this table, nodes can obtain information about other nodes in the network via six specifically designed NDN Interests. During synchronization, nodes dynamically select the appropriate peers to send data request Interest based on the actual network state and synchronization status, thereby reducing the large number of redundant Interest packets and corresponding response Data packets caused by Interest broadcasts. Experimental results demonstrate that our proposed synchronization mechanism can effectively reduce network traffic, lowering traffic by about 30% or more compared to traditional IP-based blockchain and other NDN-based blockchain solutions. 
This also accelerates the synchronization of Data packets across the entire network, thereby enhancing the overall performance of blockchain networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"242 ","pages":"Article 108258"},"PeriodicalIF":4.5,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
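The Block Synchronous Forward Table described above can be caricatured as a per-peer record of synchronization state used to direct, rather than broadcast, the next data-request Interest. The field names, tie-break rule, and API below are illustrative assumptions, not the paper's BSFT design.

```python
class BlockSyncForwardTable:
    """Toy BSFT: track each peer's reported chain height and direct the next
    data-request Interest to the most advanced, least-loaded peer instead of
    broadcasting. Fields and tie-breaks are illustrative assumptions."""

    def __init__(self):
        self.peers = {}  # NDN name -> {"height": int, "pending": int}

    def update(self, peer, height):
        """Record a peer's synchronization status from a received Interest."""
        entry = self.peers.setdefault(peer, {"height": 0, "pending": 0})
        entry["height"] = max(entry["height"], height)

    def pick_peer(self, local_height):
        """Choose a peer ahead of us: greatest height first, then fewest
        outstanding Interests; None means we are already synchronized."""
        ahead = {n: e for n, e in self.peers.items() if e["height"] > local_height}
        if not ahead:
            return None
        name = min(ahead, key=lambda n: (-ahead[n]["height"], ahead[n]["pending"]))
        ahead[name]["pending"] += 1
        return name

bsft = BlockSyncForwardTable()
bsft.update("/ndn/nodeA", 120)
bsft.update("/ndn/nodeB", 118)
bsft.update("/ndn/nodeC", 120)
print(bsft.pick_peer(local_height=115))  # /ndn/nodeA (height 120, no pending)
print(bsft.pick_peer(local_height=115))  # /ndn/nodeC (A now has one pending)
```

Directing each request at a single well-chosen upstream peer is what removes the redundant Interest and Data packets that broadcast-based synchronization generates.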