{"title":"Enhancing fog load balancing through lifelong transfer learning of reinforcement learning agents","authors":"Maad Ebrahim , Abdelhakim Hafid , Mohamed Riduan Abid","doi":"10.1016/j.comcom.2024.108024","DOIUrl":"10.1016/j.comcom.2024.108024","url":null,"abstract":"<div><div>Fog computing is a promising paradigm for processing Internet of Things (IoT) data. Load balancing (LB) optimizes Fog performance through efficient resource allocation, improving resource utilization, latency for real-time IoT applications, and users’ quality of service. In this work, we enhance the learning process of privacy-aware Reinforcement Learning (PARL), which requires significant training to minimize waiting delays by reducing the number of queued requests without explicitly observing Fog load or resource capabilities. To achieve this, we explore different Transfer Learning (TL) techniques for efficient adaptation to variations in demand, triggering a fine-tuning process when abrupt surges in generation rates are detected. This exploration highlights the advantages and disadvantages of reusing previously learned policies (knowledge) and interactions (experience) over multiple learning epochs of increasing difficulty. Our results show that Full TL (using knowledge and experience) enhances the learning and generalization of the PARL agent, allowing it to consistently converge to the optimal solution with 80% less training compared to training without TL. Additionally, we propose a lifelong learning framework for practical agent deployment in frequently changing environments. Introducing TL in this framework significantly reduces the computationally expensive training phase compared to training from scratch. Instead of continuous adaptation through ongoing training, balancer resources are preserved to provide faster decisions via a lightweight inference model. In case of significant system changes, the model is swiftly fine-tuned using TL.
Furthermore, the framework leverages existing (expert) or simulation-trained agents to initialize newly deployed agents in the network, reducing failure probability in new environments compared to learning from scratch. To our knowledge, no existing efforts in the literature use TL to address lifelong learning for practical RL-based Fog LB. This gap highlights the need for a practical yet efficient solution that minimizes the cost of continuous adaptation to changing conditions.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108024"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
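The transfer-and-fine-tune workflow this abstract describes can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's actual PARL implementation: `QAgent`, `full_transfer`, and the `surge_detected` heuristic (and its window/factor parameters) are all hypothetical names chosen for the example.

```python
import random

class QAgent:
    """Tabular Q-learning balancer: state = discretized queue length, action = target Fog node."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.lr, self.gamma = lr, gamma

    def act(self, state, eps=0.1):
        # epsilon-greedy action selection
        if random.random() < eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, r, s2):
        # standard one-step Q-learning update
        best_next = max(self.q[s2])
        self.q[s][a] += self.lr * (r + self.gamma * best_next - self.q[s][a])

def full_transfer(expert, student):
    """'Full TL' in spirit: copy the expert's learned policy (knowledge) into the
    new agent; transferring replayed interactions (experience) is omitted here."""
    student.q = [row[:] for row in expert.q]

def surge_detected(rates, window=5, factor=2.0):
    """Trigger fine-tuning when the latest request rate jumps well above the recent mean."""
    if len(rates) < window + 1:
        return False
    recent = rates[-window - 1:-1]
    return rates[-1] > factor * (sum(recent) / window)
```

In the lifelong-learning framework sketched by the abstract, the agent would serve decisions via `act` (cheap inference), and only re-enter training via `update` after `surge_detected` fires.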
{"title":"SMKA: Secure multi-key aggregation with verifiable search for IoMT","authors":"Xueli Nie , Aiqing Zhang , Yong Wang , Weiqi Wang","doi":"10.1016/j.comcom.2024.108012","DOIUrl":"10.1016/j.comcom.2024.108012","url":null,"abstract":"<div><div>The Internet of Medical Things (IoMT) aggregates numerous smart medical devices and fully employs the collected health data to enhance patients’ experiences. In IoMT, patients generate various encryption keys and receive keyword information sent by data requesters to securely share selected health data, which increases the potential risk of key leakage. Moreover, the cloud server may tamper with search results. Existing schemes do not consider that patients may inadvertently disclose the keyword information of data requesters. Additionally, these schemes entail a significant cost for verifying the search results. To deal with these challenges, we innovatively propose a secure multi-key aggregation (SMKA) scheme with verifiable search for IoMT. Firstly, the SMKA scheme is built upon key-aggregate searchable encryption, utilizing an oblivious search request and blockchain technology to achieve secure key aggregation. Secondly, a dual verifiable algorithm is integrated into the scheme to provide lightweight verification for the search results. The proposed scheme can achieve access control, requester privacy, accountability, and dual verification while ensuring secure search. Furthermore, the security analysis and proof have shown the effectiveness of the proposed protocol in achieving the intended security goals.
Finally, the performance analysis indicates the significant feasibility and scalability of the proposed scheme.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108012"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed cooperative task allocation for heterogeneous UAV swarms under complex constraints","authors":"Wei Yue, Xiaoyong Zhang, Zhongchang Liu","doi":"10.1016/j.comcom.2024.108043","DOIUrl":"10.1016/j.comcom.2024.108043","url":null,"abstract":"<div><div>This paper investigates the dynamic task allocation problem for a heterogeneous UAV swarm conducting reconnaissance and strike (RAS) tasks while considering constraints on critical task time, communication range, and task resource requirements. The main challenge is to conduct reconnaissance and strikes on all unknown targets within the mission area, which involves managing the UAVs' changing states, task information, and variable communication with neighboring nodes. It is also important to overcome the limitations of current consensus-based heuristic task allocation approaches, which often lead to sub-optimal solutions due to being trapped in a local optimum within a distributed computing framework. To solve these problems, a novel heterogeneous UAV swarm task allocation model is developed first to maximize task benefits and minimize path planning costs. Second, we propose a two-phase consensus-based group bundling algorithm (CBGBA), which enables UAVs to reach consensus on task allocation results in a dynamic environment. In the task inclusion phase, we create feasible time slots for newly added tasks by optimizing task delay and sequence revenue, thus preventing the occurrence of local optima problems under the critical task time constraint. In the consensus procedure phase, we employ a block-information-sharing (BIS) strategy to establish local networks, resolving consensus conflicts due to communication range constraints. Additionally, we propose an improved consensus principle that facilitates dynamic task allocation among distributed heterogeneous UAVs, meeting task resource requirements. Finally, the simulation results demonstrate the effectiveness and superiority of our proposed algorithm.
Furthermore, CBGBA exhibits a performance enhancement of up to 14.2 % compared to the consensus-based synergy algorithm (CBSA).</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108043"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
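The bundle-building idea behind consensus-based allocation schemes like CBGBA can be illustrated with a much-simplified, single-round greedy auction. This is a sketch of the general technique only, not the paper's two-phase algorithm: `greedy_allocate` and the example `score` function are hypothetical, and the BIS consensus machinery is omitted entirely.

```python
def greedy_allocate(uavs, tasks, score):
    """Auction-style greedy allocation: repeatedly award the (uav, task) pair
    with the highest bid until every task is assigned or no positive bid remains.
    score(u, t, bundle) returns u's marginal benefit for t given its current bundle."""
    assignment = {}                      # task -> uav
    bundles = {u: [] for u in uavs}      # uav  -> list of won tasks
    unassigned = set(tasks)
    while unassigned:
        best = max(((score(u, t, bundles[u]), u, t)
                    for u in uavs for t in unassigned), default=None)
        if best is None or best[0] <= 0:
            break                        # no UAV can profitably take another task
        _, u, t = best
        assignment[t] = u
        bundles[u].append(t)
        unassigned.discard(t)
    return assignment
```

In a distributed setting, each UAV would compute bids locally and the consensus phase would resolve conflicting winner lists; here a central loop plays that role for clarity.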
{"title":"Aegis: A cloud-edge computing based multi-disaster crowd evacuation model using improved deep reinforcement learning","authors":"Jinbo Zhao , Xiaolong Xu , Fu Xiao","doi":"10.1016/j.comcom.2024.108036","DOIUrl":"10.1016/j.comcom.2024.108036","url":null,"abstract":"<div><div>Crowd evacuation is an important measure for urban disaster management, which can provide effective evacuation guidelines for victims and safeguard their lives. However, most existing methods are designed for single-disaster scenarios, ignoring the fact that disasters often erupt in multiple locations simultaneously. Thus, a multi-disaster crowd evacuation model, Aegis, is proposed based on cloud-edge computing and improved deep reinforcement learning. Firstly, the multi-disaster crowd evacuation problem is modeled as a multi-objective optimization problem, which considers shelter load balancing and dangerous area crossing issues. Secondly, an improved deep reinforcement learning model is proposed in this paper to solve it. The model utilizes an attention mechanism, a Gated Recurrent Unit (GRU), and a Graph Attention Network (GAT) to obtain the embedding of raw data. Then, the model maps the embedded information to the evacuation plan by an attention-based decoder. The model parameters are optimized using a Policy Gradient method. Thirdly, a cloud-edge computing framework is also introduced for Aegis, featuring a three-tier architecture that includes cloud, edge, and terminal levels. This design allows for the seamless integration of the model into smart city management. The experimental results show that Aegis outperforms other baseline methods, especially in reducing evacuation costs and optimizing shelter loads.
In experiments with four different scales, Aegis reduces the evacuation costs by 58.87 %, 64.56 %, 65.59 %, and 67.79 %.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108036"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
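The two objectives the abstract names, travel cost and shelter load balancing, can be combined in a simple baseline assignment heuristic. This is a hedged sketch of the optimization problem's shape, not Aegis itself (which learns the mapping with a neural decoder); `assign_shelters` and the `alpha` weighting are illustrative assumptions.

```python
def assign_shelters(groups, shelters, dist, alpha=1.0):
    """Assign each evacuee group (id -> size) to the shelter minimizing
    travel distance + alpha * projected load ratio (the load-balancing term).
    shelters maps shelter id -> capacity; dist[g][s] is the travel cost."""
    load = {s: 0 for s in shelters}
    plan = {}
    # place the largest groups first, since they constrain capacity the most
    for g, size in sorted(groups.items(), key=lambda kv: -kv[1]):
        def cost(s):
            return dist[g][s] + alpha * (load[s] + size) / shelters[s]
        s_best = min((s for s in shelters if load[s] + size <= shelters[s]),
                     key=cost, default=None)
        if s_best is None:   # every shelter full: fall back to the least-loaded one
            s_best = min(shelters, key=lambda s: load[s] / shelters[s])
        plan[g] = s_best
        load[s_best] += size
    return plan, load
```

A learned policy such as Aegis's would replace the `cost`-minimizing choice with a decoder over embedded state, but the objective being traded off is the same.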
{"title":"GDD-Geo: IPv6 geolocation by graph dual decomposition","authors":"Chong Liu, Ruosi Cheng, Fuxiang Yuan, Shichang Ding, Yan Liu, Xiangyang Luo","doi":"10.1016/j.comcom.2024.108019","DOIUrl":"10.1016/j.comcom.2024.108019","url":null,"abstract":"<div><div>IP geolocation is a technique used to infer the location of an IP address through its network measurement features. It is widely used in network security, network management, and location-based services. To improve geolocation accuracy in IPv6 networks, especially when landmarks are sparse, we propose an IPv6 geolocation method based on graph dual decomposition called GDD-Geo. GDD-Geo models an IPv6 address by its network measurement attributes, including paths, delay values, and addresses. The geolocation process involves comparing the similarity of these attributes. GDD-Geo comprises two sub-algorithms, GDD-CGeo and GDD-SGeo, which provide city-level and street-level (oriented) geolocation results, respectively. Particularly, we design two graph decomposition algorithms to transform the paths represented by router interfaces into the paths represented by subgraphs based on the characteristics of IPv6 address distribution and delay distribution. The former decomposition supports GDD-CGeo, while the latter is conducted on the results of the former and supports GDD-SGeo. Due to the aggregation and reconstruction effects of paths derived from graph decomposition, GDD-Geo can reduce the dependence on landmarks and thus can cope with landmark-sparse scenarios. Experimental results of city-level geolocation show that GDD-CGeo can accurately geolocate the IPv6 targets at the city level. Street-level (oriented) geolocation results in six cities within different countries show that the median errors of GDD-SGeo are 1.66–5.27 km, and the mean errors are 2.55–5.88 km.
Compared with popular algorithms SLG and MLP-Geo, GDD-SGeo performs significantly better on sparse landmark datasets, with at least a 60% decrease in errors.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108019"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
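At its core, measurement-based geolocation of the kind GDD-Geo builds on compares a target's measurement features against those of known landmarks. The sketch below shows only that generic nearest-landmark step over delay vectors; `geolocate` and its similarity function are hypothetical simplifications, and GDD-Geo's path/subgraph decomposition is not modeled.

```python
import math

def geolocate(target_delays, landmarks):
    """Pick the landmark whose delay vector (ms, keyed by vantage point) is most
    similar to the target's, and return that landmark's known location."""
    def similarity(a, b):
        # negative Euclidean distance over the vantage points both were probed from
        shared = set(a) & set(b)
        if not shared:
            return float('-inf')
        return -math.sqrt(sum((a[v] - b[v]) ** 2 for v in shared))
    best = max(landmarks, key=lambda lm: similarity(target_delays, lm['delays']))
    return best['location']
```

Methods like GDD-Geo improve on this baseline precisely where it is weakest: when few landmarks exist, raw delay similarity degrades, and aggregating paths into subgraphs recovers discriminative structure.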
{"title":"Combining transformer with a latent variable model for radio tomography based robust device-free localization","authors":"Hongzhuang Wu , Cheng Cheng , Tao Peng , Hongzhi Zhou , Tao Chen","doi":"10.1016/j.comcom.2024.108022","DOIUrl":"10.1016/j.comcom.2024.108022","url":null,"abstract":"<div><div>Radio tomographic imaging (RTI) is a promising device-free localization (DFL) method for reconstructing the signal attenuation caused by physical objects in wireless networks. In this paper, we use the received signal strength (RSS) difference between the current and baseline measurements captured by a wireless network to achieve the RTI based DFL in a predefined monitoring area. RTI is formulated as an ill-conditioned inverse problem under complex noise. An end-to-end deep learning method based on Transformers and latent variable models (LVMs) is then adopted to address the RTI problem. The data grouping strategy is designed to divide the RSS data into multiple spatially-correlated groups, and a Transformer-based convolutional neural network (TCNN) model is first developed for RTI, in which the Transformer blocks are able to help the model learn more expressive features for the environmental image reconstruction task. The RTI system is influenced by both sensor noise and environmental noise simultaneously. In order to improve the performance of the RTI method, a Transformer-based latent variable model (TLVM) is proposed further, where the robustness to interference can be enhanced by controlling the capacity of the latent variables.
The comparative numerical experiments are conducted for RTI based DFL, and the efficacy of the proposed TCNN and TLVM based RTI methods is verified by the experimental results.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108022"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient sharding consensus protocol for improving blockchain scalability","authors":"Li Lu , Linfu Sun , Yisheng Zou","doi":"10.1016/j.comcom.2024.108032","DOIUrl":"10.1016/j.comcom.2024.108032","url":null,"abstract":"<div><div>A consortium blockchain facilitates the establishment of credit among supply and demand agents on a cloud platform. HotStuff, a Byzantine fault-tolerant consensus protocol, predominates in consortium blockchains and has undergone extensive research and practical applications. However, its scalability remains limited as the number of nodes increases, making it unsuitable for large-scale transactions. Consequently, an improved sharding consensus protocol (IShard) is proposed to consider decentralization, security, and scalability within the consortium blockchain. First, IShard employs the jump consistent hash algorithm for reasonable node allocation within the network, thus reducing data migration resulting from shard modifications. Second, a credit mechanism is devised to reflect credit based on the behavior of nodes, optimizing consensus nodes to enhance performance. Third, a credit-based consensus protocol is introduced to concurrently handle transactions through sharding among multiple shards, distributing transactions to each shard to alleviate the overall burden, thus enhancing the scalability of the blockchain. Fourth, a node removal mechanism is devised to identify and eliminate Byzantine nodes, minimizing view changes and ensuring efficient system operation in an environment susceptible to Byzantine faults. Finally, IShard has demonstrated its ability to ensure security and liveness in shard transactions, subject to particular constraints regarding Byzantine nodes. In addition, transaction processes involving supply and demand agents are designed to enhance data reliability.
Experimental results demonstrate that IShard surpasses current leading protocols, achieving a communication complexity of O(<em>n</em>) and superior throughput and scalability.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108032"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
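The jump consistent hash the abstract names (Lamping and Veach's algorithm) is compact enough to show in full, and it illustrates why IShard uses it: when the shard count grows from n to n+1, only about 1/(n+1) of the keys change shard, so little data migrates. The Python port below follows the published algorithm; only the function name is my own.

```python
def jump_consistent_hash(key, num_buckets):
    """Jump consistent hash: maps a 64-bit integer key to a bucket in
    [0, num_buckets). Minimal key movement when num_buckets changes."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        # 64-bit linear congruential step (masked to emulate C's uint64 overflow)
        key = (key * 2862933555777941757 + 1) % (1 << 64)
        # jump forward to the next bucket index at which this key would move
        j = int(float(b + 1) * (float(1 << 31) / float((key >> 33) + 1)))
    return b
```

For node-to-shard allocation, one would hash a node identifier to a 64-bit key first and pass the current shard count as `num_buckets`.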
{"title":"Novel low-latency data gathering scheduling for multi-radio wireless multi-hop networks","authors":"Chi Zhang , Honglie Li , Mi Yan , Peng Guo , Yiyi Zhang , Zhe Tian","doi":"10.1016/j.comcom.2024.108020","DOIUrl":"10.1016/j.comcom.2024.108020","url":null,"abstract":"<div><div>Due to the complicated terrain, wireless multi-hop networks (WMNs) are often needed in large industrial environments. As the distribution of WMNs’ nodes in practical industrial environments is usually quite uneven, WMNs are prone to suffer from serious congestion during the data gathering. Employing multiple radios can help mitigate the congestion, owing to the flexible capability to concurrently schedule each node’s transmission and reception. Most existing works for multi-radio WMNs study the scheduling of gathering traffic flow at each node, while little work studies the scheduling of gathering a fixed amount of data at the nodes. The latter is quite typical in practice, yet quite complicated, as the transmission load of each node in this scenario is dynamic and related to the descendant nodes’ scheduling. In this paper, we propose a novel low-latency scheduling for gathering a fixed amount of data spread in multi-radio WMNs. The proposed scheduling dynamically assigns each node’s radios for transmission, reception and their targets, according to the current amount of local data and the assignment of neighboring nodes’ radios.
Extensive simulations are conducted and the results show the remarkable performance of the proposed scheduling, when compared to related scheduling methods.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108020"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deadline-constrained routing based on power-law and exponentially distributed contacts in DTNs","authors":"Tuan Le","doi":"10.1016/j.comcom.2024.108038","DOIUrl":"10.1016/j.comcom.2024.108038","url":null,"abstract":"<div><div>During a large-scale disaster, there is a severe destruction to physical infrastructures such as telecommunication and power lines, which result in the disruption of communication, making timely emergency response challenging. Since Delay Tolerant Networks (DTNs) are infrastructure-less, they tolerate physical destruction and thus can serve as an emergency response network during a disaster scenario. To be effective, DTNs need a routing protocol that maximizes the number of messages delivered within deadline. One obvious approach is to broadcast messages everywhere. However, this approach is impractical as DTNs are resource-constrained. In this work, we propose a cost-effective routing protocol based on the expected delivery delay that optimizes the number of messages delivered within deadline with a significantly low network overhead. Simulations using real-life mobility traces show that with our scheme, up to 95% of messages are delivered within deadline, while requiring on average less than three message copies.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108038"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
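The routing rule this DTN abstract describes, forwarding based on expected delivery delay under a deadline, has a simple closed form when inter-contact times are exponentially distributed: the expected wait until the next contact with rate λ is 1/λ. The sketch below is a hedged illustration of that decision rule, not the paper's full protocol; `should_forward` and its arguments are hypothetical names.

```python
def expected_delay(rate):
    """Exponential inter-contact times with rate lambda -> expected waiting time 1/lambda."""
    return float('inf') if rate <= 0 else 1.0 / rate

def should_forward(carrier_rate, relay_rate, deadline_left):
    """Hand a message copy to an encountered relay only if the relay's expected
    delivery delay both beats the current carrier's and fits the remaining deadline.
    Rates are contact rates with the destination (contacts per unit time)."""
    return (expected_delay(relay_rate) < expected_delay(carrier_rate)
            and expected_delay(relay_rate) <= deadline_left)
```

Gating replication this way is what keeps the copy count low: a copy is spent only when it strictly improves the chance of on-time delivery.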
{"title":"Attribute-based policies through microservices in a smart home scenario","authors":"Alessandra Rizzardi, Sabrina Sicari, Alberto Coen-Porisini","doi":"10.1016/j.comcom.2024.108039","DOIUrl":"10.1016/j.comcom.2024.108039","url":null,"abstract":"<div><div>Application containerization allows for efficient resource utilization and improved performance when compared to traditional virtualization techniques. However, managing multiple containers and providing services such as load balancing, fault tolerance and security represent challenging tasks in the emerging microservices architectures. In this context, the Kubernetes platform allows building resilient distributed containers. Besides its efficiency in terms of configuration and architectural resiliency, it must also guarantee access control to the managed resources. In fact, information must be protected throughout the different microservices which compose an application. To cope with this issue, this paper proposes the definition of attribute-based policies able to regulate data disclosure within a Kubernetes-based microservices network. Simulations are carried out in a local Minikube environment, considering a smart residence scenario. The investigated metrics include response time, required memory, CPU load, and disk usage.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"231 ","pages":"Article 108039"},"PeriodicalIF":4.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143157034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
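The attribute-based policy evaluation this abstract proposes can be sketched as a predicate over request attributes. This is a generic illustration of the technique, not the paper's policy language or its Kubernetes integration; `evaluate`, the policy shape, and the `thermostat_read` example (smart-home flavored, to match the scenario) are all hypothetical.

```python
def evaluate(policy, attributes):
    """Attribute-based check: every attribute the policy conditions on must be
    present in the request and take one of the allowed values."""
    for attr, allowed in policy["conditions"].items():
        if attributes.get(attr) not in allowed:
            return "deny"          # missing or disallowed attribute -> deny
    return policy.get("effect", "permit")

# Illustrative policy: only residents/admins of the climate-control
# microservice may read thermostat data.
thermostat_read = {
    "effect": "permit",
    "conditions": {
        "role": {"resident", "admin"},
        "service": {"climate-control"},
        "action": {"read"},
    },
}
```

In a microservices deployment, such a check would typically sit in a sidecar or admission layer so that each service enforces disclosure rules before returning data.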