{"title":"A dual-tier adaptive one-class classification IDS for emerging cyberthreats","authors":"Md. Ashraf Uddin , Sunil Aryal , Mohamed Reda Bouadjenek , Muna Al-Hawawreh , Md. Alamin Talukder","doi":"10.1016/j.comcom.2024.108006","DOIUrl":"10.1016/j.comcom.2024.108006","url":null,"abstract":"<div><div>In today’s digital age, our dependence on IoT (Internet of Things) and IIoT (Industrial IoT) systems has grown immensely, which facilitates sensitive activities such as banking transactions and personal, enterprise data, and legal document exchanges. Cyberattackers consistently exploit weak security measures and tools. The Network Intrusion Detection System (IDS) acts as a primary tool against such cyber threats. However, machine learning-based IDSs, when trained on specific attack patterns, often misclassify new emerging cyberattacks. Further, the limited availability of attack instances for training a supervised learner and the ever-evolving nature of cyber threats further complicate the matter. This emphasizes the need for an adaptable IDS framework capable of recognizing and learning from unfamiliar/unseen attacks over time. In this research, we propose a one-class classification-driven IDS system structured on two tiers. The first tier distinguishes between normal activities and attacks/threats, while the second tier determines if the detected attack is known or unknown. Within this second tier, we also embed a multi-classification mechanism coupled with a clustering algorithm. This model not only identifies unseen attacks but also uses them for retraining them by clustering unseen attacks. This enables our model to be future-proofed, capable of evolving with emerging threat patterns. Leveraging one-class classifiers (OCC) at the first level, our approach bypasses the need for attack samples, addressing data imbalance and zero-day attack concerns and OCC at the second level can effectively separate unknown attacks from the known attacks. 
Our methodology and evaluations indicate that the presented framework exhibits promising potential for real-world deployments.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 108006"},"PeriodicalIF":4.5,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
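The two-tier idea in this abstract can be sketched with a toy z-score-based one-class detector. This is an illustrative stand-in under our own assumptions, not the authors' OCC models: tier 1 is trained only on normal traffic, tier 2 only on known attacks, so anything rejected by both is an unknown attack to be clustered for retraining.

```python
import math

class ZScoreOCC:
    """Toy one-class classifier: learns per-feature mean/std from a single
    class and flags a sample as an outlier if any feature's z-score exceeds
    the threshold. A stand-in for the paper's unspecified OCC models."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.means = self.stds = None

    def fit(self, samples):
        n, dims = len(samples), len(samples[0])
        self.means = [sum(s[d] for s in samples) / n for d in range(dims)]
        self.stds = [
            max(math.sqrt(sum((s[d] - self.means[d]) ** 2 for s in samples) / n), 1e-9)
            for d in range(dims)
        ]
        return self

    def is_inlier(self, x):
        return all(abs(x[d] - self.means[d]) / self.stds[d] <= self.threshold
                   for d in range(len(x)))

# Tier 1: trained on normal traffic only -- no attack samples needed.
normal = [(0.1, 0.2), (0.12, 0.18), (0.09, 0.22), (0.11, 0.19)]
tier1 = ZScoreOCC().fit(normal)

# Tier 2: trained on known attacks -- outliers here are *unknown* attacks,
# which the framework would cluster and feed back for retraining.
known_attacks = [(5.0, 5.1), (5.2, 4.9), (4.8, 5.0), (5.1, 5.2)]
tier2 = ZScoreOCC().fit(known_attacks)

def classify(x):
    if tier1.is_inlier(x):
        return "normal"
    return "known attack" if tier2.is_inlier(x) else "unknown attack"
```

The feature values and thresholds above are made up for illustration; a real deployment would fit the tiers on network flow features.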
{"title":"A deep dive into cybersecurity solutions for AI-driven IoT-enabled smart cities in advanced communication networks","authors":"Jehad Ali , Sushil Kumar Singh , Weiwei Jiang , Abdulmajeed M. Alenezi , Muhammad Islam , Yousef Ibrahim Daradkeh , Asif Mehmood","doi":"10.1016/j.comcom.2024.108000","DOIUrl":"10.1016/j.comcom.2024.108000","url":null,"abstract":"<div><div>The integration of the Internet of Things (IoT) and artificial intelligence (AI) in urban infrastructure, powered by advanced information communication technologies (ICT), has paved the way for smart cities. While these technologies promise enhanced quality of life, economic growth, and improved public services, they also introduce significant cybersecurity challenges. This article comprehensively examines the complex factors in securing AI-driven IoT-enabled smart cities within the framework of future communication networks. Our research addresses critical questions about the evolving threat, multi-layered security approaches, the role of AI in enhancing cybersecurity, and necessary policy frameworks. We conduct an in-depth analysis of cybersecurity solutions across service, application, network, and physical layers, evaluating their effectiveness and integration potential with existing systems. The study offers a detailed examination of AI-driven security approaches, particularly ML and DL techniques, assessing their applicability and limitations in smart city environments. We incorporate real-world case studies to illustrate successful strategies and show areas requiring further research, especially considering emerging communication technologies. Our findings contribute to the field by providing a multi-layered classification of cybersecurity solutions, assessing AI-driven security approaches, and exploring future research directions. Additionally, we investigate the essential role played by policy and regulatory frameworks in safeguarding smart city security. 
Based on our analysis, we offer recommendations for technical implementations and policy development, aiming to create a holistic approach that balances technological advancements with robust security measures. This study also provides valuable insights for scholars, professionals, and policymakers, offering a comprehensive perspective on the cybersecurity challenges and solutions for AI-driven IoT-enabled smart cities in advanced communication networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 108000"},"PeriodicalIF":4.5,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The pupil outdoes the master: Imperfect demonstration-assisted trust region jamming policy optimization against frequency-hopping spread spectrum","authors":"Ning Rao, Hua Xu, Zisen Qi, Dan Wang, Yue Zhang, Xiang Peng, Lei Jiang","doi":"10.1016/j.comcom.2024.107993","DOIUrl":"10.1016/j.comcom.2024.107993","url":null,"abstract":"<div><div>Jamming decision-making is a pivotal component of modern electromagnetic warfare, wherein recent years have witnessed the extensive application of deep reinforcement learning techniques to enhance the autonomy and intelligence of wireless communication jamming decisions. However, existing researches heavily rely on manually designed customized jamming reward functions, leading to significant consumption of human and computational resources. To this end, under the premise of obviating designing task-customized reward functions, we propose a jamming policy optimization method that learns from imperfect demonstrations to effectively address the complex and high-dimensional jamming resource allocation problem against frequency hopping spread spectrum (FHSS) communication systems. To achieve this, a policy network is meticulously architected to consecutively ascertain jamming schemes for each jamming node, facilitating the construction of the dynamic transition within the Markov decision process. Subsequently, anchored in the dual-trust region concept, we design policy improvement and policy adversarial imitation phases. During the policy improvement phase, the trust region policy optimization method is utilized to refine the policy, while the policy adversarial imitation phase employs adversarial training to guide policy exploration using information embedded in demonstrations. 
Extensive simulation results indicate that our proposed method can approximate the optimal jamming performance trained under customized reward functions, even with rough binary reward settings, and also significantly surpass demonstration performance.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107993"},"PeriodicalIF":4.5,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symbol-level scheme for combating eavesdropping: Symbol conversion and constellation adjustment","authors":"Datong Xu , Chaosheng Qiu , Wenshan Yin , Pan Zhao , Mingyang Cui","doi":"10.1016/j.comcom.2024.107992","DOIUrl":"10.1016/j.comcom.2024.107992","url":null,"abstract":"<div><div>In the non-orthogonal multiple access scenario, users may suffer inter-multiuser eavesdropping due to the feature of successive interference cancellation, and the conditions of eavesdropping suppression methods in the traditional schemes may not be satisfied. To combat this eavesdropping, we consider physical layer security and propose a novel scheme by specially designing symbol conversion and constellation adjustment methods. Based on these methods, the amplitudes and phases of symbols are properly changed. When each user intercepts information as an eavesdropper, he/she has to accept high error probability, or he/she has to undergo exorbitant overhead. Analytical and numerical results demonstrate that the proposed scheme can protect the privacy of information, and this protection does not destruct the execution of successive interference cancellation and symbol transmission.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107992"},"PeriodicalIF":4.5,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-performance BFT consensus for Metaverse through block linking and shortcut loop","authors":"Rui Hao , Chaozheng Ding , Xiaohai Dai , Hao Fan , Jianwen Xiang","doi":"10.1016/j.comcom.2024.107990","DOIUrl":"10.1016/j.comcom.2024.107990","url":null,"abstract":"<div><div>In recent years, the Metaverse has captured increasing attention. As the foundational technologies for these digital realms, blockchain systems and their critical component – <em>the Byzantine Fault Tolerance</em> (BFT) consensus protocol – significantly influence the performance of Metaverse. Due to vulnerabilities to network attacks, synchronous and partially synchronous consensus protocols often face compromises in their liveness or security. Consequently, recent efforts in BFT consensus have shifted towards asynchronous consensus protocols, notably the <em>Multi-valued Validated Binary Agreement</em> (MVBA) protocols, with sMVBA being particularly prominent. Despite its advances, sMVBA struggles to meet the high-performance demands of Metaverse applications. Each sMVBA instance commits only one block, discarding all others, which severely restricts throughput. Moreover, if a leader in a given view crashes, nodes must rebroadcast blocks in the subsequent view, resulting in increased latency.</div><div>To overcome these challenges, this paper introduces <span>Mercury</span>, a protocol designed to enhance throughput under various conditions and reduce latency in less favorable scenarios where leaders are crashed. <span>Mercury</span> incorporates a mechanism whereby each block contains hashes from blocks of a previous instance, linking blocks across instances. This structure ensures that once a block is committed, all its linked blocks are also committed, thereby boosting throughput. 
Additionally, <span>Mercury</span> integrates a ‘shortcut loop’ mechanism, allowing nodes to bypass the last phase of the current view and the block broadcasting in the next view, significantly decreasing latency. Our experimental evaluations of <span>Mercury</span> confirm its superior performance. Compared to the cutting-edge protocols, sMVBA, CKPS, and AMS, <span>Mercury</span> boosts throughput by 1.03X, 1.65X, and 2.51X, respectively.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107990"},"PeriodicalIF":4.5,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
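The block-linking idea described in this abstract, where committing one block transitively commits every block it links by hash, can be sketched as follows. This is our own minimal illustration of the linking mechanism, not Mercury's actual data structures or protocol logic:

```python
import hashlib

class Block:
    """A block that links blocks from a previous instance by their hashes."""
    def __init__(self, payload, links=()):
        self.payload = payload
        self.links = list(links)   # hashes of earlier-instance blocks
        self.hash = hashlib.sha256(
            (payload + "|" + "|".join(sorted(self.links))).encode()
        ).hexdigest()

def commit(block, store, committed):
    """Committing a block also commits every block reachable via its links,
    instead of discarding the blocks that did not win their own instance."""
    stack = [block.hash]
    while stack:
        h = stack.pop()
        if h in committed:
            continue
        committed.add(h)
        stack.extend(store[h].links)

# Instance 1 produced blocks a and b; suppose only one of them "won".
# Instance 2's winning block c links both, so committing c commits all three.
a, b = Block("a"), Block("b")
c = Block("c", links=[a.hash, b.hash])
store = {blk.hash: blk for blk in (a, b, c)}
committed = set()
commit(c, store, committed)
```

This illustrates why linking raises throughput: blocks that would otherwise be discarded by an sMVBA-style instance are recovered when a later winner references them.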
{"title":"Automating 5G network slice management for industrial applications","authors":"André Perdigão, José Quevedo, Rui L. Aguiar","doi":"10.1016/j.comcom.2024.107991","DOIUrl":"10.1016/j.comcom.2024.107991","url":null,"abstract":"<div><div>The transition to Industry 4.0 introduces new use cases with unique communication requirements, demanding wireless technologies capable of dynamically adjusting their performance to meet various demands. Leveraging network slicing, 5G technology offers the flexibility to support such use cases. However, the usage and deployment of network slices in networks are complex tasks. To increase the adoption of 5G, there is a need for mechanisms that automate the deployment and management of network slices. This paper introduces a design for a network slice manager capable of such mechanisms in 5G networks. This design adheres to related standards, facilitating interoperability with other software, while also considering the capabilities and limitations of the technology. The proposed design can provision custom slices tailored to meet the unique requirements of verticals, offering communication performance across the spectrum of the three primary 5G services (eMBB, URLLC, and mMTC/mIoT). To access the proposed design, a Proof-of-Concept (PoC) prototype was developed and evaluated. The evaluation results demonstrate the flexibility of the proposed solution for deploying slices adjusted to the vertical use cases. 
Additionally, the slices generated by the PoC maintain a high TRL (Technology Readiness Level) equivalent to that of the commercial-graded network used.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107991"},"PeriodicalIF":4.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MDTA: An efficient, scalable and fast Multiple Disjoint Tree Algorithm for dynamic environments","authors":"Diego Lopez-Pajares , Elisa Rojas , Mankamana Prasad Mishra , Parveen Jindgar , Joaquin Alvarez-Horcajo , Nicolas Manso , Jonathan Desmarais","doi":"10.1016/j.comcom.2024.107989","DOIUrl":"10.1016/j.comcom.2024.107989","url":null,"abstract":"<div><div>Emerging applications such as telemedicine, the tactile Internet or live streaming place high demands on low latency to ensure a satisfactory Quality of Experience (QoE). In these scenarios the use of trees can be particularly interesting to efficiently deliver traffic to groups of users because they further enhance network performance by providing redundancy and fault tolerance, ensuring service continuity when network failure or congestion scenarios occur. Furthermore, if trees are isolated from each other (they do not share common communication elements as links and/or nodes), their benefits are further enhanced since events such as failures or congestion in one tree do not affect others. However, the challenge of computing fully disjoint trees (both link- and node-disjoint) introduces significant mathematical complexity, resulting in longer computation times, which negatively impacts latency-sensitive applications.</div><div>In this article, we propose a novel algorithm designed to rapidly compute multiple fully (either link- or node-) disjoint trees while maintaining efficiency and scalability, specifically focused on targeting the low-latency requirements of emerging services and applications. The proposed algorithm addresses the complexity of ensuring disjointedness between trees without sacrificing performance. 
Our solution has been tested in a variety of network environments, including both wired and wireless scenarios.</div><div>The results showcase that our proposed method is approximately 100 times faster than existing techniques, while achieving a comparable success rate in terms of number of obtained disjoint trees. This significant improvement in computational speed makes our approach highly suitable for the low-latency requirements of next-generation networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107989"},"PeriodicalIF":4.5,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
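To make the link-disjointness notion concrete, here is a naive greedy baseline, not the MDTA algorithm from the paper: build a DFS spanning tree, remove its links from the graph, and repeat. DFS (rather than BFS) is used so each tree consumes few of the root's links, leaving room for further trees:

```python
def dfs_tree(adj, root):
    """Return the edge set of a DFS spanning tree of `adj` rooted at `root`,
    or None if some node is unreachable with the remaining links."""
    seen, edges, stack = {root}, set(), [root]
    while stack:
        u = stack[-1]
        nxt = next((v for v in sorted(adj[u]) if v not in seen), None)
        if nxt is None:
            stack.pop()
        else:
            seen.add(nxt)
            edges.add(frozenset((u, nxt)))
            stack.append(nxt)
    return edges if len(seen) == len(adj) else None

def link_disjoint_trees(adj, root):
    """Greedily peel off link-disjoint spanning trees: build a tree, remove
    its links, repeat until the remaining graph no longer spans all nodes."""
    adj = {u: set(vs) for u, vs in adj.items()}
    trees = []
    while True:
        tree = dfs_tree(adj, root)
        if tree is None:
            return trees
        trees.append(tree)
        for edge in tree:            # consume the links used by this tree
            u, v = tuple(edge)
            adj[u].discard(v)
            adj[v].discard(u)

# The complete graph on 4 nodes admits two link-disjoint spanning trees.
K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
trees = link_disjoint_trees(K4, root=0)
```

This baseline gives no optimality or success-rate guarantee; it only illustrates the "disjoint trees by link removal" idea whose fast, scalable computation is the paper's contribution.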
{"title":"Safe load balancing in software-defined-networking","authors":"Lam Dinh, Pham Tran Anh Quang, Jérémie Leguay","doi":"10.1016/j.comcom.2024.107985","DOIUrl":"10.1016/j.comcom.2024.107985","url":null,"abstract":"<div><div>High performance, reliability and safety are crucial properties of any Software-Defined-Networking (SDN) system. Although the use of Deep Reinforcement Learning (DRL) algorithms has been widely studied to improve performance, their practical applications are still limited as they fail to ensure safe operations in exploration and decision-making. To fill this gap, we explore the design of a Control Barrier Function (CBF) on top of Deep Reinforcement Learning (DRL) algorithms for load-balancing. We show that our DRL-CBF approach is capable of meeting safety requirements during training and testing while achieving near-optimal performance in testing. We provide results using two simulators: a flow-based simulator, which is used for proof-of-concept and benchmarking, and a packet-based simulator that implements real protocols and scheduling. Thanks to the flow-based simulator, we compared the performance against the optimal policy, solving a Non Linear Programming (NLP) problem with the SCIP solver. Furthermore, we showed that pre-trained models in the flow-based simulator, which is faster, can be transferred to the packet simulator, which is slower but more accurate, with some fine-tuning. Overall, the results suggest that near-optimal Quality-of-Service (QoS) performance in terms of end-to-end delay can be achieved while safety requirements related to link capacity constraints are guaranteed. In the packet-based simulator, we also show that our DRL-CBF algorithms outperform non-RL baseline algorithms. 
When the models are fine-tuned over a few episodes, we achieved smoother QoS and safety in training, and similar performance in testing compared to the case where models have been trained from scratch.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107985"},"PeriodicalIF":4.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
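The role of a CBF-style safety layer on top of a DRL load-balancing policy can be sketched as a projection: the (possibly unsafe) action is minimally corrected so that link-capacity constraints hold before it is applied. The function below is our simplified illustration of that idea, not the paper's actual CBF formulation:

```python
def safe_split(desired, residual, demand):
    """Clamp a desired traffic split (fractions over paths) so that no path
    receives more than its residual link capacity, redistributing the excess
    to paths with headroom. Mimics the role of a CBF safety filter that
    corrects an RL action before it reaches the network."""
    n = len(desired)
    alloc = [d * demand for d in desired]      # desired absolute allocation
    for _ in range(n):                         # a few correction rounds
        excess = sum(max(a - c, 0.0) for a, c in zip(alloc, residual))
        if excess <= 1e-12:
            break                              # already safe
        alloc = [min(a, c) for a, c in zip(alloc, residual)]
        headroom = [c - a for a, c in zip(alloc, residual)]
        total_head = sum(headroom)
        if total_head <= 1e-12:
            break                              # demand exceeds total capacity
        alloc = [a + excess * h / total_head for a, h in zip(alloc, headroom)]
    return alloc

# The RL policy wants a 70/30 split of 10 traffic units, but path 0 only has
# 5 units of residual capacity; the filter shifts the excess to path 1.
alloc = safe_split([0.7, 0.3], residual=[5.0, 8.0], demand=10.0)
```

A real CBF additionally guarantees forward invariance of the safe set over the system dynamics; this static projection only captures the "filter the action, keep the constraint" behavior.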
{"title":"A hierarchical adaptive federated reinforcement learning for efficient resource allocation and task scheduling in hierarchical IoT network","authors":"A.S.M. Sharifuzzaman Sagar, Amir Haider, Hyung Seok Kim","doi":"10.1016/j.comcom.2024.107969","DOIUrl":"10.1016/j.comcom.2024.107969","url":null,"abstract":"<div><div>The increasing demand for processing numerous data from IoT devices in a hierarchical IoT network drives researchers to propose different resource allocation methods in the edge hosts efficiently. Traditional approaches often compromise on one of these aspects: either prioritizing local decision-making at the edge, which lacks global system insights or centralizing decisions in cloud systems, which raises privacy concerns. Additionally, most solutions do not consider scheduling tasks at the same time to effectively complete the prioritized task accordingly. This study introduces the hierarchical adaptive federated reinforcement learning (HAFedRL) framework for robust resource allocation and task scheduling in hierarchical IoT networks. At the local edge host level, a primal–dual update based deep deterministic policy gradient (DDPG) method is introduced for effective individual task resource allocation and scheduling. Concurrently, the central server utilizes an adaptive multi-objective policy gradient (AMOPG) which integrates a multi-objective policy adaptation (MOPA) with dynamic federated reward aggregation (DFRA) method to allocate resources across connected edge hosts. An adaptive learning rate modulation (ALRM) is proposed for faster convergence and to ensure high performance output from HAFedRL. Our proposed HAFedRL enables the effective integration of reward from edge hosts, ensuring the alignment of local and global optimization goals. 
The experimental results of HAFedRL showcase its efficacy in improving system-wide utility, average task completion rate, and optimizing resource utilization, establishing it as a robust solution for hierarchical IoT networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107969"},"PeriodicalIF":4.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"5G core network control plane: Network security challenges and solution requirements","authors":"Rajendra Patil , Zixu Tian , Mohan Gurusamy , Joshua McCloud","doi":"10.1016/j.comcom.2024.107982","DOIUrl":"10.1016/j.comcom.2024.107982","url":null,"abstract":"<div><div>The control plane of the 5G Core Network (5GCN) is essential for ensuring reliable and high-performance 5G communication. It provides critical network services such as authentication, user credentials, and privacy-sensitive signaling. However, the security threat landscape of the 5GCN control plane has largely expanded and it faces serious security threats from various sources and interfaces. In this paper, we analyze the new features and vulnerabilities of the 5GCN service-based architecture (SBA) with a focus on the control plane. We investigate the network threat surface in the 5GCN and outline potential vulnerabilities in the control plane. We develop a threat model to illustrate the potential threat sources, vulnerable interfaces, possible threats and their impacts. We provide a comprehensive survey of the existing security solutions, identify their challenges and propose possible solution requirements to address the network security challenges in the control plane of 5GCN and beyond.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107982"},"PeriodicalIF":4.5,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}