Computer Networks, Pub Date: 2025-05-12, DOI: 10.1016/j.comnet.2025.111379
Ankit Kumar, Mikail Mohammed Salim, David Camacho, Jong Hyuk Park
{"title":"A comprehensive survey on large language models for multimedia data security: challenges and solutions","authors":"Ankit Kumar , Mikail Mohammed Salim , David Camacho , Jong Hyuk Park","doi":"10.1016/j.comnet.2025.111379","DOIUrl":"10.1016/j.comnet.2025.111379","url":null,"abstract":"<div><div>The rapid expansion of IoT applications utilizes multimedia data integrated with Large Language Models (LLMs) for interpreting digital information by leveraging the capabilities of artificial intelligence (AI) driven neural network systems. These models are extensively used as generative AI tools for data augmentation, but data security and privacy remain fundamental concerns associated with LLMs in the digital domain. Traditional security approaches face challenges in addressing emerging threats such as adversarial attacks, data poisoning, and privacy breaches, especially in dynamic and resource-constrained IoT environments. Such malicious attacks target LLMs during the learning and evaluation phases to exploit vulnerabilities for unauthorized access. This study conducts a comprehensive survey of the transformative potential of LLMs for securing multimedia data, offering an analysis of their capabilities, challenges, and solutions. It explores potential security threats and remedies for each type of multimedia data and investigates traditional and emerging data protection schemes. The study systematically classifies emerging attacks on LLMs during the training and testing phases, including membership attacks, adversarial perturbations, and prompt injection. It also investigates robust defense mechanisms such as adversarial training, regularization, and encryption. 
The study evaluates the efficiency of candidate LLM architectures, including generative, transformer-based, and multimodal systems, in securing image, text, and video data, highlighting their adaptability and scalability. The survey compares state-of-the-art solutions and underscores the efficiency of LLM-driven mechanisms over traditional approaches in mitigating emerging attacks such as zero-day threats on multimedia data, while supporting real-time compliance with regulations such as the GDPR (General Data Protection Regulation). The work identifies open challenges, including privacy-preserving LLM deployment, black-box interpretability, personalized LLM privacy risks, and cross-model security integration, and highlights future directions such as lightweight LLM design and hybrid security frameworks. It bridges critical research gaps by providing insights into emerging LLM-based techniques for safeguarding sensitive data in real-world IoT applications.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"267 ","pages":"Article 111379"},"PeriodicalIF":4.4,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144071569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
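Among the testing-phase attacks this survey classifies, adversarial perturbations are easy to illustrate concretely. Below is a minimal gradient-sign sketch against a toy logistic model; this is a generic textbook-style example under invented numbers, not an attack drawn from the surveyed works:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Gradient-sign perturbation of input x against a logistic model
    sigmoid(w.x + b) with label y in {0, 1}. The gradient of the
    cross-entropy loss with respect to x is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model confidence for class 1
    grad_x = (p - y) * w                    # dLoss/dx
    return x + eps * np.sign(grad_x)        # step uphill in the loss

# Toy point correctly classified as class 1 (score w.x + b = 1.5 > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
score_adv = w @ x_adv + b  # the perturbation flips the sign of the score
```

The same mechanics underlie attacks on much larger models; adversarial training, one of the defenses the survey covers, simply folds such perturbed samples back into the training set.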
{"title":"Cooperative offloading multi-access edge computing (COMEC) for cell-edge users in heterogeneous dense networks","authors":"Muhammad Saleem Khan , Sobia Jangsher , Junaid Qadir , Hassaan Khaliq Qureshi","doi":"10.1016/j.comnet.2025.111332","DOIUrl":"10.1016/j.comnet.2025.111332","url":null,"abstract":"<div><div>Multi-access edge computing (MEC) addresses the rising computational demands of advanced applications by bringing processing closer to users. Yet, cell-edge users often face high latency and low throughput—challenges that can be mitigated by deploying multiple MEC servers for simultaneous task offloading in dense heterogeneous networks. This paper investigates the performance gains of collaborative computing and presents a novel <strong>C</strong>ooperative <strong>O</strong>ffloading <strong>M</strong>ulti-access <strong>E</strong>dge <strong>C</strong>omputing (COMEC) scheme. The COMEC aims to optimize resource allocation for cell-edge users by reducing latency and maximizing energy efficiency (EE). In this way, cell-edge users with limited battery and computational power can sustain low-latency applications for a longer time. A bi-objective optimization problem is formulated to maximize the EE of edge users while simultaneously minimizing the latency. We propose an iterative algorithm named ORA-ETO to solve the mixed integer non-linear fractional (MINLF) problem. The proposed scheme has been evaluated using both the Rayleigh and WINNER-II propagation models within an asymmetric cell configuration. The obtained results validate the efficacy of the proposed COMEC scheme for cell-edge users, achieving performance gains of over 55% compared to dense multi-server-assisted MEC and CoMP-assisted MEC architectures. 
The COMEC gains are statistically significant (<span><math><mrow><mi>p</mi><mo><</mo><mn>0</mn><mo>.</mo><mn>01</mn></mrow></math></span>), and its performance is more stable, with a standard deviation below <span><math><mrow><mn>0</mn><mo>.</mo><mn>082</mn></mrow></math></span> kbps/J, making it a superior choice for cell-edge users. The effect size (<span><math><mrow><msup><mrow><mi>η</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>=</mo><mn>0</mn><mo>.</mo><mn>71</mn></mrow></math></span>) confirms that the choice of scheme has a considerable impact on the EE of cell-edge users.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"267 ","pages":"Article 111332"},"PeriodicalIF":4.4,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144107449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
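The reported effect size is the standard one-way eta-squared, the ratio of between-group to total sum of squares. A quick sketch of its computation on made-up EE samples (kbps/J) — not the paper's data:

```python
import numpy as np

def eta_squared(*groups):
    """One-way effect size: between-group sum of squares over total."""
    all_vals = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_total = ((all_vals - grand) ** 2).sum()
    return ss_between / ss_total

# Illustrative EE samples for three schemes (invented numbers).
comec = [5.1, 5.2, 5.0, 5.15]
dense = [3.2, 3.3, 3.1, 3.25]
comp  = [3.6, 3.5, 3.7, 3.55]
effect = eta_squared(comec, dense, comp)  # close to 1: scheme choice dominates
```

A value of 0.71, as reported, means the choice of scheme explains about 71% of the observed variance in EE.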
Computer Networks, Pub Date: 2025-05-12, DOI: 10.1016/j.comnet.2025.111344
Peixuan Song, JunKyu Lee, Ahmed M. Abdelmoniem, Lev Mukhanov
{"title":"Unlocking the power of 4G/5G mobile networks: An empirical dive into quality and energy efficiency in YouTube Edge services","authors":"Peixuan Song , JunKyu Lee , Ahmed M. Abdelmoniem , Lev Mukhanov","doi":"10.1016/j.comnet.2025.111344","DOIUrl":"10.1016/j.comnet.2025.111344","url":null,"abstract":"<div><div>The advancements in 5G mobile networks and Edge computing offer great potential for services like augmented reality and Cloud gaming, thanks to their low latency and high bandwidth capabilities. However, the practical limitations of achieving optimal latency on real applications remain uncertain. This paper investigates the latency, bandwidth, and energy consumption of 5G networks, leveraging the YouTube Edge service as a practical use case. We analyze how latency, bandwidth, and energy consumption differ between 4G LTE and 5G networks and how the location of YouTube Edge servers impacts these metrics. Surprisingly, our observations show that 5G ecosystems exhibit average latency increases of up to 2<span><math><mo>×</mo></math></span>, demonstrating that they are far from delivering on their proclaimed promises. Our study reveals over 10 significant observations and implications, indicating that the primary constraints on 4G/LTE and 5G capabilities are the ecosystem and the energy efficiency of mobile devices when receiving data. 
Moreover, our study demonstrates that to fully unlock the potential of 5G and its applications, it is crucial to improve the 5G ecosystem and to introduce better techniques for enhancing energy efficiency.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"267 ","pages":"Article 111344"},"PeriodicalIF":4.4,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144072292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
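The paper's empirical approach rests on application-level round-trip-time sampling. A minimal sketch of such a measurement loop, using a loopback echo server as a stand-in for an Edge endpoint — illustrative only, not the authors' measurement harness:

```python
import socket
import statistics
import threading
import time

def echo_server(sock):
    """Accept one connection and echo bytes back until the peer closes."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

# Local echo endpoint standing in for an Edge server.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

def sample_rtts(addr, n=20):
    """Time n tiny request/response round trips over one TCP connection."""
    rtts = []
    with socket.create_connection(addr) as c:
        for _ in range(n):
            t0 = time.perf_counter()
            c.sendall(b"x")
            c.recv(1)
            rtts.append((time.perf_counter() - t0) * 1000)  # milliseconds
    return rtts

rtts = sample_rtts(srv.getsockname())
print(f"median RTT {statistics.median(rtts):.3f} ms")
```

Real 4G/5G measurements additionally require per-radio-interface routing and on-device energy counters, which is where most of the study's complexity lies.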
Computer Networks, Pub Date: 2025-05-10, DOI: 10.1016/j.comnet.2025.111333
Yuhao Chai, Yong Zhang, Zhenyu Zhang, Da Guo, Yinglei Teng
{"title":"Research on joint game theory and multi-agent reinforcement learning-based resource allocation in micro operator networks","authors":"Yuhao Chai , Yong Zhang , Zhenyu Zhang , Da Guo , Yinglei Teng","doi":"10.1016/j.comnet.2025.111333","DOIUrl":"10.1016/j.comnet.2025.111333","url":null,"abstract":"<div><div>Local 5G networks represent an emerging paradigm within the 5G architecture, where local micro operators (MOs) reuse public mobile networks to support differentiated service transmission capacity and coverage requirements. 5G Radio Access Network (RAN) slicing technology offers a solution that allows for the flexible deployment of heterogeneous services as slices sharing the same infrastructure. In this paper, a multi-micro-operator scenario with deployed RAN slicing is considered, where an interference price is used to incentivize local MOs to optimize their transmission power and resource block allocation strategies, reducing interference to mobile network users and enhancing transmission efficiency. We formulate the competitive interaction between the mobile network operator (MNO) and local MOs as a two-stage Stackelberg game, with the MNO as the leader and the MOs as followers. The MNO sets the interference price that MOs must pay for their communication, and each MO decides on its transmission power strategy to satisfy user-customized slice requirements by solving the game. A resource management scheme based on multi-agent reinforcement learning is proposed, introducing a game-theoretic equilibrium solution to determine resource block allocation strategies, ensuring slice isolation while increasing operator revenue. 
Experimental results demonstrate that our approach outperforms standalone reinforcement learning strategies in terms of transmission rates and interference price payments.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"266 ","pages":"Article 111333"},"PeriodicalIF":4.4,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143934683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
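The leader-follower structure of such a two-stage Stackelberg pricing game can be sketched by backward induction on a deliberately simplified single-follower model: the MO maximizes a concave rate benefit minus the interference payment, and the MNO grid-searches its price against that best response. All utility shapes and constants below are invented for illustration, not taken from the paper:

```python
import numpy as np

B, P_MAX = 4.0, 3.0  # follower benefit scale and power cap (made up)

def follower_power(c):
    """MO best response to interference price c:
    maximize B*log(1+p) - c*p  =>  p* = B/c - 1, clipped to [0, P_MAX]."""
    return float(np.clip(B / c - 1.0, 0.0, P_MAX))

def leader_revenue(c):
    """MNO revenue: price times the power the follower chooses at that price."""
    return c * follower_power(c)

# Backward induction: the leader optimizes against the follower's response.
prices = np.linspace(0.05, B, 400)
best_price = max(prices, key=leader_revenue)
# Analytically the optimum here is c* = B / (1 + P_MAX): below it the
# follower is capped at P_MAX so revenue c*P_MAX still grows; above it
# the interior response makes revenue B - c, which falls.
```

The paper replaces this toy best response with multi-agent reinforcement learning over resource blocks, but the leader/follower ordering is the same.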
{"title":"EAP-FIDO: A novel EAP method for using FIDO2 credentials for network authentication","authors":"Martiño Rivera-Dourado , Christos Xenakis , Alejandro Pazos , Jose Vázquez-Naya","doi":"10.1016/j.comnet.2025.111348","DOIUrl":"10.1016/j.comnet.2025.111348","url":null,"abstract":"<div><div>The adoption of FIDO2 authentication by major tech companies in web applications has grown significantly in recent years. However, we argue FIDO2 has broader potential applications. In this paper, we introduce EAP-FIDO, a novel Extensible Authentication Protocol (EAP) method for use in IEEE 802.1X-protected networks. This allows organisations with WPA2/3-Enterprise wireless networks or MACSec-enabled wired networks to leverage FIDO2’s passwordless authentication in compliance with existing standards. Additionally, we provide a comprehensive security and performance analysis to support the feasibility of this approach.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"266 ","pages":"Article 111348"},"PeriodicalIF":4.4,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144067916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
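For context on where such a method plugs in: every EAP method exchange is framed by the RFC 3748 packet header of Code (Request/Response/Success/Failure), Identifier, Length, and, for Request/Response, a method Type. A minimal encoding sketch follows; the FIDO type number is hypothetical (a real EAP-FIDO type would be IANA-assigned), and the paper's actual payload format is not reproduced here:

```python
import struct

# RFC 3748 codes and the Identity method type.
EAP_REQUEST, EAP_RESPONSE, EAP_SUCCESS, EAP_FAILURE = 1, 2, 3, 4
TYPE_IDENTITY = 1
TYPE_FIDO = 255  # hypothetical placeholder for an EAP-FIDO method type

def eap_pack(code, identifier, type_=None, data=b""):
    """Build an EAP packet: Code(1) Identifier(1) Length(2) [Type(1) Data].
    Length covers the whole packet, in network byte order."""
    body = (bytes([type_]) if type_ is not None else b"") + data
    return struct.pack("!BBH", code, identifier, 4 + len(body)) + body

def eap_parse(pkt):
    """Split an EAP packet back into (code, identifier, type, data)."""
    code, identifier, length = struct.unpack("!BBH", pkt[:4])
    assert length == len(pkt), "truncated EAP packet"
    type_ = pkt[4] if length > 4 else None
    return code, identifier, type_, pkt[5:]

# The canonical opening exchange: Identity request and response.
req = eap_pack(EAP_REQUEST, 1, TYPE_IDENTITY)
resp = eap_pack(EAP_RESPONSE, 1, TYPE_IDENTITY, b"alice@example.org")
```

In 802.1X deployments these packets ride inside EAPOL or RADIUS; an EAP-FIDO method would carry its FIDO2/WebAuthn messages in the Type-Data field of subsequent Request/Response pairs.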
Computer Networks, Pub Date: 2025-05-10, DOI: 10.1016/j.comnet.2025.111331
Jorrit J. Olthuis, Savio Sciancalepore, Nicola Zannone
{"title":"Cyberattacks and defenses for Autonomous Navigation Systems: A systematic literature review","authors":"Jorrit J. Olthuis, Savio Sciancalepore, Nicola Zannone","doi":"10.1016/j.comnet.2025.111331","DOIUrl":"10.1016/j.comnet.2025.111331","url":null,"abstract":"<div><div>Autonomous Navigation Systems (ANSs) are revolutionizing transportation and logistics by enhancing operational efficiency and reshaping industry standards. However, the absence of human intervention during operational failures makes ANSs more vulnerable to cyberattacks and their consequences. Although prior research has addressed the security challenges of ANSs and proposed various defenses to prevent and mitigate cyberattacks against ANSs, we still lack a comprehensive understanding of the ANS attack surface and the effectiveness of both attacks and defenses. To address this gap, we conduct a systematic review of 125 articles on cybersecurity for ANSs, focusing on their domain, characteristics, and the attack and defense strategies studied in the literature. Our analysis reveals notable research trends, open gaps, and areas for future investigation. Security research on navigation functions remains limited, despite their central role and the risks associated with their compromise. Moreover, our analysis reveals a lack of cross-domain research, resulting in threats and defenses analyzed for one domain being overlooked in others. 
Finally, we identify discrepancies between attacks and defenses studied in the literature, with a disproportionate focus on defense strategies.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"267 ","pages":"Article 111331"},"PeriodicalIF":4.4,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144088855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Containerized service placement and resource allocation at edge: A Hybrid Reinforcement Learning approach","authors":"Chao Zeng, Xingwei Wang, Rongfei Zeng, Shining Zhang, Jianzhi Shi, Min Huang","doi":"10.1016/j.comnet.2025.111343","DOIUrl":"10.1016/j.comnet.2025.111343","url":null,"abstract":"<div><div>Containers have become the default and prevalent solution in edge computing due to their efficiency and ease of deployment. However, constrained resources in edge nodes may introduce significant deployment costs and increase service response latency in containerized services. Existing studies mainly focus on optimizing container placement strategies, while largely overlooking computational resource allocation. To tackle this problem, we introduce a joint optimization approach for containerized service placement and computational resource allocation from the perspective of image layer sharing. Specifically, we define a profit-driven mixed integer nonlinear programming (MINLP) problem and propose a graph-aware hybrid reinforcement learning (GAHRL) algorithm. By capturing inter-layer sharing dependencies and edge resource distribution, our algorithm optimizes containerized service placement while ensuring efficient computational resource allocation. 
Extensive experimental results show that the proposed algorithm outperforms other baseline algorithms in maximizing revenue as well as reducing service latency and storage cost.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"267 ","pages":"Article 111343"},"PeriodicalIF":4.4,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144071568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
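The image-layer-sharing idea at the heart of this formulation is easy to make concrete: when co-located services share base layers, a node stores each layer once, so storage cost depends on the union of layers, not their sum. A toy profit evaluation (invented services, sizes, and prices; a sketch of the objective, not the GAHRL algorithm itself):

```python
def placement_profit(services, layer_size, placement, revenue, storage_cost):
    """Profit = revenue of placed services minus storage of the union of
    their image layers on each node (shared layers are stored once)."""
    total = 0.0
    for node, placed in placement.items():
        node_layers = set()
        for s in placed:
            total += revenue[s]
            node_layers |= services[s]  # union captures layer sharing
        total -= storage_cost * sum(layer_size[l] for l in node_layers)
    return total

# Toy instance: two services sharing a 100 MB base layer.
services = {"svcA": {"base", "py"}, "svcB": {"base", "ffmpeg"}}
layer_size = {"base": 100, "py": 40, "ffmpeg": 60}
revenue = {"svcA": 3.0, "svcB": 3.0}

co_located = placement_profit(services, layer_size,
                              {"n1": ["svcA", "svcB"]}, revenue, 0.01)
separate = placement_profit(services, layer_size,
                            {"n1": ["svcA"], "n2": ["svcB"]}, revenue, 0.01)
# Co-location stores "base" once, so it yields strictly higher profit.
```

The MINLP in the paper additionally couples this with CPU allocation and latency constraints, which is what makes a learned policy attractive.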
Computer Networks, Pub Date: 2025-05-08, DOI: 10.1016/j.comnet.2025.111339
Jose Gomez, Elie F. Kfoury, Ali Mazloum, Jorge Crichigno
{"title":"Improving flow fairness in non-programmable networks using P4-programmable Data Planes","authors":"Jose Gomez , Elie F. Kfoury , Ali Mazloum , Jorge Crichigno","doi":"10.1016/j.comnet.2025.111339","DOIUrl":"10.1016/j.comnet.2025.111339","url":null,"abstract":"<div><div>This paper presents a system that leverages P4-programmable Data Planes (PDPs) to achieve flow separation in non-programmable networks, enhancing fairness and performance for TCP flows with varying Round-Trip Times (RTTs). The system passively taps into traffic at the physical layer, sending a copy to a PDP for real-time flow identification, RTT computation, and classification. Using the Jenks natural breaks algorithm, the system classifies flows based on their RTTs and allocates them to distinct queues within non-programmable routers. The paper demonstrates improved fairness, better bandwidth distribution, and reduced latency through a series of experiments, including tests on long-flow fairness, adaptability to changing network conditions, bufferbloat prevention, and Flow Completion Time (FCT). Additionally, the system is extended to mitigate UDP abuses, preventing UDP traffic from monopolizing network resources. Limitations such as memory constraints on programmable switches and computational overhead are also discussed, along with potential areas for optimization. Experimental results show that the proposed system improves average fairness by up to 15% compared to a single-queue baseline approach. Furthermore, results show a reduction in average FCTs by approximately 20% for large TCP flows. 
These efficiency gains underscore the system’s potential for mitigating RTT unfairness, reducing latency, and enhancing overall throughput independently of the Congestion Control Algorithm (CCA) in mixed-traffic network environments.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"266 ","pages":"Article 111339"},"PeriodicalIF":4.4,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143942293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
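The Jenks natural breaks classification the system uses groups values so as to minimize within-class variance. For the two-class case this reduces to a single split-point search, sketched below on invented RTT samples (the paper's switch-side implementation necessarily differs):

```python
def jenks_two_classes(values):
    """Split sorted values into two classes minimizing the summed
    within-class squared deviation (two-class Jenks natural breaks)."""
    xs = sorted(values)

    def ssd(group):
        mean = sum(group) / len(group)
        return sum((v - mean) ** 2 for v in group)

    best_i = min(range(1, len(xs)),
                 key=lambda i: ssd(xs[:i]) + ssd(xs[i:]))
    return xs[:best_i], xs[best_i:]

# RTT samples (ms) mixing short-path and long-path flows (illustrative).
rtts = [2.1, 2.4, 1.9, 48.0, 51.5, 2.2, 50.2]
low, high = jenks_two_classes(rtts)  # flows then map to separate queues
```

Mapping each class to its own router queue is what prevents low-RTT flows from starving high-RTT ones under loss-based congestion control.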
{"title":"SIEVE: Generating a cybersecurity log dataset collection for SIEM event classification","authors":"Pierpaolo Artioli , Vincenzo Dentamaro , Stefano Galantucci , Alessio Magrì , Gianluca Pellegrini , Gianfranco Semeraro","doi":"10.1016/j.comnet.2025.111330","DOIUrl":"10.1016/j.comnet.2025.111330","url":null,"abstract":"<div><div>Effective cyber threat monitoring relies on deploying robust Security Information and Event Management (SIEM) systems. SIEM applications receive security events generated by different devices, systems, and applications, and must correlate them properly to identify potential cyber threats that follow known tactics, techniques, and procedures (TTPs) and bypass other security mechanisms (e.g., firewalls, IDSs). Given that logs are primarily generated to notify relevant system events and activities in a human-readable format, supervised Natural Language Processing (NLP) techniques could be used to train models that complement conventional parsing methodologies by automatically suggesting event classification into pre-defined categories. Training such models requires a substantial amount of pre-classified (labeled) data of different types to provide the learning patterns and nuances needed to make accurate predictions. Since security event datasets are scarce due to privacy or availability reasons, and the few publicly available ones are often limited in terms of event diversity or number of labels, or are simply unfit for the task at hand, an effective synthetic dataset for training SIEM-related machine learning event classification algorithms could be very useful. For these reasons, this paper proposes the generation of a synthetic dataset specifically designed to train SIEM systems for log-type classification. 
This research paper, starting from an in-depth methodological analysis of the prominent Cybersecurity related datasets available in the literature, introduces SIEVE (Siem Ingesting EVEnts), a synthetic dataset collection built from publicly available log samples using SPICE (Semantic Perturbation and Instantiation for Content Enrichment), a novel text augmentation and perturbation technique. SPICE is shown to be effective in generating realistic logs. Each instance of the dataset collection displays different levels of augmentation. Subsequent performance assessments were conducted through comprehensive benchmarking against various NLP classification models. Tests were conducted by training the classifiers using SIEVE and testing them on both the same SIEVE logs and real logs. The results of the experiments show that the best model among those tested is SVM (MaF1 0.9323 - 0.9737), which maintains its performance with slight degradation, even in tests on real logs (MaF1 0.9477 - 0.9636). BERT, on the other hand, performs better than SVM in most of the tests on SIEVE (MaF1 0.9528 - 0.9730) but does not show robustness when tested on real logs (MaF1 0.8864 - 0.9182).</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"266 ","pages":"Article 111330"},"PeriodicalIF":4.4,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143927469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
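The log-type classification task SIEVE targets can be illustrated with a tiny pure-Python TF-IDF nearest-neighbor classifier over invented log lines — a toy stand-in for the SVM and BERT models benchmarked in the paper:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for tokenized documents."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy labeled logs (invented), one label per SIEM event category.
train = [("firewall", "deny tcp src 10.0.0.1 dst 10.0.0.9 port 22"),
         ("firewall", "deny udp src 10.0.0.3 dst 10.0.0.9 port 53"),
         ("auth", "failed password for root from 10.0.0.7"),
         ("auth", "accepted password for alice from 10.0.0.2")]
vecs = tfidf_vectors([text.split() for _, text in train])

def classify(log_line):
    """Label an unseen log line by its nearest training vector
    (1-NN under cosine similarity; raw term counts for the query)."""
    qv = dict(Counter(log_line.split()))
    best = max(range(len(vecs)), key=lambda i: cosine(qv, vecs[i]))
    return train[best][0]
```

Real SIEM logs need tokenization that handles IPs, timestamps, and key=value pairs, which is exactly the variability SIEVE's SPICE augmentation is designed to inject.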
Computer Networks, Pub Date: 2025-05-06, DOI: 10.1016/j.comnet.2025.111329
N. Rana Singha, Nityananda Sarma, Dilip Kumar Saikia
{"title":"TMLpSA-MEC: Transformer-based Mobility Aware Periodic Service Assignment in Mobile Edge Computing","authors":"N. Rana Singha, Nityananda Sarma, Dilip Kumar Saikia","doi":"10.1016/j.comnet.2025.111329","DOIUrl":"10.1016/j.comnet.2025.111329","url":null,"abstract":"<div><div>Mobile edge computing (MEC) brings the power of cloud computing closer to where users are, enhancing network performance and improving user experience. However, as users move around and individual edge servers cover only limited areas, intelligent service assignment is needed to keep up with user demands and low turnaround time (TAT) requirements. This paper introduces TMLpSA, a synergistic framework that combines advanced user mobility prediction and decision-making techniques to optimize service assignments in a periodic fashion. By leveraging a Transformer model to predict user trajectories and integrating a DRL-based TOPSIS technique, TMLpSA proactively identifies the most suitable edge servers to assign services along the anticipated paths. 
Simulation results show that TMLpSA reduces average application TAT by 23.32% relative to the second-best benchmark approach, while also reducing offload energy consumption and improving task completion rate and resource utilization at a reasonable service migration frequency.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"266 ","pages":"Article 111329"},"PeriodicalIF":4.4,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143934682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
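The TOPSIS step used for server selection ranks alternatives by their relative closeness to an ideal solution. A compact sketch of the classic algorithm on hypothetical edge-server candidates (the criteria, weights, and numbers below are illustrative, and the paper pairs this with DRL rather than fixed weights):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by relative
    closeness to the ideal solution (classic TOPSIS). `benefit[j]` is
    True if criterion j is to be maximized, False if minimized."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)       # vector normalization
    v = norm * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)        # higher is better

# Hypothetical candidates scored on (latency ms, free CPU, load):
# latency and load are costs, free CPU is a benefit.
servers = [[12.0, 8.0, 0.6],
           [30.0, 16.0, 0.2],
           [18.0, 4.0, 0.9]]
scores = topsis(servers, weights=[0.5, 0.3, 0.2],
                benefit=[False, True, False])
best_server = int(np.argmax(scores))
```

In TMLpSA the candidate set is restricted to servers along the Transformer-predicted trajectory, so the ranking is recomputed each assignment period.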