{"title":"Incremental-learning-based graph neural networks on edge-forwarding devices for network intrusion detection","authors":"Qiang Gao , Samina Kausar , HuaXiong Zhang","doi":"10.1016/j.aej.2025.03.102","DOIUrl":null,"url":null,"abstract":"<div><div>Graph neural networks have become one of the research hotspots for network intrusion detection due to their natural suitability for representing computer networks. However, most of the related research on training GNNs is centralized, and this approach involves long-distance transmission and dumping of network data, so it is inefficient to perform, has the potential for privacy leakage, and introduces an additional transmission burden to the network. To address these challenges, this paper investigates the feasibility of offloading both graph neural networks' training and inference phases to edge-forwarding devices such as switches. We propose a distributed framework that aggregates residual computational resources from edge-forwarding devices into a micro-computing network. This framework then migrates GNN execution to edge-forwarding devices through a hybrid parallelism paradigm, thus locally detecting network anomalies to reduce network data transmission significantly. Meanwhile, to address the problem of computational and memory constraints of edge-forwarding devices, we propose a novel attention heatmap-driven memoryless incremental learning algorithm that learns network features and detects anomalies with minimal resources while avoiding catastrophic forgetting. Finally, we implement and verify the feasibility of the above framework and algorithm using a general-purpose embedded system and open-source software. The experiments show that although each edge-forwarding device's computational and memory load is light, the framework performs similarly to traditional approaches. To the best of our knowledge, this is the first approach that offloads a graph neural network model to edge-forwarding devices.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"126 ","pages":"Pages 81-89"},"PeriodicalIF":6.2000,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825004107","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Graph neural networks (GNNs) have become a research hotspot in network intrusion detection because they are naturally suited to representing computer networks. However, most related work trains GNNs in a centralized manner, which requires long-distance transmission and storage of network data; this is inefficient to perform, risks privacy leakage, and places an additional transmission burden on the network. To address these challenges, this paper investigates the feasibility of offloading both the training and inference phases of GNNs to edge-forwarding devices such as switches. We propose a distributed framework that aggregates the residual computational resources of edge-forwarding devices into a micro-computing network and migrates GNN execution to those devices through a hybrid parallelism paradigm, so that network anomalies are detected locally and network data transmission is reduced significantly. Meanwhile, to cope with the computational and memory constraints of edge-forwarding devices, we propose a novel attention-heatmap-driven memoryless incremental learning algorithm that learns network features and detects anomalies with minimal resources while avoiding catastrophic forgetting. Finally, we implement and verify the feasibility of the framework and algorithm on a general-purpose embedded system using open-source software. Experiments show that, although the computational and memory load on each edge-forwarding device is light, the framework performs similarly to traditional approaches. To the best of our knowledge, this is the first approach that offloads a graph neural network model to edge-forwarding devices.
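The abstract describes the algorithm only at a high level, so the following is a minimal, hypothetical sketch rather than the authors' method: a tiny PyTorch model in which a per-parameter importance map (standing in for the attention heatmap the paper mentions) regularizes each incremental update, so no past traffic needs to be stored (memoryless) and previously learned behavior is less likely to be overwritten (catastrophic forgetting). All names (TinyGNNLayer, IncrementalDetector, train_increment) and the gradient-magnitude importance estimate are assumptions introduced here for illustration.

import torch
import torch.nn as nn

class TinyGNNLayer(nn.Module):
    """Minimal message-passing layer: aggregate neighbor features via a dense adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: [N, N] row-normalized adjacency, x: [N, in_dim] node features
        return torch.relu(self.lin(adj @ x))

class IncrementalDetector(nn.Module):
    """GNN encoder + classifier head; keeps per-parameter importance scores
    instead of any replay buffer (memoryless incremental learning)."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.gnn = TinyGNNLayer(in_dim, hidden)
        self.head = nn.Linear(hidden, n_classes)
        self.importance = {n: torch.zeros_like(p) for n, p in self.named_parameters()}
        self.anchor = {n: p.detach().clone() for n, p in self.named_parameters()}

    def forward(self, x, adj):
        return self.head(self.gnn(x, adj))

def train_increment(model, x, adj, y, lam=10.0, lr=1e-2, steps=50):
    """One incremental task: cross-entropy plus an importance-weighted penalty
    that discourages drifting from parameters important to earlier traffic."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = ce(model(x, adj), y)
        for n, p in model.named_parameters():
            loss = loss + lam * (model.importance[n] * (p - model.anchor[n]) ** 2).sum()
        loss.backward()
        opt.step()
    # Refresh the "heatmap": here a running average of gradient magnitudes on the
    # current task only (an assumption; the paper derives it from attention heatmaps).
    model.zero_grad()
    ce(model(x, adj), y).backward()
    for n, p in model.named_parameters():
        model.importance[n] = 0.9 * model.importance[n] + 0.1 * p.grad.abs()
        model.anchor[n] = p.detach().clone()

In the paper's setting, the importance signal would come from the GNN's attention heatmaps and the update would run on the edge-forwarding device itself; this sketch only conveys the importance-weighted, replay-free update pattern.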
Journal description:
Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the fields of engineering and applied science. It is cited in the Engineering Information Services (EIS) and Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering