{"title":"Leveraging Large Language Models for Network Security: A Multi-Expert Approach","authors":"Tianshun Lin, Changgui Xu, Jianshan Zhang, Nan Lin, Yuxin Liu, Yuanjun Zheng","doi":"10.1002/itl2.70016","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The optimization of diverse industrial edge computing tasks presents a significant challenge due to the dynamic and heterogeneous nature of industrial operational demands. While deep reinforcement learning (DRL) has shown promise, task-specific DRL models are often required in complex industrial edge networks, such as real-time anomaly detection and latency-sensitive decision-making, increasing computational overhead. This leads to large computational overheads, unstable performance, and increased energy consumption. Such a cost has become a concern in resource-limited industrial edge networks. In this paper, we propose a novel multi-expert optimization approach with the help of powerful large language models (LLMs). Our goals are to dynamically interpret industrial task requirements, activate specialized DRL experts, and synthesize their outputs into context-aware decisions. Specifically, we replace conventional gate networks with an LLM-based orchestrator. LLMs provide the benefits of semantic reasoning and contextual understanding when managing expert selection and collaboration. This approach eliminates the need to train unique DRL models for each industrial optimization task, thereby reducing deployment costs and improving scalability. Our experiments indicate that our approach achieves 13% higher anomaly detection accuracy when compared with traditional DRL methods.</p>\n </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 3","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Technology Letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/itl2.70016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Abstract
The optimization of diverse industrial edge computing tasks is challenging due to the dynamic and heterogeneous nature of industrial operational demands. Although deep reinforcement learning (DRL) has shown promise, complex industrial edge networks often require task-specific DRL models, for example for real-time anomaly detection and latency-sensitive decision-making. Training and maintaining a separate model for each task incurs large computational overhead, unstable performance, and increased energy consumption, which is a serious concern in resource-limited industrial edge networks. In this paper, we propose a novel multi-expert optimization approach built on powerful large language models (LLMs). Our goals are to dynamically interpret industrial task requirements, activate specialized DRL experts, and synthesize their outputs into context-aware decisions. Specifically, we replace the conventional gate network with an LLM-based orchestrator, which brings semantic reasoning and contextual understanding to expert selection and collaboration. This design eliminates the need to train a unique DRL model for each industrial optimization task, thereby reducing deployment costs and improving scalability. Our experiments indicate that the proposed approach achieves 13% higher anomaly detection accuracy than traditional DRL methods.
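
To make the orchestration idea concrete, the sketch below illustrates one way an LLM could stand in for a learned gate network: it reads a natural-language task description and a catalog of expert capabilities, then routes the current state to the selected DRL expert's policy. This is a minimal illustration under stated assumptions, not the authors' implementation: `query_llm` is a hypothetical stand-in for a real LLM API call (here reduced to keyword matching so the example stays self-contained), and the expert "policies" are stubs rather than trained DRL models.

```python
# Minimal sketch of LLM-orchestrated expert selection (hypothetical, not the paper's code).
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Expert:
    name: str
    description: str                  # capability summary shown to the LLM orchestrator
    policy: Callable[[dict], dict]    # stub for a trained DRL policy: state -> action


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call. In practice this would invoke a hosted model;
    here it only matches keywords so the sketch runs without dependencies."""
    if "anomaly" in prompt.lower():
        return "anomaly_detector"
    if "latency" in prompt.lower():
        return "latency_scheduler"
    return "anomaly_detector"  # arbitrary fallback for the sketch


def orchestrate(task_description: str, experts: Dict[str, Expert], state: dict) -> dict:
    """Replace a learned gate network with an LLM that reads the task description
    and expert capability summaries, then routes the state to the chosen expert."""
    catalog = "\n".join(f"- {e.name}: {e.description}" for e in experts.values())
    prompt = (
        f"Task: {task_description}\n"
        f"Available experts:\n{catalog}\n"
        "Reply with the single best expert name."
    )
    chosen = query_llm(prompt)
    expert = experts.get(chosen, next(iter(experts.values())))  # fall back to first expert
    return expert.policy(state)


if __name__ == "__main__":
    experts = {
        "anomaly_detector": Expert(
            "anomaly_detector",
            "flags abnormal sensor readings in real time",
            lambda s: {"action": "raise_alert" if s.get("reading", 0.0) > 0.9 else "ok"},
        ),
        "latency_scheduler": Expert(
            "latency_scheduler",
            "schedules latency-sensitive tasks onto edge nodes",
            lambda s: {"action": "offload", "node": s.get("nearest_node", "edge-0")},
        ),
    }
    decision = orchestrate(
        "Detect anomalies in vibration data from the stamping press",
        experts,
        {"reading": 0.95},
    )
    print(decision)  # {'action': 'raise_alert'}
```

In a full system, the LLM's selection could also weight or combine several experts rather than pick a single one, which is closer to the output-synthesis step described in the abstract; the single-choice routing above is kept only to keep the sketch short.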