Optimization of Data and Model Transfer for Federated Learning to Manage Large-Scale Network
Kengo Tajiri; Ryoichi Kawahara
IEEE Transactions on Network and Service Management, vol. 22, no. 2, pp. 958-973, published 2025-02-03
DOI: 10.1109/TNSM.2025.3538156
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10870172
Recently, deep learning has been introduced to automate network management and reduce labor costs. However, the amount of log data obtained from a large-scale network is huge, so conventional centralized deep learning incurs high communication and computation costs. This paper aims to reduce these costs by training deep learning models with federated learning on the data generated in the network, and to deploy the trained models as soon as possible. In this scheme, data generated at each point in the network are transferred to servers in the network, and deep learning models are trained by federated learning among those servers. We first show that the training time depends on the transfer routes and the destinations of both the data and the model parameters. We then introduce a method that simultaneously optimizes (1) to which server each point transfers its data, and through which routes, and (2) through which routes the servers exchange parameters with one another. In the experiments, we compared the proposed method with naive methods, numerically and experimentally, in complicated wired network environments. We show that the proposed method reduced the total training time by 34% to 79% compared with the naive methods.
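The core idea, joint selection of data-transfer destinations and routes so that the slowest transfer does not delay federated training, can be illustrated with a toy sketch. This is not the paper's actual algorithm; the network, data volumes, bandwidths, and per-round synchronization times below are all made-up assumptions, and the sketch simply brute-forces the point-to-server assignment that minimizes an estimated total training time.

```python
# Toy sketch (hypothetical numbers, not the paper's method): pick which
# server each data-generating point uploads to so that the slowest
# upload, plus the synchronous federated-learning rounds, is minimized.
from itertools import product

DATA_GB = {"p1": 40, "p2": 10, "p3": 25}   # data volume per point (GB)
BANDWIDTH = {                               # point -> {server: Gbit/s}
    "p1": {"s1": 10, "s2": 2},
    "p2": {"s1": 1,  "s2": 8},
    "p3": {"s1": 5,  "s2": 5},
}
PARAM_TIME = {"s1": 3.0, "s2": 4.0}        # per-round parameter-sync time (s)
ROUNDS = 50                                 # number of federated rounds

def total_time(assignment):
    """Estimated training time: slowest server's upload + sync rounds."""
    load = {server: 0.0 for server in PARAM_TIME}
    for point, server in assignment.items():
        # transfer time in seconds: GB * 8 bits / (Gbit/s)
        load[server] += DATA_GB[point] * 8 / BANDWIDTH[point][server]
    transfer = max(load.values())           # servers receive in parallel
    sync = ROUNDS * max(PARAM_TIME.values())  # rounds wait for the slowest
    return transfer + sync

points, servers = list(DATA_GB), list(PARAM_TIME)
best = min(
    (dict(zip(points, choice)) for choice in product(servers, repeat=len(points))),
    key=total_time,
)
print(best, round(total_time(best), 1))
```

With these made-up numbers, the brute force sends p1 to s1 and p2, p3 to s2, balancing the upload load across servers. The paper's contribution is doing this kind of joint optimization over realistic routes for both the data and the model parameters, rather than exhaustive search over a toy instance.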
Journal description:
IEEE Transactions on Network and Service Management publishes (online only) peer-reviewed archival-quality papers that advance the state of the art and practical applications of network and service management. Theoretical research contributions (presenting new concepts and techniques) and applied contributions (reporting on experiences and experiments with actual systems) are encouraged. The transactions focus on the key technical issues related to: Management Models, Architectures and Frameworks; Service Provisioning, Reliability and Quality Assurance; Management Functions; Enabling Technologies; Information and Communication Models; Policies; Applications and Case Studies; and Emerging Technologies and Standards.