Toward low-complexity neural networks for failure management in optical networks

Impact Factor: 4.0 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Lareb Zar Khan;Joao Pedro;Omran Ayoub;Nelson Costa;Andrea Sgambelluri;Lorenzo De Marinis;Antonio Napoli;Nicola Sambo
Journal of Optical Communications and Networking, vol. 17, no. 7, pp. 555–563
DOI: 10.1364/JOCN.550933
Published: 2025-06-09
URL: https://ieeexplore.ieee.org/document/11027919/
Citations: 0

Abstract

Machine learning (ML) continues to show its potential and efficacy in automating network management tasks, such as failure management. However, as ML deployment considerations broaden, aspects that go beyond predictive performance, such as a model’s computational complexity (CC), start to gain significance, as higher CC incurs higher costs and energy consumption. Balancing high predictive performance with reduced CC is an important aspect, and therefore, it needs more investigation, especially in the context of optical networks. In this work, we focus on the problem of reducing the CC of ML models, specifically neural networks (NNs), for the use case of failure identification in optical networks. We propose an approach that exploits the relative activity of neurons in NNs to reduce their size (and hence, their CC). Our proposed approach, referred to as iterative neural removal (INR), iteratively computes neurons’ activity and removes neurons with no activity until reaching a predefined stopping condition. We also propose another approach, referred to as guided knowledge distillation (GKD), that combines INR with knowledge distillation (KD), a known technique for compression of NNs. GKD inherently determines the size of the compressed NN without requiring any manual suboptimal selection or other time-consuming optimization strategies, as in traditional KD. To quantify the effectiveness of INR and GKD, we evaluate their performance against pruning (i.e., a well-known NN compression technique) in terms of impact on predictive performance and reduction in CC and memory footprint. For the considered scenario, experimental results on testbed data show that INR and GKD are more effective than pruning in reducing CC and memory footprint.
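The abstract describes INR as iteratively computing neurons' activity and removing neurons with no activity until a stopping condition is met. The following is a minimal sketch of that idea, assuming a one-hidden-layer ReLU network stored as plain weight lists; the function names, the mean-absolute-activation measure, and the zero-activity removal criterion are illustrative assumptions based only on the abstract, not the authors' actual implementation.

```python
def relu(x):
    return x if x > 0.0 else 0.0

def hidden_activations(W1, b1, x):
    # Activation of each hidden neuron for one input vector x.
    return [relu(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W1, b1)]

def neuron_activity(W1, b1, calib):
    # Mean activation of each hidden neuron over a calibration set
    # (ReLU outputs are non-negative, so zero mean means the neuron
    # never fires on the calibration data).
    acts = [hidden_activations(W1, b1, x) for x in calib]
    return [sum(a[j] for a in acts) / len(calib) for j in range(len(W1))]

def inr(W1, b1, W2, calib, min_neurons=1):
    # Iteratively drop zero-activity hidden neurons until none remain
    # dead or the layer would shrink below a floor (the stopping condition).
    while True:
        activity = neuron_activity(W1, b1, calib)
        keep = [j for j, a in enumerate(activity) if a > 0.0]
        if len(keep) == len(W1) or len(keep) < min_neurons:
            break
        W1 = [W1[j] for j in keep]
        b1 = [b1[j] for j in keep]
        # Drop the output-layer weights that fed from the removed neurons.
        W2 = [[row[j] for j in keep] for row in W2]
    return W1, b1, W2

# Toy usage: the third hidden neuron has a large negative bias, so it is
# silent on the (positive) calibration inputs and gets removed.
W1 = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b1 = [0.0, 0.0, -10.0]
W2 = [[0.5, -0.5, 2.0]]
calib = [[1.0, 1.0], [2.0, 0.5]]
W1, b1, W2 = inr(W1, b1, W2, calib)
```

In this sketch, removing a hidden neuron shrinks both the hidden layer and the corresponding column of the output weight matrix, which is where the reduction in computational complexity and memory footprint comes from.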
Source journal metrics
CiteScore: 9.40
Self-citation rate: 16.00%
Articles per year: 104
Review time: 4 months
Journal description: The scope of the Journal includes advances in the state of the art of optical networking science, technology, and engineering. Both theoretical contributions (including new techniques, concepts, analyses, and economic studies) and practical contributions (including optical networking experiments, prototypes, and new applications) are encouraged. Subareas of interest include the architecture and design of optical networks, optical network survivability and security, software-defined optical networking, elastic optical networks, data and control plane advances, network-management-related innovation, and optical access networks. Enabling technologies and their applications are suitable topics only if the results are shown to directly impact optical networking beyond simple point-to-point networks.