{"title":"Improving Neural Network Fault Tolerance Against Weight Attack","authors":"Chia Jen Cheng;Ethan Chen;Vanessa Chen","doi":"10.1109/OJCAS.2025.3602678","DOIUrl":null,"url":null,"abstract":"The increase of neural networks used in mission-critical applications requires protecting model parameters to maintain correct inferences. While traditional threats like adversarial inputs have been well-studied, recent research in neural network security has explored attacking model weights to degrade prediction accuracy. Many studies focused on developing fault detection methods, and few recovery strategies have been offered. This work proposes combining neural compression technique with modular redundancy to enhance model parameters' fault tolerance against adversarial bit-flips at runtime. The fault tolerance improvement of the proposed method is demonstrated with two model architectures and two datasets. Further, a field programmable gate array realization of the scheme has been implemented to demonstrate a hardware proof of concept.","PeriodicalId":93442,"journal":{"name":"IEEE open journal of circuits and systems","volume":"6 ","pages":"383-392"},"PeriodicalIF":2.4000,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11142271","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of circuits and systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11142271/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The increasing use of neural networks in mission-critical applications requires protecting model parameters to maintain correct inference. While traditional threats such as adversarial inputs have been well studied, recent research in neural network security has explored attacks on model weights that degrade prediction accuracy. Many studies have focused on fault detection methods, while few recovery strategies have been offered. This work proposes combining a neural compression technique with modular redundancy to enhance the fault tolerance of model parameters against adversarial bit-flips at runtime. The fault tolerance improvement of the proposed method is demonstrated on two model architectures and two datasets. Further, a field-programmable gate array realization of the scheme has been implemented as a hardware proof of concept.
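
The abstract does not specify the exact scheme, but the core idea (compressed weight storage protected by modular redundancy with majority voting) can be sketched. The following is a minimal illustrative sketch in Python/NumPy, assuming 8-bit uniform quantization as a simple stand-in for the compression step and triple modular redundancy with bitwise voting; all function names and parameters here are hypothetical and not the authors' implementation.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Uniformly quantize float weights to int8.
    A simple stand-in for the compression step; the paper's actual
    compression scheme is not described in the abstract."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def make_redundant(q: np.ndarray):
    """Store three identical copies of the compressed weights
    (triple modular redundancy)."""
    return [q.copy() for _ in range(3)]

def flip_random_bit(q: np.ndarray, rng: np.random.Generator):
    """Simulate an adversarial bit-flip in one stored copy."""
    flat = q.view(np.uint8).reshape(-1)
    flat[rng.integers(flat.size)] ^= np.uint8(1 << rng.integers(8))

def vote(copies):
    """Bitwise majority vote across the three copies; any bit-flip
    confined to a single copy is masked at read time."""
    a, b, c = (x.view(np.uint8) for x in copies)
    return ((a & b) | (a & c) | (b & c)).view(np.int8)

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
copies = make_redundant(q)
flip_random_bit(copies[0], rng)      # attack one stored copy
recovered = vote(copies)             # runtime recovery
assert np.array_equal(recovered, q)  # single-copy fault masked
```

In this sketch, compression shrinks each stored copy, which offsets part of the 3x storage cost of redundancy; voting happens when weights are read, so a flipped bit in one copy never reaches the inference path.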