var-nmODE: Model with L2-stability based on nmODE for defending against adversarial attacks

IF 5.5 · CAS Region 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Qing Yang , Xiaobing Hu , Jiali Yu , Qixun Sun , Lan Shu , Zhang Yi , Yong Liao
DOI: 10.1016/j.neucom.2025.130605
Journal: Neurocomputing, Volume 648, Article 130605
Published: 2025-06-10 (Journal Article)
Citations: 0

Abstract

Deep neural networks (DNNs) have demonstrated remarkable performance in a wide range of applications. However, their performance is significantly degraded by various perturbations, particularly adversarial perturbations, which are difficult to recognize with the naked eye yet cause the network to produce incorrect classifications. Some studies have shown that ordinary differential equation (ODE) networks are inherently more robust to adversarial perturbations than general deep networks. nmODE (Neural Memory Ordinary Differential Equation) is a recently proposed artificial neural network model with strong nonlinearity. Despite its potential, nmODE still faces challenges in adversarial defense. In this paper, we propose a variant of the neural memory ordinary differential equation model (var-nmODE) to defend against adversarial attacks. On a theoretical foundation, var-nmODE realizes an L2-stable mapping, which corresponds to a certified defense against L2 adversarial perturbations. Further, we conduct adversarial training on the proposed model and show experimentally that var-nmODE outperforms nmODE. Moreover, adversarial training significantly improves the performance of var-nmODE, indicating that the proposed model can resist adversarial perturbations. Notably, var-nmODE provides inherent, certified stability, making it a valuable addition to deep learning defense research.
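To make the abstract's setting concrete, the sketch below illustrates the general idea of an nmODE-style memory dynamic and an L2-bounded input perturbation. It is an illustration only: the dynamics `dy/dt = -y + sin²(y + γ(x))` follow the published nmODE formulation, but the weights here are random and the var-nmODE modifications that yield the certified L2 bound are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # hypothetical, untrained weights
b = np.zeros(8)

def nmode_forward(x, steps=200, dt=0.05):
    """Integrate the memory state y with explicit Euler and return y(T).

    gamma(x) = W x + b is the learned input injection; it stays fixed
    while the memory state y evolves toward an attractor.
    """
    gamma = W @ x + b
    y = np.zeros_like(gamma)           # memory state starts at the origin
    for _ in range(steps):
        y = y + dt * (-y + np.sin(y + gamma) ** 2)
    return y

x = rng.normal(size=4)
delta = rng.normal(size=4)
delta = 0.1 * delta / np.linalg.norm(delta)  # L2-bounded perturbation, ||delta||_2 = 0.1

y_clean = nmode_forward(x)
y_pert = nmode_forward(x + delta)
# An L2-stable mapping bounds how much the output can move for a small
# L2-bounded input perturbation; here we simply measure that deviation.
print(np.linalg.norm(y_pert - y_clean))
```

An adversarial attacker would choose `delta` to maximize this output deviation (and hence the classification error); the paper's claim is that var-nmODE provably bounds that deviation in the L2 norm.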
Source journal
Neurocomputing (Engineering & Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.