Optimizing Information Loss Towards Robust Neural Networks

Philip Sperl, Konstantin Böttinger
{"title":"Optimizing Information Loss Towards Robust Neural Networks","authors":"Philip Sperl, Konstantin Böttinger","doi":"10.1145/3477997.3478016","DOIUrl":null,"url":null,"abstract":"Neural Networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications of the attacked NNs. The perturbations required to craft the examples are often negligible and even human-imperceptible. To protect deep learning-based systems from such attacks, several countermeasures have been proposed with adversarial training still being considered the most effective. Here, NNs are iteratively retrained using adversarial examples forming a computationally expensive and time consuming process, which often leads to a performance decrease. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining. Based on an information-theoretic-inspired analysis, we investigate the effects of adversarial training and achieve a robustness increase without laboriously generating adversarial examples. With our prototype implementation we validate and show the effectiveness of our approach for various NN architectures and data sets. We empirically show that entropic retraining leads to a significant increase in NNs’ security and robustness while only relying on the given original data.","PeriodicalId":130265,"journal":{"name":"Proceedings of the 2020 Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security","volume":"305 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Workshop on DYnamic and Novel Advances in Machine Learning and Intelligent Cyber Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3477997.3478016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Neural Networks (NNs) are vulnerable to adversarial examples. Such inputs differ only slightly from their benign counterparts yet provoke misclassifications by the attacked NNs. The perturbations required to craft the examples are often negligible and even human-imperceptible. To protect deep learning-based systems from such attacks, several countermeasures have been proposed, with adversarial training still considered the most effective. Here, NNs are iteratively retrained on adversarial examples, a computationally expensive and time-consuming process that often leads to a drop in performance. To overcome the downsides of adversarial training while still providing a high level of security, we present a new training approach we call entropic retraining. Based on an analysis inspired by information theory, we investigate the effects of adversarial training and achieve an increase in robustness without laboriously generating adversarial examples. With our prototype implementation, we validate the effectiveness of our approach for various NN architectures and data sets. We empirically show that entropic retraining significantly increases NNs' security and robustness while relying only on the given original data.
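To make the cost argument concrete, the sketch below shows the conventional adversarial training baseline the abstract contrasts against: a minimal PyTorch implementation of FGSM-based adversarial training (after Goodfellow et al.). The model, data loader, and perturbation budget eps are illustrative assumptions; this is not the paper's entropic retraining procedure, which avoids the adversarial-example generation step entirely.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    # Classic FGSM: one signed-gradient step on the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # Move each input dimension in the direction that increases the
    # loss, then clip back to the valid input range.
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.03):
    # Conventional adversarial training: every batch pays an extra
    # forward/backward pass just to craft the adversarial inputs.
    model.train()
    for x, y in loader:
        x_adv = fgsm_example(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()

The extra forward/backward pass needed to craft x_adv for every batch (more for stronger iterative attacks such as PGD) is the overhead that entropic retraining is designed to eliminate by training on the original data alone.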