{"title":"A Simple Stochastic Neural Network for Improving Adversarial Robustness","authors":"Hao Yang, Min Wang, Zhengfei Yu, Yun Zhou","doi":"10.1109/ICME55011.2023.00392","DOIUrl":null,"url":null,"abstract":"The vulnerability of deep learning algorithms to malicious attack has garnered significant attention from researchers in recent years. In order to provide more reliable services for safety-sensitive applications, prior studies have introduced Stochastic Neural Networks (SNNs) as a means of improving adversarial robustness. However, existing SNNs are not designed from the perspective of optimizing the adversarial decision boundary and rely on complex and expensive adversarial training. To find an appropriate decision boundary, we propose a simple and effective stochastic neural network that incorporates a regularization term into the objective function. Our approach maximizes the variance of the feature distribution in low-dimensional space and forces the feature direction to align with the eigenvectors of the covariance matrix. Due to no need of adversarial training, our method requires lower computational cost and does not sacrifice accuracy on normal examples, making it suitable for use with a variety of models. Extensive experiments against various well-known white- and black-box attacks show that our proposed method outperforms state-of-the-art methods.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME55011.2023.00392","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The vulnerability of deep learning algorithms to malicious attacks has garnered significant attention from researchers in recent years. To provide more reliable services for safety-sensitive applications, prior studies have introduced Stochastic Neural Networks (SNNs) as a means of improving adversarial robustness. However, existing SNNs are not designed from the perspective of optimizing the adversarial decision boundary, and they rely on complex and expensive adversarial training. To find an appropriate decision boundary, we propose a simple and effective stochastic neural network that incorporates a regularization term into the objective function. Our approach maximizes the variance of the feature distribution in a low-dimensional space and forces the feature directions to align with the eigenvectors of the covariance matrix. Because it requires no adversarial training, our method has a lower computational cost and does not sacrifice accuracy on clean examples, making it suitable for use with a variety of models. Extensive experiments against various well-known white-box and black-box attacks show that the proposed method outperforms state-of-the-art methods.
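The abstract does not spell out the regularizer, but its description (maximize feature variance in a low-dimensional space; align feature directions with the eigenvectors of the covariance matrix) suggests a loss term of roughly the following shape. The sketch below is a hypothetical PyTorch implementation under those assumptions: the function name variance_alignment_regularizer, the weight lam, and the cosine-based alignment heuristic are illustrative choices, not the authors' formulation.

import torch
import torch.nn.functional as F

def variance_alignment_regularizer(z: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    # z: (batch, d) low-dimensional features from the stochastic layer.
    # Returns a scalar to be added to the task loss; minimizing it
    # maximizes feature variance and rewards alignment of feature
    # directions with the eigenvectors of the batch covariance matrix.
    z_centered = z - z.mean(dim=0, keepdim=True)
    cov = z_centered.T @ z_centered / (z.shape[0] - 1)  # (d, d) batch covariance

    # Eigendecomposition; eigh is appropriate for a symmetric matrix.
    eigvals, eigvecs = torch.linalg.eigh(cov)

    # Variance term: negative total variance, so minimizing the loss
    # spreads the feature distribution out.
    variance_term = -eigvals.sum()

    # Alignment term: cosine similarity between each (unit-normalized)
    # feature and its best-matching eigenvector, negated so that
    # minimization pushes feature directions toward the eigenbasis.
    z_unit = F.normalize(z_centered, dim=1)  # (batch, d) unit-norm features
    cosines = z_unit @ eigvecs               # columns of eigvecs are unit eigenvectors
    alignment_term = -cosines.abs().max(dim=1).values.mean()

    return variance_term + lam * alignment_term

In training, a term like this would simply be added to the standard objective, e.g. loss = F.cross_entropy(logits, y) + beta * variance_alignment_regularizer(z), with beta a tuning weight (again an assumption). No adversarial examples are generated during training, which is consistent with the abstract's claim of lower computational cost than adversarial training.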