{"title":"A robust defense for spiking neural networks against adversarial examples via input filtering","authors":"Shasha Guo , Lei Wang , Zhijie Yang , Yuliang Lu","doi":"10.1016/j.sysarc.2024.103209","DOIUrl":null,"url":null,"abstract":"<div><p>Spiking Neural Networks (SNNs) are increasingly deployed on resource-constrained embedded systems due to their low power consumption. Unfortunately, SNNs are vulnerable to adversarial examples, which threaten application security. Existing denoising filters can protect SNNs from adversarial examples. However, why these filters defend against adversarial examples remains unclear, so they cannot ensure a trustworthy defense. In this work, we aim to explain the reason and to provide a more robust filter against different adversarial examples. First, we propose two new norms, <span><math><msub><mrow><mi>l</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span> and <span><math><msub><mrow><mi>l</mi></mrow><mrow><mi>∞</mi></mrow></msub></math></span>, to describe the spatial and temporal features of adversarial events and thereby understand the working principles of filters. Second, we propose combining filters to provide a robust defense against different perturbation events. To bridge the gap between this goal and the ability of existing filters, we propose a new filter that can defend against both spatially and temporally dense perturbation events. We conduct experiments on two widely used neuromorphic datasets, NMNIST and IBM DVSGesture. Experimental results show that the combined defense restores accuracy to over 80% of the original SNN accuracy.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103209"},"PeriodicalIF":3.7000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems Architecture","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1383762124001462","RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
A robust defense for spiking neural networks against adversarial examples via input filtering
Spiking Neural Networks (SNNs) are increasingly deployed on resource-constrained embedded systems due to their low power consumption. Unfortunately, SNNs are vulnerable to adversarial examples, which threaten application security. Existing denoising filters can protect SNNs from adversarial examples. However, why these filters defend against adversarial examples remains unclear, so they cannot ensure a trustworthy defense. In this work, we aim to explain the reason and to provide a more robust filter against different adversarial examples. First, we propose two new norms, l0 and l∞, to describe the spatial and temporal features of adversarial events and thereby understand the working principles of filters. Second, we propose combining filters to provide a robust defense against different perturbation events. To bridge the gap between this goal and the ability of existing filters, we propose a new filter that can defend against both spatially and temporally dense perturbation events. We conduct experiments on two widely used neuromorphic datasets, NMNIST and IBM DVSGesture. Experimental results show that the combined defense restores accuracy to over 80% of the original SNN accuracy.
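The abstract does not spell out how a denoising filter for event-based inputs works, so here is a minimal sketch of the general idea such filters build on: an event on a neuromorphic sensor is kept only if a recent event fired at a neighboring pixel, so spatially and temporally isolated events (likely noise or adversarial perturbations) are dropped. This is a generic background-activity filter, not the paper's proposed filter; the function name and the `radius`/`window` parameters are illustrative assumptions.

```python
def denoise_events(events, radius=1, window=5000):
    """Hedged sketch of a spatio-temporal event filter.

    `events` is a list of (x, y, t) tuples sorted by timestamp t
    (in microseconds). An event is kept only if some earlier event
    fired within `radius` pixels and within `window` microseconds;
    isolated events are treated as noise and discarded.
    """
    last_seen = {}  # (x, y) -> timestamp of the most recent event there
    kept = []
    for x, y, t in events:
        supported = False
        # Check the spatial neighborhood for a recent supporting event.
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue
                ts = last_seen.get((x + dx, y + dy))
                if ts is not None and t - ts <= window:
                    supported = True
        if supported:
            kept.append((x, y, t))
        last_seen[(x, y)] = t
    return kept
```

In the paper's terms, a filter of this kind targets spatially sparse perturbation events; the combined defense the abstract describes would pair it with filters tuned to other (e.g. temporally dense) perturbation patterns.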
Journal Introduction:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.