Towards Efficient Formal Verification of Spiking Neural Network
Baekryun Seong, Jieung Kim, Sang-Ki Ko
arXiv:2408.10900 · arXiv - CS - Neural and Evolutionary Computing · 2024-08-20
Recently, AI research has primarily focused on large language models (LLMs),
and increasing accuracy often involves scaling up and consuming more power. The
power consumption of AI has become a significant societal issue; in this
context, spiking neural networks (SNNs) offer a promising solution. SNNs
operate in an event-driven manner, like the human brain, and compress
information temporally. These characteristics allow SNNs to consume
significantly less power than perceptron-based artificial neural networks
(ANNs), positioning them as a next-generation neural network technology. However,
societal concerns regarding AI go beyond power consumption, with the
reliability of AI models being a global issue. For instance, adversarial
attacks on AI models are a well-studied problem in the context of traditional
neural networks. Despite their importance, the stability and property
verification of SNNs remain in the early stages of research. Most SNN
verification methods are time-consuming and scale poorly, making practical
applications challenging. In this paper, we introduce temporal encoding to
achieve practical performance in verifying the adversarial robustness of SNNs.
We conduct a theoretical analysis of this approach and demonstrate its success
in verifying SNNs at previously unmanageable scales. Our contribution advances
SNN verification to a practical level, facilitating the safer application of
SNNs.
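To make the idea of temporal encoding concrete: in a time-to-first-spike (TTFS) scheme, each input value is represented by a single spike time rather than a train of spikes, which shrinks the state space a verifier must explore. The sketch below is a minimal, generic illustration of TTFS encoding; it is not the paper's specific scheme, and the function name and parameters are hypothetical.

```python
# Minimal sketch of time-to-first-spike (TTFS) temporal encoding.
# Assumption: inputs are intensities in [0, 1]; stronger inputs fire earlier.
# This is an illustrative example, not the verification method from the paper.

def ttfs_encode(intensities, t_max=10):
    """Map each intensity in [0, 1] to a single integer spike time.

    An intensity of 1.0 fires at t=0, weaker intensities fire later,
    and a zero intensity never fires (represented here as None).
    Encoding each value as one spike time, instead of a spike train,
    reduces the number of variables a formal verifier must reason about.
    """
    spike_times = []
    for x in intensities:
        if x <= 0:
            spike_times.append(None)                      # no spike at all
        else:
            spike_times.append(round(t_max * (1.0 - x)))  # earlier = stronger
    return spike_times

print(ttfs_encode([1.0, 0.5, 0.0]))  # -> [0, 5, None]
```

Under this encoding, an adversarial-robustness query becomes a constraint over a small set of spike times, which is what makes verification at larger scales tractable.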