Object Detection for Embedded Systems Using Tiny Spiking Neural Networks: Filtering Noise Through Visual Attention

Hugo Bulzomi, Amélie Gruel, Jean Martinet, Takeshi Fujita, Yuta Nakano, R. Bendahan

2023 18th International Conference on Machine Vision and Applications (MVA), 23 July 2023. DOI: 10.23919/MVA57639.2023.10215590
Object detection is an important task that is becoming increasingly common in embedded-system applications. State-of-the-art deep neural networks (DNNs) are often incompatible with the constraints of such systems: their large size and high computational cost make them hard to deploy on hardware with limited resources. Spiking Neural Networks (SNNs) have attracted attention in recent years because of their potential as energy-efficient alternatives when implemented on specialized hardware, and because they integrate smoothly with energy-efficient event cameras. In this paper, we present a lightweight SNN architecture for efficient object detection in embedded systems using event-camera data. We show that, by applying visual attention mechanisms, we can ignore most of the noise in the input and thus reduce the number of neurons and activations, since additional noise-filtering layers are not needed. Our proposed SNN is 24 times smaller than a previous comparable method at our input resolution and maintains similar overall detection performance while being more robust to noise. Finally, we demonstrate the energy efficiency of our network at runtime with an implementation on a SpiNNaker chip, showing the applicability of our approach.
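To make the noise-filtering idea concrete, below is a minimal sketch of one generic way an attention-style mechanism can suppress event-camera noise before detection: events are binned into spatial cells, and only events in cells with enough activity are kept, since isolated events are typically sensor noise while moving objects produce dense clusters. This is not the authors' architecture; the resolution, window size, density threshold, and function names are all illustrative assumptions.

```python
# Hypothetical sketch: density-based attention mask for event-camera noise.
# Not the paper's SNN; all names and thresholds here are illustrative.
import numpy as np

H, W = 128, 128  # assumed sensor resolution

def attention_mask(events, window=8, min_density=3):
    """Keep events only where local activity is high enough.

    events: (N, 2) array of (x, y) pixel coordinates in one time slice.
    Events falling in low-density cells (typical of shot noise) are
    discarded; clustered events (moving objects) pass through.
    """
    counts = np.zeros((H // window, W // window), dtype=np.int32)
    cells = (events[:, 1] // window, events[:, 0] // window)  # (row, col)
    np.add.at(counts, cells, 1)          # per-cell event counts
    keep = counts[cells] >= min_density  # per-event density test
    return events[keep]

# Toy usage: a dense "object" cluster plus uniformly scattered noise.
rng = np.random.default_rng(0)
obj = rng.integers(40, 60, size=(200, 2))    # clustered object events
noise = rng.integers(0, 128, size=(100, 2))  # scattered noise events
events = np.vstack([obj, noise])
filtered = attention_mask(events)
print(f"{len(events)} events in, {len(filtered)} kept")
```

Filtering of this kind happens before the network sees the input, which is why, as the abstract notes, dedicated noise-filtering layers (and their neurons and activations) can be dropped from the detector itself.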