Specialized visual sensor coupled to a dynamic neural field for embedded attentional process

Marino Rasamuel, Lyes Khacef, Laurent Rodriguez, Benoît Miramond
2019 IEEE Sensors Applications Symposium (SAS), published 2019-03-11
DOI: 10.1109/SAS.2019.8705979
Machine learning, through deep learning algorithms, has recently taken the leading role in machine vision, delivering the best results in object detection, recognition and tracking. Nevertheless, these systems are computationally expensive, since they must process entire camera images to produce such results. Consequently, they require substantial hardware resources, which limits their use in embedded applications. On the other hand, biological systems offer a more efficient mechanism: the brain employs an attentional process to focus on the relevant information from the environment, and hence processes only a sub-part of the visual field at a time. In this work, we implement a brain-inspired attentional process based on dynamic neural fields, integrated with two types of specialized visual sensors: frame-based and event-based cameras. We compare the resulting tracking performance and power consumption in the context of embedded recognition and tracking.
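The attentional selection the abstract describes is commonly modeled with Amari-style dynamic neural field dynamics: a field of units with local excitation and broader inhibition settles into a single activity bump at the most salient input location. The following is a minimal 1-D sketch of that mechanism; all parameter values and the difference-of-Gaussians kernel are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari dynamics):
#   tau * du/dt = -u + h + I(x) + sum_j w(x - x_j) * f(u_j)
# Parameters below are illustrative, not the paper's actual values.

N = 100                       # number of units in the field
x = np.arange(N)
tau, h, dt = 10.0, -2.0, 1.0  # time constant, resting level, Euler step


def gaussian(center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)


def kernel(sigma_exc=3.0, sigma_inh=8.0, a_exc=1.5, a_inh=0.8):
    # Lateral coupling: narrow excitation minus broader inhibition
    # (difference of Gaussians), which makes the field competitive.
    d = np.subtract.outer(x, x)
    return (a_exc * np.exp(-0.5 * (d / sigma_exc) ** 2)
            - a_inh * np.exp(-0.5 * (d / sigma_inh) ** 2))


W = kernel()
u = np.full(N, h)             # membrane potential, starts at rest

# Two competing stimuli; attention should select the stronger one.
I = 4.0 * gaussian(30, 3.0) + 2.5 * gaussian(70, 3.0)

for _ in range(200):
    f = 1.0 / (1.0 + np.exp(-u))   # sigmoid firing rate
    du = -u + h + I + W @ f        # Amari field dynamics
    u += dt * du / tau

peak = int(np.argmax(u))
print(peak)  # activity bump settles near the stronger stimulus (x = 30)
```

In the paper's setting, the input `I` would come from the frame-based or event-based sensor rather than from synthetic Gaussians, but the selection dynamics are the same: the bump tracks the stimulus, so only the sub-part of the visual field under the bump needs further processing.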