2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP): Latest Publications

Morphological, Object Detection Framework for Embedded, Event-based Sensing
M. Villemur, Jonah P. Sengupta, P. Julián, A. Andreou
DOI: 10.1109/EBCCSP56922.2022.9845661 · Published: 2022-06-22
Abstract: This paper presents a high-speed object detection algorithm that leverages data from an event-based camera and a spike-based, cellular neural network framework for morphological processing. Event-based data flows into the algorithm in a time-serial, asynchronous fashion, but the algorithm and the subsequent architecture description lend themselves to a parallel approach. The cellular neural network (CNN) is composed of multimodal processing elements that spatiotemporally filter event data and are also used to apply a cascade of piecewise-linear functions in a synchronous fashion. Applied in succession, these morphological operations form object "blobs", produce shape skeletons, and place centroids. Over an event stream, this rapid centroid placement enables low-latency object detection in an embedded framework. Using 25 ms processing intervals and assuming a 100 MHz clock, the morphological algorithm incurs a computational latency of around 5.5 µs and consumes an estimated 161 µW, making it a promising solution for event-based, embedded processing.
Citations: 1
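The paper's cellular-hardware pipeline is not reproduced here, but the sequence the abstract names (accumulate events into a frame, merge them into blobs, place centroids) can be illustrated compactly. The following is a minimal NumPy/SciPy sketch under stated assumptions; the function name, kernel size, and accumulation window are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: morphological blob formation and centroid placement
# on a binarized event frame. Not the paper's CNN hardware; the kernel size
# and 25 ms accumulation window are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detect_centroids(event_frame: np.ndarray, kernel_size: int = 3):
    """event_frame: 2D boolean array, True where events fell in one 25 ms window."""
    kernel = np.ones((kernel_size, kernel_size), dtype=bool)
    # Morphological closing merges nearby events into solid object "blobs".
    blobs = ndimage.binary_closing(event_frame, structure=kernel)
    # Label connected components, then place one centroid per blob.
    labels, n = ndimage.label(blobs)
    return ndimage.center_of_mass(blobs, labels, range(1, n + 1))

# Example: a synthetic frame with one small cluster of events.
frame = np.zeros((64, 64), dtype=bool)
frame[20:24, 30:34] = True
print(detect_centroids(frame))  # ~[(21.5, 31.5)]
```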
[WIP] Unlocking Static Images for Training Event-driven Neural Networks
N. Carissimi, Gaurvi Goyal, Franco Di Pietro, C. Bartolozzi, Arren J. Glover
DOI: 10.1109/EBCCSP56922.2022.9845526 · Published: 2022-06-22
Abstract: Event-driven cameras have the potential to revolutionise the real-time visual sensory-processing paradigm. These asynchronous sensors detect change in the environment with low latency and high dynamic range, allowing for systems orders of magnitude faster than the state of the art using intensity cameras. Deep artificial neural networks, meanwhile, have refashioned machine vision over the last decade, greatly expanding the range of viable tasks, supported by the creation of many large-scale image datasets. In this work, we present a method for leveraging these large-scale datasets to train off-the-shelf deep learning architectures and re-appropriate them for event-based tasks. To this end, we describe an event representation, EROS, and a method to convert images to an EROS-like representation so that image datasets can train neural networks for event-driven applications.
Citations: 2
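The abstract does not spell out the EROS update rule; in related work by the same group, EROS is described as an ordinal surface in which each event raises its own pixel to the maximum while multiplicatively decaying its local neighbourhood. The sketch below is an assumed version of such an update; the window size and decay factor are illustrative parameters, not values from the paper.

```python
# Hypothetical sketch of an EROS-like ordinal surface update: each event
# raises its own pixel to the maximum and multiplicatively decays its
# neighbourhood. Window size and decay factor are illustrative assumptions.
import numpy as np

def eros_update(surface: np.ndarray, x: int, y: int,
                k: int = 7, decay: float = 0.7) -> None:
    """Apply one event at column x, row y to a 2D float surface in place."""
    h, w = surface.shape
    y0, y1 = max(0, y - k // 2), min(h, y + k // 2 + 1)
    x0, x1 = max(0, x - k // 2), min(w, x + k // 2 + 1)
    surface[y0:y1, x0:x1] *= decay   # older activity fades near new events
    surface[y, x] = 1.0              # the newest event dominates its pixel

surface = np.zeros((128, 128))
for x, y in [(40, 40), (41, 40), (42, 41)]:  # a short event stream
    eros_update(surface, x, y)
```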
SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving
S. Mohapatra, Thomas Mesquida, Mona Hodaei, S. Yogamani, H. Gotzig, Patrick Mäder
DOI: 10.1109/EBCCSP56922.2022.9845647 · Published: 2022-06-06
Abstract: Spiking neural networks are a recent neural network design approach that promises large improvements in power efficiency, computational efficiency, and processing latency. They achieve this through asynchronous spike-based data flow, event-based signal generation and processing, and neuron models that closely resemble biological neurons. While some initial works have shown significant evidence of applicability to common deep learning tasks, applications in complex real-world tasks have remained relatively rare. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving. Second, we demonstrate step by step how to simulate spiking behaviour using a pre-trained convolutional neural network. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. We expect significant improvements in power efficiency when the model is implemented on neuromorphic hardware.
Citations: 0
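The exact simulation scheme is not given in the abstract. A common way to emulate spiking behaviour from a pre-trained CNN, sketched below under that assumption, is to replace each ReLU with an integrate-and-fire unit driven so that its spike count approximates the activation; the threshold, step count, and soft-reset choice are illustrative, not necessarily SpikiLi's.

```python
# Hypothetical sketch: emulating spiking behaviour of one pre-trained layer
# by replacing its ReLU with integrate-and-fire units (a common ANN-to-SNN
# scheme, not necessarily the SpikiLi method). Threshold, step count, and
# the soft reset are illustrative assumptions.
import numpy as np

def simulate_if_layer(activations: np.ndarray, steps: int = 100,
                      threshold: float = 1.0) -> np.ndarray:
    """Return spike-count estimates of the layer's ReLU activations."""
    drive = np.maximum(activations, 0.0) / steps  # spread the input over time
    v = np.zeros_like(drive)                      # membrane potentials
    spikes = np.zeros_like(drive)                 # accumulated spike counts
    for _ in range(steps):
        v += drive
        fired = v >= threshold
        spikes += fired
        v[fired] -= threshold                     # soft reset keeps residual charge
    return spikes * threshold                     # decode counts back to activations

acts = np.array([-0.5, 0.4, 1.7, 3.2])            # example pre-trained activations
print(simulate_if_layer(acts))                    # approx. [0. 0. 1. 3.], a coarse ReLU
```

With more timesteps or a finer threshold the rate code converges toward the ReLU values, which is the sense in which such simulations can match the original CNN's accuracy.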
A toolbox for neuromorphic perception in robotics
Julien Dupeyroux, S. Stroobants, G. D. Croon
DOI: 10.1109/EBCCSP56922.2022.9845664 · Published: 2021-03-03
Abstract: The third generation of artificial intelligence (AI), introduced by neuromorphic computing, is revolutionizing the way robots and autonomous systems sense the world, process information, and interact with their environment. Research towards fulfilling the promises of high flexibility, energy efficiency, and robustness of neuromorphic systems is widely supported by software tools for simulating spiking neural networks and by hardware integration (neuromorphic processors). Yet, while efforts have been made on neuromorphic vision (event-based cameras), most sensors available for robotics remain inherently incompatible with neuromorphic computing, where information is encoded into spikes. To facilitate the use of traditional sensors, their output signals must be converted into streams of spikes, i.e., a series of events (+1, -1) along with their corresponding timestamps. In this paper, we review signal-to-spike coding algorithms from a robotics perspective, supported by a benchmark assessing their performance. We also introduce a ROS (Robot Operating System) toolbox to encode and decode input signals coming from any type of sensor available on a robot. This initiative is meant to stimulate and facilitate the robotic integration of neuromorphic AI, with the opportunity to adapt traditional off-the-shelf sensors to spiking neural networks within one of the most powerful robotic tools, ROS.
Citations: 10
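The abstract describes converting conventional sensor readings into streams of (+1, -1) events with timestamps. One classic member of the coding-algorithm family such a review covers is send-on-delta (level-crossing) encoding; the sketch below is a minimal, assumed implementation, and the threshold value is illustrative rather than taken from the toolbox.

```python
# Hypothetical sketch of a send-on-delta (level-crossing) encoder that turns
# a sampled signal into (timestamp, +1/-1) events. The threshold is an
# illustrative assumption; the actual ROS toolbox implements and benchmarks
# several such coding algorithms.
import numpy as np

def send_on_delta(samples, timestamps, threshold=0.1):
    """Emit (timestamp, polarity) events when the signal moves by >= threshold."""
    events = []
    ref = samples[0]                      # last value that triggered an event
    for t, s in zip(timestamps[1:], samples[1:]):
        while s - ref >= threshold:       # upward crossings -> +1 events
            ref += threshold
            events.append((t, +1))
        while ref - s >= threshold:       # downward crossings -> -1 events
            ref -= threshold
            events.append((t, -1))
    return events

t = np.linspace(0, 1, 1000)               # 1 s of a 3 Hz sine, sampled at 1 kHz
sig = np.sin(2 * np.pi * 3 * t)
events = send_on_delta(sig, t)
print(len(events), events[:3])
```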