High-Throughput Asynchronous Convolutions for High-Resolution Event-Cameras
L. Rosa, Aiko Dinale, Simeon A. Bamford, C. Bartolozzi, Arren J. Glover
2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP), published 2022-06-22
DOI: 10.1109/EBCCSP56922.2022.9845500
Abstract
Event cameras are promising sensors for online and real-time vision tasks due to their high temporal resolution, low latency, and elimination of redundant static data. Many vision algorithms use some form of spatial convolution (i.e. spatial pattern detection) as a fundamental component. However, additional consideration must be taken for event cameras, as the visual signal is asynchronous and sparse. While elegant methods have been proposed for event-based convolutions, they are unsuitable for real scenarios due to their inefficient processing pipeline and consequently low event throughput. This paper presents an efficient implementation based on decoupling the event-based computations from the computationally heavy convolutions, increasing the maximum event processing rate by 15.92× to over 10 million events/second, while still maintaining the event-based paradigm of asynchronous input and output. Results on public datasets with modern 640 × 480 event-camera recordings show that the proposed implementation achieves real-time processing with minimal impact on the convolution result, whereas the prior state of the art incurs a latency of over 1 second.
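To make the decoupling idea concrete, below is a minimal sketch (not the authors' implementation) of the pattern the abstract describes: cheap O(1) per-event bookkeeping runs asynchronously on the event stream, while the computationally heavy dense convolution operates on snapshots of the accumulated state at its own rate. All names and the accumulation/filter choices here are illustrative assumptions.

```python
# Sketch of decoupled event-based processing (illustrative, assumed design):
# a fast thread ingests events into a 2D surface; the slow convolution path
# runs over snapshots, so per-event latency is never bound by the kernel cost.
import queue
import threading
import numpy as np
from scipy.signal import convolve2d

W, H = 640, 480                 # sensor resolution used in the paper's datasets
events = queue.Queue()          # asynchronous event stream of (x, y, polarity)

surface = np.zeros((H, W), dtype=np.float32)
kernel = np.ones((5, 5), dtype=np.float32) / 25.0   # example spatial filter

def accumulate():
    """Fast path: constant work per event, never blocked by the convolution."""
    while True:
        x, y, p = events.get()
        surface[y, x] += 1.0 if p else -1.0

threading.Thread(target=accumulate, daemon=True).start()

def convolve_snapshot():
    """Slow path: dense convolution over a copy of the current surface."""
    return convolve2d(surface.copy(), kernel, mode="same")
```

In a coupled design, every incoming event would trigger kernel-sized update work, capping throughput; here the event rate is limited only by the queue and the per-event increment, which is how an implementation along these lines can sustain millions of events per second.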