Event-driven stereo vision algorithm based on silicon retina sensors

F. Eibensteiner, H. Brachtendorf, J. Scharinger
DOI: 10.1109/RADIOELEK.2017.7937602
Published in: 2017 27th International Conference Radioelektronika (RADIOELEKTRONIKA), April 2017
Citations: 4

Abstract

In this paper, a new stereo matching concept for event-driven silicon retinae is presented. The main contribution of the proposed approach is the correlation of incoming events. As a novelty, not only the spatial information is used, but also the time of occurrence of the events as part of the similarity measure. Stereo matching is used in depth-generating camera systems for solving the correspondence problem and for 3D reconstruction of the sensed environment. With conventional frame-based cameras, this is a time-consuming and computationally expensive task, especially at high frame rates and spatial resolutions. An event-based silicon retina delivers events only on illumination changes, completely asynchronously in time. The sensor provides no frames, but a time-continuous data stream of intensity differences, and thus inherently reduces the visual information to a minimum. This paper focuses on an event-based stereo matching algorithm implemented in hardware on a field-programmable gate array (FPGA) that allows reliable matching of the sparse input event data. Furthermore, the approach is compared to other standard frame-based and event-driven stereo methods. The results show that the achieved depth map outperforms other algorithms in terms of accuracy, and the calculation performance of the hardware architecture is in the range of, or even exceeds, that of state-of-the-art computing platforms.
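The core idea described in the abstract — correlating left/right events using both spatial position and time of occurrence — can be illustrated with a minimal software sketch. This is not the authors' FPGA implementation; the event fields, the exponential temporal kernel, the decay constant `tau`, and the disparity bound are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int        # pixel column
    y: int        # pixel row (assumed epipolar-rectified, so matches share y)
    t: float      # timestamp in microseconds
    polarity: int # +1 for an on-event, -1 for an off-event

def similarity(left: Event, right: Event, tau: float = 1000.0) -> float:
    """Spatio-temporal similarity: events on the same row with equal
    polarity score higher the closer they are in time.
    tau is an assumed temporal decay constant in microseconds."""
    if left.y != right.y or left.polarity != right.polarity:
        return 0.0
    return math.exp(-abs(left.t - right.t) / tau)

def match(left: Event, right_events: list, max_disparity: int = 64):
    """Pick the right-sensor event with the highest similarity among
    candidates inside the allowed disparity range."""
    best, best_score = None, 0.0
    for r in right_events:
        d = left.x - r.x
        if 0 <= d <= max_disparity:
            s = similarity(left, r)
            if s > best_score:
                best, best_score = r, s
    return best  # disparity left.x - best.x then yields depth by triangulation
```

A candidate with a small timestamp difference wins over a spatially plausible but temporally distant one, which is the point of using event timing as part of the similarity measure rather than spatial correlation alone.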