EV-TTC: Event-Based Time to Collision Under Low Light Conditions

IF 4.6 · CAS Tier 2 (Computer Science) · JCR Q2 (ROBOTICS)
Anthony Bisulco; Vijay Kumar; Kostas Daniilidis
{"title":"EV-TTC:弱光条件下基于事件的碰撞时间","authors":"Anthony Bisulco;Vijay Kumar;Kostas Daniilidis","doi":"10.1109/LRA.2025.3565150","DOIUrl":null,"url":null,"abstract":"Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow (<inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>) a new high-temporal-resolution TTC dataset, using HD event cameras, with temporal estimates at least 7 times greater than existing event datasets such as MVSEC, DSEC, and VECtor via SE(3) interpolation. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on <inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>, and achieving an average 31% improvement over other baselines across multiple datasets. Our system runs in real-time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6151-6158"},"PeriodicalIF":4.6000,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EV-TTC: Event-Based Time to Collision Under Low Light Conditions\",\"authors\":\"Anthony Bisulco;Vijay Kumar;Kostas Daniilidis\",\"doi\":\"10.1109/LRA.2025.3565150\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow (<inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>) a new high-temporal-resolution TTC dataset, using HD event cameras, with temporal estimates at least 7 times greater than existing event datasets such as MVSEC, DSEC, and VECtor via SE(3) interpolation. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on <inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>, and achieving an average 31% improvement over other baselines across multiple datasets. 
Our system runs in real-time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.\",\"PeriodicalId\":13241,\"journal\":{\"name\":\"IEEE Robotics and Automation Letters\",\"volume\":\"10 6\",\"pages\":\"6151-6158\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-04-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Robotics and Automation Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10979412/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics and Automation Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10979412/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal-scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow ($T^{2}CEF$), a new high-temporal-resolution TTC dataset captured with HD event cameras, whose temporal estimates, obtained via SE(3) interpolation, are at least 7 times greater than those of existing event datasets such as MVSEC, DSEC, and VECtor. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on $T^{2}CEF$, and achieving an average 31% improvement over other baselines across multiple datasets. Our system runs in real-time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.
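The abstract describes, but does not detail, the voxel-grid trade-off and the multi-temporal-scale alternative. As a rough illustration only, the following Python sketch contrasts a fixed-window voxel grid with a stack of single-bin accumulations over several window lengths; the windowing scheme, function names, and parameters here are assumptions made for this sketch, not the paper's actual time-scale separation method.

import numpy as np

def voxel_grid(events, num_bins, height, width):
    # Accumulate events (t, x, y, polarity) into a fixed-window voxel grid.
    # More bins (or longer windows) keep more temporal context but grow
    # storage linearly -- the trade-off the abstract describes.
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    span = max(t.max() - t.min(), 1e-9)
    bins = np.clip(((t - t.min()) / span * num_bins).astype(int), 0, num_bins - 1)
    x, y = events[:, 1].astype(int), events[:, 2].astype(int)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0)
    np.add.at(grid, (bins, y, x), pol)  # scatter-add event polarities
    return grid

def multi_scale_representation(events, window_lengths, height, width):
    # Stack single-bin accumulations over several window lengths, all ending
    # at the latest event: short windows preserve fine timing, long windows
    # preserve context, without a dense grid at the finest resolution.
    t_end = events[:, 0].max()
    channels = []
    for w in window_lengths:
        recent = events[events[:, 0] >= t_end - w]
        if len(recent) == 0:
            channels.append(np.zeros((height, width), dtype=np.float32))
        else:
            channels.append(voxel_grid(recent, 1, height, width)[0])
    return np.stack(channels)  # shape: (num_scales, height, width)

# Usage with synthetic events; columns are (t in seconds, x, y, polarity).
rng = np.random.default_rng(0)
n = 10_000
ev = np.column_stack([
    np.sort(rng.uniform(0.0, 0.1, n)),
    rng.integers(0, 640, n),
    rng.integers(0, 480, n),
    rng.choice([-1.0, 1.0], n),
])
rep = multi_scale_representation(ev, (1e-3, 1e-2, 1e-1), 480, 640)
print(rep.shape)  # (3, 480, 640)

Stacking a few window lengths keeps recent events at fine effective timing while retaining longer context cheaply, which is the flavor of storage/accuracy compromise the abstract attributes to a multi-temporal-scale representation.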
Source journal
IEEE Robotics and Automation Letters
Category: Computer Science - Computer Science Applications
CiteScore: 9.60
Self-citation rate: 15.40%
Publication count: 1428
Journal description: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.