{"title":"EV-TTC:弱光条件下基于事件的碰撞时间","authors":"Anthony Bisulco;Vijay Kumar;Kostas Daniilidis","doi":"10.1109/LRA.2025.3565150","DOIUrl":null,"url":null,"abstract":"Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow (<inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>) a new high-temporal-resolution TTC dataset, using HD event cameras, with temporal estimates at least 7 times greater than existing event datasets such as MVSEC, DSEC, and VECtor via SE(3) interpolation. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on <inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>, and achieving an average 31% improvement over other baselines across multiple datasets. Our system runs in real-time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 6","pages":"6151-6158"},"PeriodicalIF":4.6000,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EV-TTC: Event-Based Time to Collision Under Low Light Conditions\",\"authors\":\"Anthony Bisulco;Vijay Kumar;Kostas Daniilidis\",\"doi\":\"10.1109/LRA.2025.3565150\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow (<inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>) a new high-temporal-resolution TTC dataset, using HD event cameras, with temporal estimates at least 7 times greater than existing event datasets such as MVSEC, DSEC, and VECtor via SE(3) interpolation. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on <inline-formula><tex-math>$T^{2}CEF$</tex-math></inline-formula>, and achieving an average 31% improvement over other baselines across multiple datasets. 
Our system runs in real-time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.\",\"PeriodicalId\":13241,\"journal\":{\"name\":\"IEEE Robotics and Automation Letters\",\"volume\":\"10 6\",\"pages\":\"6151-6158\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2025-04-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Robotics and Automation Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10979412/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Robotics and Automation Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10979412/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Abstract
Rapid and accurate dense time-to-collision (TTC) estimation in resource-constrained, low-light environments is challenging for event-based camera systems. Fixed-time event representations like voxel grids face an inherent trade-off: larger temporal windows improve perception accuracy but increase storage demands, while smaller windows reduce storage at the cost of accuracy. We present a hardware-aware TTC estimation system designed for mobile robots, satisfying strict bandwidth, computation, and storage requirements. Our core innovation is a time-scale separation method for computing a multi-temporal-scale event representation, achieving a latency of 3.3 ms at 75 Million Events per Second (MEPS). As part of this study, we developed Time-To-Collision/Event Flow ($T^{2}CEF$), a new high-temporal-resolution TTC dataset captured with HD event cameras, with temporal estimates, obtained via SE(3) interpolation, at least 7 times greater than those of existing event datasets such as MVSEC, DSEC, and VECtor. Our method outperforms existing approaches, reducing mean frame median TTC error by at least 20% compared to voxel grids on $T^{2}CEF$ and achieving an average 31% improvement over other baselines across multiple datasets. Our system runs in real time on a Jetson Orin NX with just 9.5 ms latency at 141 Hz, outperforming all other methods on embedded hardware, making it ideal for mobile robots.
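To make the storage/accuracy trade-off concrete, the sketch below builds a fixed-window signed-polarity voxel grid and then stacks grids over nested temporal windows, in the spirit of the multi-temporal-scale representation the abstract describes. This is a minimal illustration under stated assumptions only: the sensor resolution, window lengths, bin counts, and the names `voxel_grid` and `multi_scale_representation` are invented for the example, and the paper's actual time-scale separation method is not reproduced here.

```python
import numpy as np

H, W = 480, 640  # hypothetical sensor resolution (not from the paper)

def voxel_grid(events, num_bins, window):
    """Signed-polarity voxel grid over the last `window` seconds of events,
    shape (num_bins, H, W). Larger windows keep more history but cost storage."""
    t_end = events["t"].max()
    sel = events[events["t"] >= t_end - window]
    grid = np.zeros((num_bins, H, W), dtype=np.float32)
    # Map each selected event's timestamp to a temporal bin in [0, num_bins).
    b = ((sel["t"] - (t_end - window)) / window * num_bins).astype(int)
    b = np.minimum(b, num_bins - 1)
    np.add.at(grid, (b, sel["y"], sel["x"]), sel["p"])
    return grid

def multi_scale_representation(events, windows=(0.005, 0.02, 0.08), bins=2):
    """Stack voxel grids over nested windows: the shortest window bounds
    latency and storage, while the longer ones retain slow-motion context."""
    return np.concatenate([voxel_grid(events, bins, w) for w in windows], axis=0)

# Usage with synthetic events (fields: x, y, t in seconds, polarity p = +/-1).
rng = np.random.default_rng(0)
n = 10_000
events = np.zeros(n, dtype=[("x", "i4"), ("y", "i4"), ("t", "f8"), ("p", "f4")])
events["x"] = rng.integers(0, W, n)
events["y"] = rng.integers(0, H, n)
events["t"] = np.sort(rng.uniform(0.0, 0.1, n))
events["p"] = rng.choice([-1.0, 1.0], n)
print(multi_scale_representation(events).shape)  # (6, 480, 640)
```

With 2 bins per scale, the stacked tensor is no larger than a single 6-bin grid, yet its longest window spans 16 times the shortest; that is the kind of balance between temporal context and storage that the abstract's trade-off refers to.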
Journal introduction:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.