{"title":"流激光雷达场景流估计","authors":"Mazen Abdelfattah;Z. Jane Wang;Rabab Ward","doi":"10.1109/OJSP.2025.3572759","DOIUrl":null,"url":null,"abstract":"Safe navigation of autonomous vehicles requires accurate and rapid understanding of their dynamic 3D environment. Scene flow estimation models this dynamic environment by predicting point motion between sequential point cloud scans, and is crucial for safe navigation. Existing state-of-the-art scene flow estimation methods, based on test-time optimization, achieve high accuracy but suffer from significant latency, limiting their applicability in real-time onboard systems. This latency stems from both the iterative test-time optimization process and the inherent delay of waiting for the LiDAR to acquire a complete <inline-formula><tex-math>$360^\\circ$</tex-math></inline-formula> scan. To overcome this bottleneck, we introduce a novel <italic>streaming</i> scene flow framework leveraging the sequential nature of LiDAR slice acquisition, demonstrating a dramatic reduction in end-to-end latency. Instead of waiting for the full <inline-formula><tex-math>$360^\\circ$</tex-math></inline-formula> scan, our method immediately estimates scene flow using each LiDAR slice once it is captured. To mitigate the reduced context of individual slices, we propose a novel contextual augmentation technique that expands the target slice by a small angular margin, incorporating crucial slice boundary information. Furthermore, to enhance test-time optimization within our streaming framework, our novel initialization scheme ’warm-starts' the current optimization using optimized parameters from the preceding slice. This achieves substantial speedups while maintaining, and in some cases surpassing, full-scan accuracy. We rigorously evaluate our approach on the challenging Waymo and Argoverse datasets, demonstrating significant latency reduction without compromising scene flow quality. This work paves the way for deploying high-accuracy, real-time scene flow algorithms in autonomous driving, advancing the field towards more responsive and safer autonomous systems.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"590-598"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11012710","citationCount":"0","resultStr":"{\"title\":\"Streaming LiDAR Scene Flow Estimation\",\"authors\":\"Mazen Abdelfattah;Z. Jane Wang;Rabab Ward\",\"doi\":\"10.1109/OJSP.2025.3572759\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Safe navigation of autonomous vehicles requires accurate and rapid understanding of their dynamic 3D environment. Scene flow estimation models this dynamic environment by predicting point motion between sequential point cloud scans, and is crucial for safe navigation. Existing state-of-the-art scene flow estimation methods, based on test-time optimization, achieve high accuracy but suffer from significant latency, limiting their applicability in real-time onboard systems. This latency stems from both the iterative test-time optimization process and the inherent delay of waiting for the LiDAR to acquire a complete <inline-formula><tex-math>$360^\\\\circ$</tex-math></inline-formula> scan. 
To overcome this bottleneck, we introduce a novel <italic>streaming</i> scene flow framework leveraging the sequential nature of LiDAR slice acquisition, demonstrating a dramatic reduction in end-to-end latency. Instead of waiting for the full <inline-formula><tex-math>$360^\\\\circ$</tex-math></inline-formula> scan, our method immediately estimates scene flow using each LiDAR slice once it is captured. To mitigate the reduced context of individual slices, we propose a novel contextual augmentation technique that expands the target slice by a small angular margin, incorporating crucial slice boundary information. Furthermore, to enhance test-time optimization within our streaming framework, our novel initialization scheme ’warm-starts' the current optimization using optimized parameters from the preceding slice. This achieves substantial speedups while maintaining, and in some cases surpassing, full-scan accuracy. We rigorously evaluate our approach on the challenging Waymo and Argoverse datasets, demonstrating significant latency reduction without compromising scene flow quality. This work paves the way for deploying high-accuracy, real-time scene flow algorithms in autonomous driving, advancing the field towards more responsive and safer autonomous systems.\",\"PeriodicalId\":73300,\"journal\":{\"name\":\"IEEE open journal of signal processing\",\"volume\":\"6 \",\"pages\":\"590-598\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-03-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11012710\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE open journal of signal processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11012710/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of signal processing","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11012710/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Safe navigation of autonomous vehicles requires accurate and rapid understanding of their dynamic 3D environment. Scene flow estimation models this dynamic environment by predicting point motion between sequential point cloud scans, and is crucial for safe navigation. Existing state-of-the-art scene flow estimation methods, based on test-time optimization, achieve high accuracy but suffer from significant latency, limiting their applicability in real-time onboard systems. This latency stems from both the iterative test-time optimization process and the inherent delay of waiting for the LiDAR to acquire a complete $360^\circ$ scan. To overcome this bottleneck, we introduce a novel streaming scene flow framework leveraging the sequential nature of LiDAR slice acquisition, demonstrating a dramatic reduction in end-to-end latency. Instead of waiting for the full $360^\circ$ scan, our method immediately estimates scene flow using each LiDAR slice once it is captured. To mitigate the reduced context of individual slices, we propose a novel contextual augmentation technique that expands the target slice by a small angular margin, incorporating crucial slice boundary information. Furthermore, to enhance test-time optimization within our streaming framework, our novel initialization scheme 'warm-starts' the current optimization using optimized parameters from the preceding slice. This achieves substantial speedups while maintaining, and in some cases surpassing, full-scan accuracy. We rigorously evaluate our approach on the challenging Waymo and Argoverse datasets, demonstrating significant latency reduction without compromising scene flow quality. This work paves the way for deploying high-accuracy, real-time scene flow algorithms in autonomous driving, advancing the field towards more responsive and safer autonomous systems.
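To make the streaming recipe concrete, the sketch below illustrates the three ideas the abstract describes: per-slice test-time optimization, a small angular margin of extra context around each target slice, and a warm start carried over from the preceding slice. This is not the authors' implementation; the function names, the slicing parameters, the simplified ICP-style per-point objective, and the use of a mean-flow warm start are all illustrative assumptions.

```python
# Minimal sketch of streaming per-slice scene flow with contextual augmentation
# and warm-started test-time optimization. All names and the toy objective are
# assumptions for illustration, not the paper's actual method or code.
import numpy as np
from scipy.spatial import cKDTree


def augment_slice(points, azimuths, lo, hi, margin):
    """Keep target points whose azimuth lies in [lo - margin, hi + margin],
    so the slice carries a little context past its angular boundaries."""
    mask = (azimuths >= lo - margin) & (azimuths <= hi + margin)
    return points[mask]


def optimize_slice_flow(src, tgt, init_flow=None, n_iters=50, lr=0.2):
    """Toy test-time optimization: per-point flow vectors pulled toward the
    nearest target point (ICP-style), warm-started from init_flow if given."""
    flow = np.zeros_like(src) if init_flow is None else init_flow.copy()
    tree = cKDTree(tgt)
    for _ in range(n_iters):
        moved = src + flow
        _, idx = tree.query(moved)           # current nearest-neighbor matches
        grad = 2.0 * (moved - tgt[idx])      # gradient of squared residual w.r.t. flow
        flow -= lr * grad
    return flow


def streaming_scene_flow(src_pts, tgt_pts, n_slices=8, margin=np.deg2rad(5)):
    """Process the scan slice by slice as it 'arrives', warm-starting each
    slice's optimization with the mean flow of the previous slice."""
    src_az = np.arctan2(src_pts[:, 1], src_pts[:, 0])
    tgt_az = np.arctan2(tgt_pts[:, 1], tgt_pts[:, 0])
    edges = np.linspace(-np.pi, np.pi, n_slices + 1)
    flows, prev_mean = [], None
    for lo, hi in zip(edges[:-1], edges[1:]):
        src_slice = src_pts[(src_az >= lo) & (src_az < hi)]
        tgt_slice = augment_slice(tgt_pts, tgt_az, lo, hi, margin)
        if len(src_slice) == 0 or len(tgt_slice) == 0:
            continue
        init = None if prev_mean is None else np.tile(prev_mean, (len(src_slice), 1))
        flow = optimize_slice_flow(src_slice, tgt_slice, init_flow=init)
        prev_mean = flow.mean(axis=0)        # carried forward as the warm start
        flows.append((src_slice, flow))
    return flows


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(-20, 20, size=(2000, 3))
    tgt = src + np.array([0.5, 0.1, 0.0])    # synthetic rigid motion
    results = streaming_scene_flow(src, tgt)
    print("mean estimated flow:", np.mean([f.mean(0) for _, f in results], axis=0))
```

In the paper, the warm start reuses the optimized parameters of the preceding slice's model; the mean-flow initializer above is a deliberate simplification so the example stays self-contained with per-point flow variables.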