Unsupervised Learning of 3D Scene Flow With LiDAR Odometry Assistance

Impact Factor 7.9 · JCR Q1 (Engineering, Civil) · CAS Region 1 (Engineering & Technology)
Guangming Wang;Zhiheng Feng;Chaokang Jiang;Jiuming Liu;Hesheng Wang
{"title":"Unsupervised Learning of 3D Scene Flow With LiDAR Odometry Assistance","authors":"Guangming Wang;Zhiheng Feng;Chaokang Jiang;Jiuming Liu;Hesheng Wang","doi":"10.1109/TITS.2025.3538765","DOIUrl":null,"url":null,"abstract":"3D scene flow represents the 3D motion of each point in the point cloud, which is a base 3D perception task for autonomous driving, like optical flow for 2D images. As non-learning methods are often inefficient or struggled to learn accurate correspondence in complex 3D real world, recent works turn to supervised learning methods, which require ground truth labels. However, acquiring the ground truth of 3D scene flow is challenging mainly due to the lack of sensors capable of capturing point-level motion and the complexity of accurately tracking each point in real-world environments. Therefore, it is important to resort to self-supervised methods, which do not require ground truth labels. In this paper, a novel unsupervised learning method of scene flow with LiDAR odometry is proposed, which enables the scene flow network can be trained directly on real-world LiDAR data without scene flow labels. In this structure, supervised odometry provides a more accurate shared cost volume for the interframe association of 3D scene flow. In addition, because static and occluded points are more suitable for using the pose transform while dynamic and non-occluded points are more suitable for using the scene flow transform, a static mask and an occlusion mask are designed to classify the states of points and a mask-weighted warp layer is proposed to transform source points in a divide-and-conquer manner. The experiments demonstrate that the divide-and-conquer strategy makes the predicted scene flow more accurate. The experiment results compared to other methods also show the application ability of our proposed method to real-world data. Our source codes are released at: <uri>https://github.com/IRMVLab/PSFNet</uri>.","PeriodicalId":13416,"journal":{"name":"IEEE Transactions on Intelligent Transportation Systems","volume":"26 4","pages":"4557-4567"},"PeriodicalIF":7.9000,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Intelligent Transportation Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10906337/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
Citations: 0

Abstract

3D scene flow represents the 3D motion of each point in a point cloud and is a basic 3D perception task for autonomous driving, analogous to optical flow for 2D images. Because non-learning methods are often inefficient or struggle to learn accurate correspondences in the complex 3D real world, recent works turn to supervised learning methods, which require ground-truth labels. However, acquiring ground truth for 3D scene flow is challenging, mainly due to the lack of sensors capable of capturing point-level motion and the complexity of accurately tracking each point in real-world environments. It is therefore important to resort to self-supervised methods, which do not require ground-truth labels. In this paper, a novel unsupervised learning method for scene flow with LiDAR odometry is proposed, which enables the scene flow network to be trained directly on real-world LiDAR data without scene flow labels. In this structure, supervised odometry provides a more accurate shared cost volume for the interframe association of 3D scene flow. In addition, because static and occluded points are better handled by the pose transform while dynamic, non-occluded points are better handled by the scene flow transform, a static mask and an occlusion mask are designed to classify the states of points, and a mask-weighted warp layer is proposed to transform source points in a divide-and-conquer manner. Experiments demonstrate that the divide-and-conquer strategy makes the predicted scene flow more accurate. Comparisons with other methods also show the applicability of the proposed method to real-world data. Our source code is released at: https://github.com/IRMVLab/PSFNet.
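The mask-weighted warp described in the abstract can be read as a soft blend between a rigid ego-motion (pose) transform and a per-point scene-flow transform, gated by the static and occlusion masks. Below is a minimal PyTorch sketch of that idea; the function name, tensor shapes, and the exact blending rule are assumptions for illustration and are not taken from the released PSFNet code.

import torch


def mask_weighted_warp(src_points, scene_flow, pose_R, pose_t, static_mask, occlusion_mask):
    """Warp source points toward the target frame in a divide-and-conquer manner.

    src_points:     (B, N, 3) source point cloud
    scene_flow:     (B, N, 3) per-point flow predicted by the network
    pose_R, pose_t: (B, 3, 3), (B, 3) ego-motion from the odometry branch
    static_mask:    (B, N, 1) in [0, 1], close to 1 for static points
    occlusion_mask: (B, N, 1) in [0, 1], close to 1 for occluded points
    """
    # Points explained by ego-motion alone: rigid pose transform R p + t.
    pose_warped = torch.einsum('bij,bnj->bni', pose_R, src_points) + pose_t.unsqueeze(1)

    # Points with independent motion: add the predicted scene flow.
    flow_warped = src_points + scene_flow

    # Static or occluded points rely on the pose transform; dynamic,
    # non-occluded points rely on the scene flow transform.
    w = torch.clamp(static_mask + occlusion_mask, 0.0, 1.0)
    return w * pose_warped + (1.0 - w) * flow_warped

In this reading, the masks act as per-point weights rather than hard labels, so the warp stays differentiable and both branches can be trained jointly without scene flow ground truth.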
Source Journal
IEEE Transactions on Intelligent Transportation Systems
Category: Engineering: Electrical & Electronic
CiteScore: 14.80
Self-citation rate: 12.90%
Articles published per year: 1872
Review time: 7.5 months
Aims and scope: The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical activities among IEEE entities, and providing a focus for cooperative activities, both internally and externally.