OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments

Chubin Zhang, Juncheng Yan, Yi Wei, Jiaxin Li, Li Liu, Yansong Tang, Yueqi Duan, Jiwen Lu
{"title":"OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments","authors":"Chubin Zhang;Juncheng Yan;Yi Wei;Jiaxin Li;Li Liu;Yansong Tang;Yueqi Duan;Jiwen Lu","doi":"10.1109/TIP.2025.3567828","DOIUrl":null,"url":null,"abstract":"Occupancy prediction reconstructs 3D structures of surrounding environments. It provides detailed information for autonomous driving planning and navigation. However, most existing methods heavily rely on the LiDAR point clouds to generate occupancy ground truth, which is not available in the vision-based system. In this paper, we propose an OccNeRF method for training occupancy networks without 3D ground truth. Different from previous works which consider a bounded scene, we parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras’ infinite perceptive range. The neural rendering is adopted to convert occupancy fields to multi-camera depth maps, supervised by multi-frame photometric consistency. Moreover, for semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model. Extensive experiments for both self-supervised depth estimation and 3D occupancy prediction tasks on nuScenes and SemanticKITTI datasets demonstrate the effectiveness of our method. The code is available at <uri>https://github.com/LinShan-Bin/OccNeRF</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"3096-3107"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11003427/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Occupancy prediction reconstructs the 3D structure of the surrounding environment, providing detailed information for autonomous driving planning and navigation. However, most existing methods rely heavily on LiDAR point clouds to generate occupancy ground truth, which is not available in vision-based systems. In this paper, we propose OccNeRF, a method for training occupancy networks without 3D ground truth. Unlike previous works that consider a bounded scene, we parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range. Neural rendering is adopted to convert the occupancy fields into multi-camera depth maps, which are supervised by multi-frame photometric consistency. Moreover, for semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model. Extensive experiments on both self-supervised depth estimation and 3D occupancy prediction tasks on the nuScenes and SemanticKITTI datasets demonstrate the effectiveness of our method. The code is available at https://github.com/LinShan-Bin/OccNeRF
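The abstract names two mechanisms that are easier to grasp with a concrete picture: a coordinate parameterization that squashes the cameras' unbounded perceptive range into a finite volume, and neural rendering that composites per-sample occupancy along each ray into a depth map that photometric losses can supervise. The PyTorch sketch below is a minimal illustration of both ideas, not the authors' implementation; the function names (`contract`, `render_depth`) and the `inner_range` parameter are hypothetical.

```python
import torch

def contract(x: torch.Tensor, inner_range: float = 40.0) -> torch.Tensor:
    """Map unbounded world coordinates to (-1, 1) per axis (a sketch of
    scene parameterization, assuming a per-axis contraction).

    Points within +/- inner_range map linearly to (-0.5, 0.5); points
    beyond it are squashed into the remaining shell, so distant geometry
    still lands inside the bounded grid that the network samples.
    """
    s = x / inner_range                       # normalized coordinate
    inside = s.abs() <= 1.0
    # clamp avoids division by zero; the squashed value is only used outside.
    squashed = torch.sign(s) * (1.0 - 0.5 / s.abs().clamp(min=1.0))
    return torch.where(inside, 0.5 * s, squashed)

def render_depth(occ_logits: torch.Tensor, t_vals: torch.Tensor) -> torch.Tensor:
    """Alpha-composite occupancy samples along rays into expected depth.

    occ_logits: (num_rays, num_samples) raw occupancy predictions.
    t_vals:     (num_samples,) sample depths along each ray.
    """
    alpha = torch.sigmoid(occ_logits)         # occupancy probability in [0, 1]
    # Transmittance: probability the ray reaches each sample unblocked.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-7], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                    # per-sample hit probability
    return (weights * t_vals).sum(dim=-1)      # expected depth per ray

# Toy usage: render depth for 8 rays with 64 samples each.
t_vals = torch.linspace(1.0, 80.0, 64)
occ_logits = torch.randn(8, 64)
depth = render_depth(occ_logits, t_vals)       # shape: (8,)
```

The multi-frame photometric consistency mentioned in the abstract would then compare each camera image against neighboring frames warped through the rendered depth (e.g., a combined SSIM and L1 loss, as is common in self-supervised depth estimation), which is how the network is trained without LiDAR ground truth.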