{"title":"MonoDFM: Density Field Modeling-Based End-to-End Monocular 3D Object Detection","authors":"Gang Liu;Xinrui Huang;Xiaoxiao Xie","doi":"10.1109/ACCESS.2025.3563248","DOIUrl":null,"url":null,"abstract":"Monocular 3D object detection aims to infer the 3D properties of objects from a single RGB image. Existing methods primarily rely on planar features to estimate 3D information directly. However, the limited 3D information available in 2D images often results in suboptimal detection accuracy. To address this challenge, we propose MonoDFM, an end-to-end monocular 3D object detection method based on density field modeling. By modeling the density field from the features of a single image, MonoDFM enables a more accurate transition from 2D to 3D representations, improving 3D attribute prediction accuracy. Unlike traditional depth map methods, which are limited to visible regions, MonoDFM infers geometric information from occluded regions by predicting the scene’s density field. Moreover, compared with more complex approaches like Neural Radiance Fields (NeRF), MonoDFM provides a streamlined and efficient prediction process. Experiments conducted on the KITTI dataset show that MonoDFM achieves <inline-formula> <tex-math>$\\mathrm {AP_{3D}}$ </tex-math></inline-formula> of (25.13, 16.61, 13.82) and <inline-formula> <tex-math>$\\mathrm {AP_{BEV}}$ </tex-math></inline-formula> of (32.61, 22.14, 18.71) on the KITTI benchmark for the Car category under three difficulty settings (easy, moderate, and hard), achieving competitive performance. Ablation studies further validate the effectiveness of each component. As a result, MonoDFM offers an effective approach to monocular 3D object detection, demonstrating strong performance.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"13 ","pages":"74015-74031"},"PeriodicalIF":3.4000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10973614","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10973614/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Monocular 3D object detection aims to infer the 3D properties of objects from a single RGB image. Existing methods primarily rely on planar features to estimate 3D information directly; however, the limited 3D information available in 2D images often results in suboptimal detection accuracy. To address this challenge, we propose MonoDFM, an end-to-end monocular 3D object detection method based on density field modeling. By modeling the density field from the features of a single image, MonoDFM enables a more accurate transition from 2D to 3D representations, improving 3D attribute prediction accuracy. Unlike traditional depth map methods, which are limited to visible regions, MonoDFM infers geometric information in occluded regions by predicting the scene's density field. Moreover, compared with more complex approaches such as Neural Radiance Fields (NeRF), MonoDFM provides a streamlined and efficient prediction process. Experiments on the KITTI benchmark show that MonoDFM achieves $\mathrm {AP_{3D}}$ of (25.13, 16.61, 13.82) and $\mathrm {AP_{BEV}}$ of (32.61, 22.14, 18.71) for the Car category under the three difficulty settings (easy, moderate, and hard), which is competitive performance. Ablation studies further validate the effectiveness of each component. MonoDFM thus offers an effective, strong-performing approach to monocular 3D object detection.
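To make the idea of lifting 2D image features into a density-weighted 3D representation more concrete, the sketch below shows a minimal, generic depth-bin density-lifting module in PyTorch. This is not the MonoDFM implementation from the paper; the module name, feature dimensionality, depth-bin discretization, and the softmax-over-depth design are assumptions introduced purely for illustration of the general 2D-to-3D lifting concept.

```python
# Illustrative sketch only: a generic density-style feature-lifting module.
# NOT the authors' MonoDFM architecture; all design choices here are assumed.
import torch
import torch.nn as nn


class DensityFieldLifting(nn.Module):
    """Lift 2D image features into a density-weighted 3D frustum volume.

    Hypothetical module: predicts a per-pixel density distribution over
    depth bins and spreads the 2D features along those bins, producing a
    coarse 3D feature volume from a single image.
    """

    def __init__(self, feat_dim: int = 64, num_depth_bins: int = 48):
        super().__init__()
        # Predicts one density logit per depth bin at each pixel location.
        self.density_head = nn.Conv2d(feat_dim, num_depth_bins, kernel_size=1)

    def forward(self, img_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, C, H, W) features from a 2D backbone.
        density_logits = self.density_head(img_feats)           # (B, D, H, W)
        density = torch.softmax(density_logits, dim=1)          # normalize over depth
        # Outer product: distribute image features along depth bins by density.
        volume = density.unsqueeze(1) * img_feats.unsqueeze(2)  # (B, C, D, H, W)
        return volume


if __name__ == "__main__":
    feats = torch.randn(2, 64, 96, 320)      # dummy backbone features
    lifted = DensityFieldLifting()(feats)
    print(lifted.shape)                       # torch.Size([2, 64, 48, 96, 320])
```

The paper's actual density field head, its handling of occluded regions, and its integration with the 3D detection head are described in the full text.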
IEEE Access · Computer Science, Information Systems · Engineering, Electrical & Electronic
CiteScore: 9.80
Self-citation rate: 7.70%
Articles published per year: 6673
Review time: 6 weeks
Journal introduction:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, or interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.