Real-time Multi-view Omnidirectional Depth Estimation System for Robots and Autonomous Driving on Real Scenes
Ming Li, Xiong Yang, Chaofan Wu, Jiaheng Li, Pinzhi Wang, Xuejiao Hu, Sidan Du, Yang Li
arXiv - CS - Robotics, published 2024-09-12. DOI: arxiv-2409.07843
Abstract
Omnidirectional depth estimation has broad application prospects in fields such as robotic navigation and autonomous driving. In this paper, we propose a robotic prototype system and a corresponding algorithm designed to validate omnidirectional depth estimation for navigation and obstacle avoidance in real-world scenarios for both robots and vehicles. The proposed HexaMODE system captures 360$^\circ$ depth maps using six fisheye cameras arranged in a surround configuration. We introduce a combined spherical sweeping method and optimize the model architecture of the proposed RtHexa-OmniMVS algorithm to achieve real-time omnidirectional depth estimation. To ensure high accuracy, robustness, and generalization in real-world environments, we employ a teacher-student self-training strategy that utilizes large-scale unlabeled real-world data for model training. The proposed algorithm demonstrates high accuracy across a variety of complex real-world scenarios, both indoors and outdoors, achieving an inference speed of 15 fps on edge computing platforms.
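The spherical sweeping mentioned above is the omnidirectional analogue of plane sweeping: depth hypotheses are sampled on concentric spheres around the rig, and each hypothesis point is later reprojected into the fisheye views to build a matching cost volume. The following minimal NumPy sketch (a hypothetical illustration, not the authors' code) shows only the geometric first step, generating unit rays for an equirectangular panorama and scaling them by inverse-depth-spaced sphere radii; the function names and resolution are assumptions for the example.

```python
import numpy as np

def spherical_rays(h, w):
    """Unit viewing rays for an equirectangular (lat/lon) panorama grid."""
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi       # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi       # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)                           # each (h, w)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)                        # (h, w, 3)

def sweep_points(rays, radii):
    """Hypothesis 3D points: each unit ray scaled by each sphere radius."""
    return rays[None] * radii[:, None, None, None]             # (n, h, w, 3)

# Sample 32 hypothesis spheres uniformly in inverse depth between 0.5 m and 20 m.
rays = spherical_rays(64, 128)
radii = 1.0 / np.linspace(1.0 / 0.5, 1.0 / 20.0, 32)
pts = sweep_points(rays, radii)
```

In a full pipeline, `pts` would be transformed into each fisheye camera's frame and projected through its lens model to sample features, after which a network regresses depth from the resulting cost volume; inverse-depth spacing concentrates hypotheses near the robot, where obstacle-avoidance accuracy matters most.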