Efficient and Hardware-Friendly Online Adaptation for Deep Stereo Depth Estimation on Embedded Robots

Authors: Yuanfan Xu; Shuaiwen Chen; Xinting Yang; Yunfei Xiang; Jincheng Yu; Wenbo Ding; Jian Wang; Yu Wang
Journal: IEEE Robotics and Automation Letters, vol. 10, no. 5, pp. 4308-4315
DOI: 10.1109/LRA.2025.3548504
Published: 2025-03-05
URL: https://ieeexplore.ieee.org/document/10910155/
Citations: 0
Abstract
Accurate and real-time stereo depth estimation is important for autonomous robots such as autonomous aerial vehicles (AAVs). Due to the computation constraints of these miniaturized robots, current state-of-the-art algorithms deploy lightweight neural networks and use self-supervised online adaptation to compensate for the lack of generalization. However, the traditional online training approach introduces 2× extra computation overhead, so that real-time requirements cannot be met. Existing efficient training algorithms are primarily designed for train-from-scratch scenarios rather than online training, and involve complicated data quantization methods and non-standard operations, making them ill-suited to deployment on robots equipped with embedded GPUs or neural processing units (NPUs). This paper therefore aims to improve online adaptation for deep stereo at the system level, from both hardware and software aspects, and proposes a novel online adaptation method that is robust, computationally efficient, and hardware-friendly. First, we adopt an 8-bit quantized training strategy to maximize the performance of typical embedded computing platforms. Considering the streaming input of data during deployment, we design an online calibration method for quantized self-adaptive deep stereo. We then update only the biases of the convolutional layers and design a plug-in layer with negligible computational cost to enhance the adaptation effect; this layer is inherently compatible with existing GPUs and NPUs. Our final deep stereo system speeds up inference and adaptation by 2.11×, processes 640×360 resolution images at 11.1 FPS on the NVIDIA Jetson Orin NX, and obtains estimation accuracy comparable to current adaptation methods. When deployed on the Horizon Journey-5 chip, it achieves a further 10× speedup over the Orin NX.
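The bias-only adaptation idea described in the abstract can be sketched in a few lines of PyTorch. This is an illustrative sketch under stated assumptions, not the authors' implementation: the stereo network, the self-supervised photometric loss, and the frame stream are hypothetical placeholders, and only the general mechanism (freeze all parameters, keep convolution biases trainable) is shown.

```python
# Minimal sketch: restrict online adaptation to convolution biases so the
# backward pass and optimizer state stay small on an embedded platform.
import torch
import torch.nn as nn


def enable_bias_only_adaptation(model: nn.Module):
    """Freeze every parameter, then re-enable gradients for Conv2d biases only."""
    for p in model.parameters():
        p.requires_grad_(False)

    trainable = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.bias is not None:
            m.bias.requires_grad_(True)
            trainable.append(m.bias)
    return trainable


# Hypothetical usage with placeholder names (stereo_net, photometric_loss,
# frame_stream are assumptions for illustration, not names from the paper):
#
# stereo_net = build_lightweight_stereo_net()
# bias_params = enable_bias_only_adaptation(stereo_net)
# optimizer = torch.optim.Adam(bias_params, lr=1e-4)
#
# for left, right in frame_stream:          # streaming input during deployment
#     disparity = stereo_net(left, right)   # prediction used by the robot
#     loss = photometric_loss(left, right, disparity)  # self-supervised signal
#     optimizer.zero_grad()
#     loss.backward()                       # gradients reach only the conv biases
#     optimizer.step()
```

Because only the biases carry gradients, the update touches a tiny fraction of the parameters, which is one way to keep the adaptation overhead low on GPUs and NPUs; the paper's quantized training and plug-in layer are additional components not covered by this sketch.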
Journal description:
The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.