Boosting Depth Estimation for Self-Driving in a Self-Supervised Framework via Improved Pose Network

Yazan Dayoub, Andrey V. Savchenko, Ilya Makarov
{"title":"Boosting Depth Estimation for Self-Driving in a Self-Supervised Framework via Improved Pose Network","authors":"Yazan Dayoub;Andrey V. Savchenko;Ilya Makarov","doi":"10.1109/OJCS.2024.3505876","DOIUrl":null,"url":null,"abstract":"Depth estimation is a critical component of self-driving vehicles, enabling accurate scene understanding, obstacle detection, and precise localization. Improving the performance of depth estimation networks without increasing computational cost is highly advantageous for autonomous driving systems. In this article, we propose to enhance depth estimation by improving the pose network in a self-supervised framework. Unlike conventional pose networks, our approach preserves more detailed spatial information by integrating multi-scale features and normalized coordinates. This improved spatial awareness allows for more accurate depth predictions. Comprehensive evaluations on the KITTI and Make3D datasets show that our method yields a 2-7% improvement in the absolute relative error (abs_rel) metric. Furthermore, on the KITTI odometry dataset, our approach demonstrates competitive performance, with relative translational error (\n<inline-formula><tex-math>$t_{rel}$</tex-math></inline-formula>\n) of \n<inline-formula><tex-math>$6.11$</tex-math></inline-formula>\n and \n<inline-formula><tex-math>$7.21$</tex-math></inline-formula>\n, and relative rotational error (\n<inline-formula><tex-math>$r_{rel}$</tex-math></inline-formula>\n) of \n<inline-formula><tex-math>$1.12$</tex-math></inline-formula>\n and \n<inline-formula><tex-math>$2.05$</tex-math></inline-formula>\n for sequences 9 and 10, respectively.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"109-118"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767273","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10767273/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Depth estimation is a critical component of self-driving vehicles, enabling accurate scene understanding, obstacle detection, and precise localization. Improving the performance of depth estimation networks without increasing computational cost is highly advantageous for autonomous driving systems. In this article, we propose to enhance depth estimation by improving the pose network in a self-supervised framework. Unlike conventional pose networks, our approach preserves more detailed spatial information by integrating multi-scale features and normalized coordinates. This improved spatial awareness allows for more accurate depth predictions. Comprehensive evaluations on the KITTI and Make3D datasets show that our method yields a 2-7% improvement in the absolute relative error (abs_rel) metric. Furthermore, on the KITTI odometry dataset, our approach demonstrates competitive performance, with relative translational error ($t_{rel}$) of 6.11 and 7.21, and relative rotational error ($r_{rel}$) of 1.12 and 2.05 for sequences 9 and 10, respectively.
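For reference, abs_rel is the standard KITTI depth-evaluation metric, defined as the mean of $|d_{pred} - d_{gt}| / d_{gt}$ over pixels with ground truth. A minimal sketch of its computation follows; the function name and the valid-pixel masking and 80 m depth-cap conventions are the usual KITTI evaluation choices, not details taken from this paper:

```python
import numpy as np

def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
    """Absolute relative error: mean(|pred - gt| / gt) over valid pixels.

    Conventional KITTI evaluation masks out pixels without ground truth
    (gt == 0) and clamps predicted depth to the [1e-3, 80] meter range.
    """
    mask = gt > 0                               # keep only pixels with ground-truth depth
    pred = np.clip(pred[mask], 1e-3, 80.0)      # standard KITTI depth cap (assumption here)
    gt = gt[mask]
    return float(np.mean(np.abs(pred - gt) / gt))
```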
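The abstract names two ingredients of the improved pose network, multi-scale features and normalized coordinates, without specifying the architecture. Below is a minimal PyTorch sketch of how these ideas are commonly combined: CoordConv-style normalized coordinate channels appended to fused multi-scale encoder features before 6-DoF pose regression. All module names, channel counts, the fusion scheme, and the output scaling are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalized_coords(b: int, h: int, w: int, device) -> torch.Tensor:
    """Two extra channels holding x and y pixel positions in [-1, 1]."""
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=0)                  # (2, H, W)
    return grid.unsqueeze(0).expand(b, -1, -1, -1)       # (B, 2, H, W)

class PoseHead(nn.Module):
    """Illustrative pose head: fuses multi-scale encoder features, appends
    normalized coordinate channels, and regresses a 6-DoF relative pose
    (3 axis-angle rotation + 3 translation parameters)."""

    def __init__(self, feat_channels=(64, 128, 256), hidden=256):
        super().__init__()
        fused = sum(feat_channels) + 2                   # +2 coordinate channels
        self.conv = nn.Sequential(
            nn.Conv2d(fused, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pose = nn.Conv2d(hidden, 6, 1)

    def forward(self, feats):
        # Upsample every scale to the finest resolution and concatenate,
        # so coarse semantics and fine spatial detail are both retained.
        h, w = feats[0].shape[-2:]
        fused = torch.cat(
            [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
             for f in feats], dim=1)
        b = fused.shape[0]
        fused = torch.cat([fused, normalized_coords(b, h, w, fused.device)], dim=1)
        out = self.pose(self.conv(fused))
        # Global average pool to one 6-vector per image pair; the 0.01 scale
        # is the small-initial-pose trick used in self-supervised pipelines
        # such as Monodepth2 (an assumption, not confirmed for this paper).
        return 0.01 * out.mean(dim=(2, 3))
```

The coordinate channels give the convolutional head explicit positional information that plain convolutions discard, which is one plausible reading of the abstract's claim that normalized coordinates help preserve spatial detail for pose regression.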