Monocular Depth Estimation Using a Deep Learning Model with Pre-Depth Estimation based on Size Perspective

Takanori Asano, Yoshiaki Yasumura
{"title":"Monocular Depth Estimation Using a Deep Learning Model with Pre-Depth Estimation based on Size Perspective","authors":"Takanori Asano, Yoshiaki Yasumura","doi":"10.5121/csit.2023.131803","DOIUrl":null,"url":null,"abstract":"In this paper, For the task of the depth map of a scene given a single RGB image. We present an estimation method using a deep learning model that incorporates size perspective (size constancy cues). By utilizing a size perspective, the proposed method aims to address the difficulty of depth estimation tasks which stems from the limited correlation between the information inherent to objects in RGB images (such as shape and color) and their corresponding depths. The proposed method consists of two deep learning models, a size perspective model and a depth estimation model, The size-perspective model plays a role like that of the size perspective and estimates approximate depths for each object in the image based on the size of the object's bounding box and its actual size. Based on these rough depth estimation (pre-depth estimation) results, A depth image representing through depths of each object (pre-depth image) is generated and this image is input with the RGB image into the depth estimation model. The pre-depth image is used as a hint for depth estimation and improves the performance of the depth estimation model. With the proposed method, it becomes possible to obtain depth inputs for the depth estimation model without using any devices other than a monocular camera be forehand. The proposed method contributes to the improvement in accuracy when there are objects present in the image that can be detected by the object detection model. In the experiments using an original indoor scene dataset, the proposed method demonstrated improvement in accuracy compared to the method without pre-depth images.","PeriodicalId":91205,"journal":{"name":"Artificial intelligence and applications (Commerce, Calif.)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial intelligence and applications (Commerce, Calif.)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/csit.2023.131803","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this paper, we address the task of estimating the depth map of a scene from a single RGB image. We present an estimation method using a deep learning model that incorporates size perspective (size-constancy cues). By utilizing size perspective, the proposed method aims to address the difficulty of depth estimation, which stems from the limited correlation between the information inherent to objects in RGB images (such as shape and color) and their corresponding depths. The proposed method consists of two deep learning models: a size-perspective model and a depth estimation model. The size-perspective model plays a role analogous to size perspective, estimating an approximate depth for each object in the image from the size of the object's bounding box and its actual size. Based on these rough depth estimates (pre-depth estimation), a depth image representing the rough depth of each object (pre-depth image) is generated, and this image is input together with the RGB image into the depth estimation model. The pre-depth image serves as a hint for depth estimation and improves the performance of the depth estimation model. With the proposed method, depth inputs for the depth estimation model can be obtained beforehand without using any device other than a monocular camera. The proposed method improves accuracy when the image contains objects that can be detected by the object detection model. In experiments on an original indoor scene dataset, the proposed method demonstrated improved accuracy compared to the method without pre-depth images.
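To make the pre-depth idea concrete, below is a minimal sketch of how a pre-depth image could be built from object detections using the classical pinhole size-perspective relation (depth ≈ focal length × real size / bounding-box size). The function name, the per-class size priors, and the constant focal length are hypothetical illustrations, not the paper's actual size-perspective model, which is itself a learned deep network.

```python
import numpy as np

# Hypothetical per-class "actual size" priors (object height in metres);
# the paper's size-perspective model learns this mapping instead.
TYPICAL_HEIGHT_M = {"chair": 0.9, "monitor": 0.5, "person": 1.7}

def pre_depth_from_boxes(detections, image_hw, focal_px):
    """Build a rough pre-depth image from object detections.

    detections: list of (class_name, (x1, y1, x2, y2)) in pixel coordinates.
    image_hw:   (height, width) of the RGB image.
    focal_px:   camera focal length expressed in pixels (assumed known here).
    Returns a float32 array where each detected object's region holds its
    rough depth estimate; 0 elsewhere means "no hint".
    """
    pre_depth = np.zeros(image_hw, dtype=np.float32)
    for cls, (x1, y1, x2, y2) in detections:
        real_h = TYPICAL_HEIGHT_M.get(cls)
        if real_h is None:
            continue                        # no size prior for this class
        box_h = max(y2 - y1, 1)             # bounding-box height in pixels
        depth = focal_px * real_h / box_h   # pinhole size-perspective relation
        pre_depth[y1:y2, x1:x2] = depth     # paint the rough depth into the box
    return pre_depth

# Example: a 640x480 image, focal length ~525 px, one detected chair.
hint = pre_depth_from_boxes([("chair", (100, 200, 220, 440))], (480, 640), 525.0)
# `hint` would then be stacked with the RGB image as an extra input channel
# for the depth-estimation network.
```

In the proposed method this hint channel is what lets the depth estimation model anchor its predictions to metrically plausible values for detected objects, which is why accuracy improves most when detectable objects are present in the scene.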