3-D Scene Reconstruction Using Depth from Defocus and Deep Learning

David R. Emerson, Lauren A. Christopher
{"title":"3-D Scene Reconstruction Using Depth from Defocus and Deep Learning","authors":"David R. Emerson, Lauren A. Christopher","doi":"10.1109/AIPR47015.2019.9174568","DOIUrl":null,"url":null,"abstract":"Depth estimation is becoming increasingly important in computer vision applications. As the commercial industry moves forward with autonomous vehicle research and development, there is a demand for these systems to be able to gauge their 3D surroundings in order to avoid obstacles, and react to threats. This need requires depth estimation systems, and current research in self-driving vehicles now use LIDAR for 3D awareness. However, as LIDAR becomes more prevalent there is the potential for an increased risk of interference between this type of active measurement system on multiple vehicles. Passive methods, on the other hand, do not require the transmission of a signal in order to measure depth. Instead, they estimate the depth by using specific cues in the scene. Previous research, using a Depth from Defocus (DfD) single passive camera system, has shown that an in-focus image and an out-of-focus image can be used to produce a depth measure. This research introduces a new Deep Learning (DL) architecture that is capable of ingesting these image pairs to produce a depth map of the given scene improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph cut algorithms; the new DfD-Net produces a 63.7% and 33.6% improvement in the Normalized Root Mean Square Error (NRMSE) for the darkest and brightest images respectively. In addition to the NRMSE, an image quality metric (Structural Similarity Index (SSIM)) was also used to assess the DfD-Net performance. The DfD-Net produced a 3.6% increase (improvement) and a 2.3% reduction (slight decrease) in the SSIM metric for the darkest and brightest images respectively.","PeriodicalId":167075,"journal":{"name":"2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"02 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR47015.2019.9174568","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Depth estimation is becoming increasingly important in computer vision applications. As the commercial industry moves forward with autonomous vehicle research and development, these systems must be able to gauge their 3D surroundings in order to avoid obstacles and react to threats. This need requires depth estimation systems, and current research in self-driving vehicles uses LIDAR for 3D awareness. However, as LIDAR becomes more prevalent, there is an increased risk of interference between these active measurement systems on multiple vehicles. Passive methods, on the other hand, do not require the transmission of a signal in order to measure depth; instead, they estimate depth from specific cues in the scene. Previous research using a Depth from Defocus (DfD) single passive camera system has shown that an in-focus image and an out-of-focus image can be used to produce a depth measure. This research introduces a new Deep Learning (DL) architecture that ingests these image pairs to produce a depth map of the given scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph-cut algorithms, the new DfD-Net produces a 63.7% and a 33.6% improvement in the Normalized Root Mean Square Error (NRMSE) for the darkest and brightest images, respectively. In addition to the NRMSE, an image quality metric, the Structural Similarity Index (SSIM), was used to assess DfD-Net performance. The DfD-Net produced a 3.6% increase (an improvement) and a 2.3% reduction (a slight decrease) in the SSIM metric for the darkest and brightest images, respectively.
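The abstract does not describe the DfD-Net architecture itself, but the input/output contract it states (an in-focus/defocused image pair in, a single depth map out) can be sketched in PyTorch. The following is a minimal, hypothetical stand-in: the `TinyDfDNet` name, the channel-wise stacking of the pair, and the layer sizes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class TinyDfDNet(nn.Module):
    """Hypothetical stand-in for DfD-Net (assumption; the paper's actual
    architecture is not given in the abstract). The in-focus and defocused
    RGB images are stacked channel-wise into a 6-channel input and mapped
    to a single-channel depth map of the same spatial size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, in_focus, defocused):
        x = torch.cat([in_focus, defocused], dim=1)  # (N, 6, H, W)
        return self.net(x)                           # (N, 1, H, W) depth map

# Toy forward pass with random tensors standing in for an image pair.
model = TinyDfDNet()
in_focus = torch.rand(1, 3, 64, 64)
defocused = torch.rand(1, 3, 64, 64)
depth = model(in_focus, defocused)
print(depth.shape)  # torch.Size([1, 1, 64, 64])
```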
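The two reported metrics are also straightforward to compute for a predicted depth map against ground truth. Below is a minimal sketch, assuming NRMSE is the RMSE normalized by the ground-truth range (the abstract does not state the exact normalization used) and using scikit-image's `structural_similarity` for SSIM; the random arrays simply stand in for real depth maps.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nrmse(pred, gt):
    """RMSE normalized by the ground-truth range (an assumed
    normalization; the paper may define NRMSE differently)."""
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return rmse / (gt.max() - gt.min())

# Toy data: a synthetic ground-truth depth map and a noisy "prediction".
rng = np.random.default_rng(0)
gt = rng.random((128, 128))
pred = gt + 0.05 * rng.standard_normal((128, 128))

print("NRMSE:", nrmse(pred, gt))
print("SSIM: ", ssim(gt, pred, data_range=gt.max() - gt.min()))
```

Lower NRMSE is better, so the reported 63.7% and 33.6% figures are reductions in error; higher SSIM is better, which is why the 3.6% increase is an improvement and the 2.3% reduction a slight decrease.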