Variable focus video: Reconstructing depth and video for dynamic scenes

Nitesh Shroff, A. Veeraraghavan, Yuichi Taguchi, Oncel Tuzel, Amit K. Agrawal, R. Chellappa
{"title":"Variable focus video: Reconstructing depth and video for dynamic scenes","authors":"Nitesh Shroff, A. Veeraraghavan, Yuichi Taguchi, Oncel Tuzel, Amit K. Agrawal, R. Chellappa","doi":"10.1109/ICCPhot.2012.6215219","DOIUrl":null,"url":null,"abstract":"Traditional depth from defocus (DFD) algorithms assume that the camera and the scene are static during acquisition time. In this paper, we examine the effects of camera and scene motion on DFD algorithms. We show that, given accurate estimates of optical flow (OF), one can robustly warp the focal stack (FS) images to obtain a virtual static FS and apply traditional DFD algorithms on the static FS. Acquiring accurate OF in the presence of varying focal blur is a challenging task. We show how defocus blur variations cause inherent biases in the estimates of optical flow. We then show how to robustly handle these biases and compute accurate OF estimates in the presence of varying focal blur. This leads to an architecture and an algorithm that converts a traditional 30 fps video camera into a co-located 30 fps image and a range sensor. Further, the ability to extract image and range information allows us to render images with artistic depth-of field effects, both extending and reducing the depth of field of the captured images. We demonstrate experimental results on challenging scenes captured using a camera prototype.","PeriodicalId":169984,"journal":{"name":"2012 IEEE International Conference on Computational Photography (ICCP)","volume":"02 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCPhot.2012.6215219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

Traditional depth from defocus (DFD) algorithms assume that the camera and the scene are static during acquisition time. In this paper, we examine the effects of camera and scene motion on DFD algorithms. We show that, given accurate estimates of optical flow (OF), one can robustly warp the focal stack (FS) images to obtain a virtual static FS and apply traditional DFD algorithms on the static FS. Acquiring accurate OF in the presence of varying focal blur is a challenging task. We show how defocus blur variations cause inherent biases in the estimates of optical flow. We then show how to robustly handle these biases and compute accurate OF estimates in the presence of varying focal blur. This leads to an architecture and an algorithm that converts a traditional 30 fps video camera into a co-located 30 fps image and a range sensor. Further, the ability to extract image and range information allows us to render images with artistic depth-of field effects, both extending and reducing the depth of field of the captured images. We demonstrate experimental results on challenging scenes captured using a camera prototype.
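To make the pipeline the abstract outlines concrete, below is a minimal Python/OpenCV sketch of the two-stage idea: estimate dense optical flow between focal-stack frames, warp each frame to a common reference to form a virtual static focal stack, and then recover a coarse per-pixel depth index from local sharpness. Everything here is an illustrative assumption rather than the authors' implementation: the function names are invented, Farneback flow stands in for the paper's bias-corrected flow estimation, and a simple depth-from-focus measure stands in for the traditional DFD step applied to the static stack.

```python
# Minimal sketch (assumptions, not the paper's method): align a focal stack
# with optical flow, then estimate a coarse depth index per pixel.
import cv2
import numpy as np


def align_focal_stack(frames, ref_idx=0):
    """Warp each focal-stack frame onto a reference frame using dense flow."""
    ref_gray = cv2.cvtColor(frames[ref_idx], cv2.COLOR_BGR2GRAY)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    aligned = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback flow is a stand-in; the paper's contribution of handling
        # defocus-induced biases in the flow estimate is not modeled here.
        flow = cv2.calcOpticalFlowFarneback(ref_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        # Backward-warp the frame into the reference coordinate system.
        aligned.append(cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR))
    return aligned


def depth_index_from_focus(aligned):
    """Crude depth proxy: index of the frame with maximum local sharpness."""
    sharpness = []
    for frame in aligned:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F)
        # Smooth the squared Laplacian to get a local focus measure.
        sharpness.append(cv2.GaussianBlur(lap * lap, (9, 9), 0))
    return np.argmax(np.stack(sharpness), axis=0)
```

Given the virtual static stack produced by the first step, the depth index and the aligned frames together are what allow the refocusing and depth-of-field manipulation effects described in the abstract.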