Variable focus video: Reconstructing depth and video for dynamic scenes

Nitesh Shroff, A. Veeraraghavan, Yuichi Taguchi, Oncel Tuzel, Amit K. Agrawal, R. Chellappa

2012 IEEE International Conference on Computational Photography (ICCP), April 28, 2012. DOI: 10.1109/ICCPhot.2012.6215219
Abstract: Traditional depth from defocus (DFD) algorithms assume that the camera and the scene are static during acquisition. In this paper, we examine the effects of camera and scene motion on DFD algorithms. We show that, given accurate estimates of optical flow (OF), one can robustly warp the focal stack (FS) images to obtain a virtual static FS and apply traditional DFD algorithms to it. Acquiring accurate OF in the presence of varying focal blur is a challenging task: defocus blur variations cause inherent biases in the flow estimates. We show how to robustly handle these biases and compute accurate OF estimates in the presence of varying focal blur. This leads to an architecture and an algorithm that convert a traditional 30 fps video camera into a co-located 30 fps image and range sensor. Further, the ability to extract both image and range information allows us to render images with artistic depth-of-field effects, both extending and reducing the depth of field of the captured images. We demonstrate experimental results on challenging scenes captured using a camera prototype.
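
To make the alignment step concrete, the sketch below shows one plausible way to build the "virtual static" focal stack the abstract describes: each frame is warped onto a chosen reference frame via dense optical flow. This is a minimal illustration, not the authors' implementation; OpenCV's Farneback flow stands in for the paper's defocus-robust flow estimator, and the blur-induced bias correction the paper develops is not modeled here.

    # Minimal sketch (assumptions: OpenCV >= 3, frames are BGR uint8 arrays).
    # Farneback flow is a stand-in for the paper's defocus-robust estimator;
    # the bias handling described in the paper is NOT modeled.
    import cv2
    import numpy as np

    def warp_to_reference(frame, reference):
        """Warp `frame` onto `reference` using dense optical flow."""
        g_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
        g_frm = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Flow is computed reference -> frame, so sampling `frame` at
        # (grid + flow) resamples it onto the reference pixel grid.
        flow = cv2.calcOpticalFlowFarneback(g_ref, g_frm, None,
                                            0.5, 4, 21, 3, 7, 1.5, 0)
        h, w = g_ref.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

    def virtual_static_stack(frames, ref_index=0):
        """Align every focal-stack frame to one reference frame."""
        ref = frames[ref_index]
        return [ref if i == ref_index else warp_to_reference(f, ref)
                for i, f in enumerate(frames)]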
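
Once the stack is aligned, a conventional per-pixel focus analysis can run on it. The following sketch uses a simple depth-from-focus style measure (local Laplacian energy) as a simplistic stand-in for the traditional DFD algorithms the abstract refers to; the `focus_depths` calibration array, which maps each focal slice to its in-focus scene depth, is hypothetical.

    # Minimal sketch: depth-from-focus style estimate, standing in for a
    # traditional DFD algorithm. `focus_depths` is a hypothetical
    # calibration array (one in-focus depth per focal-stack slice).
    import cv2
    import numpy as np

    def depth_index_map(static_stack, ksize=9):
        """Per pixel, pick the stack slice with the highest local sharpness."""
        sharpness = []
        for frame in static_stack:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            lap = cv2.Laplacian(gray, cv2.CV_32F)
            # Smoothed squared Laplacian gives a stable local focus measure.
            sharpness.append(cv2.GaussianBlur(lap * lap, (ksize, ksize), 0))
        return np.argmax(np.stack(sharpness, axis=0), axis=0)

    def index_to_depth(index_map, focus_depths):
        """Map per-pixel slice indices to depths via the calibration array."""
        return np.asarray(focus_depths, dtype=np.float32)[index_map]

Chaining the two sketches, index_to_depth(depth_index_map(virtual_static_stack(frames)), focus_depths) yields a coarse per-pixel depth map from a captured variable-focus video; the paper's actual pipeline additionally corrects the flow biases introduced by the varying defocus blur.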