2020 IEEE International Conference on Computational Photography (ICCP): Latest Publications

Towards Reflectometry from Interreflections
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105251
Kfir Shem-Tov, Sai Praveen Bangaru, Anat Levin, Ioannis Gkioulekas
Abstract: Reflectometry is the task of acquiring the bidirectional reflectance distribution functions (BRDFs) of real-world materials. The typical reflectometry pipeline in computer vision, computer graphics, and computational imaging involves capturing images of a convex shape under multiple illumination and imaging conditions; because convexity implies that every path from the light source to the camera performs a single reflection, the intensities in these images can be analytically mapped to BRDF values. We deviate from this pipeline by investigating the utility for reflectometry of higher-order light transport effects, such as the interreflections that arise when illuminating and imaging a concave object. We show that interreflections provide a rich set of constraints on the unknown BRDF, significantly exceeding those available in equivalent measurements of convex shapes. We develop a differentiable rendering pipeline to solve an inverse rendering problem that uses these constraints to produce high-fidelity BRDF estimates from even a single input image. Finally, we take first steps towards designing new concave shapes that maximize the amount of information about the unknown BRDF available in image measurements. We perform extensive simulations to validate the utility of this reflectometry-from-interreflections approach.
Citations: 3
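The core computational pattern, fitting BRDF parameters by differentiating a renderer with respect to them, can be illustrated with a toy sketch. The two-bounce intensity model below is a hypothetical stand-in for the paper's differentiable Monte Carlo renderer; only the optimization loop is representative.

```python
# Toy sketch of inverse rendering for BRDF estimation from interreflections.
# render_two_bounce is an assumed stand-in, not the paper's renderer.
import torch

def render_two_bounce(albedo, shininess, n_pixels=64):
    """Hypothetical renderer: a direct term plus one interreflection bounce
    whose strength couples nonlinearly to the BRDF parameters."""
    x = torch.linspace(0.0, 1.0, n_pixels)
    direct = albedo * torch.cos(x * 3.14159 / 2).clamp(min=0.0)
    # Twice-reflected light picks up albedo**2, modulated by lobe width.
    indirect = (albedo ** 2) * torch.exp(-shininess * x)
    return direct + indirect

# Synthetic "measurement" from unknown ground-truth parameters.
with torch.no_grad():
    target = render_two_bounce(torch.tensor(0.7), torch.tensor(2.0))

albedo = torch.tensor(0.3, requires_grad=True)
shininess = torch.tensor(0.5, requires_grad=True)
opt = torch.optim.Adam([albedo, shininess], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((render_two_bounce(albedo, shininess) - target) ** 2)
    loss.backward()   # gradients flow through the renderer
    opt.step()
print(float(albedo), float(shininess))  # should approach 0.7, 2.0
```

The interreflection term is what makes a single image so constraining: it injects the BRDF into the measurements nonlinearly, unlike the single-bounce case where only one linear slice of the BRDF is observed.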
The role of Wigner Distribution Function in Non-Line-of-Sight Imaging
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105266
Xiaochun Liu, A. Velten
Abstract: Non-line-of-sight imaging has been linked to wave diffraction by the recent phasor field method. In wave optics, the Wigner Distribution Function description of an optical imaging system is a powerful analytical tool for modeling the imaging process with geometrical transformations. In this paper, we focus on illustrating the relation between captured signals and hidden objects in the Wigner Distribution domain. The Wigner Distribution Function is usually used together with approximated diffraction propagators, which suffices for most imaging problems. However, these approximated propagators are not valid for non-line-of-sight imaging scenarios. We show that the exact phasor field propagator (Rayleigh-Sommerfeld diffraction) does not admit a standard geometrical transformation, unlike the approximated propagators (Fresnel and Fraunhofer diffraction), which can be represented as shearing or rotation in the Wigner Distribution Function domain. We then explore the differences between the exact and approximated solutions by characterizing the errors made at different spatial positions and under different acquisition methods (confocal and non-confocal scanning). We derive a lateral resolution limit based on the exact phasor field propagator, which can serve as a reference for theoretical evaluations and comparisons. For targets that lie laterally outside the relay wall, the loss of resolution is illustrated geometrically in the context of the Wigner Distribution Function.
Citations: 9
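For context, the phase-space relations the abstract contrasts can be written out explicitly. These are standard Fourier-optics results stated here as background, not reproduced from the paper's own derivations.

```latex
% Fresnel propagation over a distance z acts on the Wigner Distribution
% Function as a shear along the spatial axis:
\[
  W_z(x, u) = W_0\!\left(x - \lambda z\, u,\; u\right)
\]
% The exact Rayleigh-Sommerfeld phasor-field kernel,
\[
  G(x, x') = \frac{e^{i k r}}{r},
  \qquad r = \sqrt{z^2 + \lVert x - x' \rVert^2},
\]
% reduces to the Fresnel form only under the paraxial approximation
% r ~ z + ||x - x'||^2 / (2z), which breaks down at the wide angles
% typical of non-line-of-sight geometries -- hence no single affine
% (shear/rotation) phase-space transformation exists for the exact case.
```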
NLDNet++: A Physics Based Single Image Dehazing Network
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105249
Iris Tal, Yael Bekerman, Avi Mor, Lior Knafo, J. Alon, S. Avidan
Abstract: Deep learning methods for image dehazing achieve impressive results, yet collecting ground-truth hazy/dehazed image pairs to train a network is cumbersome. We propose to use Non-Local Image Dehazing (NLD), an existing physics-based technique, to provide the dehazed images required for training. Upon close inspection, we find that NLD suffers from several shortcomings and propose novel extensions to improve it. The new method, termed NLD++, consists of 1) denoising the input image as a pre-processing step to avoid noise amplification, and 2) introducing a constrained optimization that respects physical constraints. NLD++ produces superior results to NLD at the expense of increased computational cost. To offset that cost, we propose NLDNet++, a fully convolutional network trained on pairs of hazy images and images dehazed by NLD++. This eliminates the need for the hard-to-obtain ground-truth hazy/dehazed image pairs that existing deep learning methods require. We evaluate NLDNet++ on standard data sets and find that it compares favorably with existing methods.
Citations: 2
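The physics underlying methods like NLD is the standard haze formation model I(x) = J(x)t(x) + A(1 - t(x)), with scene radiance J, transmission t, and airlight A. Below is a minimal sketch of inverting this model under physical constraints; the clamping thresholds are illustrative assumptions, not the paper's exact constrained optimization.

```python
# Sketch: invert the haze model for J given transmission t and airlight A,
# respecting physical bounds (0 < t <= 1, 0 <= J <= 1).
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Invert I = J*t + A*(1-t) for J; t_min guards against the noise
    amplification that motivates the denoising pre-processing in NLD++."""
    t = np.clip(t, t_min, 1.0)
    J = (I - A) / t[..., None] + A   # per-channel inversion
    return np.clip(J, 0.0, 1.0)

# Usage on a synthetic example: a gray scene behind uniform haze.
J_true = np.full((4, 4, 3), 0.5)
t_true = np.full((4, 4), 0.6)
A = np.array([0.9, 0.9, 0.9])
I = J_true * t_true[..., None] + A * (1 - t_true[..., None])
print(np.allclose(dehaze(I, A, t_true), J_true))  # True
```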
Towards Learning-based Inverse Subsurface Scattering
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105209
Chengqian Che, Fujun Luan, Shuang Zhao, K. Bala, Ioannis Gkioulekas
Abstract: Given images of translucent objects of unknown shape and lighting, we aim to use learning to infer the optical parameters controlling subsurface scattering of light inside the objects. We introduce a new architecture, the inverse transport network (ITN), that aims to improve the generalization of an encoder network to unseen scenes by connecting it with a physically accurate, differentiable Monte Carlo renderer capable of estimating image derivatives with respect to scattering material parameters. During training, this combination forces the encoder network to predict parameters that not only match ground-truth values but also reproduce input images. During testing, the encoder network is used alone, without the renderer, to predict material parameters from a single input image. Drawing insights from the physics of radiative transfer, we additionally use material parameterizations that help reduce estimation errors due to ambiguities in the scattering parameter space. Finally, we augment the training loss with pixelwise weight maps that emphasize the parts of the image most informative about the underlying scattering parameters. We demonstrate that this combination allows neural networks to generalize to scenes with completely unseen geometries and illuminations better than traditional networks, reducing parameter error by 38.06% on average.
Citations: 37
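A minimal sketch of the two-part training objective described above; `encoder` and `diff_render` are hypothetical stand-ins for the paper's encoder network and differentiable Monte Carlo renderer, and the weighting scheme is an assumption.

```python
# Sketch: combined loss = supervised parameter loss + weighted re-rendering
# loss, so the encoder must both match ground truth and explain the image.
import torch
import torch.nn.functional as F

def itn_loss(encoder, diff_render, image, theta_gt, weight_map, alpha=0.5):
    theta_pred = encoder(image)                    # scattering parameters
    param_loss = F.mse_loss(theta_pred, theta_gt)  # supervised term
    rerendered = diff_render(theta_pred)           # differentiable renderer
    # Pixelwise weights emphasize regions informative about scattering.
    render_loss = (weight_map * (rerendered - image) ** 2).mean()
    return alpha * param_loss + (1 - alpha) * render_loss
```

At test time only `encoder` is evaluated, which is what makes single-image inference cheap despite the expensive renderer used during training.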
Programmable Spectrometry: Per-pixel Material Classification using Learned Spectral Filters
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105281
Vishwanath Saragadam, Aswin C. Sankaranarayanan
Abstract: Many materials have distinct spectral profiles, which facilitates estimation of the material composition of a scene by processing its hyperspectral image (HSI). However, this process is inherently wasteful, since high-dimensional HSIs are expensive to acquire and only a set of linear projections of the HSI contributes to the classification task. This paper proposes the concept of programmable spectrometry for per-pixel material classification: instead of sensing the HSI of the scene and then processing it, we optically compute the spectrally-filtered images. This is achieved using a computational camera with a programmable spectral response. Our approach provides gains both in acquisition speed, since only the relevant measurements are acquired, and in signal-to-noise ratio, since we avoid light-inefficient narrowband filters. Given ample training data, we use learning techniques to identify the bank of spectral profiles that facilitates material classification. We verify the method in simulation and validate our findings using a lab prototype of the camera.
Citations: 8
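The learned-filter idea can be sketched as jointly training a linear filter bank with a per-pixel classifier; the dimensions, random training data, and two-layer model below are illustrative assumptions.

```python
# Sketch: learn K spectral filters jointly with a classifier, so only K
# optically filtered images (not a full HSI) need to be captured.
import torch
import torch.nn as nn

n_bands, n_filters, n_classes = 31, 4, 5

model = nn.Sequential(
    nn.Linear(n_bands, n_filters, bias=False),  # the programmable filters
    nn.Linear(n_filters, n_classes),            # per-pixel classifier
)

spectra = torch.rand(256, n_bands)              # per-pixel training spectra
labels = torch.randint(0, n_classes, (256,))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(spectra), labels)
    loss.backward()
    opt.step()

# model[0].weight now holds the K spectral profiles to program into the
# camera; at capture time only K filtered images are measured per scene.
```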
Modeling Defocus-Disparity in Dual-Pixel Sensors
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105278
Abhijith Punnappurath, Abdullah Abuolaim, M. Afifi, M. S. Brown
Abstract: Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity between the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo; it instead arises from the defocus blur in out-of-focus regions of the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry to formulate an unsupervised loss function that does not require ground-truth depth. We demonstrate our method's effectiveness on both DSLR and smartphone DP data.
Citations: 26
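One way to realize a symmetry-based unsupervised loss, under the assumption that the two sub-aperture views are blurred versions of a common sharp image with mirrored kernels (the depth-dependent kernel parameterization is omitted here):

```python
# Sketch: if I_left = S * k_left and I_right = S * k_right with
# k_right = flip(k_left), then I_left * k_right == I_right * k_left
# (both equal S * k_left * k_right), so the difference vanishes when
# the hypothesized kernel matches the true defocus at that depth.
import numpy as np
from scipy.signal import fftconvolve

def dp_symmetry_loss(I_left, I_right, k_left):
    k_right = k_left[:, ::-1]                    # mirrored half-aperture PSF
    a = fftconvolve(I_left, k_right, mode="same")
    b = fftconvolve(I_right, k_left, mode="same")
    return np.mean((a - b) ** 2)                 # zero at the correct blur

# Scanning k_left over a depth-parameterized kernel family and taking the
# argmin of this loss yields a depth estimate without ground truth.
```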
Raycast Calibration for Augmented Reality HMDs with Off-Axis Reflective Combiners
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105134
Qi Guo, Huixuan Tang, Aaron Schmitz, Wenqi Zhang, Yang Lou, Alexander Fix, S. Lovegrove, H. Strasdat
Abstract: Augmented reality overlays virtual objects on the real world. To do so, the head-mounted display (HMD) needs to be calibrated to establish a mapping between 3D points in the real world and 2D pixels on the display panels. This mapping is a high-dimensional distortion function that also depends on pupil position and varifocal settings. We present Raycast calibration, an efficient approach to geometrically calibrating AR displays with off-axis reflective combiners. Our approach requires a small amount of data to estimate a compact, physics-based, ray-traceable model of the HMD optics. We apply this technique to automatically calibrate an AR prototype with a display, SLAM, and an eye tracker, without a user in the loop.
Citations: 3
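A minimal sketch of this style of calibration: fit a low-dimensional, ray-traceable projection model to sparse 3D-to-2D correspondences via nonlinear least squares. The pinhole-plus-radial-distortion model below is a generic stand-in, far simpler than an off-axis reflective combiner.

```python
# Sketch: estimate a compact parametric projection model from
# world-point / display-pixel correspondences (all synthetic here).
import numpy as np
from scipy.optimize import least_squares

def project(params, X):
    """Hypothetical model: pinhole plus one radial distortion term."""
    fx, fy, cx, cy, k1 = params
    x, y = X[:, 0] / X[:, 2], X[:, 1] / X[:, 2]
    r2 = x ** 2 + y ** 2
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)
    return np.stack([fx * x + cx, fy * y + cy], axis=1)

def residuals(params, X, pix):
    return (project(params, X) - pix).ravel()

# X: Nx3 world points; pix: Nx2 observed pixels (from a calibration rig).
X = np.random.rand(50, 3) + np.array([0.0, 0.0, 1.0])
pix = project([500, 500, 320, 240, 0.1], X) + 0.5 * np.random.randn(50, 2)
fit = least_squares(residuals, x0=[400, 400, 300, 200, 0.0], args=(X, pix))
print(fit.x)  # recovered focal lengths, principal point, distortion
```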
Awards [3 award winners]
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/iccp48838.2020.9105216
Citations: 0
High Resolution Diffuse Optical Tomography using Short Range Indirect Subsurface Imaging
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105173
Chao Liu, Akash K. Maity, A. Dubrawski, A. Sabharwal, S. Narasimhan
Abstract: Diffuse optical tomography (DOT) is an approach to recovering subsurface structures beneath the skin by measuring light propagation below the surface. The method is based on minimizing the difference between the collected images and a forward model that accurately represents diffuse photon propagation within a heterogeneous scattering medium. To date, however, most works have used only a few source-detector pairs and recovered the medium at very low resolution, and increasing the resolution requires prohibitive computation and storage. In this work, we present a fast imaging system and algorithm for high-resolution diffuse optical tomography built on line illumination and line imaging. Key to our approach is a convolution approximation of the forward heterogeneous scattering model that can be inverted to recover structures deeper beneath the surface than previously possible. We show that our proposed method can detect reasonably accurate boundaries and relative depths of heterogeneous structures down to 8 mm below a highly scattering medium such as milk. This work can extend the potential of DOT to recover more intricate structures (vessels, tissue, tumors, etc.) beneath the skin for diagnosing many dermatological and cardiovascular conditions.
Citations: 13
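The convolution approximation is what makes high-resolution inversion tractable: if the forward scattering model acts approximately as a 2D convolution, it can be inverted cheaply in the Fourier domain. A sketch under that assumption follows; the Gaussian kernel and Tikhonov weight are illustrative, not the paper's calibrated model.

```python
# Sketch: forward model as convolution with a diffusion-like kernel,
# inverted by a Tikhonov-regularized inverse filter.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def deconvolve(measurement, kernel, eps=1e-2):
    """Regularized inverse filter: conj(H) * Y / (|H|^2 + eps)."""
    H = np.fft.fft2(np.fft.ifftshift(kernel), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + eps)))

# Forward-simulate a hidden absorber map, then invert.
hidden = np.zeros((64, 64))
hidden[30:34, 20:40] = 1.0
k = gaussian_kernel(64, sigma=3.0)
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(hidden) * np.fft.fft2(np.fft.ifftshift(k))))
recovered = deconvolve(blurred, k)  # approximates `hidden`
```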
End-to-End Video Compressive Sensing Using Anderson-Accelerated Unrolled Networks
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105237
Yuqi Li, Miao Qi, Rahul Gulve, Mian Wei, R. Genov, Kiriakos N. Kutulakos, W. Heidrich
Abstract: Compressive imaging systems with spatial-temporal encoding can be used to capture and reconstruct fast-moving objects. The imaging quality depends strongly on the choice of encoding masks and reconstruction methods. In this paper, we present a new network architecture that jointly designs the encoding masks and the reconstruction method for compressive high-frame-rate imaging. Unlike previous works, the proposed method takes full advantage of a denoising prior to provide high-quality frame reconstructions. The network is also flexible enough to optimize full-resolution masks and efficient at reconstructing frames. To this end, we develop a new dense network architecture that embeds Anderson acceleration, known from numerical optimization, directly into the network. Our experiments show that the optimized masks and the dense accelerated network achieve 1.5 dB and 1 dB improvements in PSNR, respectively, without adding training parameters. The proposed method outperforms other state-of-the-art methods both in simulations and on real hardware. In addition, we set up a coded two-bucket camera for compressive high-frame-rate imaging that is robust to imaging noise and provides promising results when recovering nearly 1,000 frames per second.
Citations: 31
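Anderson acceleration itself is a classical technique for speeding up fixed-point iterations x = g(x) by mixing a short history of iterates. A standalone sketch is below; the paper embeds this mixing scheme inside an unrolled network, which this plain numerical version does not show.

```python
# Sketch: type-II Anderson acceleration for a fixed-point map x = g(x).
# Each step solves min ||sum_i a_i f_i|| s.t. sum_i a_i = 1 over the last
# m residuals f_i = g(x_i) - x_i, then mixes the g(x_i) accordingly.
import numpy as np

def anderson_accelerate(g, x0, m=5, iters=30, lam=1e-8):
    xs, fs = [], []            # history of g(x_i) and residuals
    x = x0
    for _ in range(iters):
        gx = g(x)
        xs.append(gx)
        fs.append(gx - x)
        if len(fs) > m:        # keep a sliding window of size m
            xs.pop(0); fs.pop(0)
        k = len(fs)
        F = np.stack(fs, axis=1)               # residual matrix, (n, k)
        G = F.T @ F + lam * np.eye(k)          # regularized Gram matrix
        alpha = np.linalg.solve(G, np.ones(k))
        alpha /= alpha.sum()                   # enforce sum(alpha) = 1
        x = np.stack(xs, axis=1) @ alpha       # mixed update
    return x

# Example: accelerate the slowly converging map g(x) = cos(x).
print(anderson_accelerate(np.cos, np.array([1.0])))  # ~0.739085
```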