2020 IEEE International Conference on Computational Photography (ICCP): Latest Publications

High Resolution Light Field Recovery with Fourier Disparity Layer Completion, Demosaicing, and Super-Resolution
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105172
Mikael Le Pendu, A. Smolic
Abstract: In this paper, we present a novel approach for recovering high resolution light fields from input data with the many types of degradation and challenges typically found in lenslet-based plenoptic cameras. These include not only low spatial resolution but also irregular spatio-angular and color sampling, depth-dependent blur, and even axial chromatic aberrations. Our approach, based on the recent Fourier Disparity Layer representation of the light field, allows the construction of high resolution layers directly from the low resolution input views. High resolution light field views are then simply reconstructed by shifting and summing the layers. We show that when the spatial sampling is regular, the layer construction can be decomposed into linear optimization problems formulated in the Fourier domain for small groups of frequency components. We additionally propose a new preconditioning approach that ensures spatial consistency, and a color regularization term to simultaneously perform color demosaicing. For the general case of light field completion from an irregular sampling, we define a simple iterative version of the algorithm. Both approaches are then combined for efficient super-resolution of the irregularly sampled data of plenoptic cameras. Finally, the Fourier Disparity Layer model naturally extends to account for depth-dependent blur and axial chromatic aberrations without requiring an estimation of depth or disparity maps.
Citations: 9
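The shift-and-sum reconstruction step is simple enough to sketch. Below is a minimal 1D illustration of rendering one view from a set of disparity layers, using the Fourier shift theorem so that sub-pixel shifts reduce to phase ramps; the function name, layer representation, and sign convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def reconstruct_view(layers, disparities, u):
    """Illustrative 1D view synthesis from disparity layers: each
    layer is shifted by (its disparity x the angular coordinate u)
    via a Fourier-domain phase ramp, then all layers are summed."""
    n = layers[0].shape[0]
    freqs = np.fft.fftfreq(n)                     # cycles per pixel
    view_hat = np.zeros(n, dtype=complex)
    for layer, d in zip(layers, disparities):
        shift = d * u                             # sub-pixel shift in pixels
        phase = np.exp(-2j * np.pi * freqs * shift)
        view_hat += np.fft.fft(layer) * phase     # shift theorem
    return np.fft.ifft(view_hat).real
```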
WISHED: Wavefront imaging sensor with high resolution and depth ranging
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105280
Yicheng Wu, Fengqiang Li, F. Willomitzer, A. Veeraraghavan, O. Cossairt
Abstract: Phase-retrieval-based wavefront sensors have been shown to reconstruct the complex field from an object with high spatial resolution. Although the reconstructed complex field encodes the depth information of the object, it is impractical as a depth sensor for macroscopic objects, since the unambiguous depth imaging range is limited by the optical wavelength. To improve the depth range and handle depth discontinuities, we propose a novel three-dimensional sensor that leverages wavelength diversity and wavefront sensing. Complex fields at two optical wavelengths are recorded, and a synthetic wavelength is generated by correlating those wavefronts. The proposed system achieves high lateral and depth resolutions. Our experimental prototype shows an unambiguous range more than 1,000× larger than the optical wavelengths, with a depth precision of up to 9 µm for smooth objects and up to 69 µm for rough objects. We experimentally demonstrate 3D reconstructions for transparent, translucent, and opaque objects with smooth and rough surfaces.
Citations: 7
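The synthetic-wavelength idea can be made concrete with a short sketch: multiplying one recorded field by the conjugate of the other leaves a phase that wraps at a much longer "beat" wavelength. The wavelength values and round-trip depth convention below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Two close optical wavelengths (illustrative values, in meters)
lam1, lam2 = 854.985e-9, 855.015e-9
lam_synth = lam1 * lam2 / abs(lam1 - lam2)   # synthetic wavelength, ~24 mm here

def synthetic_depth(field1, field2):
    """Depth from two complex fields recorded at nearby wavelengths:
    correlating them (one field times the conjugate of the other)
    yields a phase that wraps at lam_synth rather than at the optical
    wavelength, extending the unambiguous range accordingly."""
    dphi = np.mod(np.angle(field1 * np.conj(field2)), 2 * np.pi)  # in [0, 2*pi)
    return lam_synth * dphi / (4 * np.pi)    # assumes round-trip (reflective) geometry
```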
Simulating Anisoplanatic Turbulence by Sampling Correlated Zernike Coefficients
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105270
Nicholas Chimitt, Stanley H. Chan
Abstract: Simulating atmospheric turbulence is an essential task for evaluating turbulence mitigation algorithms and training learning-based methods. Advanced numerical simulators for atmospheric turbulence are available, but they require sophisticated wave propagation computations that are very expensive. In this paper, we present a propagation-free method for simulating imaging through anisoplanatic atmospheric turbulence. The key innovation that enables this work is a new method to draw spatially correlated tilts and high-order aberrations in the Zernike space. By establishing the equivalence between the angle-of-arrival correlation by Basu, McCrae and Fiorino (2015) and the multi-aperture correlation by Chanan (1992), we show that the Zernike coefficients can be drawn according to a covariance matrix defining the spatial correlations. We propose fast and scalable sampling strategies to draw these samples. The new method compresses the wave propagation problem into a sampling problem, making the new simulator significantly faster than existing ones. Experimental results show that the simulator has an excellent match with theory and real turbulence data.
Citations: 9
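Once the covariance matrix over Zernike coefficients is available, drawing correlated samples is a standard Gaussian sampling step. A minimal sketch, assuming the covariance is already built (constructing it from the cited correlation results is the paper's actual contribution):

```python
import numpy as np

def sample_correlated_zernike(cov, n_samples, rng=None):
    """Draw zero-mean Gaussian Zernike-coefficient vectors with a
    prescribed covariance: if cov = L @ L.T (Cholesky factorization)
    and z ~ N(0, I), then L @ z ~ N(0, cov).

    cov : (d, d) positive-definite covariance over the stacked
          Zernike coefficients of all field points (assumed given)
    """
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((cov.shape[0], n_samples))
    return L @ z   # each column is one correlated draw
```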
Deep Adaptive LiDAR: End-to-end Optimization of Sampling and Depth Completion at Low Sampling Rates
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105252
Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
Abstract: Current LiDAR systems are limited in their ability to capture dense 3D point clouds. To overcome this challenge, deep learning-based depth completion algorithms have been developed to inpaint missing depth guided by an RGB image. However, these methods fail at low sampling rates. Here, we propose an adaptive sampling scheme for LiDAR systems that demonstrates state-of-the-art performance for depth completion at low sampling rates. Our system is fully differentiable, allowing the sparse depth sampling and the depth inpainting components to be trained end-to-end with an upstream task.
Citations: 32
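What "fully differentiable sampling" can look like is sketched below in PyTorch: a soft, relaxed sampling mask lets a downstream completion loss backpropagate into the where-to-sample decision. This is a generic relaxation for illustration, with assumed names and shapes, not the paper's specific sampling scheme.

```python
import torch

def soft_sample(dense_depth, logits, tau=0.1):
    """Differentiable stand-in for selecting LiDAR sample locations:
    a sigmoid over per-pixel logits gives a soft mask in (0, 1) that
    approaches binary as tau shrinks; multiplying it into the dense
    depth keeps gradients flowing from the completion loss back to
    the sampling parameters.

    dense_depth : (H, W) tensor of depths to sample from
    logits      : (H, W) learnable per-pixel sampling scores
    """
    mask = torch.sigmoid(logits / tau)
    return dense_depth * mask, mask   # sampled depth, soft mask
```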
Comparing Vision-based to Sonar-based 3D Reconstruction
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105273
Netanel Frank, Lior Wolf, D. Olshansky, A. Boonman, Y. Yovel
Abstract: Our understanding of sonar-based sensing is very limited in comparison to light-based imaging. In this work, we synthesize a ShapeNet variant in which echolocation replaces the role of vision. A new hypernetwork method is presented for 3D reconstruction from a single echolocation view. The success of the method demonstrates the ability to reconstruct a 3D shape from bat-like sonar, and not merely to obtain the relative position of the bat with respect to obstacles. In addition, we show that integrating information from multiple orientations around the same viewpoint helps performance. The sonar-based method we develop is analogous to the state-of-the-art single-image reconstruction method, which allows us to directly compare the two imaging modalities. Based on this analysis, we learn that while 3D shape can be reliably reconstructed from sonar, its accuracy is, as far as current technology shows, lower than that obtained from vision; that performance in sonar and in vision is highly correlated; that both modalities favor shapes that are not round; and that while the current vision method better reconstructs the 3D shape, its advantage in estimating normal directions is much smaller.
Citations: 3
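For readers unfamiliar with hypernetworks, the sketch below shows the core pattern in PyTorch: one network emits the weights of a second, tiny decoder that is then evaluated at 3D query points. All dimensions, layer sizes, and names are illustrative assumptions; the paper's architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Minimal hypernetwork sketch: map an echo embedding to the
    weights of a one-hidden-layer occupancy decoder (3 -> hidden -> 1),
    then evaluate that decoder at query points."""
    def __init__(self, embed_dim=128, hidden=64):
        super().__init__()
        self.hidden = hidden
        n_params = 3 * hidden + hidden + hidden + 1   # w1, b1, w2, b2
        self.weight_gen = nn.Linear(embed_dim, n_params)

    def forward(self, embedding, points):
        p = self.weight_gen(embedding)                # all decoder parameters
        h = self.hidden
        w1 = p[: 3 * h].view(h, 3)
        b1 = p[3 * h : 4 * h]
        w2 = p[4 * h : 5 * h].view(1, h)
        b2 = p[5 * h :]
        x = torch.relu(points @ w1.T + b1)            # (N, hidden)
        return torch.sigmoid(x @ w2.T + b2)           # occupancy in (0, 1)
```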
Per-Image Super-Resolution for Material BTFs
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105256
D. D. Brok, S. Merzbach, Michael Weinmann, R. Klein
Abstract: Image-based appearance measurements are fundamentally limited in spatial resolution by the acquisition hardware. Due to the ever-increasing resolution of display hardware, high-resolution representations of digital material appearance are desirable for authentic renderings. In the present paper, we demonstrate that high-resolution bidirectional texture functions (BTFs) for materials can be obtained from low-resolution measurements using single-image convolutional neural network (CNN) architectures for image super-resolution. In particular, we show that this approach works for high-dynamic-range data and produces consistent BTFs, even though it operates on an image-by-image basis. Moreover, the CNN can be trained on down-sampled measured data, so no high-resolution ground-truth data, which would be difficult to obtain, is necessary. We train and test our method's performance on a large-scale BTF database and evaluate against the current state of the art in BTF super-resolution, finding superior performance.
Citations: 0
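The trick of training without high-resolution ground truth is easy to illustrate: the measured images themselves serve as targets, and downsampled copies as inputs. A minimal sketch, with a box-average downsampler standing in for whatever operator the authors actually use:

```python
import numpy as np

def make_training_pairs(btf_images, factor=4):
    """Self-supervised pair construction: measured BTF images act as
    high-resolution targets, their downsampled copies as network inputs.

    btf_images : (N, H, W, C) stack of per-direction BTF measurements,
                 with H and W divisible by `factor` (assumption)
    """
    n, h, w, c = btf_images.shape
    # box-average downsampling as a simple stand-in downsampler
    lr = btf_images.reshape(
        n, h // factor, factor, w // factor, factor, c
    ).mean(axis=(2, 4))
    return lr, btf_images   # (network inputs, training targets)
```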
Fast confocal microscopy imaging based on deep learning
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105215
Xiu Li, J. Dong, Bowen Li, Yi Zhang, Yongbing Zhang, A. Veeraraghavan, Xiangyang Ji
Abstract: Confocal microscopy is the de facto standard technique in bio-imaging for acquiring 3D images in the presence of tissue scattering. However, the point-scanning mechanism inherent in confocal microscopy means that the capture speed is much too slow for imaging dynamic objects at sufficient spatial resolution and signal-to-noise ratio (SNR). In this paper, we propose an algorithm for super-resolution confocal microscopy that allows us to capture high-resolution, high-SNR confocal images at an order-of-magnitude faster acquisition speed. The proposed Back-Projection Generative Adversarial Network (BPGAN) consists of a feature extraction step followed by a back-projection feedback module (BPFM) and an associated reconstruction network; together, these allow super-resolution of low-resolution confocal scans. We validate our method using real confocal captures of multiple biological specimens, and the results demonstrate that our proposed BPGAN is able to achieve quality similar to high-resolution confocal scans while imaging up to 64 times faster.
Citations: 6
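The back-projection feedback idea descends from classical iterative back-projection for super-resolution, which is compact enough to sketch. The code below is that classical baseline, simplified (bilinear resampling, no blur kernel), not the learned BPFM itself:

```python
import numpy as np
from scipy.ndimage import zoom

def iterative_back_projection(lr, factor=4, n_iters=10):
    """Classical iterative back-projection super-resolution: refine an
    HR estimate by repeatedly simulating the LR observation from it,
    computing the residual against the real LR image, and adding the
    up-projected residual back into the estimate."""
    lr = np.asarray(lr, dtype=float)           # 2D low-resolution image
    hr = zoom(lr, factor, order=1)             # initial bilinear upsampling
    for _ in range(n_iters):
        simulated_lr = zoom(hr, 1.0 / factor, order=1)
        residual = lr - simulated_lr
        hr += zoom(residual, factor, order=1)  # back-project the error
    return hr
```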
FoveaCam: A MEMS Mirror-Enabled Foveating Camera
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105183
Brevin Tilmon, Eakta Jain, S. Ferrari, S. Koppal
Abstract: Most cameras today photograph their entire visual field. In contrast, decades of active vision research have proposed foveating camera designs, which allow for selective scene viewing. However, active vision's impact has been limited by the slow options available for mechanical camera movement. We propose a new design, called FoveaCam, which works by capturing reflections off a tiny, fast-moving mirror. FoveaCams can obtain high resolution imagery of multiple regions of interest, even if these are at different depths and viewing directions. We first discuss our prototype and optical calibration strategies. We then outline a control algorithm for the mirror to track target pairs. Finally, we demonstrate a practical application of the full system that enables eye tracking at a distance for frontal faces.
Citations: 6
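As a toy illustration of covering a target pair with a single fast mirror, the sketch below simply time-multiplexes the mirror between two commanded directions, foveating each target at half the frame rate. This is only a scheduling skeleton with assumed names, not the paper's control algorithm.

```python
import numpy as np

def mirror_schedule(target_a, target_b, n_frames):
    """Alternate the mirror between two (x, y) target directions on
    successive frames, so both regions of interest stay foveated."""
    targets = np.asarray([target_a, target_b], dtype=float)
    return targets[np.arange(n_frames) % 2]   # (n_frames, 2) mirror angles
```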
[Copyright notice]
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/iccp48838.2020.9105200
Citations: 0
Action Recognition from a Single Coded Image
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105176
Tadashi Okawara, Michitaka Yoshida, Hajime Nagahara, Yasushi Yagi
Abstract: Cameras are now prevalent in society, for example in surveillance systems, smartphones, and smart speakers. There is increasing demand to analyze human actions from these cameras, whether to detect unusual behavior or as a human-machine interface for Internet of Things (IoT) devices. For a camera, there is a trade-off between spatial resolution and frame rate. A feasible approach to overcoming this trade-off is compressive video sensing, which uses randomly coded exposures and reconstructs, from a single coded image, video at a frame rate higher than the sensor readout. It is possible to recognize an action in a scene from a single coded image because the image contains the temporal information needed to reconstruct a video. In this paper, we propose reconstruction-free action recognition from a single coded exposure image. We also propose a deep sensing framework that models both the camera sensing process and the classifier as a convolutional neural network (CNN) and jointly optimizes the coded exposure pattern and the classification model. We demonstrate that the proposed method can recognize human actions from only a single coded image, and we compare it against competitive inputs, such as high-frame-rate low-resolution video and single-frame high-resolution video, in both simulation and real experiments.
Citations: 1
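The coded-exposure forward model that underlies both reconstruction and the reconstruction-free classifier is just a masked sum over frames. A minimal sketch, with assumed array names and shapes:

```python
import numpy as np

def coded_exposure(video, code):
    """Compressive video sensing forward model: each pixel integrates
    only the frames its per-pixel binary exposure code lets through,
    collapsing a short clip into one coded image.

    video : (T, H, W) frames captured within a single exposure
    code  : (T, H, W) binary (0/1) per-pixel exposure pattern
    """
    return (code * video).sum(axis=0)   # coded image, shape (H, W)
```

In the joint optimization the abstract describes, a classifier would be trained directly on such coded images, with the exposure pattern itself treated as a learnable stage of the network.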