2020 IEEE International Conference on Computational Photography (ICCP): Latest Publications

Distributed Sky Imaging Radiometry and Tomography
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105241
Amit Aides, Aviad Levis, Vadim Holodovsky, Y. Schechner, D. Althausen, Adi Vainiger
{"title":"Distributed Sky Imaging Radiometry and Tomography","authors":"Amit Aides, Aviad Levis, Vadim Holodovsky, Y. Schechner, D. Althausen, Adi Vainiger","doi":"10.1109/ICCP48838.2020.9105241","DOIUrl":"https://doi.org/10.1109/ICCP48838.2020.9105241","url":null,"abstract":"The composition of the atmosphere is significant to our ecosystem. Accordingly, there is a need to sense distributions of atmospheric scatterers such as aerosols and cloud droplets. There is growing interest in recovering these scattering fields in three-dimensions (3D). Even so, current atmospheric observations usually use expensive and unscalable equipment. Moreover, current analysis retrieves partial information (e.g., cloud-base altitudes, water droplet size at cloud tops) based on simplified 1D models. To advance observations and retrievals, we develop a new computational imaging approach for sensing and analyzing the atmosphere, volumetrically. Our approach comprises a ground-based network of cameras. We deployed it in conjunction with additional remote sensing equipment, including a Raman lidar and a sunphotometer, which provide initialization for algorithms and ground truth. The camera network is scalable, low cost, and enables 3D observations in high spatial and temporal resolution. We describe how the system is calibrated to provide absolute radiometric readouts of the light field. Consequently, we describe how to recover the volumetric field of scatterers, using tomography. The tomography process is adapted relative to prior art, to run on large-scale domains and being in-situ within scatterer fields. We empirically demonstrate the feasibility of tomography of clouds, using ground-based data.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134331588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Multiscale-VR: Multiscale Gigapixel 3D Panoramic Videography for Virtual Reality
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105244
Jianing Zhang, Tianyi Zhu, Anke Zhang, Xiaoyun Yuan, Zihan Wang, Sebastian Beetschen, Lan Xu, Xing Lin, Qionghai Dai, Lu Fang
{"title":"Multiscale-VR: Multiscale Gigapixel 3D Panoramic Videography for Virtual Reality","authors":"Jianing Zhang, Tianyi Zhu, Anke Zhang, Xiaoyun Yuan, Zihan Wang, Sebastian Beetschen, Lan Xu, Xing Lin, Qionghai Dai, Lu Fang","doi":"10.1109/ICCP48838.2020.9105244","DOIUrl":"https://doi.org/10.1109/ICCP48838.2020.9105244","url":null,"abstract":"Creating virtual reality (VR) content with effective imaging systems has attracted significant attention worldwide following the broad applications of VR in various fields, including entertainment, surveillance, sports, etc. However, due to the inherent trade-off between field-of-view and resolution of the imaging system as well as the prohibitive computational cost, live capturing and generating multiscale 360° 3D video content at an eye-limited resolution to provide immersive VR experiences confront significant challenges. In this work, we propose Multiscale-VR, a multiscale unstructured camera array computational imaging system for high-quality gigapixel 3D panoramic videography that creates the six-degree-of-freedom multiscale interactive VR content. The Multiscale-VR imaging system comprises scalable cylindrical-distributed global and local cameras, where global stereo cameras are stitched to cover 360° field-of-view, and unstructured local monocular cameras are adapted to the global camera for flexible high-resolution video streaming arrangement. We demonstrate that a high-quality gigapixel depth video can be faithfully reconstructed by our deep neural network-based algorithm pipeline where the global depth via stereo matching and the local depth via high-resolution RGB-guided refinement are associated. To generate the immersive 3D VR content, we present a three-layer rendering framework that includes an original layer for scene rendering, a diffusion layer for handling occlusion regions, and a dynamic layer for efficient dynamic foreground rendering. Our multiscale reconstruction architecture enables the proposed prototype system for rendering highly effective 3D, 360° gigapixel live VR video at 30 fps from the captured high-throughput multiscale video sequences. The proposed multiscale interactive VR content generation approach by using a heterogeneous camera system design, in contrast to the existing single-scale VR imaging systems with structured homogeneous cameras, will open up new avenues of research in VR and provide an unprecedented immersive experience benefiting various novel applications.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123627409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Unveiling Optical Properties in Underwater Images
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105267
Yael Bekerman, S. Avidan, T. Treibitz
{"title":"Unveiling Optical Properties in Underwater Images","authors":"Yael Bekerman, S. Avidan, T. Treibitz","doi":"10.1109/ICCP48838.2020.9105267","DOIUrl":"https://doi.org/10.1109/ICCP48838.2020.9105267","url":null,"abstract":"The appearance of underwater scenes is highly governed by the optical properties of the water (attenuation and scattering). However, most research effort in physics-based underwater image reconstruction methods is placed on devising image priors for estimating scene transmission, and less on estimating the optical properties. This limits the quality of the results. This work focuses on robust estimation of the water properties. First, as opposed to previous methods that used fixed values for attenuation, we estimate it from the color distribution in the image. Second, we estimate the veiling-light color from objects in the scene, contrary to looking at background pixels. We conduct an extensive qualitative and quantitative evaluation of our method vs. most recent methods on several datasets. As our estimation is more robust our method provides superior results including on challenging scenes.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126232506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Learning a Probabilistic Strategy for Computational Imaging Sensor Selection
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2020-03-23 DOI: 10.1109/ICCP48838.2020.9105133
He Sun, Adrian V. Dalca, K. Bouman
{"title":"Learning a Probabilistic Strategy for Computational Imaging Sensor Selection","authors":"He Sun, Adrian V. Dalca, K. Bouman","doi":"10.1109/ICCP48838.2020.9105133","DOIUrl":"https://doi.org/10.1109/ICCP48838.2020.9105133","url":null,"abstract":"Optimized sensing is important for computational imaging in low-resource environments, when images must be recovered from severely limited measurements. In this paper, we propose a physics-constrained, fully differentiable, autoencoder that learns a probabilistic sensor-sampling strategy for optimized sensor design. The proposed method learns a system's preferred sampling distribution that characterizes the correlations between different sensor selections as a binary, fully-connected Ising model. The learned probabilistic model is achieved by using a Gibbs sampling inspired network architecture, and is trained end-to-end with a reconstruction network for efficient co-design. The proposed framework is applicable to sensor selection problems in a variety of computational imaging applications. In this paper, we demonstrate the approach in the context of a very-long-baseline-interferometry (VLBI) array design task, where sensor correlations and atmospheric noise present unique challenges. We demonstrate results broadly consistent with expectation, and draw attention to particular structures preferred in the telescope array geometry that can be leveraged to plan future observations and design array expansions.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130024600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
3D Face Reconstruction using Color Photometric Stereo with Uncalibrated Near Point Lights
2020 IEEE International Conference on Computational Photography (ICCP) Pub Date: 2019-04-04 DOI: 10.1109/ICCP48838.2020.9105199
Z. Chen, Yu Ji, Mingyuan Zhou, S. B. Kang, Jingyi Yu
{"title":"3D Face Reconstruction using Color Photometric Stereo with Uncalibrated Near Point Lights","authors":"Z. Chen, Yu Ji, Mingyuan Zhou, S. B. Kang, Jingyi Yu","doi":"10.1109/ICCP48838.2020.9105199","DOIUrl":"https://doi.org/10.1109/ICCP48838.2020.9105199","url":null,"abstract":"We present a new color photometric stereo (CPS) method that recovers high quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. For robust self-calibration of the light sources, we use 3D morphable model (3DMM) [1] and semantic segmentation of facial parts. For reconstruction, we address the inherent spectral ambiguity in color photometric stereo by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. In this way, we jointly exploit multiple cues to resolve under-determinedness, without the need for spatial constancy of albedo. Experiments show that our new approach produces state-of-the-art results from single image with high-fidelity geometry that includes details such as wrinkles.","PeriodicalId":406823,"journal":{"name":"2020 IEEE International Conference on Computational Photography (ICCP)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132697467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5