2017 IEEE International Conference on Computational Photography (ICCP): Latest Publications

Multiscale gigapixel video: A cross resolution image matching and warping approach
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951481
Xiaoyun Yuan, Lu Fang, Qionghai Dai, D. Brady, Yebin Liu
Abstract: We present a multi-scale camera array to capture and synthesize gigapixel videos efficiently. Our acquisition setup contains a reference camera with a short-focus lens to capture a large field-of-view video and a number of unstructured long-focus cameras to capture local-view details. Based on this design, we propose an iterative feature matching and image warping method that independently warps each local-view video to the reference video. The key feature of the proposed algorithm is its robustness and high accuracy despite the huge resolution gap (more than 8x between the reference and local-view videos), camera parallax, complex scene appearance, and color inconsistency among cameras. Experimental results show that the proposed multi-scale camera array and cross-resolution video warping scheme can generate seamless gigapixel video without camera calibration or large overlapping-area constraints between the local-view cameras.
Citations: 32
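The core registration challenge above, matching views across a large resolution gap, can be illustrated with a toy sketch (all sizes, the `gap` factor, and the synthetic images are assumptions for illustration, not the paper's iterative matching-and-warping pipeline): upsample the low-resolution reference by the known resolution gap, then locate the long-focus local view by brute-force normalized cross-correlation.

```python
import numpy as np

# Toy sketch of cross-resolution localization (synthetic images; not the
# paper's method): upsample the low-res reference by the resolution gap,
# then find the long-focus view by normalized cross-correlation (NCC).
rng = np.random.default_rng(3)
gap = 8                                      # resolution gap (~8x in the paper)
ref = rng.standard_normal((16, 16))          # low-res reference frame
hi = np.kron(ref, np.ones((gap, gap)))       # upsampled stand-in for the scene

top, left = 40, 56
local = hi[top:top + 32, left:left + 32]     # simulated long-focus local view

def ncc_locate(image, templ):
    """Brute-force NCC peak search; returns the best (row, col) offset."""
    th, tw = templ.shape
    t = (templ - templ.mean()) / templ.std()
    best, pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            score = ((w - w.mean()) / (w.std() + 1e-12) * t).sum()
            if score > best:
                best, pos = score, (i, j)
    return pos

est = ncc_locate(hi, local)                  # recovers (top, left)
```

A real system would replace the exhaustive search with feature matching, but the normalization step is what makes the score robust to the per-camera color inconsistency the abstract mentions.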
Fast non-blind deconvolution via regularized residual networks with long/short skip-connections
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951480
Hyeongseok Son, Seungyong Lee
Abstract: This paper proposes a novel framework for non-blind deconvolution using a deep convolutional network. To handle various blur kernels, we reduce the training complexity by using a Wiener filter as a preprocessing step. This step introduces amplified noise and ringing artifacts, but these artifacts are only weakly correlated with the shapes of the blur kernels, making the input of our network independent of the kernel shape. Our network is trained to remove those artifacts effectively via a residual network with long/short skip-connections. We also add a regularization term that helps our network robustly process untrained and inaccurate blur kernels by suppressing abnormal weights of convolutional layers that may cause overfitting. A postprocessing step can further improve the deconvolution quality. Experimental results demonstrate that our framework can process images blurred by a variety of kernels at faster speed and with image quality comparable to state-of-the-art methods.
Citations: 40
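The Wiener-filter preprocessing step the abstract relies on is a standard frequency-domain operation; a minimal sketch (synthetic image, box-blur kernel, and noise-to-signal ratio are illustrative assumptions, and the learned artifact-removal network is omitted):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Frequency-domain Wiener deconvolution.

    Assumes the kernel's origin is at index (0, 0); apply np.fft.ifftshift
    first for a centered kernel. nsr is the assumed noise-to-signal ratio.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)     # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# tiny demo: circularly blur a synthetic image, then deconvolve
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
kernel = np.ones((5, 5)) / 25.0                  # box blur, origin at (0, 0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel, nsr=1e-6)
```

With real noisy inputs, a larger `nsr` trades ringing for residual blur; those leftover artifacts are exactly what the paper's residual network is trained to remove.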
Lensless light-field imaging with multi-phased Fresnel zone aperture
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951485
Kazuyuki Tajima, T. Shimano, Y. Nakamura, M. Sao, T. Hoshizawa
Abstract: We propose lensless light-field imaging with a Fresnel zone aperture (FZA) placed a few millimeters in front of an image sensor. The shadow cast by the real FZA under incident light is combined in a computer with a virtual FZA; the resulting moiré fringes yield reconstructed images through a simple fast Fourier transform (FFT). To obtain clear images in this configuration, several kinds of noise components in the detected image signals must be cancelled. We describe the details of this process and discuss its effectiveness theoretically and experimentally.
Citations: 19
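The moiré principle behind FZA reconstruction can be seen in a 1D toy example (the chirp rate, shift, and grid size are illustrative parameters, not the authors' setup): a point source shifts the real FZA's shadow, and multiplying by an identical virtual FZA produces a beat fringe whose spatial frequency encodes the source position, recoverable as an FFT peak.

```python
import numpy as np

# 1D toy model of FZA moire imaging (illustrative parameters).
N = 256
beta = np.pi / (4 * N)          # chirp rate, kept small to avoid aliasing
x = np.arange(N, dtype=float)
delta = 16.0                    # shadow shift caused by the source angle

shadow = np.cos(beta * (x - delta) ** 2)   # shadow of the real FZA
virtual = np.cos(beta * x ** 2)            # virtual FZA in the computer
moire = shadow * virtual

# the product contains the term 0.5*cos(2*beta*delta*x - beta*delta**2),
# a pure sinusoid that shows up as a sharp FFT peak at bin
# k = beta * delta * N / pi, while the chirp terms spread out broadly
spectrum = np.abs(np.fft.fft(moire))
peak = int(np.argmax(spectrum[1:N // 2])) + 1
expected = round(beta * delta * N / np.pi)
```

In 2D the same algebra holds per axis, which is why a single FFT suffices for reconstruction once the noise terms the abstract mentions are cancelled.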
Air-light estimation using haze-lines
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951489
Dana Berman, T. Treibitz, S. Avidan
Abstract: Outdoor images taken in bad weather conditions, such as haze and fog, look faded and have reduced contrast. Recently there has been great success in single-image dehazing, i.e., improving the visibility and restoring the colors of a single image. A crucial step in these methods is estimating the air-light color, the color of an image area with no objects in the line of sight. We propose a new method for estimating the air-light, based on the recently introduced haze-lines prior. This prior rests on the observation that the pixel values of a hazy image can be modeled as lines in RGB space that intersect at the air-light. We use a Hough transform in RGB space to vote for the air-light location. We evaluate the proposed method on an existing dataset of real-world images, as well as on synthetic and additional real images. Our method performs on par with current state-of-the-art techniques while being more computationally efficient.
Citations: 131
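The haze-lines observation itself is easy to verify numerically. A toy sketch with synthetic, noise-free data (the paper votes with a Hough transform; here the two lines are simply intersected by least squares, and all colors are made-up): pixels sharing one surface color J but differing in transmission t satisfy I = J·t + A·(1 − t), so they lie on a line through the air-light A in RGB space.

```python
import numpy as np

# Toy verification of the haze-lines prior (synthetic data, direct line
# intersection instead of the paper's Hough voting).
rng = np.random.default_rng(1)
A = np.array([0.8, 0.85, 0.9])                 # ground-truth air-light
J1, J2 = np.array([0.2, 0.5, 0.1]), np.array([0.6, 0.1, 0.3])

def hazy_pixels(J, n=50):
    t = rng.uniform(0.2, 1.0, size=(n, 1))     # per-pixel transmission
    return J * t + A * (1 - t)                 # haze image formation model

def fit_line(P):
    """Point on the best-fit line and its unit direction (via SVD/PCA)."""
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0]

def intersect_lines(lines):
    """Least-squares point closest to all lines (each a (point, dir) pair)."""
    M, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in lines:
        proj = np.eye(3) - np.outer(d, d)      # projector orthogonal to d
        M += proj
        b += proj @ c
    return np.linalg.solve(M, b)

A_est = intersect_lines([fit_line(hazy_pixels(J1)), fit_line(hazy_pixels(J2))])
```

With real images the lines are noisy and numerous, which is why the paper accumulates Hough votes over candidate air-light positions rather than intersecting fitted lines directly.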
Coherent inverse scattering via transmission matrices: Efficient phase retrieval algorithms and a public dataset
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951483
Christopher A. Metzler, M. Sharma, S. Nagesh, Richard Baraniuk, O. Cossairt, A. Veeraraghavan
Abstract: A transmission matrix describes the input-output relationship of a complex wavefront as it passes through, or reflects off, a multiple-scattering medium such as frosted glass or a painted wall. Knowing a medium's transmission matrix enables one to image through the medium, send signals through it, or even use it as a lens. The double phase retrieval method is a recently proposed technique for learning a medium's transmission matrix that avoids difficult-to-capture interferometric measurements. Unfortunately, to perform high-resolution imaging, existing double phase retrieval methods require (1) a large number of measurements and (2) an unreasonable amount of computation. In this work we focus on the latter problem and reduce computation times with two distinct methods. First, we develop a new phase retrieval algorithm that is significantly faster than existing methods, especially when used with an amplitude-only spatial light modulator (SLM). Second, we calibrate the system using a phase-only SLM rather than the amplitude-only SLM used in previous double phase retrieval experiments. This seemingly trivial change enables us to use a far faster class of phase retrieval algorithms. As a result of these advances, we achieve a 100x reduction in computation times, allowing us to image through scattering media at state-of-the-art resolutions. In addition, we release the first publicly available transmission matrix dataset, enabling phase retrieval researchers to apply their algorithms to real data. Of particular interest to this community, our measurement vectors are naturally i.i.d. subgaussian, i.e., no coded diffraction pattern is required.
Citations: 58
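The phase retrieval problem at the heart of transmission-matrix calibration is recovering x from phaseless measurements y = |Ax|. A minimal baseline sketch (a classic Gerchberg-Saxton-style alternating minimization, not the authors' faster algorithm; all dimensions are illustrative): alternately impose the measured magnitudes and re-solve the linear system, which makes the magnitude residual non-increasing.

```python
import numpy as np

# Baseline alternating-minimization phase retrieval (illustrative sketch,
# not the paper's algorithm): recover x from y = |A x|.
rng = np.random.default_rng(4)
n, m = 16, 128                                  # unknowns, measurements

A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x_true)                          # phaseless measurements

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # random init
residuals = []
for _ in range(100):
    z = A @ x
    residuals.append(np.linalg.norm(np.abs(z) - y))
    phases = z / np.maximum(np.abs(z), 1e-12)   # keep current phase estimate
    # impose measured magnitudes, then least-squares solve for x
    x, *_ = np.linalg.lstsq(A, y * phases, rcond=None)

# success can only be judged up to a global phase ambiguity
alpha = np.vdot(x, x_true) / np.vdot(x, x)
err = np.linalg.norm(alpha * x - x_true) / np.linalg.norm(x_true)
```

Each sweep is a lstsq solve over all measurements, which is exactly the cost the paper attacks; the phase-only SLM change lets the authors switch to a much faster algorithm class.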
Aperture interference and the volumetric resolution of light field fluorescence microscopy
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951486
Isaac Kauvar, Julie Chang, Gordon Wetzstein
Abstract: Light field microscopy (LFM) is an emerging technique for volumetric fluorescence imaging, but widespread use is hampered by its poor spatial resolution. Using diffraction-based analysis, we show that this degraded resolution arises because conventional LFM aims to sample four dimensions of the light field. By instead prioritizing 3D volumetric information over 4D sampling, we can optically interfere certain redundant angular samples to allow higher spatial resolution while maintaining enough angular information for depth discrimination. With this in mind, we design a number of aperture-plane sampling schemes, characterize their frequency support and invertibility, and describe how their relative performance depends on the operating signal-to-noise regime. With simulations and a prototype, we demonstrate a time-sequential amplitude-mask-based acquisition approach that outperforms conventional LFM in terms of both spatial resolution and axial field of view.
Citations: 4
The light field 3D scanner
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951484
Yingliang Zhang, Zhong Li, Wei Yang, Peihong Yu, Haiting Lin, Jingyi Yu
Abstract: We present a novel light field structure-from-motion (SfM) framework for reliable 3D object reconstruction. Specifically, we use a light field (LF) camera such as the Lytro or Raytrix as a virtual 3D scanner: we move the LF camera around the object and register multiple LF shots. We show that applying conventional SfM to sub-aperture images is not only expensive but also unreliable due to the ultra-small baseline and low image resolution. Instead, our LF-SfM scheme maps ray manifolds across LFs. Specifically, we show how rays passing through a common 3D point transform between two LFs, and we develop a reliable technique for extracting extrinsic parameters from this ray transform. Next, we apply a new edge-preserving stereo matching technique to individual LFs and conduct LF bundle adjustment to jointly optimize pose and geometry. Comprehensive experiments show our solution outperforms many state-of-the-art passive and even active techniques, especially on topologically complex objects.
Citations: 18
Reconstructing rooms using photon echoes: A plane based model and reconstruction algorithm for looking around the corner
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-12 | DOI: 10.1109/ICCPHOT.2017.7951478
A. Pediredla, M. Buttafava, A. Tosi, O. Cossairt, A. Veeraraghavan
Abstract: Can we reconstruct the entire internal shape of a room if all we can directly observe is a small portion of one internal wall, presumably through a window in the room? While conventional wisdom may suggest that this is not possible, motivated by recent work on 'looking around corners', we show that one can exploit light echoes to reconstruct the internal shape of hidden rooms. Existing techniques for looking around the corner using transient images model the hidden volume with voxels and try to explain the captured transient response as the sum of the transient responses of individual voxels. Such a technique inherently suffers from a low signal-to-background ratio (SBR) and has difficulty scaling to larger volumes. In contrast, we argue for a plane-based model of the hidden surfaces, and demonstrate that it yields a much higher SBR while being amenable to larger spatial scales. We build an experimental prototype composed of a pulsed laser source and a single-photon avalanche diode (SPAD) detector that achieves a time resolution of about 30 ps, and demonstrate high-fidelity reconstructions both of individual planes in a hidden volume and of entire polygonal rooms composed of multiple planar walls.
Citations: 51
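The timing geometry that makes this possible is simple to state: after the laser bounces off the visible wall, light travels wall spot to hidden point to wall spot to SPAD, so each photon arrival time constrains the hidden point to an ellipsoid with the two wall spots as foci. A minimal sketch (coordinates are made up for illustration, not the authors' setup):

```python
import numpy as np

# Three-bounce timing geometry for looking around the corner
# (illustrative coordinates, meters).
C = 3e8                                       # speed of light, m/s

def transient_arrival(laser_spot, hidden_pt, sensor_spot):
    """Time for light to go wall spot -> hidden point -> wall spot."""
    d1 = np.linalg.norm(hidden_pt - laser_spot)
    d2 = np.linalg.norm(sensor_spot - hidden_pt)
    return (d1 + d2) / C

wall_laser = np.array([0.0, 0.0, 0.0])        # illuminated wall spot
wall_sensor = np.array([0.5, 0.0, 0.0])       # observed wall spot
hidden = np.array([0.25, 1.0, 0.0])           # a point on a hidden plane

t = transient_arrival(wall_laser, hidden, wall_sensor)
# the prototype's ~30 ps resolution corresponds to ~9 mm of path length
path_resolution_mm = C * 30e-12 * 1e3
```

A plane-based model aggregates these per-photon ellipsoid constraints over an entire planar wall at once, which is where the SBR advantage over per-voxel explanations comes from.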
Compressive spectral anomaly detection
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-01 | DOI: 10.1109/ICCPHOT.2017.7951482
Vishwanath Saragadam, Jian Wang, Xin Li, Aswin C. Sankaranarayanan
Abstract: We propose a novel compressive imager for detecting anomalous spectral profiles in a scene. We model the background spectrum as a low-dimensional subspace while assuming the anomalies form a spatially sparse set of spectral profiles different from the background. Our core contribution is a two-stage sensing mechanism. In the first stage, we estimate the subspace of the background spectrum by acquiring spectral measurements at a few randomly selected pixels. In the second stage, we acquire spatially multiplexed spectral measurements of the scene. We remove the contribution of the background spectrum from the spatially multiplexed measurements by projecting onto the complementary subspace of the background spectrum; the resulting measurements form a sparse matrix that encodes the presence and spectra of anomalies, which can be recovered using a multiple measurement vector (MMV) formulation. Theoretical analysis and simulations show a significant speed-up in acquisition time over other anomaly detection techniques. A lab prototype based on a DMD and a visible spectrometer validates the proposed imager.
Citations: 5
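The background-removal step can be sketched directly (synthetic data with made-up dimensions; the paper's compressive multiplexing and MMV recovery are omitted): if background spectra lie in a low-dimensional subspace spanned by U, projecting onto the orthogonal complement of U suppresses the background and leaves only anomalous pixels with significant residual energy.

```python
import numpy as np

# Complementary-subspace projection for spectral anomaly detection
# (toy sketch with synthetic data).
rng = np.random.default_rng(2)
n_bands, rank, n_pixels = 32, 3, 100

U, _ = np.linalg.qr(rng.standard_normal((n_bands, rank)))   # background basis
coeffs = rng.standard_normal((rank, n_pixels))
X = U @ coeffs                                 # background-only scene

anomaly_idx = 7
X[:, anomaly_idx] += rng.standard_normal(n_bands)   # one anomalous pixel

P_perp = np.eye(n_bands) - U @ U.T             # complementary projector
residual = np.linalg.norm(P_perp @ X, axis=0)  # per-pixel residual energy
detected = int(np.argmax(residual))
```

In the actual imager this projection is applied to spatially multiplexed measurements, so the surviving sparse matrix can be recovered from far fewer measurements than a full scan.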
Computational multispectral flash
2017 IEEE International Conference on Computational Photography (ICCP) | Pub Date: 2017-05-01 | DOI: 10.1109/ICCPHOT.2017.7951479
H. Blasinski, J. Farrell
Abstract: Illumination plays an important role in the image capture process. Too little or too much energy in particular wavelengths can affect the scene appearance in ways that are difficult to correct with color-constancy post-processing methods. We use an adjustable multispectral flash to modify the spectral illumination of a scene. The flash is composed of a small number of narrowband lights, and the imaging system takes a sequence of images of the scene under each of those lights. Pixel data are used to estimate the spectral power distribution of the ambient light and to adjust the flash spectrum either to match or to complement the ambient illuminant. The optimized flash spectrum can be used in subsequent captures, or a synthetic image can be rendered computationally from the available data. Under extreme illumination conditions, images captured with the matching flash have no color cast, and the complementary flash produces more balanced colors. The proposed system also improves the quality of images captured in underwater environments.
Citations: 5
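The complementary-flash idea reduces to a small linear fit; a sketch under stated assumptions (Gaussian LED spectra, a flat target illuminant, and clipped least squares in place of whatever constrained solver the authors use): given the estimated ambient spectral power distribution and a target, solve for nonnegative per-LED drive weights so that ambient plus flash approximates the target.

```python
import numpy as np

# Complementary flash spectrum as a clipped least-squares fit
# (synthetic spectra; illustrative simplification of the paper's method).
wl = np.linspace(400, 700, 61)                      # wavelengths, nm

def led(center, width=20.0):
    """Narrowband LED spectral power distribution (Gaussian model)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

leds = np.stack([led(c) for c in (450, 500, 550, 600, 650)], axis=1)
ambient = 0.3 * led(600, width=80.0)                # warm, dim ambient light
target = np.ones_like(wl)                           # flat target illuminant

w, *_ = np.linalg.lstsq(leds, target - ambient, rcond=None)
w = np.clip(w, 0.0, None)                           # drives must be nonnegative
synthesized = ambient + leds @ w                    # flash + ambient spectrum
```

Clipping after an unconstrained solve is a crude stand-in for proper nonnegative least squares, but it shows the shape of the optimization: the flash fills in whatever spectral energy the ambient light lacks.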