2016 IEEE International Conference on Computational Photography (ICCP): Latest Publications

Fast, high dynamic range light field processing for pattern recognition
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492873
Scott McCloskey, B. Miller
{"title":"Fast, high dynamic range light field processing for pattern recognition","authors":"Scott McCloskey, B. Miller","doi":"10.1109/ICCPHOT.2016.7492873","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492873","url":null,"abstract":"We present a light field processing method to quickly produce an image for pattern recognition. Unlike processing for aesthetic purposes, our objective is not to produce the best-looking image, but to produce a recognizable image as fast as possible. By leveraging the recognition algorithm's dynamic range and robustness to optical defocus, we develop carefully-chosen tradeoffs to ensure recognition at a much lower level of computational complexity. Capitalizing on the algorithm's dynamic range yields large speedups by minimizing the number of light field views used in refocusing. Robustness to optical defocus allows us to quantize the refocus parameter and minimize the number of interpolations. The resulting joint optimization is performed via dynamic programming to choose the set of views which, when combined, produce a recognizable refocused image in the least possible computing time. We demonstrate the improved recognition dynamic range of barcode scanning using a Lytro camera, and dramatic reductions in computational complexity on a low-power embedded processor.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116423196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
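The core computational step here is shift-and-add refocusing: each sub-aperture view is translated in proportion to its angular coordinate and the results are averaged. The paper's contribution is the dynamic-programming choice of the smallest view subset and quantized refocus slopes that still yield a recognizable image; the sketch below shows only the underlying shift-and-add step, with the view list, angular coordinates, and slope as illustrative assumptions.

```python
# Minimal shift-and-add refocus sketch, assuming a list of sub-aperture
# views with (u, v) angular coordinates. The paper's dynamic-programming
# selection of views and quantized slopes is omitted.
import numpy as np
from scipy.ndimage import shift

def refocus(views, coords, slope):
    """Average sub-aperture views after shifting each by slope * (v, u)."""
    acc = np.zeros(views[0].shape, dtype=np.float64)
    for img, (u, v) in zip(views, coords):
        # Quantizing `slope` (as the paper does) keeps shifts near-integer,
        # which minimizes the number of interpolations required.
        acc += shift(img, (slope * v, slope * u), order=1, mode='nearest')
    return acc / len(views)
```

Passing only a well-chosen subset of `views` rather than all of them is exactly the dynamic-range-for-speed tradeoff the paper's optimizer formalizes.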
High-speed imaging using CMOS image sensor with quasi pixel-wise exposure
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1117/12.2270485
H. Nagahara, Toshiki Sonoda, K. Endo, Y. Sugiyama, R. Taniguchi
{"title":"High-speed imaging using CMOS image sensor with quasi pixel-wise exposure","authors":"H. Nagahara, Toshiki Sonoda, K. Endo, Y. Sugiyama, R. Taniguchi","doi":"10.1117/12.2270485","DOIUrl":"https://doi.org/10.1117/12.2270485","url":null,"abstract":"Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by columns and fix these amount of exposure by rows for each 8×8 pixel block. This CMOS sensor is not fully controllable via the pixels, and has line-dependent controls, but it offers flexibility when compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary. The proposed method also removes the rolling shutter effect from the reconstructed video.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129139040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
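To make the line-dependent control concrete, the toy simulation below forms one coded exposure from a short high-speed clip, with exposure start varying per column and duration per row inside an 8×8 block, mirroring the constraint the paper describes. Block size, frame count, and the random scene are assumptions, and the dictionary-based reconstruction stage is not shown.

```python
# Toy simulation of quasi pixel-wise exposure coding in an 8x8 block,
# assuming exposure start is set per column and duration per row (the
# line-dependent control described in the paper).
import numpy as np

rng = np.random.default_rng(0)
T, B = 16, 8                                  # frames per coded image, block size
start = rng.integers(0, T // 2, size=B)       # per-column exposure reset time
length = rng.integers(1, T // 2 + 1, size=B)  # per-row exposure duration

mask = np.zeros((T, B, B), dtype=bool)
for r in range(B):
    for c in range(B):
        mask[start[c]:start[c] + length[r], r, c] = True  # shutter-open window

video = rng.random((T, B, B))                 # stand-in high-speed scene
coded = (video * mask).sum(axis=0)            # one coded low-speed frame
```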
Occlusion-robust 3D sensing using aerial imaging
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492883
M. Yasui, Yoshihiro Watanabe, M. Ishikawa
{"title":"Occlusion-robust 3D sensing using aerial imaging","authors":"M. Yasui, Yoshihiro Watanabe, M. Ishikawa","doi":"10.1109/ICCPHOT.2016.7492883","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492883","url":null,"abstract":"Conventional active 3D sensing systems do not work well when other objects get between the measurement target and the measurement equipment, occluding the line of sight. In this paper, we propose an active 3D sensing method that solves this occlusion problem by using a light field created using aerial imaging. In this light field, aerial luminous spots can be formed by focusing rays of light from multiple directions. Towards the occlusion problem, this configuration is effective, because even if some of the rays are occluded, the rays of other directions keep the spots. Our results showed that this method was able to measure the position and inclination of a target by using an aerial image of a single point light source and was robust against occlusions. In addition, we confirmed that multiple point light sources also worked well.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129245137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
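One plausible way to turn measured aerial spot positions into a target's position and inclination is an ordinary least-squares plane fit, sketched below. This is a generic reconstruction step under a local-planarity assumption, not the paper's specific estimation procedure.

```python
# Generic least-squares plane fit from measured 3D spot positions.
# A sketch under a local-planarity assumption, not the paper's estimator.
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an Nx3 point array; return (normal, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    return normal / np.linalg.norm(normal), c
```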
Single-shot diffuser-encoded light field imaging
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492880
N. Antipa, Sylvia Necula, Ren Ng, L. Waller
{"title":"Single-shot diffuser-encoded light field imaging","authors":"N. Antipa, Sylvia Necula, Ren Ng, L. Waller","doi":"10.1109/ICCPHOT.2016.7492880","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492880","url":null,"abstract":"We capture 4D light field data in a single 2D sensor image by encoding spatio-angular information into a speckle field (causticpattern) through a phase diffuser. Using wave-optics theory and a coherent phase retrieval method, we calibrate the system by measuring the diffuser surface height from through-focus images. Wave-optics theory further informs the design of system geometry such that a purely additive ray-optics model is valid. Light field reconstruction is done using nonlinear matrix inversion methods, including ℓ1 minimization. We demonstrate a prototype system and present empirical results of 4D light field reconstruction and computational refocusing from a single diffuser-encoded 2D image.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114848469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
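The ℓ1-minimization step can be illustrated with a basic ISTA loop: given a calibrated forward matrix A mapping light field rays to sensor pixels (which the paper obtains from the measured diffuser surface), the sketch below recovers a sparse ray vector. The matrix, step size, and regularization weight here are placeholders, not the paper's calibrated quantities.

```python
# Basic ISTA loop for l1-regularized inversion, assuming a calibrated
# forward matrix A (sensor pixels x light field rays) is available.
import numpy as np

def ista(A, b, lam=0.1, iters=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (A @ x - b) / L   # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x
```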
White balance under mixed illumination using flash photography
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492879
Zhuo Hui, Aswin C. Sankaranarayanan, Kalyan Sunkavalli, Sunil Hadap
{"title":"White balance under mixed illumination using flash photography","authors":"Zhuo Hui, Aswin C. Sankaranarayanan, Kalyan Sunkavalli, Sunil Hadap","doi":"10.1109/ICCPHOT.2016.7492879","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492879","url":null,"abstract":"Real-world illumination is often a complex spatially-varying combination of multiple illuminants. In this work, we present a technique to white-balance images captured in such illumination by leveraging flash photography. Even though this problem is severely ill-posed, we show that using two images — captured with and without flash lighting — leads to a closed form solution for spatially-varying mixed illumination. Our solution is completely automatic and makes no assumptions about the number or nature of the illuminants. We also propose an extension of our scheme to handle practical challenges such as shadows, specularities, as well as the camera and scene motion. We evaluate our technique on datasets captured in both the laboratory and the real-world, and show that it significantly outperforms a number of previous white balance algorithms.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122974065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
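The key observation is that subtracting the no-flash image from the flash image isolates the scene as lit by the flash alone, whose color is known; dividing the ambient image by this difference cancels surface reflectance pixel-wise, exposing the spatially varying ambient illuminant. The sketch below implements that simplified ratio idea under the assumption of linear RGB float inputs, ignoring the paper's handling of shadows, specularities, and motion.

```python
# Simplified flash/no-flash white balance sketch, assuming linear RGB
# float images. Shadow/specularity/motion handling is omitted.
import numpy as np

def estimate_ambient_illuminant(noflash, flash, eps=1e-6):
    """Per-pixel ambient illuminant chroma (HxWx3, green-normalized)."""
    pure_flash = np.clip(flash - noflash, eps, None)  # flash-only component
    ratio = noflash / pure_flash                      # reflectance cancels
    return ratio / np.clip(ratio[..., 1:2], eps, None)

def white_balance(noflash, flash, eps=1e-6):
    illum = estimate_ambient_illuminant(noflash, flash, eps)
    return noflash / np.clip(illum, eps, None)
```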
Sensor-level privacy for thermal cameras
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492877
F. Pittaluga, A. Zivkovic, S. Koppal
{"title":"Sensor-level privacy for thermal cameras","authors":"F. Pittaluga, A. Zivkovic, S. Koppal","doi":"10.1109/ICCPHOT.2016.7492877","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492877","url":null,"abstract":"As cameras turn ubiquitous, balancing privacy and utility becomes crucial. To achieve both, we enforce privacy at the sensor level, as incident photons are converted into an electrical signal and then digitized into image measurements. We present sensor protocols and accompanying algorithms that degrade facial information for thermal sensors, where there is usually a clear distinction between humans and the scene. By manipulating the sensor processes of gain, digitization, exposure time, and bias voltage, we are able to provide privacy during the actual image formation process and the original face data is never directly captured or stored. We show privacy-preserving thermal imaging applications such as temperature segmentation, night vision, gesture recognition and HDR imaging.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115866149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
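As one illustration of manipulating digitization, the snippet below quantizes a thermal image so that the human skin-temperature band clips to full scale before readout, meaning facial detail is never represented in the digital output. The temperature thresholds and bit depth are assumptions for the sketch, not the paper's calibrated sensor settings.

```python
# Illustrative digitization control: map a thermal image (deg C) so the
# human skin-temperature band saturates before readout. Thresholds and
# bit depth are assumptions, not the paper's calibrated settings.
import numpy as np

def privacy_quantize(thermal_c, lo=-20.0, hi=30.0, bits=8):
    """Map [lo, hi] deg C to [0, 2^bits - 1]; skin (~30-38 C) clips white."""
    levels = 2 ** bits - 1
    scaled = (thermal_c - lo) / (hi - lo) * levels
    return np.clip(np.round(scaled), 0, levels).astype(np.uint8)
```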
Correcting perceived perspective distortions using object specific planar transformations
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492868
M. A. Tehrani, A. Majumder, M. Gopi
{"title":"Correcting perceived perspective distortions using object specific planar transformations","authors":"M. A. Tehrani, A. Majumder, M. Gopi","doi":"10.1109/ICCPHOT.2016.7492868","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492868","url":null,"abstract":"Distortions due to perspective projection is often described under the umbrella term of foreshortening in computer graphics and are treated the same way. However, a large body of literature from artists, perceptual psychologists and perception scientists have shown that the perception of these distortions is different in different situations. While the distortions themselves depend on both the depth and the orientation of the object with respect to the camera image plane, the perception of these distortions depends on other depth cues present in the image. In the absence of any depth cue or prior knowledge about the objects in the scene, the visual system finds it hard to correct the foreshortening automatically and such images need user input and external algorithmic distortion correction.In this paper, we claim that the shape distortion is more perceptible than area distortion, and quantify such perceived foreshortening as the non-uniformity across the image, of the ratio e of the differential areas of an object in the scene and its projection. We also categorize foreshortening into uniform and non-uniform foreshortening. Uniform foreshortening is perceived by our visual system as a distortion, even if e is uniform across the image, only when comparative objects of known sizes are present in the image. Non-uniform foreshortening is perceived when there is no other depth cue in the scene that can help the brain to correct for the distortion. We present a unified solution to correct these distortions in one or more non-occluded foreground objects by applying object-specific segmentation and affine transformation of the segmented camera image plane. Our method also ensures that the background undergoes minimal distortion and preserves background features during this process. This is achieved efficiently by solving Laplace's equations with Dirichlet boundary conditions, assisted by a simple and intuitive user interface.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127658001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
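The background correction reduces to solving Laplace's equation with Dirichlet boundary conditions: pixels on the segmentation and image boundaries are pinned, and interior values relax toward the average of their neighbors. A compact relaxation sketch follows; it operates on a scalar field (the paper solves for a warp) and assumes the image border is included in the fixed mask, since `np.roll` wraps around at the edges.

```python
# Jacobi-style relaxation for Laplace's equation with Dirichlet boundary
# conditions on a 2D grid. Assumes border and segmentation-boundary
# pixels are marked True in `fixed`.
import numpy as np

def solve_laplace(values, fixed, iters=500):
    """values: 2D initial field; fixed: boolean mask of pinned pixels."""
    f = values.astype(np.float64).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f = np.where(fixed, f, avg)   # relax interior, keep Dirichlet pixels
    return f
```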
Passive light and viewpoint sensitive display of 3D content
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492881
Anat Levin, Haggai Maron, Michal Yarom
{"title":"Passive light and viewpoint sensitive display of 3D content","authors":"Anat Levin, Haggai Maron, Michal Yarom","doi":"10.1109/ICCPHOT.2016.7492881","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492881","url":null,"abstract":"We present a 3D light-sensitive display. The display is capable of presenting simple opaque 3D surfaces without self occlusions, while reproducing both viewpoint-sensitive depth parallax and illumination-sensitive variations such as shadows and highlights. Our display is passive in the sense that it does not rely on illumination sensors and on-the-fly rendering of the image content. Rather, it consists of optical elements that produce light transport paths approximating those present in the real scene. Our display uses two layers of Spatial Light Modulators (SLMs) whose micron-sized elements allow us to digitally simulate thin optical surfaces with flexible shapes. We derive a simple content creation algorithm utilizing geometric optics tools to design optical surfaces that can mimic the ray transfer of target virtual 3D scenes. We demonstrate a possible implementation of a small prototype, and present a number of simple virtual 3D scenes.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"156 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129192518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
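A tiny piece of the geometric-optics content creation step can be written down directly: for a reflective element, the micro-surface normal that redirects an incident ray into a desired outgoing ray is the normalized difference of the two unit directions. The sketch below uses made-up example rays and says nothing about the paper's two-layer SLM design itself.

```python
# Micro-mirror normal for a desired ray mapping: the normal that reflects
# incident direction d_in into outgoing d_out is their normalized
# difference. Both directions are assumed to be unit length.
import numpy as np

def mirror_normal(d_in, d_out):
    """d_in points toward the surface, d_out away; both unit vectors."""
    n = d_out - d_in
    return n / np.linalg.norm(n)

# Example: flip a ray arriving along -z back along +z -> normal is +z.
print(mirror_normal(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
```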
Blind dehazing using internal patch recurrence
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492870
Yuval Bahat, M. Irani
{"title":"Blind dehazing using internal patch recurrence","authors":"Yuval Bahat, M. Irani","doi":"10.1109/ICCPHOT.2016.7492870","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492870","url":null,"abstract":"Images of outdoor scenes are often degraded by haze, fog and other scattering phenomena. In this paper we show how such images can be dehazed using internal patch recurrence. Small image patches tend to repeat abundantly inside a natural image, both within the same scale, as well as across different scales. This behavior has been used as a strong prior for image denoising, super-resolution, image completion and more. Nevertheless, this strong recurrence property significantly diminishes when the imaging conditions are not ideal, as is the case in images taken under bad weather conditions (haze, fog, underwater scattering, etc.). In this paper we show how we can exploit the deviations from the ideal patch recurrence for \"Blind De-hazing\" — namely, recovering the unknown haze parameters and reconstructing a haze-free image. We seek the haze parameters that, when used for dehazing the input image, will maximize the patch recurrence in the dehazed output image. More specifically, pairs of co-occurring patches at different depths (hence undergoing different degrees of haze) allow recovery of the airlight color, as well as the relative-transmission of each such pair of patches. This in turn leads to dense recovery of the scene structure, and to full image dehazing.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133988346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 86
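The formation model being inverted is the standard one, I = J·t + A·(1 − t), where A is the airlight and t the per-pixel transmission. Once the blind estimation has recovered A and t, dehazing is a pointwise inversion, as the sketch below shows; the recovery of A and t from patch recurrence, which is the paper's contribution, is taken as given here.

```python
# Pointwise inversion of the haze model I = J*t + A*(1 - t), taking the
# airlight A and transmission t (the quantities the paper estimates
# blindly from patch recurrence) as given.
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """I: HxWx3 hazy image, A: length-3 airlight, t: HxW transmission."""
    t = np.clip(t, t_min, 1.0)[..., None]   # floor t to avoid noise blow-up
    return (I - A * (1.0 - t)) / t
```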
A picture is worth a billion bits: Real-time image reconstruction from dense binary threshold pixels
2016 IEEE International Conference on Computational Photography (ICCP) Pub Date : 2016-05-13 DOI: 10.1109/ICCPHOT.2016.7492874
Tal Remez, O. Litany, A. Bronstein
{"title":"A picture is worth a billion bits: Real-time image reconstruction from dense binary threshold pixels","authors":"Tal Remez, O. Litany, A. Bronstein","doi":"10.1109/ICCPHOT.2016.7492874","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2016.7492874","url":null,"abstract":"The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artefacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.","PeriodicalId":156635,"journal":{"name":"2016 IEEE International Conference on Computational Photography (ICCP)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125121890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
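For unit-threshold jots the ML data term has a closed form: a jot outputs 1 exactly when it collects at least one photon, so P(1) = 1 − exp(−λ), and inverting the empirical rate of ones over a block of K jots gives λ̂ = −ln(1 − m/K). The sketch below computes that estimate; the sparse synthesis prior and the hardware-friendly approximation from the paper are omitted.

```python
# Closed-form ML intensity estimate for unit-threshold binary pixels:
# P(jot = 1) = 1 - exp(-lambda), so lambda = -ln(1 - m/K) for m ones in
# K jots. The paper's sparse synthesis prior is omitted.
import numpy as np

def ml_intensity(binary_block, eps=1e-6):
    """binary_block: array of K one-bit samples covering one output pixel."""
    rate = np.clip(binary_block.mean(), eps, 1.0 - eps)  # fraction of ones
    return -np.log1p(-rate)   # equals -ln(1 - rate)
```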