{"title":"Transfer efficiency and depth invariance in computational cameras","authors":"Jongmin Baek","doi":"10.1109/ICCPHOT.2010.5585098","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585098","url":null,"abstract":"Recent advances in computational cameras achieve extension of depth of field by modulating the aperture of an imaging system, either spatially or temporally. They are, however, accompanied by loss of image detail, the chief cause of which is low and/or depth-varying frequency response of such systems. In this paper, we examine the tradeoff between achieving depth invariance and maintaining high transfer efficiency by providing a mathematical framework for analyzing the transfer function of these computational cameras. Using this framework, we prove mathematical bounds on the efficacy of the tradeoff. These bounds lead to observations on the fundamental limitations of computational cameras. In particular, we show that some existing designs are already near-optimal in our metrics.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131336104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spectral Focal Sweep: Extended depth of field from chromatic aberrations","authors":"O. Cossairt, S. Nayar","doi":"10.1109/ICCPHOT.2010.5585101","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585101","url":null,"abstract":"In recent years, many new camera designs have been proposed which preserve image detail over a larger depth range than conventional cameras. These methods rely on either mechanical motion or a custom optical element placed in the pupil plane of a camera lens to create the desired point spread function (PSF). This work introduces a new Spectral Focal Sweep (SFS) camera which can be used to extend depth of field (DOF) when some information about the reflectance spectra of objects being imaged is known. Our core idea is to exploit the principle that for a lens without chromatic correction, the focal length varies with wavelength. We use a SFS camera to capture an image that effectively “sweeps” the focal plane continuously through a scene without the need for either mechanical motion or custom optical elements. We demonstrate that this approach simplifies lens design constraints, enabling an inexpensive implementation to be constructed with off-the-shelf components. We verify the effectiveness of our implementation and show several example images illustrating a significant increase in DOF over conventional cameras.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"35 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116426062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rich image capture with plenoptic cameras","authors":"Todor Georgiev, A. Lumsdaine","doi":"10.1109/ICCPHOT.2010.5585092","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585092","url":null,"abstract":"The plenoptic function was originally defined as a record of both the 3D structure of the lightfield and of its dependence on parameters such as wavelength, polarization, etc. Still, most work on these ideas has emphasized the 3D aspect of lightfield capture and manipulation, with less attention paid to other parameters. In this paper, we leverage the high resolution and flexible sampling trade-offs of the focused plenoptic camera to perform high-resolution capture of the rich “non 3D” structure of the plenoptic function. Two different techniques are presented and analyzed, using extended dynamic range photography as a particular example. The first technique simultaneously captures multiple exposures with a microlens array that has an interleaved set of different filters. The second technique places multiple filters at the main lens aperture. Experimental results validate our approach, producing 1.3Mpixel HDR images with a single capture.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127862855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image upsampling via texture hallucination","authors":"Yoav HaCohen, Raanan Fattal, Dani Lischinski","doi":"10.1109/ICCPHOT.2010.5585097","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585097","url":null,"abstract":"Image upsampling is a common yet challenging task, since it is severely underconstrained. While considerable progress was made in preserving the sharpness of salient edges, current methods fail to reproduce the fine detail typically present in the textured regions bounded by these edges, resulting in unrealistic appearance. In this paper we address this fundamental shortcoming by integrating higher-level image analysis and custom low-level image synthesis. Our approach extends and refines the patch-based image model of Freeman et al. [10] and interprets the image as a tiling of distinct textures, each of which is matched to an example in a database of relevant textures. The matching is not done at the patch level, but rather collectively, over entire segments. Following this model fitting stage, which requires some user guidance, a higher-resolution image is synthesized using a hybrid approach that incorporates principles from example-based texture synthesis. We show that for images that comply with our model, our method is able to reintroduce consistent fine-scale detail, resulting in enhanced appearance textured regions.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121229008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion blur removal with orthogonal parabolic exposures","authors":"T. Cho, Anat Levin, F. Durand, W. Freeman","doi":"10.1109/ICCPHOT.2010.5585100","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585100","url":null,"abstract":"Object movement during exposure generates blur. Removing blur is challenging because one has to estimate the motion blur, which can spatially vary over the image. Even if the motion is successfully identified, blur removal can be unstable because the blur kernel attenuates high frequency image contents. We address the problem of removing blur from objects moving at constant velocities in arbitrary 2D directions. Our solution captures two images of the scene with a parabolic motion in two orthogonal directions. We show that our strategy near-optimally preserves image content, and allows for stable blur inversion. Taking two images of a scene helps us estimate spatially varying object motions. We present a prototype camera and demonstrate successful motion deblurring on real motions.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115112318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Search-and-replace editing for personal photo collections","authors":"S. Hasinoff, M. Jóźwiak, F. Durand, W. Freeman","doi":"10.1109/ICCPHOT.2010.5585099","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585099","url":null,"abstract":"We propose a new system for editing personal photo collections, inspired by search-and-replace editing for text. In our system, local edits specified by the user in a single photo (e.g., using the “clone brush” tool) can be propagated automatically to other photos in the same collection, by matching the edited region across photos. To achieve this, we build on tools from computer vision for image matching. Our experimental results on real photo collections demonstrate the feasibility and potential benefits of our approach.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115532224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coded rolling shutter photography: Flexible space-time sampling","authors":"Jinwei Gu, Y. Hitomi, T. Mitsunaga, S. Nayar","doi":"10.1109/ICCPHOT.2010.5585094","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585094","url":null,"abstract":"We propose a novel readout architecture called coded rolling shutter for complementary metal-oxide semiconductor (CMOS) image sensors. Rolling shutter has traditionally been considered as a disadvantage to image quality since it often introduces skew artifact. In this paper, we show that by controlling the readout timing and the exposure length for each row, the row-wise exposure discrepancy in rolling shutter can be exploited to flexibly sample the 3D space-time volume of scene appearance, and can thus be advantageous for computational photography. The required controls can be readily implemented in standard CMOS sensors by altering the logic of the control unit. We propose several coding schemes and applications: (1) coded readout allows us to better sample time dimension for high-speed photography and optical flow based applications; and (2) row-wise control enables capturing motion-blur free high dynamic range images from a single shot. While a prototype chip is currently in development, we demonstrate the benefits of coded rolling shutter via simulation using images of real scenes.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123305078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A context-aware light source","authors":"Oliver Wang, M. Fuchs, Christian Fuchs, H. Lensch, James Davis, H. Seidel","doi":"10.1109/ICCPHOT.2010.5585091","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585091","url":null,"abstract":"We present a technique that combines the visual benefits of virtual enhancement with the intuitive interaction of the real world. We accomplish this by introducing the concept of a context-aware light source. This light source provides illumination based on scene context in real-time. This allows us to project feature enhancement in-place onto an object while it is being manipulated by the user. A separate proxy light source can be employed to enable freely programmable shading responses for interactive scene analysis. We created a prototype hardware setup and have implemented several applications that demonstrate the approach, such as a sharpening light, an edge highlighting light, an accumulation light, and a light with a programmable, nonlinear shading response.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121590055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal","authors":"Neel Joshi, Michael F. Cohen","doi":"10.1109/ICCPHOT.2010.5585096","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585096","url":null,"abstract":"Photographing distant objects is challenging for a number of reasons. Even on a clear day, atmospheric haze often represents the majority of light received by a camera. Unfortunately, dehazing alone cannot create a clean image. The combination of shot noise and quantization noise is exacerbated when the contrast is expanded after haze removal. Dust on the sensor that may be unnoticeable in the original images creates serious artifacts. Multiple images can be averaged to overcome the noise, but the combination of long lenses and small camera motion as well as time varying atmospheric refraction results in large global and local shifts of the images on the sensor. An iconic example of a distant object is Mount Rainier, when viewed from Seattle, which is 90 kilometers away. This paper demonstrates a methodology to pull out a clean image of Mount Rainier from a series of images. Rigid and non-rigid alignment steps brings individual pixels into alignment. A novel local weighted averaging method based on ideas from “lucky imaging” minimizes blur, resampling and alignment errors, as well as effects of sensor dust, to maintain the sharpness of the original pixel grid. Finally, dehazing and contrast expansion results in a sharp clean image.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134123926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High resolution large format tile-scan camera: Design, calibration, and extended depth of field","authors":"M. Ben-Ezra","doi":"10.1109/ICCPHOT.2010.5585095","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2010.5585095","url":null,"abstract":"Emerging applications in virtual museums, cultural heritage, and digital art preservation require very high quality and high resolution imaging of objects with fine structure, shape, and texture. To this end we propose to use large format digital photography. We analyze and resolve some of the unique challenges that are presented by digital large format photography, in particular sensor-lens mismatch and extended depth of field. Based on our analysis we have designed and built a digital tile-scan large format camera capable of acquiring high quality and high resolution images of static scenes. We also developed calibration techniques that are specific to our camera as well as a novel and simple algorithm for focal stack processing of very large images with significant magnification variations.","PeriodicalId":248821,"journal":{"name":"2010 IEEE International Conference on Computational Photography (ICCP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134196147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}