2015 IEEE International Conference on Computational Photography (ICCP) — Latest Publications

Contrast-Use Metrics for Tone Mapping Images
Miguel Granados, T. Aydin, J. Tena, Jean-François Lalonde, C. Theobalt
Pub Date: 2015-07-30 · DOI: 10.1109/ICCPHOT.2015.7168364
Abstract: Existing tone mapping operators (TMOs) provide good results in well-lit scenes, but often perform poorly on images in low-light conditions. In these scenes, noise is prevalent and gets amplified by TMOs, as they confuse contrast created by noise with contrast created by the scene. This paper presents a principled approach to produce tone-mapped images with less visible noise. For this purpose, we leverage established models of camera noise and human contrast perception to design two new quality scores: contrast waste and contrast loss, which measure image quality as a function of contrast allocation. To produce tone mappings with less visible noise, we apply these scores in two ways: first, to automatically tune the parameters of existing TMOs to reduce the amount of noise they produce; and second, to propose a new noise-aware tone curve.
Citations: 5
Linking Past to Present: Discovering Style in Two Centuries of Architecture
Stefan Lee, N. Maisonneuve, David J. Crandall, Alexei A. Efros, Josef Sivic
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168368
Abstract: With vast quantities of imagery now available online, researchers have begun to explore whether visual patterns can be discovered automatically. Here we consider the particular domain of architecture, using huge collections of street-level imagery to find visual patterns that correspond to semantic-level architectural elements distinctive to particular time periods. We use this analysis both to date buildings and to discover how functionally similar architectural elements (e.g. windows, doors, balconies) have changed over time due to evolving styles. We validate the methods by combining a large dataset of nearly 150,000 Google Street View images from Paris with a cadastre map to infer an approximate construction date for each facade. Not only could our analysis be used for dating or geo-localizing buildings based on architectural features, but it also could give architects and historians new tools for confirming known theories or even discovering new ones.
Citations: 49
Performance Characterization of Reactive Visual Systems
Subhagato Dutta, Abhishek Chugh, R. Tamburo, Anthony G. Rowe, S. Narasimhan
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168371
Abstract: We consider the class of projector-camera systems that adaptively image and illuminate a dynamic environment. Examples include adaptive front lighting in vehicles, dynamic stage performance lighting, adaptive dynamic range imaging, and volumetric displays. A simulator is developed to explore the design space of such Reactive Visual Systems. Simulations are conducted to characterize system performance by analyzing the effects of end-to-end latency, jitter, and prediction algorithm complexity. Key operating points are identified where systems with simple prediction algorithms can outperform systems with more complex prediction algorithms. Based on the lessons learned from simulations, a low-latency, low-jitter, tight closed-loop reactive visual system is built. For the first time, we measure end-to-end latency, perform jitter analysis, investigate various prediction algorithms and their effect on system performance, compare our system's performance to previous work, and demonstrate dis-illumination of falling snow-like particles and photography of fast-moving scenes.
Citations: 0
Self-Calibrating Imaging Polarimetry
Y. Schechner
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168374
Abstract: To map the polarization state (Stokes vector) of objects in a scene, images are typically acquired using a polarization filter (analyzer) set at different orientations. Usually these orientations are assumed to be all known. Often, however, the angles are unknown: most photographers manually rotate the filter in coarse, undocumented angles. Deviations in motorized stages or remote-sensing equipment are caused by device drift and environmental changes. This work keeps the simplicity of uncontrolled, uncalibrated photography and still extracts accurate polarimetry from the photographs. This is achieved despite both the analyzer angles and the objects' Stokes vectors being unknown. The paper derives modest conditions on the data size that make this task well-posed and even over-constrained. The paper then proposes an estimation algorithm and tests it in real experiments. The algorithm demonstrates high accuracy, speed, simplicity, and robustness to strong noise and other signal disruptions.
Citations: 19
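For background, with known analyzer angles θ_k a linear analyzer measures I_k = ½(S0 + S1 cos 2θ_k + S2 sin 2θ_k), so the linear Stokes components follow from least squares; the paper's contribution is recovering them when the θ_k are unknown. Below is a minimal sketch of the calibrated baseline only — all names are illustrative, not from the paper:

```python
import numpy as np

def estimate_stokes(intensities, angles_rad):
    """Least-squares estimate of (S0, S1, S2) from analyzer measurements.

    Model (Malus' law for a linear analyzer):
        I_k = 0.5 * (S0 + S1*cos(2*theta_k) + S2*sin(2*theta_k))
    Requires measurements at >= 3 distinct angles to be well-posed.
    """
    th = np.asarray(angles_rad, dtype=float)
    A = 0.5 * np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
    s, *_ = np.linalg.lstsq(A, np.asarray(intensities, dtype=float), rcond=None)
    return s  # (S0, S1, S2)

# Synthetic check: simulate a partially polarized pixel and recover it.
true_s = np.array([1.0, 0.3, -0.2])
angles = np.deg2rad([0, 45, 90, 135])
meas = 0.5 * (true_s[0] + true_s[1] * np.cos(2 * angles)
              + true_s[2] * np.sin(2 * angles))
est = estimate_stokes(meas, angles)
print(np.round(est, 3))  # close to [1.0, 0.3, -0.2]
```

With fewer than three distinct angles the system matrix is rank-deficient, which is why the paper's conditions on data size matter.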
Single-Shot Reflectance Measurement from Polarized Color Gradient Illumination
Graham Fyffe, P. Debevec
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168375
Abstract: We present a method for acquiring the per-pixel diffuse albedo, specular albedo, and surface normal maps of a subject at a single instant in time. The method is single-shot, requiring no optical flow, and per-pixel, making no assumptions regarding albedo statistics or surface connectivity. We photograph the subject inside a spherical illumination device emitting a static lighting pattern of vertically polarized RGB color gradients aligned with the XYZ axes and horizontally polarized RGB color gradients inversely aligned with the XYZ axes. We capture simultaneous photographs using one of two possible setups: a single-view setup using a coaxially aligned camera pair with a polarizing beam splitter, and a multi-view stereo setup with different orientations of linear polarizing filters placed on the cameras, enabling high-quality geometry reconstruction. From this lighting we derive full-color diffuse albedo, single-channel specular albedo suitable for dielectric materials, and polarization-preserving surface normals which are free of corruption from subsurface scattering. We provide simple formulae to estimate the diffuse albedo, specular albedo, and surface normal maps in the single-view and multi-view cases, and show error bounds which are small for many common subjects, including faces.
Citations: 22
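The normal recovery at the heart of gradient-illumination setups like this can be illustrated with the classic ratio formula: under a linear gradient along each axis and its inverse, the difference-over-sum of the two observations gives the corresponding normal component. This is a sketch under a purely Lambertian assumption, not the paper's exact formulae (which additionally exploit polarization to separate diffuse and specular reflection):

```python
import numpy as np

def normals_from_gradients(I_pos, I_neg):
    """Per-pixel surface normals from gradient / inverse-gradient image pairs.

    I_pos, I_neg: arrays of shape (3, H, W) -- one image per XYZ gradient
    pattern and its inverse. Under a Lambertian model, each normal component
    is (I+ - I-) / (I+ + I-); the result is renormalized to unit length.
    """
    eps = 1e-8
    n = (I_pos - I_neg) / (I_pos + I_neg + eps)          # (3, H, W)
    n /= np.linalg.norm(n, axis=0, keepdims=True) + eps
    return n

# Synthetic single-pixel check for a surface normal of (0, 0, 1):
albedo = 0.8
true_n = np.array([0.0, 0.0, 1.0])
I_pos = albedo * (1 + true_n) / 2    # gradient pattern (1 + w_i) / 2
I_neg = albedo * (1 - true_n) / 2    # inverse gradient (1 - w_i) / 2
n = normals_from_gradients(I_pos.reshape(3, 1, 1), I_neg.reshape(3, 1, 1))
print(n[:, 0, 0])  # approximately [0, 0, 1]
```

The albedo cancels in the ratio, which is what makes the estimate per-pixel with no assumptions about albedo statistics.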
MC3D: Motion Contrast 3D Scanning
N. Matsuda, O. Cossairt, Mohit Gupta
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168370
Abstract: Structured-light 3D scanning systems are fundamentally constrained by limited sensor bandwidth and light source power, hindering their performance in real-world applications where depth information is essential, such as industrial automation, autonomous transportation, robotic surgery, and entertainment. We present a novel structured-light technique called Motion Contrast 3D scanning (MC3D) that maximizes bandwidth and light source power to avoid performance trade-offs. The technique utilizes motion contrast cameras that sense temporal gradients asynchronously, i.e., independently for each pixel, a property that minimizes redundant sampling. This allows laser-scanning resolution with single-shot speed, even in the presence of strong ambient illumination, significant inter-reflections, and highly reflective surfaces. The proposed approach will allow 3D vision systems to be deployed in challenging and hitherto inaccessible real-world scenarios requiring high performance using limited power and bandwidth.
Citations: 66
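The decoding step behind this kind of system can be sketched simply: as the laser sweeps across the scene, each pixel's event timestamp reveals which projector column illuminated it, and depth follows from standard triangulation. This is an illustrative rectified-geometry sketch with made-up parameters, not the paper's calibration model:

```python
def depth_from_event(timestamp, sweep_start, sweep_duration, n_columns,
                     pixel_col, focal_px, baseline_m):
    """Triangulate depth for one pixel of a motion-contrast sensor.

    The event timestamp maps linearly to the laser's projector column;
    depth = f * b / disparity under rectified stereo geometry.
    """
    t = (timestamp - sweep_start) / sweep_duration    # 0..1 within the sweep
    proj_col = t * (n_columns - 1)                    # illuminating column
    disparity = pixel_col - proj_col
    return focal_px * baseline_m / disparity

# A pixel at column 400 fires halfway through a 640-column sweep:
z = depth_from_event(timestamp=0.5, sweep_start=0.0, sweep_duration=1.0,
                     n_columns=640, pixel_col=400.0,
                     focal_px=800.0, baseline_m=0.1)
print(round(z, 3))  # 80.0 / (400 - 319.5) ~ 0.994 m
```

Because each pixel reports only one event per sweep, the sensor's bandwidth is spent entirely on depth-bearing samples, which is the redundancy argument the abstract makes.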
What Is a Good Day for Outdoor Photometric Stereo?
Yannick Hold-Geoffroy, Jinsong Zhang, P. Gotardo, Jean-François Lalonde
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168379
Abstract: Photometric stereo has been explored extensively in laboratory conditions since its inception. Recently, attempts have been made at applying this technique under natural outdoor lighting. Outdoor photometric stereo presents additional challenges, as one no longer has control over illumination. In this paper, we explore the stability of surface normals reconstructed outdoors. We present a data-driven analysis based on a large database of outdoor HDR environment maps. Given a sequence of object images and corresponding sky maps captured in a single day, we investigate natural factors that impact the uncertainty in the estimated surface normals. Quantitative evidence reveals strong dependencies between expected accuracy and the normal orientation, cloud coverage, and sun elevation. In particular, we show that partially cloudy days yield greater accuracy than sunny days with clear skies; furthermore, high sun elevation, recommended in previous work, is in fact not necessarily optimal when taking more elaborate illumination models into account.
Citations: 11
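The reconstruction whose stability is analyzed here is, at its core, least-squares photometric stereo: given per-image light directions L and intensities I, solve L n = I for the albedo-scaled normal. A minimal directional-light sketch follows (the paper itself uses richer environment-map illumination models, and the names below are illustrative):

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover a unit normal and albedo from k >= 3 directional lights.

    L: (k, 3) unit light directions; I: (k,) observed intensities.
    Lambertian model I = albedo * L @ n; least squares gives b = albedo * n.
    """
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(b)
    return b / albedo, albedo

# Synthetic check with three known light directions:
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n, true_albedo = np.array([0.0, 0.0, 1.0]), 0.6
I = true_albedo * L @ true_n
n, a = photometric_stereo(L, I)
print(np.round(n, 3), round(a, 3))  # ~[0, 0, 1], ~0.6
```

Outdoors, the rows of L come from the sun and sky rather than controlled lamps, so their conditioning — and hence normal stability — depends on cloud cover and sun elevation, exactly the factors the paper quantifies.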
Robust Radiometric Calibration for Dynamic Scenes in the Wild
Abhishek Badki, N. Kalantari, P. Sen
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168373
Abstract: The camera response function (CRF) that maps linear irradiance to pixel intensities must be known for computational imaging applications that match features in images with different exposures. This function is scene dependent and is difficult to estimate in scenes with significant motion. In this paper, we present a novel algorithm for radiometric calibration from multiple exposure images of a dynamic scene. Our approach is based on two key ideas from the literature: (1) intensity mapping functions, which map pixel values in one image to the other without the need for pixel correspondences, and (2) a rank minimization algorithm for radiometric calibration. Although each method has its problems, we show how to combine them in a formulation that leverages their benefits. Our algorithm recovers the CRFs for dynamic scenes better than previous methods, and we show how it can be applied to existing algorithms, such as those for high-dynamic-range imaging, to improve their results.
Citations: 15
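The first ingredient, intensity mapping functions, can be computed without pixel correspondences by matching the cumulative histograms of the two exposures (in the style of Grossberg and Nayar). A hedged sketch of that idea in isolation — illustrative only, not the paper's implementation, which combines it with rank minimization:

```python
import numpy as np

def intensity_mapping(img_a, img_b, levels=256):
    """Estimate tau with img_b ~ tau(img_a) via cumulative-histogram matching.

    Requires the two images to observe (statistically) the same scene
    content; no pixel correspondences are needed.
    """
    ha, _ = np.histogram(img_a, bins=levels, range=(0, levels))
    hb, _ = np.histogram(img_b, bins=levels, range=(0, levels))
    ca = np.cumsum(ha) / ha.sum()
    cb = np.cumsum(hb) / hb.sum()
    # For each level v in image A, find the level in B at the same quantile.
    return np.searchsorted(cb, ca, side="left").clip(0, levels - 1)

# Synthetic check: B is a gamma-brightened copy of A.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(200, 200))
b = (255 * (a / 255.0) ** 0.5).astype(int)
tau = intensity_mapping(a, b)
print(tau[64], int(255 * (64 / 255) ** 0.5))  # the two should be close
```

Because histograms are invariant to pixel rearrangement, the mapping survives scene motion between exposures — the property that makes it attractive for dynamic scenes.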
High Resolution Photography with an RGB-Infrared Camera
Huixuan Tang, Xiaopeng Zhang, Shaojie Zhuo, F. Chen, Kiriakos N. Kutulakos, Liang Shen
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168367
Abstract: A convenient solution to RGB-infrared photography is to extend the basic RGB mosaic with a fourth filter type with high transmittance in the near-infrared band. Unfortunately, applying conventional demosaicing algorithms to RGB-IR sensors is not possible for two reasons. First, the RGB and near-infrared images are differently focused due to different refractive indices for each band. Second, manufacturing constraints introduce crosstalk between the RGB and IR channels. In this paper we propose a novel image formation model for RGB-IR cameras that can be easily calibrated, and propose an efficient algorithm that jointly addresses three restoration problems — channel deblurring, channel separation, and pixel demosaicing — using quadratic image regularizers. We also extend our algorithm to handle more general regularizers and pixel saturation. Experiments show that our method produces sharp, full-resolution images of pure RGB color and IR.
Citations: 32
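The channel-separation subproblem can be illustrated in isolation: if a calibrated 4x4 crosstalk matrix M mixes the true (R, G, B, IR) values into the raw sensor channels, separation amounts to inverting M per pixel. This is a simplified sketch with a hypothetical crosstalk matrix; the paper solves separation jointly with deblurring and demosaicing rather than as a standalone inversion:

```python
import numpy as np

# Hypothetical calibrated crosstalk matrix: each visible channel picks up
# a fraction of IR, and the IR pixel picks up some visible light.
M = np.array([
    [0.9, 0.0, 0.0, 0.2],   # raw R = 0.9 R + 0.2 IR
    [0.0, 0.9, 0.0, 0.2],
    [0.0, 0.0, 0.9, 0.2],
    [0.1, 0.1, 0.1, 0.8],   # raw IR also sees visible light
])

def separate_channels(raw, M):
    """Invert per-pixel crosstalk: raw (H, W, 4) -> true (H, W, 4)."""
    return raw @ np.linalg.inv(M).T

true = np.array([[[0.5, 0.4, 0.3, 0.6]]])   # one pixel, (R, G, B, IR)
raw = true @ M.T                             # simulate crosstalk
rec = separate_channels(raw, M)
print(np.round(rec, 3))  # recovers the original channel values
```

Noise amplification from inverting a poorly conditioned M is one reason a joint formulation with image regularizers, as in the paper, outperforms per-pixel inversion.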
Dual Aperture Photography: Image and Depth from a Mobile Camera
M. Martinello, A. Wajs, Shuxue Quan, Hank Lee, C. Lim, Taekun Woo, Wonho Lee, Sang-Sik Kim, David Lee
Pub Date: 2015-04-24 · DOI: 10.1109/ICCPHOT.2015.7168366
Abstract: Conventional cameras capture images with limited depth of field and no depth information. Camera systems have been proposed that enable additional depth information to be captured with the image, but these systems reduce the resolution of the captured image or reduce the sensitivity of the lens. We demonstrate a camera that captures extended-depth-of-field images together with depth information in every frame, while requiring minimal impact on the physical design of the camera or its performance. In this paper we show results with a camera for mobile devices, but this technology (named dual aperture, to reflect the major change in the camera model) can be applied with even greater effect in larger form-factor cameras.
Citations: 40