{"title":"In situ target-less calibration of turbid media","authors":"Ori Spier, T. Treibitz, Guy Gilboa","doi":"10.1109/ICCPHOT.2017.7951491","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2017.7951491","url":null,"abstract":"The color of an object imaged in a turbid medium varies with distance and medium properties, making color an unstable source of information. Assuming 3D scene structure has become relatively easy to estimate, the main challenge in color recovery is calibrating medium properties in situ, at the time of acquisition. Existing attenuation calibration methods use either color charts, external hardware, or multiple images of an object. Here we show none of these is needed for calibration. We suggest a method for estimating the medium properties (both attenuation and scattering) using only images of backscattered light from the system's light sources. This is advantageous in turbid media where the object signal is noisy, and also alleviates the need for correspondence matching, which can be difficult in high turbidity. We demonstrate the advantages of our method through simulations and in a real-life experiment at sea.","PeriodicalId":276755,"journal":{"name":"2017 IEEE International Conference on Computational Photography (ICCP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128177481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turbulence-induced 2D correlated image distortion","authors":"A. Schwartzman, Marina Alterman, Rotem Zamir, Y. Schechner","doi":"10.1109/ICCPHOT.2017.7951490","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2017.7951490","url":null,"abstract":"Due to atmospheric turbulence, light randomly refracts in three dimensions (3D), eventually entering a camera at a perturbed angle. Each viewed object point thus has a distorted projection in a two-dimensional (2D) image. Simulating 3D random refraction for all viewed points via complex simulated 3D random turbulence is computationally expensive. We derive an efficient way to render 2D image distortions, consistent with turbulence. Our approach bypasses 3D numerical calculations altogether. We directly create 2D random physics-based distortion vector fields, where correlations are derived in closed form from turbulence theory. The correlations are nontrivial: they depend on the perturbation directions relative to the orientation of all object-pairs, simultaneously. Hence, we develop a theory characterizing and rendering such a distortion field. The theory is turned into a few simple 2D operations, which render images based on camera and atmospheric properties.","PeriodicalId":276755,"journal":{"name":"2017 IEEE International Conference on Computational Photography (ICCP)","volume":"332 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115976720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time panoramic tracking for event cameras","authors":"Christian Reinbacher, Gottfried Munda, T. Pock","doi":"10.1109/ICCPHOT.2017.7951488","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2017.7951488","url":null,"abstract":"Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, those cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to state-of-the-art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset [18] and self-recorded sequences.","PeriodicalId":276755,"journal":{"name":"2017 IEEE International Conference on Computational Photography (ICCP)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122103468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reading between the pixels: Photographic steganography for camera display messaging","authors":"Eric Wengrowski, Kristin J. Dana, M. Gruteser, N. Mandayam","doi":"10.1109/ICCPHOT.2017.7951487","DOIUrl":"https://doi.org/10.1109/ICCPHOT.2017.7951487","url":null,"abstract":"We exploit human color metamers to send light-modulated messages decipherable by cameras, but camouflaged to human vision. These time-varying messages are concealed in ordinary images and videos. Unlike previous methods which rely on visually obtrusive intensity modulation, embedding with color reduces visible artifacts. The mismatch in human and camera spectral sensitivity creates a unique opportunity for hidden messaging. Each color pixel in an electronic display image is modified by shifting the base color along a particular color gradient. The challenge is to find the set of color gradients that maximizes camera response and minimizes human response. Our approach does not require a priori measurement of these sensitivity curves. We learn an ellipsoidal partitioning of the 6-dimensional space of base colors and color gradients. This partitioning creates metamer sets defined by the base color of each display pixel and the corresponding color gradient for message encoding. We sample from the learned metamer sets to find optimal color steps for arbitrary base colors. Ordinary displays and cameras are used, so there is no need for high speed cameras or displays. Our primary contribution is a method to map pixels in an arbitrary image to metamer pairs for steganographic camera-display messaging.","PeriodicalId":276755,"journal":{"name":"2017 IEEE International Conference on Computational Photography (ICCP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127013321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}