{"title":"Image deconvolution using a stochastic differential equation approach","authors":"X. Descombes, M. Lebellego, E. Zhizhina","doi":"10.5220/0002064701570164","DOIUrl":"https://doi.org/10.5220/0002064701570164","url":null,"abstract":"We consider the problem of image deconvolution. We focus on a Bayesian approach which consists of maximizing an energy obtained by Markov Random Field modeling. MRFs are classically optimized by an MCMC sampler embedded into a simulated annealing scheme. In a previous work, we have shown that, in the context of image denoising, a diffusion process can outperform the MCMC approach in terms of computational time. Herein, we extend this approach to the case of deconvolution. We first study the case where the kernel is known. Then, we address myopic and blind deconvolution.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132455263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Disjunctive Normal Form of Weak Classifiers for Online Learning based Object Tracking","authors":"Zhu Teng, D. Kang","doi":"10.5220/0004240501380146","DOIUrl":"https://doi.org/10.5220/0004240501380146","url":null,"abstract":"The use of a strong classifier combined from an ensemble of weak classifiers has been prevalent in tracking, classification, etc. In conventional ensemble tracking, each weak classifier selects a 1D feature, and the strong classifier is a combination of a number of 1D weak classifiers. In this paper, we present a novel tracking algorithm in which weak classifiers are 2D disjunctive normal forms (DNF) of these 1D weak classifiers. The final strong classifier is then a linear combination of weak classifiers and 2D DNF cell classifiers. We treat tracking as a binary classification problem, and a full DNF can express any particular Boolean function; therefore, 2D DNF classifiers have the capacity to represent more complex distributions than the original weak classifiers. This can strengthen any original weak classifier. We implement the algorithm and run experiments on several video sequences.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114676782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of Tracked and Recognized Features for Locally and Globally Robust Structure from Motion","authors":"C. Engels, F. Fraundorfer, D. Nistér","doi":"10.5220/0002341800130022","DOIUrl":"https://doi.org/10.5220/0002341800130022","url":null,"abstract":"We present a novel approach to structure from motion that integrates wide baseline local features with tracked features to rapidly and robustly reconstruct scenes from image sequences. Rather than assume that we can create and maintain a consistent and drift-free reconstructed map over an arbitrarily long sequence, we instead create small, independent submaps generated over short periods of time and attempt to link the submaps together via recognized features. The tracked features provide accurate pose estimates frame to frame, while the recognizable local features stabilize the estimate over larger baselines and provide a context for linking submaps together. As each frame in the submap is inserted, we apply real-time bundle adjustment to maintain a high accuracy for the submaps. Recent advances in feature-based object recognition enable us to efficiently localize and link new submaps into a reconstructed map within a localization and mapping context. Because our recognition system can operate efficiently on many more features than previous systems, our approach easily scales to larger maps. We provide results showing that accurate structure and motion estimates can be produced from a handheld camera under shaky camera motion.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133353053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel Lossy Compression for HD Images - A New Fast Image Magnification Algorithm for Lossy HD Video Decompression Over Commodity GPU","authors":"L. Bianchi, Riccardo Gatti, L. Lombardi, L. Cinque","doi":"10.5220/0001767900160021","DOIUrl":"https://doi.org/10.5220/0001767900160021","url":null,"abstract":"Today, High Definition (HD) video content is one of the biggest challenges in computer vision. The 1080i standard defines the minimum image resolution required to be classified as HD. At the same time, bandwidth and latency constraints do not allow the transmission of uncompressed, high resolution images. Lossy compression algorithms are often involved in the process of providing HD video streams because of their high compression rates. The main issue with these methods is that high-frequency components in the image are neither preserved nor reconstructed. Our approach uses a simple downsampling algorithm for compression, but a new, very accurate method for decompression that is capable of restoring high frequencies. Our solution is also highly parallelizable and can be efficiently implemented on a commodity parallel computing architecture, such as a GPU, achieving extremely fast performance.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130248621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An active stereoscopic system for iterative 3D surface reconstruction","authors":"Wanjing Li, F. Marzani, Y. Voisin, F. Boochs","doi":"10.5220/0002065500780084","DOIUrl":"https://doi.org/10.5220/0002065500780084","url":null,"abstract":"A common feature of most traditional active 3D surface reconstruction methods is that the object surface is scanned uniformly, so that the final 3D model contains a very large number of points, which requires huge storage space and makes transmission and visualization time-consuming. A post-process is then necessary to reduce the data by decimation. In this paper, we present a new active stereoscopic system based on iterative spot pattern projection. The 3D surface reconstruction process begins with a regular spot pattern, and the pattern is then modified progressively according to the object's surface geometry. The adaptation is controlled by the estimation of the local surface curvature of the current reconstructed 3D surface. The reconstructed 3D model is optimized: it retains all the morphological information about the object with a minimal number of points. Therefore, it requires little storage space, and no further mesh simplification is needed.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126937309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mutual Calibration of a Camera and a Laser Rangefinder","authors":"V. Caglioti, A. Giusti, D. Migliore","doi":"10.5220/0002341700330042","DOIUrl":"https://doi.org/10.5220/0002341700330042","url":null,"abstract":"We present a novel geometrical method for mutually calibrating a camera and a laser rangefinder by exploiting the image of the laser dot in relation to the rangefinder reading. Our method simultaneously estimates all intrinsic parameters of a natural pinhole camera, its position and orientation w.r.t. the rangefinder axis, and four parameters of a very generic rangefinder model with one rotational degree of freedom. The calibration technique uses data from at least 5 different rangefinder rotations: for each rotation, at least 3 different observations of the laser dot and the respective rangefinder readings are needed. Data collection is simply performed by generically moving the rangefinder-camera system, and requires neither a calibration target nor any knowledge of the environment or motion. We investigate the theoretical limits of the technique as well as its practical application; we also show extensions for using more data than strictly necessary or exploiting a priori knowledge of some parameters.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125209957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated image analysis of noisy microarrays","authors":"Sharon I. Greenblum, M. Krucoff, J. Furst, D. Raicu","doi":"10.5220/0002038603710375","DOIUrl":"https://doi.org/10.5220/0002038603710375","url":null,"abstract":"A recent extension of DNA microarray technology has been its use in DNA fingerprinting. Our research involved developing an algorithm that automatically analyzes microarray images by extracting useful information while ignoring the large amounts of noise. Our data set consisted of slides generated from DNA strands of 24 different cultures of anthrax from isolated locations (all the same strain, differing only in origin-specific neutral mutations). The data set was provided by Argonne National Laboratory in Illinois. Here we present a fully automated method that classifies these isolates at least as well as the published AMIA (Automated Microarray Image Analysis) Toolbox for MATLAB with virtually no required user interaction or external information, greatly increasing the efficiency of the image analysis.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133956192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiresolution text detection in video frames","authors":"M. Anthimopoulos, B. Gatos, I. Pratikakis","doi":"10.5220/0002057301610166","DOIUrl":"https://doi.org/10.5220/0002057301610166","url":null,"abstract":"This paper proposes an algorithm for detecting artificial text in video frames using edge information. First, an edge map is created using the Canny edge detector. Then, morphological dilation and opening are used in order to connect the vertical edges and eliminate false alarms. Bounding boxes are determined for every non-zero valued connected component, constituting the initial candidate text areas. Finally, an edge projection analysis is applied, refining the result and splitting text areas into text lines. The whole algorithm is applied at different resolutions to ensure text detection across size variations. Experimental results show that the method is highly effective and efficient for artificial text detection.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122390502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth Inpainting with Tensor Voting using Local Geometry","authors":"Mandar Kulkarni, A. Rajagopalan, G. Rigoll","doi":"10.5220/0003840100220030","DOIUrl":"https://doi.org/10.5220/0003840100220030","url":null,"abstract":"Range images captured by range scanning devices or reconstructed from optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and simple algorithm for range map inpainting using the Tensor Voting (TV) framework. From a single range image, we gather and analyze geometric information so as to estimate missing depth values. To deal with large missing regions, TV-based segmentation is initially employed as a cue for region filling. Subsequently, we use 3D tensor voting to estimate the equations of the different planes and obtain depth estimates from all possible local planes that pass through a missing region. A final pass of tensor voting is performed to choose the best depth estimate for each point in the missing region. We demonstrate the effectiveness of our approach on synthetic as well as real data.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117062603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mean Shift Object Tracking using a 4D Kernel and Linear Prediction","authors":"Katharina Quast, Christof Kobylko, André Kaup","doi":"10.5220/0003327305880593","DOIUrl":"https://doi.org/10.5220/0003327305880593","url":null,"abstract":"A new mean shift tracker is presented which tracks not only the position but also the size and orientation of an object. By using a four-dimensional kernel, the mean shift iterations are performed in a four-dimensional search space consisting of the image coordinates, a scale dimension, and an orientation dimension. Thus, the enhanced mean shift tracker tracks the position, size, and orientation of an object simultaneously. To improve tracking by using information about the position, size, and orientation of the object in previous frames, a linear prediction is also integrated into the 4D kernel tracker. The tracking performance is further improved by considering the gradient norm as an additional object feature.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121228536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}