International Conference on Computer Vision Theory and Applications: Latest Publications

Image deconvolution using a stochastic differential equation approach
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-29 · DOI: 10.5220/0002064701570164
X. Descombes, M. Lebellego, E. Zhizhina
Abstract: We consider the problem of image deconvolution. We focus on a Bayesian approach, which consists of maximizing an energy obtained from a Markov random field (MRF) model. MRFs are classically optimized by an MCMC sampler embedded in a simulated annealing scheme. In previous work, we showed that, in the context of image denoising, a diffusion process can outperform the MCMC approach in terms of computational time. Here, we extend this approach to the case of deconvolution. We first study the case where the kernel is known, and then address myopic and blind deconvolution.
Citations: 1
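The scheme the abstract describes (an MRF energy driven toward a minimum by a diffusion process under a decreasing temperature) can be illustrated with a toy 1D annealed Langevin deconvolution. This is a minimal sketch under assumed choices, not the authors' method: the quadratic smoothness prior, the kernel, the step size, and the 1/(1+i) cooling schedule are all illustrative assumptions.

```python
import numpy as np

def energy(x, y, k, lam):
    """MRF-style energy: data term ||k * x - y||^2 plus a quadratic smoothness prior."""
    r = np.convolve(x, k, mode="same") - y
    return np.sum(r ** 2) + lam * np.sum(np.diff(x) ** 2)

def grad_energy(x, y, k, lam):
    r = np.convolve(x, k, mode="same") - y
    g = 2.0 * np.convolve(r, k[::-1], mode="same")  # approximate adjoint of the blur
    gp = np.zeros_like(x)
    d = np.diff(x)
    gp[:-1] -= 2.0 * lam * d  # gradient of the smoothness prior
    gp[1:] += 2.0 * lam * d
    return g + gp

def langevin_deconvolve(y, k, lam=0.1, step=1e-3, n_iter=2000, t0=0.05, seed=0):
    """Annealed Langevin diffusion: gradient descent on the energy plus
    Gaussian noise whose temperature decreases over the iterations."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    for i in range(n_iter):
        temp = t0 / (1.0 + i)  # toy cooling schedule
        x = (x - step * grad_energy(x, y, k, lam)
             + np.sqrt(2.0 * step * temp) * rng.standard_normal(x.size))
    return x
```

With the temperature driven to zero, the iteration reduces to gradient descent on the energy, which is the sense in which the diffusion replaces the MCMC-plus-annealing optimizer.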
Disjunctive Normal Form of Weak Classifiers for Online Learning based Object Tracking
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-28 · DOI: 10.5220/0004240501380146
Zhu Teng, D. Kang
Abstract: The use of a strong classifier formed by combining an ensemble of weak classifiers has been prevalent in tracking, classification, and related tasks. In conventional ensemble tracking, each weak classifier selects a 1D feature, and the strong classifier combines a number of these 1D weak classifiers. In this paper, we present a novel tracking algorithm in which weak classifiers are 2D disjunctive normal form (DNF) combinations of these 1D weak classifiers. The final strong classifier is then a linear combination of weak classifiers and 2D DNF cell classifiers. We treat tracking as a binary classification problem; since a full DNF can express any Boolean function, 2D DNF classifiers can represent more complex distributions than the original weak classifiers, strengthening any original weak classifier. We implement the algorithm and run experiments on several video sequences.
Citations: 7
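The construction the abstract describes (1D threshold weak classifiers, 2D DNF cells formed as conjunctions of pairs of them, and a strong classifier as a linear combination of votes) can be sketched as follows. This is a hypothetical minimal illustration assuming simple sign-threshold weak learners; the paper's learned features and weights are not reproduced here.

```python
def weak(threshold, sign):
    """1D weak classifier: +1 if sign * (feature - threshold) > 0, else -1."""
    return lambda v: 1 if sign * (v - threshold) > 0 else -1

def dnf_cell(w1, i, w2, j):
    """2D DNF cell: conjunction (AND) of two 1D weak classifiers
    applied to feature dimensions i and j."""
    return lambda x: 1 if (w1(x[i]) == 1 and w2(x[j]) == 1) else -1

def strong_classify(x, classifiers, alphas):
    """Strong classifier: sign of a linear combination of
    weak-classifier and DNF-cell votes."""
    score = sum(a * c(x) for c, a in zip(classifiers, alphas))
    return 1 if score >= 0 else -1
```

A single 1D threshold cannot isolate the corner region x[0] > 0.5 and x[1] > 0.5, but one DNF cell can, which is the sense in which DNF cells represent richer distributions than the original weak classifiers.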
Integration of Tracked and Recognized Features for Locally and Globally Robust Structure from Motion
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-26 · DOI: 10.5220/0002341800130022
C. Engels, F. Fraundorfer, D. Nistér
Abstract: We present a novel approach to structure from motion that integrates wide-baseline local features with tracked features to rapidly and robustly reconstruct scenes from image sequences. Rather than assuming that we can create and maintain a consistent, drift-free reconstructed map over an arbitrarily long sequence, we create small, independent submaps generated over short periods of time and attempt to link the submaps together via recognized features. The tracked features provide accurate pose estimates frame to frame, while the recognizable local features stabilize the estimate over larger baselines and provide a context for linking submaps together. As each frame in a submap is inserted, we apply real-time bundle adjustment to maintain high accuracy for the submaps. Recent advances in feature-based object recognition enable us to efficiently localize and link new submaps into a reconstructed map within a localization-and-mapping context. Because our recognition system can operate efficiently on many more features than previous systems, our approach scales easily to larger maps. We provide results showing that accurate structure and motion estimates can be produced from a handheld camera under shaky camera motion.
Citations: 9
Parallel Lossy Compression for HD Images - A New Fast Image Magnification Algorithm for Lossy HD Video Decompression Over Commodity GPU
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-26 · DOI: 10.5220/0001767900160021
L. Bianchi, Riccardo Gatti, L. Lombardi, L. Cinque
Abstract: Today, High Definition (HD) video content is one of the biggest challenges in computer vision. The 1080i standard defines the minimum image resolution required to qualify as HD. At the same time, bandwidth constraints and latency do not allow the transmission of uncompressed, high-resolution images. Lossy compression algorithms are often involved in providing HD video streams because of their high compression rates. The main issue with these methods is that high-frequency components in the image are neither conserved nor reconstructed. Our approach uses a simple downsampling algorithm for compression, but a new, very accurate decompression method capable of restoring high frequencies. Our solution is also highly parallelizable and can be implemented efficiently on a commodity parallel computing architecture such as a GPU, achieving extremely fast performance.
Citations: 0
An active stereoscopic system for iterative 3D surface reconstruction
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-26 · DOI: 10.5220/0002065500780084
Wanjing Li, F. Marzani, Y. Voisin, F. Boochs
Abstract: A common feature of most traditional active 3D surface reconstruction methods is that the object surface is scanned uniformly, so the final 3D model contains a very large number of points; this requires huge storage space and makes transmission and visualization time-consuming. A post-process is then necessary to reduce the data by decimation. In this paper, we present a new active stereoscopic system based on iterative spot-pattern projection. The 3D surface reconstruction process begins with a regular spot pattern, which is then modified progressively according to the object's surface geometry. The adaptation is controlled by estimating the local surface curvature of the currently reconstructed 3D surface. The reconstructed 3D model is optimized: it retains all the morphological information about the object with a minimal number of points. It therefore requires little storage space, and no further mesh simplification is needed.
Citations: 0
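The curvature-driven refinement loop the abstract outlines (sample coarsely, then add samples only where the reconstructed surface bends) can be caricatured in 1D on a height profile. This is purely illustrative and assumes a second-difference curvature proxy and midpoint insertion; the paper's system works on projected 2D spot patterns and real stereo reconstructions.

```python
def curvature(p0, p1, p2):
    """Discrete curvature proxy: magnitude of the second difference."""
    return abs(p2 - 2.0 * p1 + p0)

def adaptive_refine(f, xs, thresh=0.05, rounds=4):
    """Start from a coarse regular sampling of a height profile f and insert
    midpoints only where the local curvature estimate exceeds a threshold."""
    for _ in range(rounds):
        ys = [f(x) for x in xs]
        new_xs = [xs[0]]
        for i in range(1, len(xs) - 1):
            new_xs.append(xs[i])
            # refine after high-curvature samples; flat regions stay coarse
            if curvature(ys[i - 1], ys[i], ys[i + 1]) > thresh:
                new_xs.append(0.5 * (xs[i] + xs[i + 1]))
        new_xs.append(xs[-1])
        xs = sorted(set(new_xs))
    return xs
```

Flat stretches keep the initial sampling density, while curved regions accumulate points, mirroring the "minimal number of points with all morphological information" goal.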
Mutual Calibration of a Camera and a Laser Rangefinder
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-26 · DOI: 10.5220/0002341700330042
V. Caglioti, A. Giusti, D. Migliore
Abstract: We present a novel geometrical method for mutually calibrating a camera and a laser rangefinder by exploiting the image of the laser dot in relation to the rangefinder reading. Our method simultaneously estimates all intrinsic parameters of a pinhole natural camera, its position and orientation w.r.t. the rangefinder axis, and four parameters of a very generic rangefinder model with one rotational degree of freedom. The calibration technique uses data from at least 5 different rangefinder rotations; for each rotation, at least 3 different observations of the laser dot and the respective rangefinder readings are needed. Data collection is performed simply by generically moving the rangefinder-camera system and requires neither a calibration target nor any knowledge of the environment or motion. We investigate the theoretical limits of the technique as well as its practical application; we also show extensions that use more data than strictly necessary or exploit a priori knowledge of some parameters.
Citations: 5
Automated image analysis of noisy microarrays
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-26 · DOI: 10.5220/0002038603710375
Sharon I. Greenblum, M. Krucoff, J. Furst, D. Raicu
Abstract: A recent extension of DNA microarray technology has been its use in DNA fingerprinting. Our research involved developing an algorithm that automatically analyzes microarray images by extracting useful information while ignoring the large amounts of noise. Our data set consisted of slides generated from DNA strands of 24 different cultures of anthrax from isolated locations (all the same strain, differing only in origin-specific neutral mutations); it was provided by Argonne National Laboratories in Illinois. Here we present a fully automated method that classifies these isolates at least as well as the published AMIA (Automated Microarray Image Analysis) Toolbox for MATLAB, with virtually no required user interaction or external information, greatly increasing the efficiency of the image analysis.
Citations: 2
Multiresolution text detection in video frames
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-25 · DOI: 10.5220/0002057301610166
M. Anthimopoulos, B. Gatos, I. Pratikakis
Abstract: This paper proposes an algorithm for detecting artificial text in video frames using edge information. First, an edge map is created using the Canny edge detector. Then, morphological dilation and opening are used to connect the vertical edges and eliminate false alarms. Bounding boxes are determined for every non-zero-valued connected component, constituting the initial candidate text areas. Finally, an edge projection analysis is applied, refining the result and splitting text areas into text lines. The whole algorithm is applied at different resolutions to ensure detection of text of varying sizes. Experimental results show that the method is highly effective and efficient for artificial text detection.
Citations: 25
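A rough single-resolution sketch of the pipeline (edge map, dilation to connect edges, bounding boxes of connected components) might look like the following. It substitutes a plain gradient-magnitude threshold for the Canny detector and omits the opening step, the projection analysis, and the multiresolution loop, so it only illustrates the structure of the method.

```python
import numpy as np
from collections import deque

def edge_map(img, thresh=0.3):
    """Simple gradient-magnitude edge detector standing in for Canny."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return ((gx + gy) > thresh).astype(np.uint8)

def dilate(mask, it=1):
    """4-connected binary dilation implemented with shifted ORs."""
    for _ in range(it):
        m = mask.copy()
        m[1:, :] |= mask[:-1, :]
        m[:-1, :] |= mask[1:, :]
        m[:, 1:] |= mask[:, :-1]
        m[:, :-1] |= mask[:, 1:]
        mask = m
    return mask

def bounding_boxes(mask):
    """Label 4-connected components by BFS and return their bounding boxes."""
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                q = deque([(r, c)])
                seen[r, c] = True
                r0 = r1 = r
                c0 = c1 = c
                while q:
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

The resulting boxes are what the full algorithm would pass on to the edge projection analysis, and the whole pipeline would be rerun at several image scales.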
Depth Inpainting with Tensor Voting using Local Geometry
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-24 · DOI: 10.5220/0003840100220030
Mandar Kulkarni, A. Rajagopalan, G. Rigoll
Abstract: Range images captured by range-scanning devices or reconstructed from optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, and so on. In this paper, we propose a fast and simple algorithm for range-map inpainting using the tensor voting (TV) framework. From a single range image, we gather and analyze geometric information so as to estimate missing depth values. To deal with large missing regions, TV-based segmentation is initially employed as a cue for region filling. Subsequently, we use 3D tensor voting to estimate plane equations and obtain depth estimates from all possible local planes that pass through a missing region. A final pass of tensor voting chooses the best depth estimate for each point in the missing region. We demonstrate the effectiveness of our approach on synthetic as well as real data.
Citations: 7
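The plane-based filling idea (estimate plane equations from known depths and extrapolate them into the hole) can be sketched with an ordinary least-squares plane fit. This toy version fits one global plane instead of the paper's multiple local planes, and it replaces tensor voting entirely, so it is only an illustration of the extrapolation step.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3D points (n x 3)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def fill_from_plane(depth, mask):
    """Fill missing pixels (mask == True) with depths extrapolated from a
    plane fitted to the known pixels."""
    ys, xs = np.nonzero(~mask)
    pts = np.column_stack([xs, ys, depth[~mask]]).astype(float)
    a, b, c = fit_plane(pts)
    out = depth.copy().astype(float)
    my, mx = np.nonzero(mask)
    out[my, mx] = a * mx + b * my + c
    return out
```

In the paper, several candidate planes propose depths for each missing point and a final tensor voting pass selects among them; here a single fitted plane stands in for that whole selection stage.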
Mean Shift Object Tracking using a 4D Kernel and Linear Prediction
International Conference on Computer Vision Theory and Applications · Pub Date: 2016-11-24 · DOI: 10.5220/0003327305880593
Katharina Quast, Christof Kobylko, André Kaup
Abstract: A new mean shift tracker is presented that tracks not only the position but also the size and orientation of an object. By using a four-dimensional kernel, the mean shift iterations are performed in a four-dimensional search space consisting of the image coordinates, a scale dimension, and an orientation dimension. The enhanced mean shift tracker thus tracks the position, size, and orientation of an object simultaneously. To exploit the information about the position, size, and orientation of the object in previous frames, a linear prediction is also integrated into the 4D kernel tracker. The tracking performance is further improved by considering the gradient norm as an additional object feature.
Citations: 0
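The core mean shift iteration, which the paper extends to a 4D search space over position, scale, and orientation, can be illustrated in 1D: repeatedly move the estimate to the mean of the samples inside the kernel window until it settles at a density mode. The flat kernel and the sample values below are illustrative assumptions, not the paper's setup.

```python
def mean_shift(points, start, bandwidth=1.0, n_iter=50):
    """Plain 1D mean shift: move toward the mean of the neighbors that
    fall inside a flat kernel of the given bandwidth."""
    x = start
    for _ in range(n_iter):
        neigh = [p for p in points if abs(p - x) <= bandwidth]
        if not neigh:
            break
        x = sum(neigh) / len(neigh)
    return x
```

The 4D tracker runs the same fixed-point iteration, but each "point" carries image coordinates plus scale and orientation, so position, size, and orientation converge together.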