{"title":"The Effect of Colour Space on Image Sharpening Algorithms","authors":"M. Wirth, D. Nikitenko","doi":"10.1109/CRV.2010.17","DOIUrl":"https://doi.org/10.1109/CRV.2010.17","url":null,"abstract":"The processing of colour images to improve sharpness is nearly always been realized in RGB colour space. This paper explores the effects of using different colour spaces on the application of image sharpening algorithms. Part of the goal is to determine which colour space provides a result which does not differ immeasurably from the original with respect to chromaticity. Unsharp masking, and fuzzy morphological sharpening will be tested in RGB, YIQ and CIELab colour spaces.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"164 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120867066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retina Vessel Detection Using Fuzzy Ant Colony Algorithm","authors":"S. Hooshyar, R. Khayati","doi":"10.1109/CRV.2010.38","DOIUrl":"https://doi.org/10.1109/CRV.2010.38","url":null,"abstract":"Vessel extraction in retina images is a primary and important step in studying diseases including vasculature changes. In this paper, a fuzzy clustering method based on Ant Colony Algorithm, inspired by food-searching natural behavior of ants, is described. Features of color retina images are extracted by eigenvalues analysis of Hessian matrix and Gabor filter bank. Artificial ants in the image use these features for searching and clustering processes. Experiments and results of proposed algorithm show its good performance in vessel extraction. This algorithm is tested on DRIVE database and its results are compared with other works using the same database. The accuracy of our method is 0.933 versus 0.947 for a second observer.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132813253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Simple but Effective Approach to Video Copy Detection","authors":"G. Roth, R. Laganière, P. Lambert, Ilias Lakhmiri, Tarik Janati","doi":"10.1109/CRV.2010.15","DOIUrl":"https://doi.org/10.1109/CRV.2010.15","url":null,"abstract":"Video copy detection is an important task with many applications, especially since detecting copies is an alternative to watermarking. In this paper we describe a simple, but efficient approach that is easy to parallelize, works well, and has low storage requirements. We represent each video frame by a count of the number of SURF interest points in each of 4 by 4 quadrants, a total of 16 bytes per frame. This representation is tolerant of the typical transformations that exist in video, but is still computationally efficient and compact. The approach was tested on the TRECVID copy detection task, for which approximately 15 different groups submitted a solution. Performance was among the best for localization, and was approximately equal to the median with regards to the false positive/negative rate. However, performance varies significantly with the video transformation. We believe that the change in gamma, and decrease in video quality transformations are the most common in practice. For these transformations our method works well.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128107770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Window-Based Range Flow with an Isometry Constraint","authors":"Ting Yu, J. Lang","doi":"10.1109/CRV.2010.50","DOIUrl":"https://doi.org/10.1109/CRV.2010.50","url":null,"abstract":"This paper proposes a simple window-based range flow method which uses isometry of the observed surface as its primary matching constraint. The method uses feature points as anchoring references of the surface deformation. Given a set of matched features no other intensity information is used and hence the method can tolerate intensity changes over time. The range-flow equation is only required for a final verification step making the method robust to poor quality range images. This allows us to use the popular Point Grey Research Bumblebee 2 stereo-head to acquire our range data. The approach is shown to work well on two example scenes which capture non-rigid isometric and general deformations. The paper also presents experiments demonstrating the stability of the geodesic approximation employed in the isometry-based matching when the 3D point clouds are sparse.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131832409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Matching Images Using Invariant Level-line Primitives under Projective Transformation","authors":"Yasser Almehio, S. Bouchafa","doi":"10.1109/CRV.2010.24","DOIUrl":"https://doi.org/10.1109/CRV.2010.24","url":null,"abstract":"This paper deals with a new registration method based on a specific level-line grouping. Because of its contrast-change invariance, our approach is an appropriate method for matching outdoor image sequences. Moreover, it does not require any estimation of the unknown transformation between images and handle well the critical cases that usually lead to pairing ambiguities, such as repetitive patterns in the images. This study focuses on invariants primitive construction under projective transformation, using level-lines. The registration by itself is performed through an efficient level-line cumulative matching based on a multi-stage primitive election procedure. Each stage provides a coarse estimate of the transformation that the next stage gets to refine. Experiments on real outdoor road scene show the accuracy and efficiency of this approach, using several image sequences covering different pertaining cases with different type of motion.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127374201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Filter Parameter Selection Using Measures of Noiseness","authors":"Ajit Rajwade, Anand Rangarajan, Arunava Banerjee","doi":"10.1109/CRV.2010.18","DOIUrl":"https://doi.org/10.1109/CRV.2010.18","url":null,"abstract":"Despite the vast body of literature on image denoising, relatively little work has been done in the area of automatically choosing the filter parameters that yield optimal filter performance. The choice of these parameters is crucial for the performance of any filter. In the literature, some independence-based criteria have been proposed, which measure the degree of independence between the denoised image and the residual image (defined as the difference between the noisy image and the denoised one). We contribute to these criteria and point out an important deficiency inherent in all of them. We also propose a new criterion which quantifies the inherent ‘noiseness’ of the residual image without referring to the denoised image, starting with the assumption of an additive and i.i.d. noise model, with a loose lower bound on the noise variance. Several empirical results are demonstrated on two well-known algorithms: NL-means and total variation, on a database of 13 images at six different noise levels, and for three types of noise distributions.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131593044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Feature Selection and Learning for Image Segmentation","authors":"M. S. Allili, D. Ziou, N. Bouguila, S. Boutemedjet","doi":"10.1109/CRV.2010.44","DOIUrl":"https://doi.org/10.1109/CRV.2010.44","url":null,"abstract":"In this paper we investigate the integration of feature selection in segmentation through an unsupervised learning approach. We propose a clustering algorithm that efficiently mitigates image under/over-segmentation, by combining generalized Gaussian mixture modeling and feature selection. The algorithm is based on generalized Gaussian mixture modeling which is less prone to region number over-estimation in case of noisy and heavy-tailed image distributions. On the other hand, our feature selection mechanism allows to automatically discard uninformative features, which leads to better discrimination and localization of regions in high-dimensional spaces. Experimental results on a large database of real-world images showed us the effectiveness of the proposed approach.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125709174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Virtual Viewpoint Generation on the GPU for Scene Navigation","authors":"Shanat Kolhatkar, R. Laganière","doi":"10.1109/CRV.2010.14","DOIUrl":"https://doi.org/10.1109/CRV.2010.14","url":null,"abstract":"In this paper we present a method for achieving real-time view interpolation in a virtual navigation application that uses a collection of pre-captured panoramic views as a representation of the environment. In this context, viewpoint interpolation is essential to achieve smooth and realistic viewpoint transition while the user is moving from one panorama to another. In this proposed approach, view interpolation is achieved by first computing the optical flow field between a pair of adjacent panoramas. This flow field can then be used by the view morphing algorithm to generate, on-the-fly, virtual viewpoints in-between existing views. Realistic interpolation is obtained by taking into account both scene geometry and color information. To achieve real-time viewpoint interpolation, a GPU implementation of the viewpoint interpolation algorithm has been developed. We ran our algorithm on multiple interior and exterior scenes and we were able to produce smooth and realistic viewpoint transitions by generating virtual views at a rate of more than 300 panoramas per second.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128788323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quasi-Random Scale Space Approach to Robust Keypoint Extraction in High-Noise Environments","authors":"A. Wong, A. Mishra, David A Clausi, P. Fieguth","doi":"10.1109/CRV.2010.11","DOIUrl":"https://doi.org/10.1109/CRV.2010.11","url":null,"abstract":"A novel multi-scale approach is presented for the purpose of robust keypoint extraction in high-noise environments. A multi-scale representation of the noisy scene is computed using quasi-random scale space theory. A gradient second-order moment analysis is employed at each quasi random scale to identify initial keypoint candidates. Final keypoints and their characteristic scales are selected based on the local Hessian trace extrema over all quasi-random scales. The proposed keypoint extraction method is designed to reduce noise sensitivity by taking advantage of the structural localization and noise robustness gained through the use of quasi-random scale space theory. Experimental results using scenes under different high noise conditions, as well as real synthetic aperture sonar imagery, show the effectiveness of the proposed method for noise robust keypoint extraction when compared to existing keypoint extraction techniques.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126517099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Global Context Descriptors for SURF and MSER Feature Descriptors","authors":"Gail Carmichael, R. Laganière, P. Bose","doi":"10.1109/CRV.2010.47","DOIUrl":"https://doi.org/10.1109/CRV.2010.47","url":null,"abstract":"Global context descriptors are vectors of additional information appended to an existing descriptor, and are computed as a log-polar histogram of nearby curvature values. These have been proposed in the past to make Scale Invariant Feature Transform (SIFT) matching more robust. This additional information improved matching results especially for images with repetitive features. We propose a similar global context descriptor for Speeded Up Robust Features (SURFs) and Maximally Stable Extremal Regions (MSERs). Our experiments show some improvement for SURFs when using the global context, and much improvement for MSER.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130053803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}