{"title":"A PCA-Based Binning Approach for Matching to Large SIFT Database","authors":"Geoffrey Treen, A. Whitehead","doi":"10.1109/CRV.2010.9","DOIUrl":"https://doi.org/10.1109/CRV.2010.9","url":null,"abstract":"A method for efficiently finding SIFT correspondences in large keypoint archives by separating a database into bins – using the principal components of the SIFT descriptor vector as the binning criteria – is proposed. This technique builds upon our previous efforts to improve SIFT matching speed in image pairs, and finds correspondences approximately three times faster than FLANN – the approximate nearest-neighbor search library that implements the existing state of the art – for the same recall-precision performance.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133432266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rough Terrain Reconstruction for Rover Motion Planning","authors":"D. Gingras, T. Lamarche, Jean-Luc Bedwani, E. Dupuis","doi":"10.1109/CRV.2010.32","DOIUrl":"https://doi.org/10.1109/CRV.2010.32","url":null,"abstract":"A two-step approach is presented to generate a 3D navigable terrain model for robots operating in natural and uneven environments. First, an unstructured surface is built from a 360-degree field of view LIDAR scan. Second, the reconstructed surface is analyzed and the navigable space is extracted to keep only the safe area as a compressed irregular triangular mesh. The resulting mesh is a compact terrain representation and allows a point-robot assumption for further motion planning tasks. The proposed algorithm has been validated using a large database containing 688 LIDAR scans collected on outdoor rough terrain. The mesh simplification error was evaluated using an approximation of the Hausdorff distance. On average, for a compression level of 93.5%, the error was on the order of 0.5 cm. This terrain modeler was deployed on a rover controlled from the International Space Station (ISS) during the Avatar Explore Space Mission carried out by the Canadian Space Agency in 2009.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134233114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Organ Recognition Using Gabor Filters","authors":"S. Zaboli, Arash Tabibiazar, P. Fieguth","doi":"10.1109/CRV.2010.19","DOIUrl":"https://doi.org/10.1109/CRV.2010.19","url":null,"abstract":"The aim of this research is to investigate the possibility of using medical image information to extract unique features and classify different patients’ organ tissues, such as the prostate, based on concepts related to what is already done in iris recognition. This paper therefore presents a new approach in medical imaging, an organ recognition system, tested on a standard database of grey scale prostate images in order to validate its performance. In this research, features of the prostate image were encoded by convolving the normalized organ region with a 2D Gabor filter and then quantizing its phase in order to produce a bit-wise biometric template. Our experiments show that prostate patterns have a low degree of freedom for use in organ recognition systems and that inter-class and intra-class distributions are highly correlated. However, there are still open issues that need to be addressed for future work on organ recognition, including precise segmentation and intensive computation cost.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"135 21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114146183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Max-Margin Offline Pedestrian Tracking with Multiple Cues","authors":"Bahman Yari Saeed Khanloo, Ferdinand Stefanus, Mani Ranjbar, Ze-Nian Li, N. Saunier, T. Sayed, Greg Mori","doi":"10.1109/CRV.2010.52","DOIUrl":"https://doi.org/10.1109/CRV.2010.52","url":null,"abstract":"In this paper, we introduce MMTrack, a hybrid single pedestrian tracking algorithm that combines the advantages of descriptive and discriminative approaches for tracking. Specifically, we combine the idea of cluster-based appearance modeling and online tracking and employ a max-margin criterion for jointly learning the relative importance of different cues to the system. We believe that the proposed framework for tracking can be of general interest, since one can add or remove components or even use other trackers as features in it, which can lead to more robustness against occlusion, drift and appearance change. Finally, we demonstrate the effectiveness of our method quantitatively on a real-world data set.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114163953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Telepresence across the Ocean","authors":"Ioannis M. Rekleitis, G. Dudek, Yasmina Schoueri, P. Giguère, Junaed Sattar","doi":"10.1109/CRV.2010.41","DOIUrl":"https://doi.org/10.1109/CRV.2010.41","url":null,"abstract":"We describe the development and deployment of a system for long-distance remote observation of robotic operations. The system we have developed is targeted to exploration, multi-participant interaction, and tele-learning. In particular, we used this system with a robot deployed in an underwater environment in order to produce interactive web-casts of scientific material. The system used a combination of robotic and networking technologies and was deployed and evaluated in a context where students in a classroom were able to observe and participate to a limited degree in the operation of a distant robot being used for environmental assessment.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116516712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flame Region Detection Based on Histogram Backprojection","authors":"M. Wirth, Ryan Zaremba","doi":"10.1109/CRV.2010.29","DOIUrl":"https://doi.org/10.1109/CRV.2010.29","url":null,"abstract":"Fire detection using video offers a novel way of detecting fire in spaces where conventional smoke-based fire detectors tend to exhibit high false alarm rates. This paper explores a simple algorithm for flame detection based on the use of a modified histogram backprojection algorithm in YCbCr colour space.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122306860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Accurate Image Processing Algorithm for Detecting FISH Probe Locations Relative to Chromosome Landmarks on DAPI Stained Metaphase Chromosome Images","authors":"S. Akila, J. Samarabandu, J. Knoll, Wahab Khan, P. Rogan","doi":"10.1109/CRV.2010.36","DOIUrl":"https://doi.org/10.1109/CRV.2010.36","url":null,"abstract":"With the increasing use of Fluorescence In Situ Hybridization (FISH) probes as markers for certain genetic sequences, a proper image processing framework is becoming a necessity to accurately detect these probe signal locations in relation to the centerline of the chromosome. Although many image processing techniques have been developed for chromosomal analysis, they fail to provide reliable results in segmenting and extracting the centerline of chromosomes due to the high variability in the shape of chromosomes on microscope slides. In this paper we propose a hybrid algorithm that utilizes Gradient Vector Flow active contours, Discrete Curve Evolution based skeleton pruning and morphological thinning to provide a robust and accurate centerline of the chromosome, which is then used for the measurement of the FISH probe signals. The ability to accurately detect FISH probe locations with respect to the centerline and other landmarks can provide cytogeneticists with detailed information that could lead to a faster diagnosis.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"5 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120818649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Tracking Using Spatialized Multi-level Histogram and Mean Shift","authors":"A. Shabani, M. H. Ghaeminia, S. B. Shokouhi","doi":"10.1109/CRV.2010.27","DOIUrl":"https://doi.org/10.1109/CRV.2010.27","url":null,"abstract":"Sequential object tracking using the mean shift method has become a convenient approach. In this method, an object of interest is represented by a global feature such as a color histogram. The next position of the target is then estimated through constrained histogram matching. The linearization of the histogram matching metric might not work properly, especially when the target undergoes occlusion, there is abrupt motion, or when multiple objects exist with similar global but different local structures. We propose a multi-level global-to-local histogramming approach in which the associated spatial information is also encoded in the object’s representation. Specifically, for human shape/appearance encoding, the global histogram resembles the main root and the local histograms correspond to the body parts. In an experiment on the publicly available CAVIAR dataset, the proposed representation provides an appropriate sequential matching of a human with abrupt motion and partial occlusion. In addition to better localization, the proposed approach handles situations in which the standard mean shift fails.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126931151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binocular Camera Calibration Using Rectification Error","authors":"D. Bradley, W. Heidrich","doi":"10.1109/CRV.2010.31","DOIUrl":"https://doi.org/10.1109/CRV.2010.31","url":null,"abstract":"Reprojection error is a commonly used measure for comparing the quality of different camera calibrations, for example when choosing the best calibration from a set. While this measure is suitable for single cameras, we show that we can improve calibrations in a binocular or multi-camera setup by calibrating the cameras in pairs using a rectification error. The rectification error determines the mismatch in epipolar constraints between a pair of cameras, and it can be used to calibrate binocular camera setups more accurately than using the reprojection error. We provide a quantitative comparison of the reprojection and rectification errors, and also demonstrate our result with examples of binocular stereo reconstruction.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"403 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114932838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thermal Imaging as a Way to Classify Cognitive Workload","authors":"John Stemberger, R. Allison, T. Schnell","doi":"10.1109/CRV.2010.37","DOIUrl":"https://doi.org/10.1109/CRV.2010.37","url":null,"abstract":"As epitomized in DARPA's 'Augmented Cognition' program, next generation avionics suites are envisioned as sensing, inferring, responding to and ultimately enhancing the cognitive state and capabilities of the pilot. Inferring such complex behavioural states from imagery of the face is a challenging task and multimodal approaches have been favoured for robustness. We have developed and evaluated the feasibility of a system for estimation of cognitive workload levels based on analysis of facial skin temperature. The system is based on thermal infrared imaging of the face, head pose estimation, measurement of the temperature variation across regions of the face and an artificial neural network classifier. The technique was evaluated in a controlled laboratory experiment using subjective measures of workload across tasks as a standard. The system was capable of accurately classifying mental workload into high, medium and low workload levels 81% of the time. The suitability of facial thermography for integration into a multimodal augmented cognition sensor suite is discussed.","PeriodicalId":358821,"journal":{"name":"2010 Canadian Conference on Computer and Robot Vision","volume":"68 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113992780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}