{"title":"A novel scheme for fingerprint identification","authors":"Tsong-Liang Huang, Che-Wei Liu, Jui-Peng Lin, Chien-Ying Li, Ting-Yi Kuo","doi":"10.1109/CRV.2005.10","DOIUrl":"https://doi.org/10.1109/CRV.2005.10","url":null,"abstract":"Fingerprint recognition is one of the most reliable and popular biometric recognition methods in these days. In this paper, we describe a fingerprint recognition system consisting of three main steps - fingerprint image preprocessing, feature extraction and feature matching. The image preprocessing step enhances fingerprint image to obtain binarized ridges, which are needed for feature extraction. Feature points which are also called minutiae such as ridge endings, ridge bifurcations are then extracted, followed by false minutiae elimination. The novel matching algorithm is proposed, which is a fast and robust minutiae-based method.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115023620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Upper body pose estimation from stereo and hand-face tracking","authors":"J. Mulligan","doi":"10.1109/CRV.2005.83","DOIUrl":"https://doi.org/10.1109/CRV.2005.83","url":null,"abstract":"In applications such as immersive telepresence we want to extract high quality 3D models of collaborators in real time from multiview image sequences. One way to improve the quality of stereo or visual hull based models is to estimate the kinematic pose of the user first and then constrain 3D reconstruction accordingly. To serve as a preprocessing step such pose extraction must be very fast, precluding the usual generate and test techniques. We examine a method based on psychophysical evidence that known relative hand position can be used to directly compute the pose of the arm. First we explore a number of possible models for this relationship using motion capture data. We then examine how reconstruction of face and hand position as well as a patch on the torso, allow us to exploit these simple direct calculations to estimate the pose of a user in a desktop collaboration environment.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129995969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variational Principles for Diffusion Weighted MRI Restoration and Segmentation","authors":"B. Vemuri","doi":"10.1109/CRV.2005.85","DOIUrl":"https://doi.org/10.1109/CRV.2005.85","url":null,"abstract":"Diffusion tensor MRI is a relatively new MR image modality from which anisotropy of water diffusion can be inferred quantitatively, thus providing a method to study the tissue micro-structure e.g., white matter connectivity in the brain in vivo. Diffusion weighted echo intensity image Sl and the diffusion tensor D are related through the Stejskal-Tanner equation 3","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130286221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Histogram equalization using neighborhood metrics","authors":"M. Eramian, D. Mould","doi":"10.1109/CRV.2005.47","DOIUrl":"https://doi.org/10.1109/CRV.2005.47","url":null,"abstract":"We present a refinement of histogram equalization which uses both global and local information to remap the image grey levels. Local image properties, which we generally call neighborhood metrics, are used to subdivide histogram bins that would be otherwise indivisible using classical histogram equalization (HE). Choice of the metric influences how the bins are subdivided, affording the opportunity for additional contrast enhancement. We present experimental results for two specific neighborhood metrics and compare the results to classical histogram equalization and local histogram equalization (LHE). We find that our methods can provide an improvement in contrast enhancement versus HE, while avoiding undesirable over-enhancement that can occur with LHE and other methods. Moreover, the improvement over HE is achieved with only a small increase in computation time.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using vanishing points to correct camera rotation in images","authors":"Andrew C. Gallagher","doi":"10.1109/CRV.2005.84","DOIUrl":"https://doi.org/10.1109/CRV.2005.84","url":null,"abstract":"Vanishing points provide valuable information regarding the camera model used to capture an image. To explore the relationship between classes of camera models and the location of vanishing points, typical consumer photographic behavior is considered. Based on these findings, an algorithm is presented that can automatically remove the tilted appearance of an image captured with a camera rotated about the principal axis. The algorithm includes detecting vanishing points in an image, determining if any vanishing points are associated with vertical lines in the scene, computing the angle of rotation, and rotating the image. Results of the algorithm are shown for a set of images. The algorithm performs well and produces pleasing images from original images that contain undesirable levels of tilt.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114134346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of multi-part objects by top-down perceptual grouping","authors":"Vénérée Randrianarisoa, J. Bernier, R. Bergevin","doi":"10.1109/CRV.2005.34","DOIUrl":"https://doi.org/10.1109/CRV.2005.34","url":null,"abstract":"In this paper, a top-down approach based on perceptual grouping is proposed for multi-part objects detection. The abstract conceptual category of multi-part objects is formalized by a set of global criteria. These criteria will enable the evaluation of the segmentation quality in order to determine if the whole grouping is perceptually significant and if it has a good perceptual shape. A new cognitive vision methodology, called SAFE (subjectivity and formalism explicitly), is presented. Its goal is to help identify the proper global criteria and to validate the judgment derived from formal calculations of these criteria by human judgment.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128433068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive panoramic imagery","authors":"M. Fiala","doi":"10.1109/CRV.2005.49","DOIUrl":"https://doi.org/10.1109/CRV.2005.49","url":null,"abstract":"An immersive experience where one views imagery captured from a panoramic camera with a head mounted display (HMD) or \"cave\" display system is an example of image processing that would be appreciated by the masses. In such a system, a user can see what would be seen from several viewpoints in a natural way by simply moving their head around. Virtual perspective views would be generated from recorded imagery collected by a panoramic camera from a set of locations. With image based rendering techniques, the user could also see views from viewpoints different from where the panoramic camera was placed. This paper proposes a simple framework for designing such systems based on image cubes which has the benefits of fast low latency operation and an efficient way to create intermediate images. The image cube method de-couples the image creation from the output image generation for the low latency required for realistic HMD immersive viewing. A fast algorithm for generating intermediate views along linear paths between capture sites based on pre-calculated disparity maps is also presented. A prototype system is shown.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122020304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Background subtraction using self-identifying patterns","authors":"M. Fiala, Chang Shu","doi":"10.1109/CRV.2005.23","DOIUrl":"https://doi.org/10.1109/CRV.2005.23","url":null,"abstract":"Separation of object foreground from background is used in 3D model creation and matting in video production. Robust background subtraction techniques that function in uncontrolled lighting environments would be useful for many applications. We introduce a method using bi-tonal self-identifying patterns as a background that can be used to recognize the foreground object despite the background intensity and colour being non-uniform across the image. Detected pattern points are used to sample the black and white colour levels in several image points. A surface is fitted to both the black and white colour levels allowing an estimated background image to be created. The background image is then subtracted from the original image to isolate the foreground objects. The method of using self-identifying patterns also provides the camera-pattern pose for use in 3D model creation. A visual hull 3D model can be created by identifying the outline of an object from several known camera poses. Examples of this method applied to both matting and 3D model creation are given. Experimental results are shown.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127365105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Auto-correlation wavelet support vector machine and its applications to regression","authors":"Guangyi Chen, G. Dudek","doi":"10.1109/CRV.2005.19","DOIUrl":"https://doi.org/10.1109/CRV.2005.19","url":null,"abstract":"A support vector machine (SVM) with the autocorrelation of compactly supported wavelet as kernel is proposed in this paper. It is proved that this kernel is an admissible support vector kernel. The main advantage of the auto-correlation of a compactly supported wavelet is that it satisfies the translation invariant property, which is very important for signal processing. Also, we can choose a better wavelet from different choices of wavelet families for our auto-correlation wavelet kernel. Experiments on signal regression show that this method is better than the existing SVM function regression with the scalar wavelet kernel, the Gaussian kernel, and the exponential radial basis function kernel It can be easily extended to other applications such as pattern recognition by using this newly developed auto-correlation wavelet SVM.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122932087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dry granular flows need special tools","authors":"A. Biancardi, P. Ghilardi, M. Pagliardi","doi":"10.1109/CRV.2005.37","DOIUrl":"https://doi.org/10.1109/CRV.2005.37","url":null,"abstract":"Owing to their destructive power, debris flow are the subjects of extensive investigations aiming at characterizing and modeling their inner nature. Laboratory experiments on simulated dam breaks are one important way to trigger and study debris-flow waves. High-speed recordings of granular flows arising from a dam-break-like event can be processed to extract useful information about the flow dynamics. Although the techniques for measuring of velocities in liquid and gas flows are well established, they cannot be used directly on the flows arising from a dam break: this paper presents two required additions that were made to compute such flows. All the resulting quantities are being used to tune a mathematical model describing the observed flows.","PeriodicalId":307318,"journal":{"name":"The 2nd Canadian Conference on Computer and Robot Vision (CRV'05)","volume":"62 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114421397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}