{"title":"Subspace Methods with Globally/Locally Weighted Correlation Matrix","authors":"Yukihiko Yamashita, T. Wakahara","doi":"10.1109/ICPR.2010.1035","DOIUrl":"https://doi.org/10.1109/ICPR.2010.1035","url":null,"abstract":"The discriminant function of a subspace method is provided by using correlation matrices that reflect the averaged feature of a category. As a result, it will not work well on unknown input patterns that are far from the average. To address this problem, we propose two kinds of weighted correlation matrices for subspace methods. The globally weighted correlation matrix (GWCM) attaches importance to training patterns that are far from the average. Then, it can reflect the distribution of patterns around the category boundary more precisely. The computational cost of a subspace method using GWCMs is almost the same as that using ordinary correlation matrices. The locally weighted correlation matrix (LWCM) attaches importance to training patterns that arenear to an input pattern to be classified. Then, it can reflect the distribution of training patterns around the input pattern in more detail. The computational cost of a subspace method with LWCM at the recognition stage does not depend on the number of training patterns, while those of the conventional adaptive local and the nonlinear subspace methods do. We show the advantages of the proposed methods by experiments made on the MNIST database of handwritten digits.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116956631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Approach to Detection and Identification of Commercial Films by Temporal Occurrence Pattern","authors":"Narongsak Putpuek, N. Cooharojananone, C. Lursinsap, S. Satoh","doi":"10.1109/ICPR.2010.804","DOIUrl":"https://doi.org/10.1109/ICPR.2010.804","url":null,"abstract":"In this paper, we propose a method to detect and identify commercial films from broadcast videos by using Temporal Occurrence Pattern (TOP). Our method uses the characteristic of broadcast videos in Japan that each individual commercial film appears multiple times in broadcast stream and typically has the same duration (e.g., 15 seconds). Using this characteristic, the method can detect as well as identify individual commercial films within given video archive. Based on simple signature (global feature) for each frame image, the method first puts all frames into numbers of buckets where each bucket contains frames having the same signature, and thus they appear the same. For each bucket, TOP as a binary sequence representing the occurrence time within video archive is then generated. All buckets are then clustered using simple hierarchical clustering with similarity between TOPs allowing possible temporal offset. This clustering stage can stitch up all frames for each commercial film and identify multiple occurrence of the same commercial film at the same time. We tested our method using actual broadcast video archive and confirmed good performance in detecting and identifying commercial films.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116987620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting Gray Codes for Red Eyes Removal","authors":"S. Battiato, G. Farinella, M. Guarnera, G. Messina, D. Ravì","doi":"10.1109/ICPR.2010.1024","DOIUrl":"https://doi.org/10.1109/ICPR.2010.1024","url":null,"abstract":"Since the large diffusion of digital camera and mobile devices with embedded camera and flashgun, the red-eyes artifacts have de-facto become a critical problem. The technique herein described makes use of three main steps to identify and remove red-eyes. First, red eyes candidates are extracted from the input image by using an image filtering pipeline. A set of classifiers is then learned on gray code features extracted in the clustered patches space, and hence employed to distinguish between eyes and non-eyes patches. Once red-eyes are detected, artifacts are removed through desaturation and brightness reduction. The proposed method has been tested on large dataset of images achieving effective results in terms of hit rates maximization, false positives reduction and quality measure.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117178877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Entropy of Feature Point-Based Retina Templates","authors":"J. Jeffers, A. Arakala, K. Horadam","doi":"10.1109/ICPR.2010.61","DOIUrl":"https://doi.org/10.1109/ICPR.2010.61","url":null,"abstract":"This paper studies the amount of distinctive information contained in a privacy protecting and compact template of a retinal image created from the locations of crossings and bifurcations in the choroidal vasculature, otherwise called feature points. Using a training set of 20 different retina, we build a template generator that simulates one million imposter comparisons and computes the number of imposter retina comparisons that successfully matched at various thresholds. The template entropy thus computed was used to validate a theoretical model of imposter comparisons. The simulator and the model both estimate that 20 bits of entropy can be achieved by the feature point-based template. Our results reveal the distinctiveness of feature point-based retinal templates, hence establishing their potential as a biometric identifier for high security and memory intensive applications.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kernel Uncorrelated Adjacent-class Discriminant Analysis","authors":"Xiaoyuan Jing, Sheng Li, Yong-Fang Yao, Lu-Sha Bian, Jing-yu Yang","doi":"10.1109/ICPR.2010.178","DOIUrl":"https://doi.org/10.1109/ICPR.2010.178","url":null,"abstract":"In this paper, a kernel uncorrelated adjacent-class discriminant analysis (KUADA) approach is proposed for image recognition. The optimal nonlinear discriminant vector obtained by this approach can differentiate one class and its adjacent classes, i.e., its nearest neighbor classes, by constructing the specific between-class and within-class scatter matrices in kernel space using the Fisher criterion. In this manner, KUADA acquires all discriminant vectors class by class. Furthermore, KUADA makes every discriminant vector satisfy locally statistical uncorrelated constraints by using the corresponding class and part of its most adjacent classes. Experimental results on the public AR and CAS-PEAL face databases demonstrate that the proposed approach outperforms several representative nonlinear discriminant methods.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120947462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Iterative Method for Superresolution of Optical Flow Derived by Energy Minimisation","authors":"Yoshihiko Mochizuki, Yusuke Kameda, A. Imiya, T. Sakai, Takashi Imaizumi","doi":"10.1109/ICPR.2010.556","DOIUrl":"https://doi.org/10.1109/ICPR.2010.556","url":null,"abstract":"Super resolution is a technique to recover a high resolution image from a low resolution image. We develop a variational super resolution method for the subpixel accurate optical flow computation using variational optimisation. We combine variational super resolution and the variational optical flow computation for the super resolution optical flow computation.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121008558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Holistic Urdu Handwritten Word Recognition Using Support Vector Machine","authors":"M. W. Sagheer, C. He, N. Nobile, C. Suen","doi":"10.1109/ICPR.2010.468","DOIUrl":"https://doi.org/10.1109/ICPR.2010.468","url":null,"abstract":"Since the Urdu language has more isolated letters than Arabic and Farsi, a research on Urdu handwritten word is desired. This is a novel approach to use the compound features and a Support Vector Machine (SVM) in offline Urdu word recognition. Due to the cursive style in Urdu, a classification using a holistic approach is adapted efficiently. Compound feature sets, which involves in structural and gradient features (directional features), are extracted on each Urdu word. Experiments have been conducted on the CENPARMI Urdu Words Database, and a high recognition accuracy of 97.00% has been achieved.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121017509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abnormal Traffic Detection Using Intelligent Driver Model","authors":"Waqas Sultani, J. Choi","doi":"10.1109/ICPR.2010.88","DOIUrl":"https://doi.org/10.1109/ICPR.2010.88","url":null,"abstract":"We present a novel approach for detecting and localizing abnormal traffic using intelligent driver model. Specifically, we advect particles over video sequence. By treating each particle as a car, we compute driver behavior using intelligent driver model. The behaviors are learned using latent dirichlet allocation and frames are classified as abnormal using likelihood threshold criteria. In order to localize the abnormality; we compute spatial gradients of behaviors and construct Finite Time Lyaponov Field. Finally the region of abnormality is segmented using watershed algorithm. The effectiveness of proposed approach is validated using videos from stock footage websites.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121042933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape-Based Image Retrieval Using a New Descriptor Based on the Radon and Wavelet Transforms","authors":"Nafaa Nacereddine, S. Tabbone, D. Ziou, L. Hamami","doi":"10.1109/ICPR.2010.492","DOIUrl":"https://doi.org/10.1109/ICPR.2010.492","url":null,"abstract":"-In this paper, the Radon transform is used to design a new descriptor called Phi-signature invariant to usual geometric transformations. Experiments show the effectiveness of the multilevel representation of the descriptor built from Phi-signature and R-","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127263797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-Line Video Recognition and Counting of Harmful Insects","authors":"Ikhlef Bechar, S. Moisan, M. Thonnat, F. Brémond","doi":"10.1109/ICPR.2010.989","DOIUrl":"https://doi.org/10.1109/ICPR.2010.989","url":null,"abstract":"This article is concerned with on-line counting of harmful insects of certain species in videos in the framework of in situ video-surveillance that aims at the early detection of prominent pest attacks in greenhouse crops. The video-processing challenges that need to be coped with concern mainly the low spatial resolution and color contrast of the objects of interest in the videos, the outdoor issues and the video-processing which needs to be done in quasi-real time. Thus, we propose an approach which makes use of a pattern recognition algorithm to extract the locations of the harmful insects of interest in a video, which we combine with some video-processing algorithms in order to achieve an on-line video-surveillance solution. The system has been validated off-line on the whiteflie species (one potential harmful insect) and has shown acceptable performance in terms of accuracy versus computational time.","PeriodicalId":309591,"journal":{"name":"2010 20th International Conference on Pattern Recognition","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124979227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}