2010 20th International Conference on Pattern Recognition: Latest Publications

Subspace Methods with Globally/Locally Weighted Correlation Matrix
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.1035
Yukihiko Yamashita, T. Wakahara
Abstract: The discriminant function of a subspace method is provided by correlation matrices that reflect the averaged features of a category. As a result, it does not work well on unknown input patterns that are far from the average. To address this problem, we propose two kinds of weighted correlation matrices for subspace methods. The globally weighted correlation matrix (GWCM) attaches importance to training patterns that are far from the average, so it can reflect the distribution of patterns around the category boundary more precisely. The computational cost of a subspace method using GWCMs is almost the same as that using ordinary correlation matrices. The locally weighted correlation matrix (LWCM) attaches importance to training patterns that are near an input pattern to be classified, so it can reflect the distribution of training patterns around the input pattern in more detail. The computational cost of a subspace method with LWCMs at the recognition stage does not depend on the number of training patterns, while those of the conventional adaptive local and nonlinear subspace methods do. We show the advantages of the proposed methods by experiments on the MNIST database of handwritten digits.
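The GWCM idea can be illustrated with a short sketch: weight each training pattern by its distance from the category mean before forming the correlation matrix, then span a subspace with the top eigenvectors. The exponential weighting scheme and parameter `alpha` here are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def weighted_correlation_subspace(X, dim, alpha=1.0):
    """Subspace from a globally weighted correlation matrix (sketch).

    X: (n_samples, n_features) training patterns of one category.
    Patterns far from the category mean get larger weights, so the
    subspace reflects the distribution near the class boundary.
    """
    mean = X.mean(axis=0)
    dist = np.linalg.norm(X - mean, axis=1)
    w = np.exp(alpha * dist / dist.max())   # emphasise far-away patterns
    w /= w.sum()
    C = (w[:, None] * X).T @ X              # weighted correlation matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # ascending eigenvalues
    return eigvecs[:, -dim:]                # top-`dim` eigenvectors

def subspace_score(x, U):
    """Squared norm of the projection of x onto the subspace spanned by U."""
    return np.sum((U.T @ x) ** 2)
```

Classification then assigns an input to the category whose subspace gives the largest projection score.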
Citations: 1
Unified Approach to Detection and Identification of Commercial Films by Temporal Occurrence Pattern
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.804
Narongsak Putpuek, N. Cooharojananone, C. Lursinsap, S. Satoh
Abstract: In this paper, we propose a method to detect and identify commercial films in broadcast videos using the Temporal Occurrence Pattern (TOP). Our method exploits a characteristic of broadcast videos in Japan: each individual commercial film appears multiple times in the broadcast stream and typically has the same duration (e.g., 15 seconds). Using this characteristic, the method can both detect and identify individual commercial films within a given video archive. Based on a simple signature (global feature) for each frame image, the method first puts all frames into a number of buckets, where each bucket contains frames having the same signature and thus appearing the same. For each bucket, a TOP, a binary sequence representing the occurrence times within the video archive, is then generated. All buckets are then clustered using simple hierarchical clustering with a similarity between TOPs that allows for a possible temporal offset. This clustering stage can stitch together all frames of each commercial film and identify multiple occurrences of the same commercial film at the same time. We tested our method on an actual broadcast video archive and confirmed good performance in detecting and identifying commercial films.
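The bucketing and TOP construction above can be sketched as follows. The per-frame signature is assumed given (any hashable global feature), and the offset-tolerant similarity is an illustrative stand-in for the paper's measure.

```python
import numpy as np
from collections import defaultdict

def build_tops(signatures, n_frames):
    """Group frames by signature and build each bucket's Temporal
    Occurrence Pattern (TOP): a binary sequence marking the frames
    at which that signature appears in the archive."""
    buckets = defaultdict(list)
    for t, sig in enumerate(signatures):
        buckets[sig].append(t)
    tops = {}
    for sig, times in buckets.items():
        top = np.zeros(n_frames, dtype=np.uint8)
        top[times] = 1
        tops[sig] = top
    return tops

def top_similarity(a, b, max_offset=2):
    """Best overlap between two TOPs over a small temporal offset,
    normalised by the smaller occurrence count."""
    best = 0
    for off in range(-max_offset, max_offset + 1):
        best = max(best, int(np.sum(a & np.roll(b, off))))
    return best / max(1, min(int(a.sum()), int(b.sum())))
```

Buckets whose TOPs are similar under a small offset belong to the same commercial, since all of its frames recur at (nearly) the same archive positions.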
Citations: 5
Boosting Gray Codes for Red Eyes Removal
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.1024
S. Battiato, G. Farinella, M. Guarnera, G. Messina, D. Ravì
Abstract: With the wide diffusion of digital cameras and mobile devices with embedded cameras and flashguns, red-eye artifacts have become a critical problem. The technique described herein uses three main steps to identify and remove red eyes. First, red-eye candidates are extracted from the input image using an image filtering pipeline. A set of classifiers is then learned on Gray-code features extracted in the clustered patch space, and employed to distinguish between eye and non-eye patches. Once red eyes are detected, the artifacts are removed through desaturation and brightness reduction. The proposed method has been tested on a large dataset of images, achieving effective results in terms of hit-rate maximization, false-positive reduction and quality measures.
Citations: 7
Entropy of Feature Point-Based Retina Templates
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.61
J. Jeffers, A. Arakala, K. Horadam
Abstract: This paper studies the amount of distinctive information contained in a privacy-protecting and compact template of a retinal image, created from the locations of crossings and bifurcations in the choroidal vasculature, otherwise called feature points. Using a training set of 20 different retinas, we build a template generator that simulates one million imposter comparisons and computes the number of imposter retina comparisons that successfully match at various thresholds. The template entropy thus computed was used to validate a theoretical model of imposter comparisons. The simulator and the model both estimate that 20 bits of entropy can be achieved by the feature point-based template. Our results reveal the distinctiveness of feature point-based retinal templates, establishing their potential as a biometric identifier for high-security and memory-intensive applications.
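The entropy estimate in the abstract follows a standard argument: if a fraction p of imposter comparisons match at the chosen threshold, the template carries roughly -log2(p) bits of distinctive information. A minimal sketch of that calculation (illustrative of the measurement, not the paper's exact model):

```python
import math

def template_entropy(imposter_matches, total_comparisons):
    """Estimate template entropy (in bits) from simulated imposter
    comparisons: -log2 of the observed imposter match rate."""
    p = imposter_matches / total_comparisons
    return -math.log2(p)
```

For example, roughly one false match in about a million imposter comparisons corresponds to about 20 bits, the figure reported in the abstract.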
Citations: 8
Kernel Uncorrelated Adjacent-class Discriminant Analysis
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.178
Xiaoyuan Jing, Sheng Li, Yong-Fang Yao, Lu-Sha Bian, Jing-yu Yang
Abstract: In this paper, a kernel uncorrelated adjacent-class discriminant analysis (KUADA) approach is proposed for image recognition. The optimal nonlinear discriminant vector obtained by this approach can differentiate one class from its adjacent classes, i.e., its nearest-neighbor classes, by constructing specific between-class and within-class scatter matrices in kernel space using the Fisher criterion. In this manner, KUADA acquires all discriminant vectors class by class. Furthermore, KUADA makes every discriminant vector satisfy locally statistical uncorrelated constraints by using the corresponding class and part of its most adjacent classes. Experimental results on the public AR and CAS-PEAL face databases demonstrate that the proposed approach outperforms several representative nonlinear discriminant methods.
Citations: 4
An Iterative Method for Superresolution of Optical Flow Derived by Energy Minimisation
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.556
Yoshihiko Mochizuki, Yusuke Kameda, A. Imiya, T. Sakai, Takashi Imaizumi
Abstract: Super-resolution is a technique to recover a high-resolution image from a low-resolution image. We develop a variational super-resolution method for subpixel-accurate optical flow computation. We combine variational super-resolution and variational optical flow computation to compute super-resolution optical flow.
Citations: 2
Holistic Urdu Handwritten Word Recognition Using Support Vector Machine
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.468
M. W. Sagheer, C. He, N. Nobile, C. Suen
Abstract: Since the Urdu language has more isolated letters than Arabic and Farsi, research on Urdu handwritten words is needed. We present a novel approach that uses compound features and a Support Vector Machine (SVM) for offline Urdu word recognition. Due to the cursive style of Urdu, a holistic classification approach is efficiently adopted. Compound feature sets, which involve structural and gradient (directional) features, are extracted for each Urdu word. Experiments have been conducted on the CENPARMI Urdu Words Database, and a high recognition accuracy of 97.00% has been achieved.
Citations: 55
Abnormal Traffic Detection Using Intelligent Driver Model
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.88
Waqas Sultani, J. Choi
Abstract: We present a novel approach for detecting and localizing abnormal traffic using the intelligent driver model. Specifically, we advect particles over the video sequence. Treating each particle as a car, we compute driver behavior using the intelligent driver model. The behaviors are learned using latent Dirichlet allocation, and frames are classified as abnormal using a likelihood threshold criterion. In order to localize the abnormality, we compute spatial gradients of the behaviors and construct a finite-time Lyapunov field. Finally, the region of abnormality is segmented using the watershed algorithm. The effectiveness of the proposed approach is validated on videos from stock footage websites.
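The intelligent driver model (IDM) referenced above has a standard closed form: a car's acceleration depends on its speed, its gap to the leader, and the approach rate. A sketch with typical textbook parameter values (the paper's parameters and how it fits them to particles are not reproduced here):

```python
import math

def idm_acceleration(v, dv, s,
                     v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration for one particle treated
    as a car.

    v:  own speed (m/s)
    dv: approach rate to the leader, v - v_lead (m/s)
    s:  gap to the leader (m)

    Defaults are common illustrative values: desired speed v0, safe
    time headway T, maximum acceleration a_max, comfortable braking b,
    minimum gap s0, acceleration exponent delta.
    """
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)
```

On a free road (huge gap, low speed) the model accelerates at close to a_max; when tailgating a slower leader it brakes hard, and it is such per-particle behaviors that the paper feeds into the learned topic model.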
Citations: 32
Shape-Based Image Retrieval Using a New Descriptor Based on the Radon and Wavelet Transforms
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.492
Nafaa Nacereddine, S. Tabbone, D. Ziou, L. Hamami
Abstract: In this paper, the Radon transform is used to design a new descriptor, called the Phi-signature, that is invariant to the usual geometric transformations. Experiments show the effectiveness of the multilevel representation of the descriptor built from the Phi-signature and R-
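A Radon-based shape signature of the kind this line of work builds on can be sketched as follows: project the shape's points onto each direction (a discrete Radon transform of a point set) and integrate the squared projection histogram over the radial axis. This is an illustrative R-signature-style construction; the paper's Phi-signature is a related Radon-based descriptor whose exact definition is not reproduced here.

```python
import numpy as np

def r_signature(points, n_angles=180, n_bins=64):
    """Radon-based signature of a 2-D shape given as an (n, 2) array
    of point coordinates. Centering on the centroid gives translation
    invariance; for each projection angle the points are histogrammed
    along that direction and the squared histogram is summed."""
    pts = points - points.mean(axis=0)      # translation invariance
    sig = np.empty(n_angles)
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles,
                                          endpoint=False)):
        direction = np.array([np.cos(theta), np.sin(theta)])
        proj = pts @ direction              # discrete Radon projection
        hist, _ = np.histogram(proj, bins=n_bins)
        sig[i] = np.sum(hist.astype(float) ** 2)
    return sig
```

A rotation of the shape cyclically shifts the signature over the angle axis, which is why such descriptors are often compared via shift-invariant transforms.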
Citations: 23
On-Line Video Recognition and Counting of Harmful Insects
2010 20th International Conference on Pattern Recognition Pub Date : 2010-08-23 DOI: 10.1109/ICPR.2010.989
Ikhlef Bechar, S. Moisan, M. Thonnat, F. Brémond
Abstract: This article is concerned with the on-line counting of harmful insects of certain species in videos, in the framework of in situ video surveillance aimed at the early detection of prominent pest attacks in greenhouse crops. The main video-processing challenges are the low spatial resolution and color contrast of the objects of interest in the videos, outdoor conditions, and the need for quasi-real-time processing. We therefore propose an approach that uses a pattern recognition algorithm to extract the locations of the harmful insects of interest in a video, combined with video-processing algorithms to achieve an on-line video-surveillance solution. The system has been validated off-line on the whitefly species (one potentially harmful insect) and has shown acceptable performance in terms of accuracy versus computational time.
Citations: 22