Latest Publications: 2013 2nd IAPR Asian Conference on Pattern Recognition

A Robust and Efficient Minutia-Based Fingerprint Matching Algorithm
2013 2nd IAPR Asian Conference on Pattern Recognition | Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.43
Wen Wen, Zhi Qi, Zhi Li, Junhao Zhang, Yuguang Gong, Peng Cao
Abstract: In this paper, we propose a novel, robust, and efficient minutia-based fingerprint matching algorithm with two key contributions. First, we apply a set of global-level minutia-dependent features: quality measures of the reliability of the extracted minutiae, and the area of the overlapping region between the query and template fingerprint images. These easy-to-obtain minutia-dependent features are consistent with well-accepted fingerprint template standards, and combining them yields robustness to poor-quality fingerprint images. Second, we implement a hierarchical recognition strategy in which a global matching procedure refines the local matching decision toward a genuine result over the entire images. Besides the much-improved accuracy, our algorithm is also efficient: unlike other state-of-the-art matching approaches, it uses no time-consuming operations or complex feature structures. Experimental results demonstrate that the proposed method achieves excellent accuracy, exceeding the performance of well-known minutia-based matchers, and shows potential for real-time fingerprint recognition systems.
Citations: 11
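The abstract combines per-minutia quality scores with the query/template overlap area into a single match score. The sketch below is one plausible fusion of those cues, not the authors' actual formula; the function name, the weighting scheme, and the `w_overlap` parameter are all illustrative assumptions:

```python
def fused_match_score(pair_similarities, qualities, overlap_ratio, w_overlap=0.5):
    """Hypothetical fusion of local minutia matches with global cues.

    pair_similarities: similarity of each matched minutia pair, in [0, 1]
    qualities: reliability of each matched pair's minutiae, in [0, 1]
    overlap_ratio: overlap area / union area of query and template, in [0, 1]
    """
    if not pair_similarities:
        return 0.0
    # Weight each local similarity by minutia reliability, so noisy
    # (low-quality) minutiae contribute less to the final decision.
    weighted = sum(s * q for s, q in zip(pair_similarities, qualities))
    local_score = weighted / sum(qualities)
    # Penalise scores obtained from a small overlapping region.
    return (1 - w_overlap) * local_score + w_overlap * local_score * overlap_ratio
```

A score computed this way degrades gracefully as the shared fingerprint area shrinks, which matches the robustness claim in the abstract.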
Mammary Gland Tumor Detection in Cats Using Ant Colony Optimisation
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.173
Hossam M. Moftah, Mohammad Ibrahim, A. Hassanien, G. Schaefer
Abstract: Mammary gland tumors are among the most common tumors in cats. Over 85 percent of feline mammary tumors are malignant, and they tend to grow and metastasize quickly to organs such as the lungs and lymph nodes. Like breast tumors in humans, they start as a small lump in a mammary gland and then grow in size unless detected and treated. In this paper, we present an approach to detecting fibroadenoma mammary gland tumors in cats using ant colony optimisation for segmentation; image features can then be extracted from the segmented regions. To evaluate the presented approach, 25 microscopy images were taken from fibroadenoma tissue slides of three cat cases. The experimental results confirm the effectiveness and strong performance of the proposed system.
Citations: 0
Bag-of-Words Against Nearest-Neighbor Search for Visual Object Retrieval
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.56
Cai-Zhi Zhu, Xiaoping Zhou, S. Satoh
Abstract: We compare the Bag-of-Words (BoW) framework with an Approximate Nearest-Neighbor (ANN) based system in the context of visual object retrieval. The comparison is motivated by the implicit connection between the two methods: broadly speaking, the BoW framework can be regarded as a quantization-guided ANN voting system. The value of the comparison is twofold. First, by comparing against a quantization-free ANN system, the performance loss caused by quantization error in the BoW framework can be estimated quantitatively. Second, the comparison fully inspects the pros and cons of both methods, facilitating new algorithm design. In this study, taking an independent dataset as the reference for validating matches, we design an ANN voting system that outperforms all other methods. Comprehensive and computationally intensive experiments are conducted on two Oxford datasets and two TRECVID instance search datasets, achieving a new state of the art.
Citations: 4
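The abstract's framing of BoW as a quantization-guided ANN voting system can be made concrete: each query descriptor is quantized to its nearest visual word and then votes for every database image that has a descriptor in the same Voronoi cell of the codebook. A toy sketch of that view, not the paper's actual system (array shapes and names are assumptions):

```python
import numpy as np

def bow_votes(query_desc, db_desc, db_image_ids, codebook):
    """BoW retrieval as quantization-guided voting.

    query_desc: (m, d) query descriptors; db_desc: (n, d) database
    descriptors; db_image_ids: (n,) image id per database descriptor;
    codebook: (k, d) visual-word centres.
    """
    def assign(x):
        # Quantization: index of the nearest codebook centre per descriptor.
        dists = np.linalg.norm(codebook[None, :, :] - x[:, None, :], axis=2)
        return dists.argmin(axis=1)

    q_words, db_words = assign(query_desc), assign(db_desc)
    votes = {}
    for w in q_words:
        # Every database image sharing this visual word receives one vote.
        for img in db_image_ids[db_words == w]:
            votes[int(img)] = votes.get(int(img), 0) + 1
    return votes
```

Replacing the codebook assignment with an exact (or approximate) nearest-neighbor search over `db_desc` gives the quantization-free ANN voting the paper compares against.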
Automatic Segmentation and Classification of Liver Abnormalities Using Fractal Dimension
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.172
A. Anter, A. Hassanien, G. Schaefer
Abstract: Liver abnormalities include masses that can be benign or malignant. Their presence alters the regularity of the liver structure, which changes its fractal dimension. In this paper, we present a computer-aided diagnostic system for classifying liver abnormalities in abdominal CT images using fractal dimension features. We integrate different methods for liver segmentation and abnormality classification, combining techniques so as to compensate for their individual weaknesses and exploit their strengths. Classification is based on fractal dimension, with six different features computed for each extracted region of interest. Experimental results confirm that our approach is robust, fast, and able to effectively detect the presence of liver abnormalities.
Citations: 8
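The fractal dimension of a segmented region is commonly estimated by box counting; the abstract does not say which estimator the paper uses, so the following is a generic sketch under that assumption (square, power-of-two-sized binary masks):

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate fractal dimension of a binary region by box counting:
    count occupied boxes N(s) at dyadic box sizes s, then fit the slope
    of log N(s) against log(1/s)."""
    n = mask.shape[0]          # assume a square array with power-of-two side
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Partition the image into s-by-s boxes and count non-empty ones.
        boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(boxes.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope               # slope of the fit = estimated dimension
```

A filled square yields a dimension near 2, a thin line near 1; irregular tumor boundaries fall in between, which is what makes the measure discriminative.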
Aircraft Detection by Deep Belief Nets
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.5
Xueyun Chen, Shiming Xiang, Cheng-Lin Liu, Chunhong Pan
Abstract: Aircraft detection in high-resolution remote sensing images is difficult due to variable sizes, colors, and orientations and to complex backgrounds. In this paper, an effective aircraft detection method is proposed that locates each object exactly by outputting its geometric center, orientation, and position. To reduce the influence of the background, multiple images of each object, including a gradient image and gray-thresholded images, are fed to a Deep Belief Net (DBN), which is first pre-trained to learn features and then fine-tuned by back-propagation to yield a robust detector. Experimental results show that the DBN correctly detects tiny, blurred aircraft in many difficult airport images, that it outperforms traditional feature-plus-classifier methods in robustness and accuracy, and that the multi-image input improves detection precision over using a single image.
Citations: 57
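The multi-image input (gradient image plus gray-thresholded images alongside the raw patch) can be sketched as a simple preprocessing step that builds one long visible vector for the DBN. The threshold levels and normalization below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def multi_image_input(gray):
    """Stack a raw patch, its gradient-magnitude image, and binary
    gray-threshold images into one flattened input vector (sketch)."""
    gray = gray.astype(np.float64)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)                      # gradient magnitude
    thresholds = [64, 128, 192]                  # hypothetical gray levels
    binaries = [(gray > t).astype(np.float64) for t in thresholds]
    channels = [gray / 255.0, grad / (grad.max() + 1e-9)] + binaries
    # Concatenate all channels into one visible vector in [0, 1],
    # the range expected by a binary-unit RBM/DBN input layer.
    return np.concatenate([c.ravel() for c in channels])
```

The intuition from the abstract: edges and thresholded silhouettes vary less across backgrounds than raw intensities, so feeding them jointly makes the learned detector less background-sensitive.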
Emotional Speech Recognition Using Acoustic Models of Decomposed Component Words
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.13
Vivatchai Kaveeta, K. Patanukhom
Abstract: This paper presents a novel approach to emotional speech recognition. Instead of classifying the full length of a speech signal, the proposed method decomposes the signal into component words, groups the words into segments, and builds an acoustic model for each segment using features such as audio power, MFCCs, log attack time, spectrum spread, and segment duration. With this segment-based classification, an unknown speech signal is recognized as a sequence of segment emotions, from which an emotion profile (EP) is extracted; the speech emotion is then determined using the EP as the feature vector. Experiments use 6,810 training samples and 722 test samples covering eight emotional classes from the IEMOCAP database. Compared with a conventional method, the proposed method improves the recognition rate from 46.81% to 58.59% for eight-class classification and from 60.18% to 71.25% for four-class classification.
Citations: 0
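In the simplest reading, an emotion profile over a sequence of segment-level decisions is a normalized frequency vector. The label set below is hypothetical and the paper's exact EP definition may differ; this sketch only illustrates the segment-sequence-to-feature-vector step:

```python
from collections import Counter

EMOTIONS = ["angry", "happy", "sad", "neutral"]   # hypothetical label set

def emotion_profile(segment_labels):
    """Turn a sequence of per-segment emotion decisions into an emotion
    profile (EP): the normalized frequency of each emotion, usable as
    the feature vector for utterance-level classification."""
    counts = Counter(segment_labels)
    total = sum(counts.values()) or 1             # avoid division by zero
    return [counts[e] / total for e in EMOTIONS]
```

For example, `emotion_profile(["happy", "happy", "neutral", "sad"])` yields a vector dominated by the "happy" component, and the utterance label could be taken as the argmax or fed to a downstream classifier.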
Gaze Estimation in Children's Peer-Play Scenarios
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.178
Dingrui Duan, Lu Tian, J. Cui, Li Wang, H. Zha, H. Aghajan
Abstract: Gaze is a powerful cue for analyzing children's social behavior. In this paper, a novel method is proposed to estimate children's gaze orientation in developmental psychology experiment data, based on head pose estimation. To account for possible errors in the head pose estimates, both temporal information and potential gaze targets are introduced to refine the results. The method is evaluated on a dataset of children's peer-play scenarios and shows good performance. The evaluation and analysis indicate that, in a given peer-play scenario, potential targets are powerful spatial cues for children's gaze estimation, and temporal information provides further cues that improve the estimates.
Citations: 4
Transparent Text Detection and Background Recovery
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.57
Xinhao Liu, N. Chiba
Abstract: We propose two methods: one for detecting transparent text in images and one for recovering the background behind the text. Although text detection in natural scenes is an active research area, most current methods focus on non-transparent text. For transparent text, we developed an adaptive edge detection method for edge-based text detection that accurately detects text even under the low contrast common in transparent text images, together with a method for recovering the original background content behind the detected text. Experiments on real images show the effectiveness of the proposed methods.
Citations: 1
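One common way to make edge detection adaptive to low contrast is to threshold gradient magnitude per block relative to local statistics rather than globally. This is a generic sketch under that assumption, not the authors' method; the block size and `k` factor are illustrative:

```python
import numpy as np

def adaptive_edges(gray, block=16, k=1.5):
    """Edge detection with a locally adaptive threshold: each block
    thresholds gradient magnitude against its own mean and standard
    deviation, so faint (e.g. transparent-text) edges still fire."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(mag, dtype=bool)
    h, w = mag.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = mag[y:y + block, x:x + block]
            t = patch.mean() + k * patch.std()    # per-block threshold
            edges[y:y + block, x:x + block] = patch > t
    return edges
```

Because the threshold scales with local contrast, a faint intensity step in a flat region is detected just as reliably as a strong step elsewhere, which a single global threshold would miss.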
Direct Ego-Motion Estimation Using Normal Flows
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.130
Ding Yuan, Miao Liu, Hong Zhang
Abstract: In this paper we present a novel method for estimating the motion parameters of a monocular camera under unconstrained movement. Unlike traditional approaches, which tackle the problem by establishing motion correspondences or by calculating optical flow within the image sequence, the proposed method estimates the motion parameters directly from the spatio-temporal gradient of the image intensity. Hence, our method requires no specific assumptions about the captured scene, such as that it is smooth almost everywhere or that it contains distinct features. We have tested the method on both synthetic image data and real image sequences; experimental results show that it is effective in determining the camera motion parameters.
Citations: 3
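The normal flow is the only flow component recoverable from spatio-temporal gradients alone: from the brightness-constancy equation I_x u + I_y v + I_t = 0, the component along the spatial gradient has magnitude -I_t/|∇I| (the aperture problem hides the tangential component). A minimal sketch of that computation, using a forward temporal difference; this illustrates the quantity the paper builds on, not the paper's implementation:

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Compute the normal-flow field from spatio-temporal gradients:
    the observable flow component points along the spatial gradient
    with magnitude -I_t / |grad I|."""
    f0 = frame0.astype(np.float64)
    f1 = frame1.astype(np.float64)
    gy, gx = np.gradient(f0)                      # spatial derivatives
    it = f1 - f0                                  # temporal derivative
    mag = np.hypot(gx, gy)
    un = -it / (mag + eps)                        # normal-flow magnitude
    # Full normal-flow vectors lie along the unit spatial gradient.
    return un * gx / (mag + eps), un * gy / (mag + eps)
```

Ego-motion estimation then fits the camera's rotation and translation directly to these per-pixel constraints, with no correspondence search or full optical-flow solve.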
Real-Time Foreground Segmentation from Moving Camera Based on Case-Based Trajectory Classification
Pub Date: 2013-11-05 | DOI: 10.1109/ACPR.2013.146
Yosuke Nonaka, Atsushi Shimada, H. Nagahara, R. Taniguchi
Abstract: Several methods have recently been proposed for foreground segmentation from a moving camera. Trajectory-based methods are one typical approach: they obtain long-term trajectories over entire video frames and segment them into foreground and background by learning pixel- or motion-based object features, but maintaining these trajectories often demands large amounts of computation and memory. We present a trajectory-based method aimed at real-time foreground segmentation from a moving camera. Unlike conventional methods, we use trajectories obtained sparsely from two successive video frames. In addition, our method exploits the spatio-temporal features of trajectories through a case-based approach that improves the detection results. We compare our method with previous approaches and show results on challenging video sequences.
Citations: 5