2013 2nd IAPR Asian Conference on Pattern Recognition: Latest Publications

Melanin and Hemoglobin Identification for Skin Disease Analysis
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.9
Zhao Liu, J. Zerubia
Abstract: This paper proposes a novel method to extract melanin and hemoglobin concentrations of human skin, using bilateral decomposition with knowledge of a multi-layered skin model and the absorbance characteristics of the major chromophores. Unlike state-of-the-art approaches, the proposed method can handle the highlights and strong shading that commonly appear in skin color images captured in uncontrolled environments. The derived melanin and hemoglobin indices, directly related to pathological tissue conditions, are less influenced by external imaging factors and are effective for describing pigmentation distributions. Experiments demonstrate the value of the proposed method for computer-aided diagnosis of different skin diseases. Diagnostic accuracy for melanoma increases by 9-15% on conventional RGB lesion images compared to techniques using other color descriptors, and the discrimination of inflammatory acne from hyperpigmentation reveals the acne stage, which is useful for acne severity evaluation. The new method is expected to prove useful for the analysis of other skin diseases as well.
Citations: 12
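The abstract does not detail the bilateral decomposition itself, but the underlying idea of chromophore separation can be illustrated by unmixing a pixel's optical density (negative log reflectance) against absorbance directions for melanin and hemoglobin. The absorbance vectors below are illustrative placeholders, not values from the paper:

```python
import math

# Hypothetical absorbance directions of melanin and hemoglobin in
# log-RGB space (illustrative values only, not from the paper).
MELANIN = (0.74, 0.57, 0.36)
HEMOGLOBIN = (0.41, 0.82, 0.40)

def chromophore_indices(rgb):
    """Project a pixel's optical density onto the two chromophore axes.

    Solves the 3x2 least-squares system [M H] x = od via the normal
    equations, giving per-pixel melanin and hemoglobin indices.
    """
    od = [-math.log(max(c, 1e-6)) for c in rgb]  # optical density
    a11 = sum(m * m for m in MELANIN)
    a12 = sum(m * h for m, h in zip(MELANIN, HEMOGLOBIN))
    a22 = sum(h * h for h in HEMOGLOBIN)
    b1 = sum(m * d for m, d in zip(MELANIN, od))
    b2 = sum(h * d for h, d in zip(HEMOGLOBIN, od))
    det = a11 * a22 - a12 * a12
    mel = (a22 * b1 - a12 * b2) / det
    hem = (a11 * b2 - a12 * b1) / det
    return mel, hem
```

A darker, more pigmented pixel yields a larger melanin index than a bright one under this model.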
Towards Robust Gait Recognition
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.211
Yasushi Makihara
Abstract: Gait recognition is a method of biometric person authentication based on a person's unconscious walking manner. Unlike other biometrics such as DNA, fingerprint, vein, and iris, gait can be recognized even at a distance from a camera and without the subject's cooperation; it is therefore expected to find application in many fields: criminal investigation, forensic science, and surveillance. However, the absence of the subject's cooperation can induce large intra-subject variations in gait due to changes of viewpoint, walking direction, speed, clothes, and shoes. We therefore develop methods for robust gait recognition using (1) an appearance-based view transformation model and (2) a kinematics-based speed transformation model. Moreover, CCTV footage is often stored as low frame-rate video due to limited communication bandwidth and storage, which makes it much harder to observe continuous gait motion and hence significantly degrades gait recognition performance. We address this problem with (3) a technique for periodic temporal super-resolution from low frame-rate video. We show the effectiveness of the proposed methods on our constructed gait databases.
Citations: 9
Compacting Large and Loose Communities
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.137
V. Chandrashekar, Shailesh Kumar, C. V. Jawahar
Abstract: Detecting compact overlapping communities in large networks is an important pattern recognition problem with applications in many domains. Most community detection algorithms trade off community size, compactness, and the scalability of finding communities. The Clique Percolation Method (CPM) and Local Fitness Maximization (LFM) are two prominent, commonly used overlapping community detection methods that scale to large networks; however, a significant number of the communities they find are large, noisy, and loose. In this paper, we propose a general algorithm that takes such large, loose communities generated by any method and systematically refines them into compact communities. We define a new measure of community-ness based on eigenvector centrality, identify loose communities using this measure, and propose an algorithm for partitioning such loose communities into compact ones. We refine the communities found by CPM and LFM using our method and show their effectiveness, compared to the original communities, in a recommendation engine task.
Citations: 0
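The community-ness measure is based on eigenvector centrality. As a minimal sketch, assuming a simple adjacency-dict graph and using the mean member centrality of the induced subgraph as a stand-in for the paper's exact measure, a compact community (e.g. a clique) scores higher than a loose one:

```python
def eigenvector_centrality(adj, iters=200):
    """Shifted power iteration (A + I) over an adjacency dict {node: neighbours};
    the shift avoids oscillation on bipartite subgraphs."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        nxt = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values())
        x = {v: nxt[v] / norm for v in adj}
    return x

def community_score(adj, members):
    """Illustrative community-ness: mean centrality of members within the
    subgraph they induce. Compact communities score near 1, loose ones lower."""
    sub = {v: [u for u in adj[v] if u in members] for v in members}
    c = eigenvector_centrality(sub)
    return sum(c.values()) / len(c)
```

On a triangle versus a 3-node path, the triangle (fully connected, hence compact) scores strictly higher.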
Automatic Elements Extraction of Chinese Web News Using Prior Information of Content and Structure
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.52
Chengru Song, Shifeng Weng, Changshui Zhang
Abstract: We propose a set of efficient processes for extracting all four elements of Chinese news web pages: the news title, release date, news source, and main text. Our approach is based on a deep analysis of the content and structure features of current Chinese news pages. We take content indicators as the key to recovering the tree structure of the main text, and we introduce the concept of the Length-Distance Ratio to further improve performance. Our method barely depends on sample selection and generalizes well without any training process, distinguishing itself from most existing methods. We tested our approach on 1721 labeled Chinese news pages from 429 web sites: 87% accuracy was achieved for news source extraction, and over 95% accuracy for the other three elements.
Citations: 0
Structure Feature Extraction for Finger-Vein Recognition
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.113
Di Cao, Jinfeng Yang, Yihua Shi, Chenghua Xu
Abstract: A new finger-vein image matching method based on structure features is proposed in this paper. To describe finger-vein structures conveniently, the vein skeletons are first extracted and used as the primitive information. Based on the skeletons, a curve tracing scheme that depends on junction points is proposed for curve segment extraction. Next, the curve segments are encoded piecewise using a modified included angle chain, and the structure feature code of the vein network is generated sequentially. Finally, a dynamic scheme is adopted for structure feature matching. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.
Citations: 16
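An included angle chain encodes a traced curve as the sequence of turning angles between successive skeleton segments. A minimal sketch of the plain (unmodified) encoding, assuming the skeleton curve is given as a list of (x, y) points:

```python
import math

def included_angle_chain(points):
    """Encode a polyline as the turning angles (degrees) between successive
    segments. A sketch of the basic included angle chain; the paper's
    modified encoding is not reproduced here."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)   # heading of first segment
        a2 = math.atan2(y2 - y1, x2 - x1)   # heading of second segment
        d = math.degrees(a2 - a1)
        angles.append((d + 180.0) % 360.0 - 180.0)  # wrap to (-180, 180]
    return angles
```

A straight run encodes as zeros, so the code is invariant to the curve's absolute position and orientation, which is what makes it useful for matching.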
Adaptive CFA Demosaicking Using Bilateral Filters for Colour Edge Preservation
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.75
J. S. J. Li, S. Randhawa
Abstract: Colour Filter Array (CFA) demosaicking interpolates the missing colour values needed to produce a full colour image when a single image sensor is used. In smooth regions a higher order of interpolation usually achieves higher accuracy, but at a colour edge a lower order of interpolation is desirable, as it avoids interpolating across the edge without blurring it. In this paper, a bilateral filter, known to preserve sharp edges, is used to adaptively modify the interpolation weights. At a colour edge, the weights are biased towards lower-order interpolation using only closer pixel values; otherwise, they are biased towards higher-order interpolation for smooth regions. To avoid interpolating across a possible edge adjacent to the missing pixel location, four estimates using the adaptive bilateral filter are first determined, one for each cardinal direction. A classifier comprising a weighted median filter together with a bilateral filter then produces the missing colour pixel value from the four estimates. Our proposed method is shown to have improved performance in preserving sharp colour edges with minimal colour artifacts, and it outperforms other existing demosaicking methods on most images.
Citations: 4
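The core of such adaptive weighting is the bilateral kernel: a spatial term favouring nearby samples multiplied by a range term favouring samples with similar intensity, so pixels across an edge are automatically suppressed. A minimal 1-D sketch (the sigma values and the neighbour representation are assumptions, not the paper's parameters):

```python
import math

def bilateral_weights(center, neighbours, sigma_s=1.0, sigma_r=0.1):
    """Normalised bilateral weights for a set of (distance, value) neighbours.

    Near an edge the range term suppresses pixels on the far side, so the
    interpolation falls back to the closer, similar samples; in smooth
    regions all neighbours contribute.
    """
    w = []
    for dist, value in neighbours:
        ws = math.exp(-(dist ** 2) / (2 * sigma_s ** 2))          # spatial term
        wr = math.exp(-((value - center) ** 2) / (2 * sigma_r ** 2))  # range term
        w.append(ws * wr)
    total = sum(w)
    return [x / total for x in w]
```

With two equally distant neighbours, the one whose value matches the centre dominates the normalised weights.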
Saliency Detection Using Color Spatial Variance Weighted Graph Model
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.93
Xiaoyun Yan, Yuehuang Wang, Mengmeng Song, Man Jiang
Abstract: Saliency detection, a recently active research field of computer vision, has a wide range of applications, such as pattern recognition, image retrieval, adaptive compression, and target detection. In this paper, we propose a saliency detection method based on a color spatial variance weighted graph model, designed around a background prior. First, the original image is partitioned into small patches, and the mean-shift clustering algorithm is applied to these patches to obtain clustering centers that represent the main colors of the whole image. In the modeling stage, all patches and the clustering centers are denoted as nodes of a graph. The saliency of each patch is defined as the weighted sum of the costs of the shortest paths from the patch to all clustering centers, with each shortest path weighted according to color spatial variance. Our saliency detection method is computationally efficient and outperforms state-of-the-art methods, with higher precision and better recall, in our evaluation on the popular MSRA1000 database.
Citations: 0
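The per-patch saliency described above is a weighted sum of shortest-path costs on the patch graph. A minimal sketch, assuming the weighted graph, the cluster-center nodes, and the per-center color-spatial-variance weights are already computed (function names and the graph representation are illustrative):

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over a weighted graph {node: [(neighbour, weight), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for u, w in graph.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return float("inf")

def patch_saliency(graph, patch, centers, variance_weight):
    """Saliency of a patch: variance-weighted sum of its shortest-path
    costs to all clustering centers."""
    return sum(variance_weight[c] * shortest_path_cost(graph, patch, c)
               for c in centers)
```

Patches whose color is far (in path cost) from all dominant background colors accumulate a high score, matching the background-prior intuition.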
Real-Time Binary Descriptor Based Background Modeling
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.125
Wan-Chen Liu, Shu-Zhe Lin, Min-Hsiang Yang, Chun-Rong Huang
Abstract: In this paper, we propose a new binary descriptor based background modeling approach that is robust to lighting changes and dynamic backgrounds in the environment. Instead of using traditional parametric models, our background models are constructed from background instances, using binary descriptors computed from observed backgrounds. As shown in the experiments, our method achieves better foreground detection results and fewer false alarms than state-of-the-art methods.
Citations: 13
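A sample-based background model of this kind keeps, per pixel, a set of binary descriptors from past frames and classifies a new descriptor by Hamming distance to the stored instances. The thresholds below are assumptions for illustration, not the paper's values:

```python
def hamming(a, b):
    """Hamming distance between two integer-encoded binary descriptors."""
    return bin(a ^ b).count("1")

def is_background(pixel_desc, model, max_dist=4, min_matches=2):
    """A pixel is background if its descriptor is close (in Hamming distance)
    to enough stored background instances -- a sample-consensus sketch in
    the spirit of the paper, with assumed thresholds."""
    matches = sum(1 for d in model if hamming(pixel_desc, d) <= max_dist)
    return matches >= min_matches
```

Because a binary descriptor summarises local structure rather than raw intensity, small illumination shifts flip few bits, which is what gives this style of model its robustness to lighting changes.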
A Multi-resolution Action Recognition Algorithm Using Wavelet Domain Features
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.143
H. Imtiaz, U. Mahbub, G. Schaefer, Md Atiqur Rahman Ahad
Abstract: This paper proposes a novel approach for human action recognition using multi-resolution feature extraction based on the two-dimensional discrete wavelet transform (2D-DWT). Action representations can be considered image templates, which are useful for understanding various actions or gestures as well as for recognition and analysis. An action recognition scheme is developed that extracts features from the frames of a video sequence. The proposed feature selection algorithm offers very low feature dimensionality and therefore a lower computational burden. It is shown that wavelet-domain features enhance the distinguishability of different actions, yielding very high within-class compactness and between-class separability of the extracted features, while undesirable phenomena such as camera movement and changes in camera distance from the subject are less severe in the frequency domain. Principal component analysis is performed to further reduce the dimensionality of the feature space. Extensive experiments on a standard benchmark database confirm that the proposed approach offers not only computational savings but also very high recognition accuracy.
Citations: 2
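The 2D-DWT behind such features can be illustrated with one level of the Haar transform, which splits a frame template into a low-pass (LL) sub-band and detail sub-bands; the coarse LL band is the kind of low-dimensional wavelet-domain feature this style of approach builds on. A pure-Python sketch for an even-sized block:

```python
def haar2d(block):
    """One level of the 2D Haar DWT: pairwise averages/differences along
    rows, then along columns. Input: even-sized 2-D list of numbers.
    Output: same-sized matrix with the LL sub-band in the top-left quadrant."""
    def step(vec):
        lo = [(vec[i] + vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
        hi = [(vec[i] - vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
        return lo + hi

    rows = [step(r) for r in block]             # transform each row
    cols = [step(list(c)) for c in zip(*rows)]  # then each column
    return [list(r) for r in zip(*cols)]
```

A constant block collapses into a single LL coefficient with zero detail, while a vertical step edge puts its energy into one detail coefficient, which is why the sub-bands separate smooth content from motion edges so cheaply.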
Learning Fingerprint Orientation Fields Using Continuous Restricted Boltzmann Machines
2013 2nd IAPR Asian Conference on Pattern Recognition Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.37
M. Sahasrabudhe, A. Namboodiri
Abstract: We aim to learn local orientation field patterns in fingerprints and to correct distorted field patterns in noisy fingerprint images. This is formulated as a learning problem and achieved using two continuous restricted Boltzmann machines. The learnt orientation fields are then used in conjunction with traditional Gabor-based algorithms for fingerprint enhancement. Orientation fields extracted by gradient-based methods are local and do not consider neighbouring orientations; if some amount of noise is present in a fingerprint, these methods perform poorly when enhancing the image, affecting fingerprint matching. This paper presents a method to correct the resulting noisy regions over patches of the fingerprint by training two continuous restricted Boltzmann machines. The continuous RBMs are trained with clean fingerprint images and applied to overlapping patches of the input fingerprint. Experimental results show that one can successfully restore patches of noisy fingerprint images.
Citations: 14