2017 4th IAPR Asian Conference on Pattern Recognition (ACPR): Latest Publications

A Deep Learning Approach to Appearance-Based Gaze Estimation under Head Pose Variations
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.155
Hsin-Pei Sun, Cheng-Hsun Yang, S. Lai
Abstract: In this paper, we propose a deep-learning-based gaze estimation algorithm that estimates the gaze direction from a single face image. The algorithm uses multiple convolutional neural networks (CNNs) to learn regression networks for gaze estimation from eye images. Because head pose information is explicitly included in the framework, the method provides accurate gaze estimation for users with different head poses and can be widely applied to appearance-based gaze estimation in practice. Our experimental results show that the proposed system improves the accuracy of appearance-based gaze estimation under head pose variations compared to previous methods.
Citations: 3
Local Behavior Analysis for Trajectory Classification Using Graph Embedding
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.27
Rajkumar Saini, Pradeep Kumar, S. Dutta, P. Roy, U. Pal
Abstract: Understanding motion patterns is of great importance for analyzing the behavior of objects in a surveillance area. Motion patterns are grouped into clusters such that similar patterns lie in the same cluster and inter-cluster variance is maximized. Variation in the duration of trajectory patterns, in terms of time or the number of points (even among trajectories from the same cluster), makes correct classification difficult, since a bijective mapping is not possible in such cases. In this paper, we formulate trajectory classification as a graph-based similarity problem using the Douglas-Peucker (DP) algorithm and complete bipartite graphs. The local behavior of objects is analyzed using their motion segments, and Dynamic Time Warping (DTW) is used to measure similarity among motion trajectories. Class-wise global and local costs are computed using DTW, and their fusion is performed with Particle Swarm Optimization (PSO) to improve the classification rate. Experiments on two public trajectory datasets, T15 and LabOmni, show that the proposed method yields encouraging results and outperforms state-of-the-art techniques.
Citations: 3
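The DTW similarity at the heart of the method above can be sketched in plain Python. This is the generic textbook formulation for 2-D point trajectories, not the authors' implementation:

```python
import math

def dtw(a, b):
    """Dynamic Time Warping distance between two 2-D point sequences.

    a, b: lists of (x, y) tuples; returns the minimal cumulative
    Euclidean cost over all monotonic alignments of the two sequences.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best cost aligning the first i points of a
    # with the first j points of b
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW aligns sequences of different lengths, it handles exactly the duration-variation problem the abstract describes, where a one-to-one point mapping between trajectories is impossible.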
Holistic Handwritten Uyghur Word Recognition Using Convolutional Neural Networks
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.104
Wujiahemaiti Simayi, A. Hamdulla, Cheng-Lin Liu
Abstract: This paper presents an approach for holistic handwritten Uyghur word recognition using convolutional neural networks (CNNs). With a large number of word classes, it is hard to collect sufficient samples for each class. To compensate for insufficient training samples, we propose data augmentation techniques that increase the number of samples via stroke deformation and whole-shape rotation. The CNN has eight convolutional layers for feature extraction and one fully connected layer for classification. Evaluated on a dataset of online handwritten Uyghur words with 2,344 classes, the method obtains recognition accuracies over 99% on the test set, surpassing previously reported results on handwritten Uyghur word recognition. Our results demonstrate that CNNs are effective for holistic word recognition with a large number of word classes.
Citations: 1
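The whole-shape rotation augmentation mentioned in the abstract can be sketched as follows; `rotate_stroke` and `augment` are hypothetical names, and the angle set is an illustrative assumption, not the paper's setting:

```python
import math

def rotate_stroke(points, angle_deg, center=(0.0, 0.0)):
    """Rotate one stroke (a list of (x, y) points) about `center`."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return out

def augment(word, angles=(-10, -5, 5, 10)):
    """Generate rotated copies of a word (a list of strokes),
    one copy per angle, to enlarge the per-class training set."""
    return [[rotate_stroke(stroke, a) for stroke in word] for a in angles]
```

Stroke deformation would follow the same pattern, applying a per-stroke perturbation instead of a global rotation.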
Simple and Effective Speech Enhancement for Visual Microphone
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.41
Juhyun Ahn, Daijin Kim
Abstract: The visual microphone is a technique that recovers sound from a silent video. The simplest way to improve its sound recovery performance is to apply traditional speech enhancement algorithms, which are based on complicated filter designs or sound models. This paper proposes a simple and effective speech enhancement method for the visual microphone (SEVM) that suppresses spectral components whose amplitude is smaller than a predefined threshold. It exploits the property that the amplitude of the sound spectrum recovered by the visual microphone is relatively high, while the noise spectrum generated by motion estimation error and damped oscillation is relatively low. The proposed SEVM method can also be easily extended to the multichannel case, where multiple speech signals are recovered from multiple cameras. Experimental results show that the proposed SEVM method outperforms traditional speech enhancement algorithms in terms of log-likelihood ratio (LLR), signal-to-noise ratio (SNR), segmental SNR (SegSNR), and cepstral distance (CEP). From these results, we conclude that the proposed SEVM method, being adapted to the visual microphone, is simpler and more effective than traditional speech enhancement methods merely appended to the visual microphone as post-processing.
Citations: 5
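The core thresholding idea, suppressing spectral components whose magnitude falls below a predefined value, can be sketched in a few lines. `suppress_small_components` is a hypothetical helper operating on a precomputed list of complex DFT coefficients, not the paper's SEVM code:

```python
def suppress_small_components(spectrum, threshold):
    """Zero out spectral components whose magnitude falls below
    `threshold`, keeping the rest unchanged.

    `spectrum` is a list of complex DFT coefficients. In the paper's
    setting, the recovered speech occupies large-amplitude bins while
    motion-estimation noise and damped oscillation stay small, so a
    single amplitude threshold separates them.
    """
    return [z if abs(z) >= threshold else 0j for z in spectrum]
```

The enhanced signal would then be obtained by inverse-transforming the thresholded spectrum.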
Fabric Defect Detection Based on Gabor Filter and Tensor Low-Rank Recovery
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.37
Guangshuai Gao, Chaodie Liu, Zhoufeng Liu, Chunlei Li, Ruimin Yang
Abstract: Fabric defect detection is a crucial step in textile quality control. Existing methods lack adaptability and show poor detection performance. This paper proposes a novel fabric defect detection method based on Gabor filters and tensor low-rank recovery. Defect-free fabric images exhibit a specific directionality, while defects break this regularity, so the direction feature is crucial for defect detection. Because direction information differs across fabric types, we adopt a bank of directional Gabor filters to extract it, generating directional Gabor filtered maps. We then propose an efficient TRPCA model that decomposes the feature tensor, built by stacking the feature vectors of all the feature maps, into a low-rank tensor and a sparse tensor using the alternating direction method of multipliers for tensor recovery (ADMM-TR). Finally, the saliency map generated from the sparse tensor is segmented via an improved adaptive thresholding algorithm to locate the defective regions. Experimental results demonstrate that our algorithm is superior to the state of the art.
Citations: 2
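A directional Gabor filter bank like the one described can be sketched as follows; the parameter defaults are illustrative assumptions, not the paper's settings:

```python
import math

def gabor_kernel(size, theta, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a 2-D Gabor filter oriented at angle `theta`.

    size  : kernel is (2*size+1) x (2*size+1)
    theta : orientation in radians
    lam   : wavelength of the sinusoidal carrier
    sigma : std-dev of the Gaussian envelope
    gamma : spatial aspect ratio of the envelope
    """
    kernel = []
    for y in range(-size, size + 1):
        row = []
        for x in range(-size, size + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / lam))
        kernel.append(row)
    return kernel

def gabor_bank(size, n_orientations):
    """Filter bank sweeping n_orientations directions over [0, pi)."""
    return [gabor_kernel(size, k * math.pi / n_orientations)
            for k in range(n_orientations)]
```

Convolving a fabric image with each kernel in the bank yields the directional filtered maps whose features are stacked into the tensor that TRPCA decomposes.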
Occlusion Object Detection via Collaborative Sensing Deep Convolution Network
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.90
Ce Li, Xinyu Zhao, Hao Liu, Limei Xiao
Abstract: Object detection is one of the important problems in computer vision, but external occlusion often causes object features to be missing, which poses a major challenge. To address occlusion object detection and describe object features more effectively, we propose a collaborative sensing deep convolution network that achieves co-detection using the global and partial features of objects. First, we divide the object into global and partial regions, i.e., we segment a parent and its children within an object. Then, a joint detection network over parent and children is constructed. Finally, collaborative detection achieves precise localization and recognition of the parent. The proposed algorithm effectively solves the problem that objects cannot be detected due to missing features, and the children ensure the accuracy of parent reconstruction. Experimental results demonstrate that our algorithm performs better than other state-of-the-art methods.
Citations: 2
Enhancing Protein-ATP and Protein-ADP Binding Sites Prediction Using Supervised Instance-Transfer Learning
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.9
Junda Hu, Zi Liu, Dong-Jun Yu
Abstract: Protein-ATP and protein-ADP interactions are ubiquitous in a wide variety of biological processes. Accurately identifying ATP-binding and ADP-binding sites or pockets is of significant importance for both protein function analysis and drug design. Although much progress has been made, challenges remain, especially in the post-genome era, where large volumes of proteins without functional annotation are quickly accumulating. In this study, we report an instance-transfer-learning-based predictor, ATP&ADPsite, that targets both ATP-binding and ADP-binding residues from protein sequence and structural information. ATP&ADPsite first represents each residue sample with evolutionary information, predicted secondary structure, and predicted solvent accessibility. In this feature space, a supervised instance-transfer-learning method improves ATP-binding/ADP-binding residue prediction by combining ATP-binding and ADP-binding proteins. Random under-sampling is finally employed to address the imbalanced-data learning problem. Experimental results demonstrate that the proposed ATP&ADPsite achieves better prediction performance and outperforms many existing sequence-based predictors. The ATP&ADPsite web server is available at http://csbio.njust.edu.cn/bioinf/ATP&ADPsite.
Citations: 1
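The random under-sampling step is a standard remedy for the class imbalance between the few binding residues and the many non-binding ones. A minimal sketch, where `undersample` is a hypothetical helper and the 1:1 ratio is an assumption rather than the paper's configuration:

```python
import random

def undersample(positives, negatives, ratio=1.0, seed=0):
    """Random under-sampling for imbalanced binary data.

    Keeps every minority-class (binding) sample and draws a random
    subset of the majority class sized `ratio` times the minority,
    so the classifier trains on a roughly balanced set.
    """
    rng = random.Random(seed)
    k = min(len(negatives), int(ratio * len(positives)))
    return positives, rng.sample(negatives, k)
```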
Detecting Driver's Braking Intention Using Recurrent Convolutional Neural Networks Based EEG Analysis
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.86
Suk-Min Lee, Jeong-Woo Kim, Seong-Whan Lee
Abstract: Driving assistance systems have recently been studied to prevent emergency braking situations by combining external information from radar or camera devices with internal information on the driver's intention. Electroencephalography (EEG) is an effective method for reading a user's intention with high temporal resolution. Our proposed system mainly contributes to detecting the driver's braking intention before the brake pedal is pressed in an emergency situation. We investigated early event-related potential (ERP) curves evoked by the visual sensory process in emergency situations using a recurrent convolutional neural network (RCNN) model, which is well suited to capturing contextual and spatial patterns of brain signals. The RCNN model is composed of a convolutional layer, two recurrent convolutional layers (RCLs), and a softmax layer. Fourteen participants drove for 120 minutes in a virtual driving environment with two types of emergency situations and a normal driving situation. The early ERP showed potential for classifying the driver's braking intention: at 200 ms post-stimulus, classification with the RCNN achieved an AUC of 0.86, versus 0.61 for regularized linear discriminant analysis (RLDA). Based on early ERP patterns, the RCNN model recognized braking intention 380 ms earlier than the brake pedal press. Our system could be applied to other brain-computer interface (BCI) systems to minimize detection time by capturing early ERP curves with an RCNN model.
Citations: 5
Estimating Food Calories for Multiple-Dish Food Photos
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.145
Takumi Ege, Keiji Yanai
Abstract: A food photo generally includes several kinds of dishes, so recognizing such images requires detecting each dish. In recent years, the accuracy of object detection has improved drastically with the advent of CNNs. In this paper, we apply Faster R-CNN [10], a major object detection method, to food photos containing multiple dishes, and verify it using two kinds of food photo datasets. The food detector is then applied to calorie estimation: Faster R-CNN detects each dish in a food image, and the calories of each detected dish are estimated by image-based food calorie estimation [2]. In this way, we estimate food calories from a photo of multiple dishes. For the experiments, we collected food photos of multiple dishes annotated with their total calories, and estimated calories by combining the food detector with image-based calorie estimation.
Citations: 32
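The overall pipeline, detecting each dish and summing per-dish calorie estimates, can be sketched as follows. Both callables are hypothetical stand-ins for the Faster R-CNN detector and the image-based estimator [2]:

```python
def estimate_total_calories(image, detect_dishes, estimate_calories):
    """Pipeline sketch: detect every dish in the photo, estimate the
    calories of each detected dish, and sum them.

    detect_dishes(image)   -> iterable of per-dish crops/detections
    estimate_calories(dish) -> calorie estimate for one dish
    """
    return sum(estimate_calories(dish) for dish in detect_dishes(image))
```

Decomposing the problem this way lets each component be trained and evaluated independently before they are combined.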
CNN-Based Pedestrian Orientation Estimation from a Single Image
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.10
Kojiro Kumamoto, K. Yamada
Abstract: In traffic environments where vehicles and pedestrians coexist, predicting a pedestrian's path is an important task for automated driving and driver support systems to prevent accidents. Research has therefore been conducted on estimating a pedestrian's orientation from in-vehicle camera images. In this paper, we present a CNN-based method for estimating pedestrian orientation from single-frame images. The proposed method exploits the relationship between the direction of a pedestrian's body and the direction of the pedestrian's face. The method is evaluated on the TUD and PDC datasets, and its performance is reported.
Citations: 4