2017 4th IAPR Asian Conference on Pattern Recognition (ACPR): Latest Publications

Depth Estimation for Hazy Images Using Deep Learning
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.100
Laksmita Rahadianti, Fumihiko Sakaue, J. Sato
Abstract: 3D scene understanding is important for many applications in computer vision. However, most existing solutions assume that images are captured in clear media. In real-world situations we may encounter less than ideal conditions such as haze or fog, in which the captured images contain scattering and veiling effects that obscure the features of the scene. Many studies approach such images by first removing the scattering effects to obtain an approximately clear image. By studying the physical model of light propagation in scattering media, however, we observe a relation between the captured image intensity and the distance from the camera. We therefore attempt instead to exploit these scattering effects to obtain 3D depth cues. To learn the relation between the scattering effects and depth, we utilize deep networks to extract and build high-level features. In this paper, we propose a novel classification approach for depth map estimation of hazy images using deep learning.
Citations: 0
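The relation between intensity and distance that this abstract exploits is commonly written as the Koschmieder-style scattering model I = J·e^(-βd) + A·(1 - e^(-βd)). As an illustrative sketch under that assumed model (not the paper's learned classifier), depth can be inverted from a pixel's intensity wherever the scene radiance J differs from the airlight A:

```python
import numpy as np

def hazy_intensity(J, d, beta=1.0, A=1.0):
    """Scattering model: observed intensity mixes scene radiance J
    with airlight A according to scene depth d."""
    t = np.exp(-beta * d)          # transmission falls off with depth
    return J * t + A * (1.0 - t)

def depth_from_transmission(I, J, beta=1.0, A=1.0):
    """Invert the model for depth; valid only where J != A."""
    t = (I - A) / (J - A)
    return -np.log(np.clip(t, 1e-12, 1.0)) / beta

# Round trip: a pixel of radiance 0.2 at depth 3.0 under airlight 1.0
I = hazy_intensity(0.2, 3.0)
print(depth_from_transmission(I, 0.2))   # recovers 3.0
```

Deeper pixels drift toward the airlight value, which is exactly the depth cue the paper learns from data instead of inverting analytically.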
Facial Expression Intensity Estimation Based on CNN Features and RankBoost
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.109
Yue-Hua Ren, Jiani Hu, Weihong Deng
Abstract: Facial expressions provide a wealth of information that helps us better understand a person's emotions and attitudes, and the intensity of a facial expression is important for detecting and tracking changes in expression. In this paper, we present a framework based on CNN features and the RankBoost algorithm to estimate facial expression intensity. In daily life, a change of facial expression is a dynamic process over time, so the problem of estimating expression intensity can be converted into a ranking problem over expression sequences. Deep features have strong generalization ability, so this paper uses features obtained from a CNN as input rather than features from traditional machine learning. We further enhance the ranking function of the weak hypothesis in RankBoost and add more prior information to the loss function. Extensive experiments on the Cohn-Kanade+ (CK+) database show that the algorithm presented in this paper outperforms previous methods.
Citations: 1
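The conversion from intensity estimation to ranking works because frames of an onset sequence are naturally ordered by intensity. A minimal sketch of that framing (toy scalar scores standing in for CNN features; not the paper's boosted ranker) builds the ordered pairs and measures how many a scoring function ranks correctly, the quantity a RankBoost-style learner maximizes:

```python
def expression_pairs(frames):
    """In an onset sequence, later frames show higher intensity, so every
    ordered pair (i, j) with i < j is a constraint score(j) > score(i)."""
    n = len(frames)
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def pairwise_agreement(scores, pairs):
    """Fraction of ordered pairs ranked correctly by the scoring function."""
    correct = sum(scores[j] > scores[i] for i, j in pairs)
    return correct / len(pairs)

# Toy per-frame scores for a smile sequence; frame 2 is slightly misranked
scores = [0.1, 0.3, 0.2, 0.7, 0.9]
pairs = expression_pairs(scores)
print(pairwise_agreement(scores, pairs))   # 9 of 10 pairs correct -> 0.9
```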
Estimating Relative Pose between Nonoverlapping Cameras by Four Laser Pointers Based on General Camera Model
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.79
Shigang Li, T. Harada, Wuhe Zou
Abstract: In this paper we propose a method for estimating the relative pose between non-overlapping cameras using a minimal set of four laser pointers, based on a general camera model. In the proposed method, four laser pointers are mounted on a calibration chessboard; we call this apparatus a laser calibration board and model it as a general camera. First, the pose of each laser pointer in the coordinate system of the calibration chessboard is calibrated. The calibrated laser calibration board is then used to estimate the relative pose between two non-overlapping cameras using an NPnP (Non-Perspective n-Point) algorithm for a general camera. Experimental results show the effectiveness of the proposed method.
Citations: 2
Coherent Low-tubal-Rank Tensor Completion
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.66
Andong Wang, Zhong Jin, Xiangrui Li
Abstract: This paper studies a sufficient condition for exact completion of coherent low-tubal-rank tensors. When a leveraged sampling strategy is adopted instead of uniform sampling, it can be shown that any 3-D tensor of size n_1 × n_2 × n_3 with tubal-rank r can be exactly recovered by tubal nuclear norm minimization, with high probability, when the number of observed entries is of order O(max{n_1, n_2} n_3 r log^2((n_1 + n_2) n_3)). This result removes the tensor incoherence parameter μ_0 from the sample complexity O(μ_0 max{n_1, n_2} n_3 r log((n_1 + n_2) n_3)) of the uniform sampling strategy, and can significantly reduce the number of observations required for a tensor with large μ_0.
Citations: 0
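The trade-off in the two sample-complexity orders can be made concrete numerically. A sketch (orders only, with the hidden constants dropped, so only the comparison between the two bounds is meaningful): leveraged sampling pays an extra log factor but drops μ_0, so it wins precisely when the tensor is coherent (large μ_0).

```python
import math

def samples_leveraged(n1, n2, n3, r):
    """Order of entries sufficient under leveraged sampling:
    O(max(n1,n2) * n3 * r * log^2((n1+n2)*n3)), no incoherence factor."""
    return max(n1, n2) * n3 * r * math.log((n1 + n2) * n3) ** 2

def samples_uniform(n1, n2, n3, r, mu0):
    """Order under uniform sampling; the bound carries the incoherence mu0."""
    return mu0 * max(n1, n2) * n3 * r * math.log((n1 + n2) * n3)

n1 = n2 = n3 = 500; r = 5
# Coherent tensor (mu0 = 50): leveraged sampling needs fewer observations
print(samples_leveraged(n1, n2, n3, r) < samples_uniform(n1, n2, n3, r, 50.0))
# Incoherent tensor (mu0 = 1): the extra log factor makes leveraged worse
print(samples_leveraged(n1, n2, n3, r) > samples_uniform(n1, n2, n3, r, 1.0))
```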
Deep Learning-Based Classification of Protein Subcellular Localization from Immunohistochemistry Images
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.125
Jin-Xian Hu, Ying-Ying Xu, Yang Yang, Hongbin Shen
Abstract: Owing to recent breakthroughs in bioimaging, automated classification of protein subcellular localization from immunohistochemistry (IHC) images has become an important topic in proteomics research. Inspired by the impressive performance of deep learning on various image classification tasks, we trained a deep neural network to classify protein images into eight subcellular localizations, achieving higher classification accuracy than traditional support vector machine models. Visualizations of the network's intermediate outputs show that our model captures subtle texture features from IHC images, leading to better subcellular location classification. In addition, our results show that data rebalancing can significantly improve the classification performance of this multi-class deep classifier.
Citations: 3
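The data rebalancing the abstract credits can take several forms; one simple form is oversampling minority classes before training. A minimal sketch (random duplication; the paper does not specify this exact scheme, and the class names below are invented for illustration):

```python
import random
from collections import Counter

def rebalance_by_oversampling(samples, labels, seed=0):
    """Duplicate minority-class samples until every class matches the
    largest class, equalizing the per-class training signal."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    out_s, out_y = [], []
    for y, group in by_class.items():
        # pad each class up to the target size with random repeats
        out_s += group + [rng.choice(group) for _ in range(target - len(group))]
        out_y += [y] * target
    return out_s, out_y

# 3 "nucleus" images vs 1 "mitochondria" image -> 3 vs 3 after rebalance
s, y = rebalance_by_oversampling(["n1", "n2", "n3", "m1"],
                                 ["nuc", "nuc", "nuc", "mito"])
print(Counter(y))   # each class now has 3 samples
```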
Unconstrained OCR for Urdu Using Deep CNN-RNN Hybrid Networks
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.5
Mohit Jain, Minesh Mathew, C. V. Jawahar
Abstract: Building robust text recognition systems for languages with cursive scripts such as Urdu has always been challenging. Intricacies of the script and the absence of ample annotated data further complicate the task. We demonstrate the effectiveness of an end-to-end trainable hybrid CNN-RNN architecture for recognizing Urdu text from printed documents, typically known as Urdu OCR. The proposed solution is not bound by any language-specific lexicon; the model follows a segmentation-free, sequence-to-sequence transcription approach. The network transcribes a sequence of convolutional features from an input image to a sequence of target labels, removing the need to segment the input image into its constituent characters/glyphs, which is often arduous for scripts like Urdu. Furthermore, past and future contexts modeled by bidirectional recurrent layers aid the transcription. We outperform previous state-of-the-art techniques on the synthetic UPTI dataset. Additionally, we publish a new dataset curated by scanning printed Urdu publications in various writing styles and fonts, annotated at the line level, and provide benchmark results of our model on this dataset.
Citations: 19
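Segmentation-free CNN-RNN pipelines of this kind are typically trained with a CTC objective, whose simplest decoder collapses the per-frame predictions into an output string. The paper does not spell out its decoder, so the following is a sketch of standard CTC greedy decoding, the usual companion of this architecture:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best path into an output sequence the CTC way:
    merge consecutive repeats, then drop blank symbols."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# Frame-wise argmax labels for a short line; 0 is the CTC blank.
# The blank between the two 2s is what allows a repeated glyph to survive.
print(ctc_greedy_decode([1, 1, 0, 2, 2, 0, 2]))   # [1, 2, 2]
```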
A Generalized Discriminative Least Squares Regression Model
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.3
Haoliang Yuan, Junjie Zheng, Fangyuan Xu, L. L. Lai, Weiyang Li, Houqing Zheng, Zhimin Wang
Abstract: Least squares regression (LSR) is a fundamental tool in statistical theory. In this paper, we propose a generalized discriminative least squares regression (GDLSR) model for multicategory classification. The main idea of GDLSR is to introduce a translation matrix that enhances the flexibility of the target matrix. By adding a graph constraint on the translation matrix, GDLSR makes samples in the same class have similar translation vectors. To optimize the proposed model, an efficient iterative algorithm is developed to find the global optimal solution. Extensive experimental results on face data sets confirm the effectiveness of GDLSR.
Citations: 0
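GDLSR generalizes the plain LSR classifier, whose closed form is worth keeping in mind: with one-hot targets Y, solve W = (XᵀX + λI)⁻¹XᵀY and classify by the largest response. A sketch of that baseline only (the translation matrix and graph constraint that GDLSR adds are not shown here):

```python
import numpy as np

def ridge_lsr(X, Y, lam=1e-2):
    """Regularized least squares regression against one-hot targets Y:
    W = (X^T X + lam*I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, W):
    """Class = column of X @ W with the largest response."""
    return np.argmax(X @ W, axis=1)

# Two well-separated 2-D classes with a one-hot target matrix
X = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
W = ridge_lsr(X, Y)
print(predict(X, W))   # [0 0 1 1]
```

GDLSR's motivation is that these rigid 0/1 targets are too strict; the learned translation matrix relaxes them while the graph constraint keeps same-class relaxations similar.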
Locality-Constrained Structural Orthogonal Procrustes Regression for Low-Resolution Face Recognition with Pose Variations
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.85
Guangwei Gao, Pu Huang, Dong Yue, Wankou Yang
Abstract: Face images captured by surveillance cameras usually have low resolution (LR) as well as uncontrolled poses and illumination conditions, which adversely affect the performance of face recognition algorithms. In this paper, we propose a locality-constrained structural orthogonal Procrustes regression (LCSOPR) approach to learn pose-robust discriminative representations between LR and high-resolution (HR) images. The orthogonal Procrustes problem (OPP) seeks an optimal transformation between two images to correct the pose from one to the other. LCSOPR additionally places a nuclear norm constraint on the error term to preserve the image's structural information, and introduces a locality constraint to preserve locality and sparsity simultaneously. Finally, after obtaining resolution-robust features, a simple yet powerful sparse-representation-based classifier is used to predict face labels. Experimental results show that the proposed method outperforms many state-of-the-art LR face recognition approaches.
Citations: 1
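The OPP building block the abstract relies on has a classical closed-form solution: the orthogonal R minimizing ||AR - B||_F is UVᵀ from the SVD of AᵀB. A sketch of that core step alone (LCSOPR's nuclear-norm and locality constraints are on top of this):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Solve min_R ||A R - B||_F subject to R^T R = I
    via the SVD of A^T B (classical OPP solution)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Recover a known planar rotation from noiseless point correspondences
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
A = np.random.default_rng(0).normal(size=(20, 2))
B = A @ R_true
R_est = orthogonal_procrustes(A, B)
print(np.allclose(R_est, R_true))   # True
```

In the paper's setting, this transformation is what corrects the pose between a probe image and a gallery image before the regression residual is measured.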
Focal Stack Representation and Focus Manipulation
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.102
P. Sakurikar, P J Narayanan
Abstract: Focus, depth of field, and defocus are important elements that convey aesthetic emphasis in a good photograph. The ability to manipulate focus after capture gives photographers useful creative control. Capturing focal stacks (multiple images with small changes in focus setting) of static scenes is relatively easy with modern cameras. We propose a compact representation for focal stacks using an all-in-focus image, a focal-slice index map, and pair-wise defocus blur parameters. Using this representation, we show reconstruction of images with different focus effects, including extended focus, multiple focus, and scene synthesis with natural focus effects. A user study shows high acceptability of the synthesized images compared to real ones. The compact and powerful representation makes focal stacks suitable for handling by image editing tools, providing flexible focus manipulation.
Citations: 2
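Two of the representation's three components, the all-in-focus image and the focal-slice index map, can be sketched with a per-pixel focus measure: pick, at each pixel, the stack slice with the strongest local detail. The Laplacian-energy measure below is an assumption for illustration; the paper does not specify this particular measure.

```python
import numpy as np

def local_sharpness(img):
    """Squared Laplacian response as a simple per-pixel focus measure."""
    gx, gy = np.gradient(img.astype(float))
    gxx = np.gradient(gx, axis=0)
    gyy = np.gradient(gy, axis=1)
    return (gxx + gyy) ** 2

def focal_stack_to_representation(stack):
    """Reduce a focal stack to an all-in-focus image plus a focal-slice
    index map recording which slice was sharpest at each pixel."""
    sharp = np.stack([local_sharpness(s) for s in stack])
    index_map = np.argmax(sharp, axis=0)
    rows, cols = np.indices(index_map.shape)
    all_in_focus = np.stack(stack)[index_map, rows, cols]
    return all_in_focus, index_map

# A sharp dot beats its defocused counterpart at the dot's pixel
s0 = np.zeros((10, 10)); s0[2, 2] = 1.0          # slice 0: dot in focus
s1 = np.zeros((10, 10)); s1[1:4, 1:4] = 1.0 / 9  # slice 1: same dot defocused
aif, idx = focal_stack_to_representation([s0, s1])
print(idx[2, 2], aif[2, 2])   # slice 0 wins and its value is kept
```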
Statistical Performance of Convex Low-Rank and Sparse Tensor Recovery
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.67
Xiangrui Li, Andong Wang, Jianfeng Lu, Zhenmin Tang
Abstract: Suppose a tensor 𝒳* ∈ ℝ^(n_1 × ... × n_K) is simultaneously low-Tucker-rank and sparse. This paper mathematically studies the statistical performance of recovering 𝒳* from its noisy observations. A convex optimization problem in the spirit of Remurs [1], integrating the l_1-norm and the tensor nuclear norm, is proposed. Theoretically, a deterministic upper bound on the estimation error is provided for general noise under a restricted strong convexity assumption. For the tensor de-noising problem and the tensor compressive sensing problem, non-asymptotic upper bounds on the estimation error are also shown when the noise is i.i.d. Gaussian.
Citations: 13
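The two penalties in such an estimator each have a standard proximal operator, which is what makes solvers for this class of problems tractable: element-wise soft-thresholding for the l_1 term and singular-value thresholding for the nuclear norm. A matrix-case sketch of the two operators only (a Remurs-style tensor solver would apply the second to mode unfoldings; this is not the paper's algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    """Prox of tau*||X||_1: element-wise shrinkage toward zero (sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Prox of tau*||X||_*: shrink the singular values (low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

v = soft_threshold(np.array([3.0, 0.5, -2.0]), 1.0)   # small entries zeroed
M = np.outer([1.0, 2.0], [3.0, 4.0])                  # a rank-1 matrix
Y = singular_value_threshold(M, 1.0)                  # stays rank-1, shrunk
print(v, np.linalg.matrix_rank(Y))
```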