Latest Publications — 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)

Harnessing U-disparity in Point Clouds for Obstacle Detection
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.106
Yang Wei, Chen Gong, Shuo Chen
Abstract: Lidar is indispensable equipment in autonomous vehicles for environment perception, in which obstacle detection plays an important role in collision avoidance and route planning. The main challenge of Lidar-based obstacle detection is that processing disordered and sparse point clouds is difficult and time-consuming. Hence, this paper presents a novel use of U-disparity to locate obstacles of all kinds in point clouds, which makes obstacle detection effective and efficient. The proposed method first uses cross-calibration to align the point cloud with a reference image, forming a depth map. Then, a U-disparity map is built from the Lidar-based depth map. Owing to the particularity of Lidar-based U-disparity, we select local peaks in each column of the U-disparity map to identify the disparities relevant to obstacles. After applying filtering and clustering steps to these salient peak disparities, the corresponding obstacles can be precisely localized. Quantitative and qualitative experimental results on the KITTI object detection and road detection benchmarks reveal that the proposed method achieves very encouraging performance in various environments.
Citations: 0
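The U-disparity map at the core of this method is, at heart, a column-wise histogram of disparity values. A minimal NumPy sketch (the function name, bin count, and integer quantization are our own assumptions, not taken from the paper):

```python
import numpy as np

def u_disparity(disparity, max_d=64):
    # For each image column, count how many pixels fall into each
    # integer disparity bin. In the resulting map (rows = disparity
    # bins, columns = image columns), an upright obstacle shows up as
    # a strong horizontal run of counts, because its surface keeps
    # roughly one disparity across many image rows of a column.
    h, w = disparity.shape
    d = np.clip(disparity.astype(np.int64), 0, max_d - 1)
    u_map = np.zeros((max_d, w), dtype=np.int64)
    for col in range(w):
        u_map[:, col] = np.bincount(d[:, col], minlength=max_d)
    return u_map
```

Those per-column count peaks are the "local peaks in each column" that the method then filters and clusters to localize obstacles.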
Deep Neural Networks for Accurate Iris Recognition
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.152
Yuzheng Xu, Tzu-Chan Chuang, S. Lai
Abstract: Most prior iris recognition techniques based on the existing pipeline have already reached their limits. Therefore, this work explores the possibility of applying deep learning to iris recognition. We combine a novel segmentation network with a modified ResNet-18 as the iris matching network. The segmentation architecture is an iterative, altered FCN (fully convolutional network) consisting of a path of contracting layers that capture features and a symmetric upsampling path that gives precise pixel-to-pixel localization. The network not only generates visually plausible iris masks but also makes good use of data augmentation. We show that combining such networks outperforms prior methods on several iris image datasets, including the CASIA V3-Interval and UBIRIS V2 datasets.
Citations: 4
Adversarial Learning Based Saliency Detection
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.103
Xuecai Hu, Xin Zhao, Kaiqi Huang, T. Tan
Abstract: Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, the binary cross-entropy loss typically used by saliency detection networks is a pixel-wise loss, so the salient probability of each pixel is predicted independently. This raises the problem of spatial discontinuity in the predicted saliency maps. Many researchers try to solve this problem with super-pixel segmentation, but that is complicated and time-consuming. In this paper, we propose an Adversarial Saliency Detection Network (ASDN) that enhances the spatial continuity of saliency maps with two sub-networks: a saliency detection network and a discriminator network. The discriminator aims to distinguish the saliency maps predicted by the saliency detection network from the ground truth; in this way, it helps the saliency detection network enhance the long-range spatial continuity of the predicted saliency map. Our ASDN achieves state-of-the-art performance on standard salient object detection benchmarks.
Citations: 3
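The spatial-discontinuity argument can be made concrete: per-pixel BCE sums fully independent terms, and the adversarial setup adds a map-level term on top. A hedged NumPy sketch (the weighting `lam` and the function names are illustrative assumptions, not from the paper):

```python
import numpy as np

def pixelwise_bce(pred, target, eps=1e-7):
    # Standard per-pixel binary cross entropy: every pixel is scored
    # independently, which is exactly why spatial continuity is lost.
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p)
                           + (1.0 - target) * np.log(1.0 - p))))

def generator_loss(pred, target, d_score, lam=0.1):
    # Combined objective (weights assumed): pixel-wise BCE plus an
    # adversarial term that shrinks as the discriminator's
    # "looks like ground truth" probability d_score grows.
    return pixelwise_bce(pred, target) - lam * float(np.log(max(d_score, 1e-7)))
```

The adversarial term is the only part of the objective that sees the map as a whole, which is what pushes the generator toward spatially coherent saliency maps.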
Compressed Sensing Natural Imaging via Hadamard-Diagonal Matrix
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.96
Ying Zhou, Quansen Sun, Yazhou Liu, Jixin Liu
Abstract: The measurement matrix is one of the keys to compressed sensing. However, existing measurement matrices face two main problems: difficult hardware implementation and low sensing efficiency. In fact, these matrices usually ignore the energy-concentration characteristic of natural images in the sparse domain, which greatly limits their sensing efficiency and thus the reconstruction efficiency. In this paper, we propose a simple but efficient measurement matrix based on the Hadamard matrix that maximizes energy conservation in the sparse domain, named the Hadamard-Diagonal Matrix (HDM). We keep the main sensing rows and columns of the Hadamard matrix as '1' and set the others to '0', so that more energy is preserved after sampling natural images in the sparse domain, which increases sensing efficiency. Meanwhile, the HDM is a binary, sparse matrix, which benefits hardware implementation. Experimental results show that the HDM performs better than some popular existing measurement matrices and is incoherent with different sparse bases.
Citations: 0
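The abstract describes the construction only loosely. Under one plausible reading — keep the first k "main sensing" rows and columns of a Sylvester Hadamard matrix, map the kept +1 entries to 1 and everything else to 0 (entirely our assumption) — a sketch looks like:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hdm(n, k):
    # Hypothetical HDM-style matrix: retain the top-left k x k block
    # of the Hadamard matrix, binarized (+1 -> 1, everything else,
    # including all dropped rows/columns, -> 0). The result is binary
    # and sparse, which eases hardware implementation.
    H = hadamard(n)
    M = np.zeros((n, n), dtype=np.int8)
    M[:k, :k] = (H[:k, :k] > 0).astype(np.int8)
    return M
```

How the paper selects which rows and columns are "main sensing" ones is not specified in the abstract, so the first-k choice here is purely illustrative.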
Image Distortion Detection Using Convolutional Neural Network
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.95
Namhyuk Ahn, Byungkon Kang, Kyung-ah Sohn
Abstract: Image distortion classification and detection is an important task in many applications. For example, when compressing images, if we know the exact location of the distortion, it is possible to re-compress images by adjusting the local compression level dynamically. In this paper, we address the problem of detecting the distortion region and classifying the distortion type of a given image. We show that our model significantly outperforms the state-of-the-art distortion classifier, and we report accurate detection results for the first time. We expect these results to prove the usefulness of our approach in many potential applications such as image compression and distortion restoration.
Citations: 11
Writer Identification in Indic Scripts: A Stroke Distribution Based Approach
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.158
Santhoshini Reddy, Chris Andrew Gadde, U. Pal, Alireza Alaei, Viswanath Pulabaigari
Abstract: This paper proposes to represent an offline handwritten document, for writer identification, by a distribution of strokes over a stroke alphabet. The alphabet is created in a data-driven manner: strokes are extracted from the image using a regression method, the extracted strokes are represented as fixed-length vectors in a vector space, and the strokes are clustered into stroke categories to form the alphabet. The paper also proposes a clustering method with a new clustering score whereby an optimal number of clusters (categories) is identified automatically. For a given document, a histogram over the stroke alphabet, based on the frequency of occurrence of its elements, represents the writer's style. A Support Vector Machine is used for classification. Offline handwritten documents written in two Indic languages, Telugu and Kannada, were used in the experiments. The proposed method obtains results comparable to other methods in the literature.
Citations: 4
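The histogram step — assigning each fixed-length stroke vector to its nearest alphabet entry and counting — can be sketched as follows (the function names and the Euclidean nearest-centre assignment are our assumptions):

```python
import numpy as np

def stroke_histogram(strokes, alphabet):
    # Assign each stroke vector to its nearest entry of the stroke
    # alphabet (the cluster centres), then return the normalised
    # frequency histogram that represents a writer's style.
    d = np.linalg.norm(strokes[:, None, :] - alphabet[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(alphabet)).astype(float)
    return hist / hist.sum()
```

Such fixed-length histograms are what the SVM classifier then operates on, one histogram per document.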
Collaborative Representation-based Classification Method Using Weighted Multi-scale LBP for Image Recognition
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.161
Xiaoning Song, Yao Chen
Abstract: In this paper, we propose a novel collaborative representation-based classification method using weighted multi-scale LBP for face recognition. First, to capture more useful local information from the dictionary, we construct a weighted hierarchical multi-scale LBP as a dictionary optimization tool to extract the multi-scale information of the original samples. Second, a query sample is represented as a linear combination of the most informative weighted multi-scale LBP features, in which the representation capability of each feature is measured to determine the "nearest neighbors" for representing the test sample. The final goal of the proposed method is to find an optimal representation of these weighted multi-scale LBP features from the classes with major contributions. Experimental results on the ORL, FERET, AR and GT face databases demonstrate the effectiveness of the proposed method.
Citations: 0
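As background, the single-scale LBP code that the multi-scale variant builds on thresholds each pixel's 8 neighbours against the centre and packs the results into a byte; a minimal sketch (the paper's weighting and hierarchical multi-scale pooling are omitted):

```python
import numpy as np

def lbp_basic(img):
    # Basic single-scale LBP: for each interior pixel, compare the 8
    # neighbours against the centre value and pack the comparison
    # bits into an 8-bit code. Multi-scale LBP repeats this at
    # several radii and histograms the codes.
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

Histograms of these codes, computed per block and per scale, form the feature vectors that the collaborative representation then combines.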
Self-Correction Method for Automatic Data Annotation
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.135
Ce Liu, Tonghua Su, Lijuan Yu
Abstract: Supervised learning from data, especially with deep neural networks, usually needs large amounts of labeled data. Given that plenty of handwritten samples exist on the Internet, how can we make use of such dirty data, or make it cleaner? This paper presents a self-correction learning method to automatically label dirty data. The method draws inspiration from the self-training framework: it trains the classifier iteratively on the dirty data and, during the iterations, shifts the cluster centers and corrects or deletes samples according to the estimated confidence. Empirical results demonstrate that the proposed method can effectively improve the quality of the data and greatly reduce human annotation effort.
Citations: 3
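A heavily simplified reading of that loop — fit class centroids, relabel each kept sample to its nearest centroid, drop the least confident fraction — might look like this (the real method trains a classifier and uses its own confidence estimate; everything here, including nearest-centroid confidence, is an illustrative assumption):

```python
import numpy as np

def self_correct(X, y, rounds=3, keep=1.0):
    # Assumed sketch of the pipeline: per round, fit one centroid per
    # class ("shift the cluster centers"), relabel every kept sample
    # to its nearest centroid ("correct"), and drop the least
    # confident fraction ("delete"). keep=1.0 disables deletion.
    y = y.copy()
    idx = np.arange(len(X))
    for _ in range(rounds):
        classes = np.unique(y[idx])
        centers = np.array([X[idx][y[idx] == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X[idx][:, None, :] - centers[None, :, :], axis=2)
        y[idx] = classes[d.argmin(axis=1)]           # correct labels
        order = d.min(axis=1).argsort()              # most confident first
        idx = idx[order[: max(1, int(keep * len(idx)))]]  # delete the rest
    return y, idx
```

Even this toy version exhibits the self-correcting behaviour: a sample mislabeled into the wrong cluster is pulled back to the class whose centroid it actually sits near.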
Split and Merge for Accurate Plane Segmentation in RGB-D Images
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.26
Yigong Zhang, Tao Lu, Jian Yang, Hui Kong
Abstract: In this paper, we propose an accurate and efficient method to detect planar surfaces indoors with an RGB-D camera. First, we segment the RGB image using a graph-based segmentation approach, chosen for its efficiency and its ability to preserve sharp region borders. Graph-based color segmentation usually results in over-segmentation or under-segmentation, so to achieve better plane segmentation we propose a split-and-merge strategy. In the split step, we segment planes by applying a random sample consensus (RANSAC) approach to each graph-derived point cloud, based on a plane-fitting mean squared error (MSE). In the merge step, we simultaneously merge over-segmented regions from the split step with a maximal-clique clustering approach. Experiments demonstrate that our plane segmentation algorithm detects indoor planes at a frame rate of 10 Hz and achieves very promising performance.
Citations: 5
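The split-step primitive — RANSAC plane fitting on a per-region point cloud — is standard; a compact sketch (the tolerance and iteration count are illustrative, and the paper additionally gates acceptance on the plane-fitting MSE):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    # Repeatedly sample 3 points, form the candidate plane
    # n . x + d = 0, count points within tol of the plane, and keep
    # the model with the most inliers.
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        dist = np.abs((points - p0) @ n)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, -n @ p0), inliers
    return best_model, best_inliers
```

Running this inside each graph-derived region keeps the sample pool small, which is part of why the per-frame cost stays low.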
Person Re-identification by Saliency-Weighted Descriptor and Ranking Aggregation
2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) Pub Date : 2017-11-01 DOI: 10.1109/ACPR.2017.111
Chao Guan, Minxian Li, Chunxia Zhao
Abstract: Person re-identification, which identifies the same person appearing in non-overlapping camera views, is an important and challenging task in computer vision. Although most feature representation methods have significantly improved person re-identification performance, they do not distinguish between the pedestrian and the surrounding environment when extracting features. In this paper, we present a novel feature representation called the saliency-weighted descriptor (SWD), which strengthens the discrimination of pedestrian features. Furthermore, we propose a ranking aggregation algorithm that combines the SWD with an unweighted descriptor to mitigate the impact of inaccurate salient regions. Experimental results on public person re-identification datasets (VIPeR, QMUL GRID, CUHK01, and CUHK03) demonstrate the effectiveness of our approach.
Citations: 0
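The abstract does not spell out the aggregation rule; a Borda-style sum of per-descriptor ranks is one common, assumed stand-in for combining the SWD and unweighted rankings:

```python
import numpy as np

def aggregate_ranks(dist_a, dist_b):
    # Borda-style aggregation (our assumption, not the paper's exact
    # scheme): turn each gallery distance list into ranks
    # (0 = best match), then order gallery identities by the sum of
    # their two ranks.
    ra = dist_a.argsort().argsort()
    rb = dist_b.argsort().argsort()
    return np.argsort(ra + rb, kind="stable")
```

An identity ranked highly by only one descriptor — say, because an inaccurate salient region skews the SWD — is demoted by its poor rank under the other, which is the mitigation effect the paper targets.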