2020 25th International Conference on Pattern Recognition (ICPR) — Latest Publications

Contrastive Data Learning for Facial Pose and Illumination Normalization
Pub Date: 2021-01-10 | DOI: 10.1109/ICPR48806.2021.9412811
G. Hsu, Chia-Hao Tang, S. Yanushkevich, M. Gavrilova
Abstract: Face normalization can be a crucial step in generic face recognition. We propose the Pose and Illumination Normalization (PIN) framework with contrastive data learning for face normalization. The PIN framework is designed to learn the transformation from a source set to a target set; together, the two sets compose a contrastive dataset for learning. The source set contains faces collected in the wild and thus covers a wide range of variation in illumination, pose, expression, and other variables. The target set contains face images taken under controlled conditions: all faces are in frontal pose and balanced in illumination. The PIN framework is composed of an encoder, a decoder, and two discriminators. The encoder is built from a state-of-the-art face recognition network and acts as a facial feature extractor; it is not updated during training. The decoder is trained on both the source and target sets and learns the transformation from the source set to the target set, so it can transform an arbitrary face into an illumination- and pose-normalized face. The discriminators are trained to ensure the photo-realistic quality of the normalized face images generated by the decoder. The loss functions employed in the decoder and discriminators are designed and weighted to yield better normalization outcomes and recognition performance. We verify the performance of the proposed framework on several benchmark databases and compare it with state-of-the-art approaches.
Pages: 8336-8343
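As a rough illustration of the training setup described above (a frozen pretrained encoder, a trainable decoder, and weighted reconstruction/identity losses), here is a minimal NumPy sketch. The random projections standing in for the face network, the loss weights, and the function names are all hypothetical, not the authors' implementation; the adversarial terms from the discriminators are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen encoder: a fixed random projection standing in for a pretrained face network.
W_enc = rng.standard_normal((64, 16))

def encoder(x):
    # x: (N, 64) flattened face images -> (N, 16) identity features (never updated).
    return np.tanh(x @ W_enc)

# Decoder weights: these WOULD be updated by gradient descent in a real setup.
W_dec = rng.standard_normal((16, 64)) * 0.1

def decoder(f):
    return f @ W_dec

def pin_losses(src, tgt, w_pix=1.0, w_id=0.1):
    # Weighted sum of an L1 pixel loss to the normalized target and an identity
    # loss measured through the frozen encoder (hypothetical weighting).
    out = decoder(encoder(src))
    pixel = np.abs(out - tgt).mean()
    ident = np.abs(encoder(out) - encoder(src)).mean()
    return w_pix * pixel + w_id * ident
```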
Citations: 0
ACRM: Attention Cascade R-CNN with Mix-NMS for Metallic Surface Defect Detection
Pub Date: 2021-01-10 | DOI: 10.1109/ICPR48806.2021.9412424
Junting Fang, Xiaoyang Tan, Yuhui Wang
Abstract: Metallic surface defect detection is of great significance for quality control in production. The task is very challenging due to noise disturbance, large appearance variation, and the ambiguous definition of an individual defect. Traditional image processing methods cannot detect damaged regions effectively and efficiently. In this paper, we propose a new defect detection method, Attention Cascade R-CNN with Mix-NMS (ACRM), to classify and locate defects robustly. Three submodules are developed to achieve this goal: 1) a lightweight attention block that improves the ability to capture both global and local features in the spatial and channel dimensions; 2) the first application of cascade R-CNN to this task, exploiting multiple detectors to sequentially refine detection results; 3) a new method named Mix Non-Maximum Suppression (Mix-NMS), which significantly improves the filtering of redundant detections in our task. Extensive experiments on a real industrial dataset show that ACRM achieves state-of-the-art results compared to existing methods, demonstrating the effectiveness and robustness of our detection method.
Pages: 423-430
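For context on the NMS step this paper modifies: standard greedy non-maximum suppression keeps the highest-scoring box and discards boxes that overlap it beyond an IoU threshold. The NumPy sketch below implements that baseline only; the abstract does not specify how Mix-NMS alters it, so no attempt is made to reproduce the authors' variant.

```python
import numpy as np

def iou(box, boxes):
    # IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_thr=0.5):
    # Greedy NMS: repeatedly keep the best-scoring box, drop heavy overlaps.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thr]
    return keep
```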
Citations: 0
Enhancing Deep Semantic Segmentation of RGB-D Data with Entangled Forests
Pub Date: 2021-01-10 | DOI: 10.1109/ICPR48806.2021.9412787
Matteo Terreran, Elia Bonetto, S. Ghidoni
Abstract: Semantic segmentation is receiving increasing attention in the computer vision community. Deep learning methods represent the state of the art for this problem, and the trend is toward deeper networks for higher performance. The drawback of such models is a higher computational cost, which makes them difficult to integrate on mobile robot platforms. In this work we explore how to obtain lighter deep learning models without compromising performance. To do so, we consider the features used in the 3D Entangled Forests algorithm and study the best strategies for integrating them within the FuseNet deep network. These features allow us to shrink the network without losing performance, yielding a lighter model that achieves state-of-the-art performance on the semantic segmentation task and represents an interesting alternative for mobile robotics applications, where computational power and energy are limited.
Pages: 4634-4641
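One simple way to feed handcrafted per-pixel features into a CNN alongside RGB-D input, as a hedged illustration of the kind of integration discussed above (the abstract does not say which strategy won), is channel concatenation followed by a learned 1x1 projection. The shapes and channel counts below are arbitrary examples, not the paper's configuration.

```python
import numpy as np

def fuse(rgb, depth, feat, out_ch=8, seed=0):
    # rgb: (H, W, 3), depth: (H, W, 1), feat: (H, W, F) handcrafted per-pixel features.
    # Concatenate along channels, then apply a 1x1 conv + ReLU, written as a matmul.
    x = np.concatenate([rgb, depth, feat], axis=-1)
    W = np.random.default_rng(seed).standard_normal((x.shape[-1], out_ch))
    return np.maximum(x @ W, 0.0)
```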
Citations: 1
Self-Supervised Learning with Graph Neural Networks for Region of Interest Retrieval in Histopathology
Pub Date: 2021-01-10 | DOI: 10.1109/ICPR48806.2021.9412903
Yigit Ozen, S. Aksoy, K. Kösemehmetoğlu, S. Önder, A. Üner
Abstract: Deep learning has achieved strong performance in representation learning and content-based retrieval of histopathology images. The common setting in deep learning-based approaches is supervised training of deep neural networks for classification, then using the trained model to extract representations for computing and ranking distances between images. Two major challenges remain. First, supervised training of deep neural networks requires large amounts of manually labeled data, which is often scarce in the medical field; transfer learning has been used to overcome this, but with limited success. Second, clinical practice in histopathology requires working with regions of interest (ROI) of multiple diagnostic classes with arbitrary shapes and sizes. The typical solution is to aggregate the representations of fixed-size patches cropped from these regions into region-level representations, but naive aggregation cannot sufficiently exploit the rich contextual information in complex tissue structures. To tackle these two challenges, we propose a generic method that uses graph neural networks (GNN) combined with self-supervised training using a contrastive loss. The GNN enables representing arbitrarily shaped ROIs as graphs and encoding contextual information, while self-supervised contrastive learning improves the quality of the learned representations without requiring labeled data. Experiments on a challenging breast histopathology dataset show that the proposed method achieves better performance than the state of the art.
Pages: 6329-6334
引用次数: 8
CT-UNet: An Improved Neural Network Based on U-Net for Building Segmentation in Remote Sensing Images CT-UNet:一种基于U-Net的改进神经网络用于遥感图像中建筑物分割
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412355
Huanran Ye, Sheng Liu, K. Jin, Haohao Cheng
{"title":"CT-UNet: An Improved Neural Network Based on U-Net for Building Segmentation in Remote Sensing Images","authors":"Huanran Ye, Sheng Liu, K. Jin, Haohao Cheng","doi":"10.1109/ICPR48806.2021.9412355","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9412355","url":null,"abstract":"With the proliferation of remote sensing images, how to segment buildings more accurately in remote sensing images is a critical challenge. First, the high resolution leads to blurred boundaries in the extracted building maps. Second, the similarity between buildings and background results in intra-class inconsistency. To address these two problems, we propose an UNet-based network named Context-Transfer-UNet (CT-UNet). Specifically, we design Dense Boundary Block (DBB). Dense Block utilizes reuse mechanism to refine features and increase recognition capabilities. Boundary Block introduces the low-level spatial information to solve the fuzzy boundary problem. Then, to handle intra-class inconsistency, we construct Spatial Channel Attention Block (SCAB). It combines context space information and selects more distinguishable features from space and channel. Finally, we propose a novel loss function to enhance the purpose of loss by adding evaluation indicator. 
Based on our proposed CT-UNet, we achieve 85.33% mean IoU on the Inria dataset and 91.00% mean IoU on the WHU dataset, which outperforms our baseline (U-Net ResNet-34) by 3.76% and Web-Net by 2.24%.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"51 1","pages":"166-172"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82668452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
Learning Interpretable Representation for 3D Point Clouds 学习三维点云的可解释表示
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412440
Feng-Guang Su, Ci-Siang Lin, Y. Wang
{"title":"Learning Interpretable Representation for 3D Point Clouds","authors":"Feng-Guang Su, Ci-Siang Lin, Y. Wang","doi":"10.1109/ICPR48806.2021.9412440","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9412440","url":null,"abstract":"Point clouds have emerged as a popular representation of 3D visual data. With a set of unordered 3D points, one typically needs to transform them into latent representation before further classification and segmentation tasks. However, one cannot easily interpret such encoded latent representation. To address this issue, we propose a unique deep learning framework for disentangling body-type and pose information from 3D point clouds. Extending from autoencoder, we advance adversarial learning a selected feature type, while classification and data recovery can be additionally observed. Our experiments confirm that our model can be successfully applied to perform a wide range of 3D applications like shape synthesis, action translation, shape/action interpolation, and synchronization.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"46 1","pages":"7470-7477"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82943531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Correlation-based ConvNet for Small Object Detection in Videos 基于相关性的卷积神经网络视频小目标检测
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9413127
Brais Bosquet, M. Mucientes, V. Brea
{"title":"Correlation-based ConvNet for Small Object Detection in Videos","authors":"Brais Bosquet, M. Mucientes, V. Brea","doi":"10.1109/ICPR48806.2021.9413127","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9413127","url":null,"abstract":"The detection of small objects is of particular interest in many real applications. In this paper, we propose STDnet-ST, a novel approach to small object detection in video using spatial information operating alongside temporal video information. STDnet-ST is an end-to-end spatio-temporal convolutional neural network that detects small objects over time and correlates pairs of the top-ranked regions with the highest likelihood of containing small objects. This architecture links the small objects across the time as tubelets, being able to dismiss unprofitable object links in order to provide high-quality tubelets. STDnet-ST achieves state-of-the-art results for small objects on the publicly available USC-GRAD-STDdb and UAVDT video datasets.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"26 1","pages":"1979-1984"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91485051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
GPSRL: Learning Semi-Parametric Bayesian Survival Rule Lists from Heterogeneous Patient Data GPSRL:从异构患者数据中学习半参数贝叶斯生存规则列表
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9413157
Ameer Hamza Shakur, Xiaoning Qian, Zhangyang Wang, B. Mortazavi, Shuai Huang
{"title":"GPSRL: Learning Semi-Parametric Bayesian Survival Rule Lists from Heterogeneous Patient Data","authors":"Ameer Hamza Shakur, Xiaoning Qian, Zhangyang Wang, B. Mortazavi, Shuai Huang","doi":"10.1109/ICPR48806.2021.9413157","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9413157","url":null,"abstract":"Survival data is often collected in medical applications from a heterogeneous population of patients. While in the past, popular survival models focused on modeling the average effect of the covariates on survival outcomes, rapidly advancing sensing and information technologies have provided opportunities to further model the heterogeneity of the population as well as the non-linearity of the survival risk. With this motivation, we propose a new semi-parametric Bayesian Survival Rule List model in this paper. Our model derives a rule-based decision-making approach, while within the regime defined by each rule, survival risk is modelled via a Gaussian process latent variable model. Markov Chain Monte Carlo with a nested Laplace approximation on the Gaussian process posterior is used to search over the posterior of the rule lists efficiently. The use of ordered rule lists enables us to model heterogeneity while keeping the model complexity in check. 
Performance evaluations on a synthetic heterogeneous survival dataset and a real world sepsis survival dataset demonstrate the effectiveness of our model.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"14 1","pages":"10608-10615"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91537118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Compact CNN Structure Learning by Knowledge Distillation 基于知识蒸馏的紧凑CNN结构学习
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9413006
Waqar Ahmed, Andrea Zunino, Pietro Morerio, V. Murino
{"title":"Compact CNN Structure Learning by Knowledge Distillation","authors":"Waqar Ahmed, Andrea Zunino, Pietro Morerio, V. Murino","doi":"10.1109/ICPR48806.2021.9413006","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9413006","url":null,"abstract":"The concept of compressing deep Convolutional Neural Networks (CNNs) is essential to use limited computation, power, and memory resources on embedded devices. However, existing methods achieve this objective at the cost of a drop in inference accuracy in computer vision tasks. To address such a drawback, we propose a framework that leverages knowledge distillation along with customizable block-wise optimization to learn a lightweight CNN structure while preserving better control over the compression-performance tradeoff. Considering specific resource constraints, e.g., floating-point operations per inference (FLOPs) or model-parameters, our method results in a state of the art network compression while being capable of achieving better inference accuracy. In a comprehensive evaluation, we demonstrate that our method is effective, robust, and consistent with results over a variety of network architectures and datasets, at negligible training overhead. 
In particular, for the already compact network MobileNet_v2, our method offers up to 2× and 5.2× better model compression in terms of FLOPs and model-parameters, respectively, while getting 1.05% better model performance than the baseline network.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"42 1","pages":"6554-6561"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90214867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Separation of Aleatoric and Epistemic Uncertainty in Deterministic Deep Neural Networks 确定性深度神经网络中任意不确定性与认知不确定性的分离
2020 25th International Conference on Pattern Recognition (ICPR) Pub Date : 2021-01-10 DOI: 10.1109/ICPR48806.2021.9412616
Denis Huseljic, B. Sick, M. Herde, D. Kottke
{"title":"Separation of Aleatoric and Epistemic Uncertainty in Deterministic Deep Neural Networks","authors":"Denis Huseljic, B. Sick, M. Herde, D. Kottke","doi":"10.1109/ICPR48806.2021.9412616","DOIUrl":"https://doi.org/10.1109/ICPR48806.2021.9412616","url":null,"abstract":"Despite the success of deep neural networks (DNN) in many applications, their ability to model uncertainty is still significantly limited. For example, in safety-critical applications such as autonomous driving, it is crucial to obtain a prediction that reflects different types of uncertainty to address life-threatening situations appropriately. In such cases, it is essential to be aware of the risk (i.e., aleatoric uncertainty) and the reliability (i.e., epistemic uncertainty) that comes with a prediction. We present AE-DNN, a model allowing the separation of aleatoric and epistemic uncertainty while maintaining a proper generalization capability. AE-DNN is based on deterministic DNN, which can determine the respective uncertainty measures in a single forward pass. In analyses with synthetic and image data, we show that our method improves the modeling of epistemic uncertainty while providing an intuitively understandable separation of risk and reliability.","PeriodicalId":6783,"journal":{"name":"2020 25th International Conference on Pattern Recognition (ICPR)","volume":"42 1","pages":"9172-9179"},"PeriodicalIF":0.0,"publicationDate":"2021-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90481058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
0
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
确定
请完成安全验证×
相关产品
×
本文献相关产品
联系我们:info@booksci.cn Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。 Copyright © 2023 布克学术 All rights reserved.
京ICP备2023020795号-1
ghs 京公网安备 11010802042870号
Book学术文献互助
Book学术文献互助群
群 号:481959085
Book学术官方微信