A Noise-robust Feature Fusion Model Combining Non-local Attention for Material Recognition

Chuanbo Zhou, Guoan Yang, Zhengzhi Lu, Deyang Liu, Yong Yang
DOI: 10.1145/3512388.3512450
Published in: Proceedings of the 2022 5th International Conference on Image and Graphics Processing (2022-01-07)
Citations: 2

Abstract

Material recognition, an important task in computer vision, is highly challenging due to the large intra-class variance and small inter-class variance of material images. To address this, multi-scale feature-fusion methods based on deep convolutional neural networks have been widely studied in recent years. However, past work has focused heavily on the local features of an image while ignoring non-local features, which are also crucial for fine-grained recognition tasks such as material recognition. In this paper, a Non-local Attentional Feature Fusion Network (NLA-FFNet) is proposed that combines the local and non-local features of images to improve feature representation. First, a pre-trained deep convolutional neural network is used to extract image features. Second, a Multilayer Non-local Attention (MNLA) block is designed to generate a non-local attention map that captures long-range dependencies between features at different positions, yielding stronger noise robustness and a better ability to represent fine features. Finally, combining the MNLA block with bilinear pooling, which has proven effective for feature fusion, we obtain NLA-FFNet, a deep neural network framework with noise-robust multi-layer feature fusion. Experiments show that our model achieves competitive classification accuracy in material image recognition while exhibiting stronger noise robustness.
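The abstract names two generic building blocks: a non-local attention map that relates features at all spatial positions, and bilinear pooling for feature fusion. The paper's exact MNLA architecture is not given here, so the following is a minimal NumPy sketch of the standard forms of these two operations; the function names, weight shapes, and the signed-sqrt normalization step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_attention(x, w_theta, w_phi, w_g):
    """Generic non-local attention over a flattened feature map.

    x: (C, N) features, N = H*W spatial positions.
    w_theta, w_phi, w_g: (C_out, C) projection matrices
    (standing in for 1x1 convolutions).
    """
    theta = w_theta @ x                      # (C_out, N)
    phi = w_phi @ x                          # (C_out, N)
    g = w_g @ x                              # (C_out, N)
    # (N, N) attention map: long-range dependencies between positions.
    attn = softmax(theta.T @ phi, axis=-1)
    # Each output position aggregates information from all positions.
    return g @ attn.T                        # (C_out, N)

def bilinear_pool(f1, f2):
    """Bilinear (outer-product) fusion of two feature vectors,
    with the usual signed square-root and L2 normalization."""
    b = np.outer(f1, f2).ravel()
    b = np.sign(b) * np.sqrt(np.abs(b))
    return b / (np.linalg.norm(b) + 1e-12)
```

Because the attention map is a softmax over all positions, each output location is a weighted average of the whole feature map, which is one intuition for the noise robustness the abstract claims: isolated noisy activations are smoothed by globally consistent context before the fused bilinear descriptor is formed.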