Deep Residual Network with Self Attention Improves Person Re-Identification Accuracy

Jean-Paul Ainam, Ke Qin, Guisong Liu, Guangchun Luo
DOI: 10.1145/3318299.3318324
Published: 2019-02-22, International Conference on Machine Learning and Computing
Citations: 4

Abstract

In this paper, we present an attention mechanism scheme to improve the person re-identification task. Inspired by biology, we propose the Self Attention Grid (SAG) to discover the most informative parts of a high-resolution image using its internal representation. In particular, the proposed model is fed with two copies of the same input image and consists of two branches. The upper branch processes the high-resolution image and learns a high-dimensional feature representation, while the lower branch processes the low-resolution image and learns a filtering attention grid. We apply a max filter operation to non-overlapping sub-regions of the high-resolution feature representation before element-wise multiplication with the output of the second branch. The feature maps of the second branch are weighted using a softmax operation to reflect the importance of each patch of the grid. Our attention module helps the network learn the most discriminative visual features across multiple image regions and is specifically optimized to attend to feature representations at different levels. Extensive experiments on three large-scale datasets show that our self-attention mechanism significantly improves the baseline model and outperforms various state-of-the-art models by a large margin.
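The core operation the abstract describes — max filtering over non-overlapping sub-regions of the high-resolution features, softmax weighting of the low-resolution attention grid, then element-wise multiplication — can be sketched as follows. This is a minimal NumPy illustration of that combination, not the authors' implementation; the function names, the pooling window size `k`, and the choice of applying the softmax over spatial locations per channel are assumptions for the sake of the example.

```python
import numpy as np

def max_filter(feat, k):
    # Max filter over non-overlapping k x k sub-regions of a (C, H, W) map.
    c, h, w = feat.shape
    return feat.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

def spatial_softmax(grid):
    # Softmax over the spatial locations of each channel, so the weights
    # of every (H, W) attention map sum to 1 (an assumed axis choice).
    c, h, w = grid.shape
    flat = grid.reshape(c, -1)
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    return (e / e.sum(axis=1, keepdims=True)).reshape(c, h, w)

def self_attention_grid(high_feat, attn_grid, k=2):
    # high_feat: (C, H, W) features from the high-resolution branch.
    # attn_grid: (C, H//k, W//k) attention grid from the low-resolution branch.
    pooled = max_filter(high_feat, k)        # reduce to the grid's resolution
    weights = spatial_softmax(attn_grid)     # patch-importance weighting
    return pooled * weights                  # element-wise multiplication
```

With a uniform (all-zero) attention grid, the softmax assigns equal weight to every patch, so the output is simply the pooled features scaled by 1 / (number of patches) — a quick way to sanity-check the weighting.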