Dual Modal Meta Metric Learning for Attribute-Image Person Re-identification

Rongxian Xu, Fei Shen, Hanxiao Wu, Jianqing Zhu, Huanqiang Zeng
DOI: 10.1109/ICNSC52481.2021.9702261
Published in: 2021 IEEE International Conference on Networking, Sensing and Control (ICNSC), 2021-12-03
Citations: 4

Abstract

Attribute-image person re-identification (AIPR), which aims to retrieve persons from massive images via an attribute query, is a meaningful but challenging cross-modal retrieval task. There is a large modal gap between person images and attributes: in the image modality, one subject usually contains varying instances, whereas in the attribute modality, one subject contains only a single explicit instance. Unlike most existing AIPR methods, which focus on shrinking feature differences across modalities, we propose a dual modal meta metric learning (DM3L) method for AIPR in this paper. Specifically, in each episode, we sample a subset as a new task and split the training data into a single-modal support set of person images and a dual-modal query set consisting of both person images and attributes. Based on the single-modal support set and the dual-modal query set, our DM3L learns not only attribute-image cross-modal metrics but also image-image intra-modal metrics. Therefore, our DM3L method encourages data in both the attribute and image modalities to be discriminative, improving AIPR. Experiments show that our DM3L outperforms state-of-the-art approaches on the Market-1501 Attribute and PETA datasets.
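The episodic setup described in the abstract can be sketched as follows. This is an illustrative reconstruction only, not the paper's implementation: the feature extractors, loss functions, and exact sampling scheme are assumptions, and the sketch scores queries of both modalities against image prototypes to mimic learning cross-modal (attribute-image) and intra-modal (image-image) metrics jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(img_feats, img_ids, attr_feats, attr_ids,
                   n_way=4, k_shot=2, q_img=2):
    """Sample one DM3L-style episode: a single-modal support set of person
    images and a dual-modal query set of person images plus attributes.
    (Hypothetical sampling; the paper's exact scheme may differ.)"""
    classes = rng.choice(np.unique(img_ids), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(img_ids == c)[0])
        # Support set: images only (single-modal).
        support += [(img_feats[i], c) for i in idx[:k_shot]]
        # Query set: held-out images plus the identity's attribute vector.
        query += [(img_feats[i], c) for i in idx[k_shot:k_shot + q_img]]
        a = np.where(attr_ids == c)[0][0]
        query.append((attr_feats[a], c))
    return support, query, classes

def nearest_prototype_accuracy(support, query, classes):
    """Score queries of both modalities against image prototypes, exercising
    both image-image (intra-modal) and attribute-image (cross-modal) metrics."""
    protos = {c: np.mean([f for f, y in support if y == c], axis=0)
              for c in classes}
    correct = sum(
        min(protos, key=lambda c: np.linalg.norm(f - protos[c])) == y
        for f, y in query
    )
    return correct / len(query)

# Toy data: each identity has a center shared by both modalities, standing in
# for a learned joint embedding space (an assumption for this demo).
d, n_id, per_id = 8, 6, 6
centers = rng.normal(size=(n_id, d)) * 5
img_ids = np.repeat(np.arange(n_id), per_id)
img_feats = centers[img_ids] + rng.normal(scale=0.1, size=(n_id * per_id, d))
attr_ids = np.arange(n_id)
attr_feats = centers + rng.normal(scale=0.1, size=(n_id, d))

support, query, classes = sample_episode(img_feats, img_ids, attr_feats, attr_ids)
acc = nearest_prototype_accuracy(support, query, classes)
```

In a real system the nearest-prototype scoring would be replaced by learned metric losses backpropagated through the feature extractors; the sketch only shows how the single-modal support set and dual-modal query set interact in one episode.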