SEDNet: Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution for Vehicle Re-identification

IF 6.8 | CAS Tier 2 (Engineering) | JCR Q1, ENGINEERING, MULTIDISCIPLINARY
Mingfu Xiong , Tanghao Gui , Zhihong Sun , Saeed Anwar , Aziz Alotaibi , Khan Muhammad
DOI: 10.1016/j.aej.2025.04.101
Journal: Alexandria Engineering Journal, Volume 128, Pages 297–305
Published: 2025-05-30 (Journal Article)
Citations: 0

Abstract

To address the issue of information redundancy (such as color and vehicle model) caused by excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three unique branches: a global embedded multi-head encoder (GEME), local dual-dense atrous convolution (LDAC), and auxiliary attribute embedding (AAM). The GEME branch integrates the global appearance features of the vehicle to enhance consistency in descriptions from different perspectives. To suppress redundant information such as color and vehicle model information, and refine local features, the LDAC branch employs an attention mechanism to capture multiscale features using convolutional kernels with varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as direction and camera perspectives, to enhance feature robustness. Our proposed SEDNet method has been rigorously tested on the mainstream benchmark vehicle re-identification datasets, including VeRi-776, VehicleID, and VeRi-Wild. The results show that our method enhances the mAP by 2.2%, 2.2%, and 0.2%, respectively, when compared to the latest methods, all evaluated on a regular scale. Additional experiments conducted on the Market-1501 and DukeMTMC-reID datasets further verify our method’s generalization capability.
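The paper provides no reference implementation, so the function below is only an illustrative sketch of the atrous (dilated) convolution that the LDAC branch builds on: a k×k kernel applied with dilation rate r samples the input on an r-spaced grid, enlarging the effective receptive field to k + (k−1)(r−1) without adding parameters. Stacking such kernels with varying dilation rates, as the abstract describes, captures multi-scale context. The function name and the single-channel, "valid"-padding setting are simplifications for illustration, not the authors' code.

```python
def dilated_conv2d(image, kernel, dilation=1):
    """Single-channel 2D atrous (dilated) convolution with 'valid' padding.

    A k x k kernel with dilation r reads input pixels spaced r apart,
    so it covers an effective extent of k + (k - 1) * (r - 1).
    """
    k = len(kernel)
    eff = k + (k - 1) * (dilation - 1)  # effective kernel extent
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - eff + 1):
        row = []
        for j in range(w - eff + 1):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    # sample the input on a dilation-spaced grid
                    acc += kernel[a][b] * image[i + a * dilation][j + b * dilation]
            row.append(acc)
        out.append(row)
    return out


# On a 7x7 input, a 3x3 averaging kernel yields a 5x5 map at dilation 1
# but only a 3x3 map at dilation 2: the same 9 weights span a 5x5 region.
img = [[1.0] * 7 for _ in range(7)]
ker = [[1.0 / 9] * 3 for _ in range(3)]
print(len(dilated_conv2d(img, ker, dilation=1)))  # 5
print(len(dilated_conv2d(img, ker, dilation=2)))  # 3
```

Running several such kernels with different dilation rates in parallel and fusing their outputs is the usual way to obtain multi-scale features at a fixed parameter budget, which matches the multiscale behaviour the abstract attributes to LDAC.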
Source journal: Alexandria Engineering Journal (Engineering — General Engineering)
CiteScore: 11.20
Self-citation rate: 4.40%
Articles per year: 1015
Review time: 43 days
Journal description: Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. It is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). Published papers are grouped into five sections:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering