Mingfu Xiong , Tanghao Gui , Zhihong Sun , Saeed Anwar , Aziz Alotaibi , Khan Muhammad
{"title":"基于嵌入式编码器和密集卷积的车辆再识别协同学习网络","authors":"Mingfu Xiong , Tanghao Gui , Zhihong Sun , Saeed Anwar , Aziz Alotaibi , Khan Muhammad","doi":"10.1016/j.aej.2025.04.101","DOIUrl":null,"url":null,"abstract":"<div><div>To address the issue of information redundancy (such as color and vehicle model) caused by excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three unique branches: a global embedded multi-head encoder (GEME), local dual-dense atrous convolution (LDAC), and auxiliary attribute embedding (AAM). The GEME branch integrates the global appearance features of the vehicle to enhance consistency in descriptions from different perspectives. To suppress redundant information such as color and vehicle model information, and refine local features, the LDAC branch employs an attention mechanism to capture multiscale features using convolutional kernels with varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as direction and camera perspectives, to enhance feature robustness. Our proposed SEDNet method has been rigorously tested on the mainstream benchmark vehicle re-identification datasets, including VeRi-776, VehicleID, and VeRi-Wild. The results show that our method enhances the mAP by 2.2%, 2.2%, and 0.2%, respectively, when compared to the latest methods, all evaluated on a regular scale. Additional experiments conducted on the Market-1501 and DukeMTMC-reID datasets further verify our method’s generalization capability.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"128 ","pages":"Pages 297-305"},"PeriodicalIF":6.8000,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SEDNet: Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution for Vehicle Re-identification\",\"authors\":\"Mingfu Xiong , Tanghao Gui , Zhihong Sun , Saeed Anwar , Aziz Alotaibi , Khan Muhammad\",\"doi\":\"10.1016/j.aej.2025.04.101\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To address the issue of information redundancy (such as color and vehicle model) caused by excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three unique branches: a global embedded multi-head encoder (GEME), local dual-dense atrous convolution (LDAC), and auxiliary attribute embedding (AAM). The GEME branch integrates the global appearance features of the vehicle to enhance consistency in descriptions from different perspectives. To suppress redundant information such as color and vehicle model information, and refine local features, the LDAC branch employs an attention mechanism to capture multiscale features using convolutional kernels with varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as direction and camera perspectives, to enhance feature robustness. Our proposed SEDNet method has been rigorously tested on the mainstream benchmark vehicle re-identification datasets, including VeRi-776, VehicleID, and VeRi-Wild. 
The results show that our method enhances the mAP by 2.2%, 2.2%, and 0.2%, respectively, when compared to the latest methods, all evaluated on a regular scale. Additional experiments conducted on the Market-1501 and DukeMTMC-reID datasets further verify our method’s generalization capability.</div></div>\",\"PeriodicalId\":7484,\"journal\":{\"name\":\"alexandria engineering journal\",\"volume\":\"128 \",\"pages\":\"Pages 297-305\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2025-05-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"alexandria engineering journal\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1110016825006064\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825006064","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
SEDNet: Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution for Vehicle Re-identification
To address the information redundancy (such as color and vehicle model) caused by an excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three branches: a global embedded multi-head encoder (GEME), local dual-dense atrous convolution (LDAC), and auxiliary attribute embedding (AAM). The GEME branch integrates the vehicle's global appearance features to keep descriptions consistent across viewpoints. To suppress redundant information such as color and vehicle model and to refine local features, the LDAC branch employs an attention mechanism and captures multiscale features with convolutional kernels of varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as direction and camera viewpoint, to enhance feature robustness. SEDNet has been rigorously evaluated on mainstream vehicle re-identification benchmarks, including VeRi-776, VehicleID, and VeRi-Wild, where it improves mAP by 2.2%, 2.2%, and 0.2%, respectively, over the latest methods, with all comparisons made at the regular evaluation scale. Additional experiments on the Market-1501 and DukeMTMC-reID datasets further verify the method's generalization capability.
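The abstract only outlines the three-branch design, so the following is a minimal PyTorch sketch of how such a layout could be wired together: a global branch refined by a multi-head self-attention encoder, a local branch built from parallel atrous convolutions with different dilation rates plus a channel-attention gate, and an auxiliary branch that embeds vehicle metadata. The backbone (ResNet-50), module names, channel sizes, dilation rates, and attribute vocabularies are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a three-branch re-identification head inspired by the
# abstract's description of SEDNet (GEME / LDAC / AAM). All module names,
# channel sizes, dilation rates, and the ResNet-50 backbone are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class GlobalEncoderBranch(nn.Module):
    """Global branch: projects backbone features to tokens and refines them
    with a multi-head self-attention encoder (the 'embedded encoder' idea)."""
    def __init__(self, in_dim=2048, embed_dim=512, num_heads=8):
        super().__init__()
        self.proj = nn.Conv2d(in_dim, embed_dim, kernel_size=1)
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, feat):                       # feat: (B, 2048, H, W)
        x = self.proj(feat)                        # (B, 512, H, W)
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, 512)
        tokens = self.encoder(tokens)
        return tokens.mean(dim=1)                  # (B, 512) global descriptor


class DenseAtrousBranch(nn.Module):
    """Local branch: parallel atrous convolutions with different dilation
    rates capture multiscale local cues; a channel-attention gate down-weights
    redundant channels before pooling."""
    def __init__(self, in_dim=2048, out_dim=512, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_dim, out_dim, 3, padding=d, dilation=d) for d in dilations
        )
        fused = out_dim * len(dilations)
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(fused, fused, 1), nn.Sigmoid())
        self.reduce = nn.Conv2d(fused, out_dim, 1)

    def forward(self, feat):
        x = torch.cat([b(feat) for b in self.branches], dim=1)
        x = x * self.attn(x)                       # suppress redundant channels
        x = self.reduce(x)
        return x.mean(dim=(2, 3))                  # (B, 512) local descriptor


class AttributeBranch(nn.Module):
    """Auxiliary branch: embeds vehicle metadata (e.g. direction and camera
    IDs); vocabulary sizes here are placeholders."""
    def __init__(self, num_directions=8, num_cameras=20, embed_dim=128):
        super().__init__()
        self.direction = nn.Embedding(num_directions, embed_dim)
        self.camera = nn.Embedding(num_cameras, embed_dim)

    def forward(self, dir_id, cam_id):
        return self.direction(dir_id) + self.camera(cam_id)   # (B, 128)


class ThreeBranchReID(nn.Module):
    def __init__(self, num_ids=576):
        super().__init__()
        backbone = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.global_branch = GlobalEncoderBranch()
        self.local_branch = DenseAtrousBranch()
        self.attr_branch = AttributeBranch()
        self.classifier = nn.Linear(512 + 512 + 128, num_ids)

    def forward(self, img, dir_id, cam_id):
        feat = self.backbone(img)
        fused = torch.cat([self.global_branch(feat),
                           self.local_branch(feat),
                           self.attr_branch(dir_id, cam_id)], dim=1)
        return fused, self.classifier(fused)       # embedding + ID logits


if __name__ == "__main__":
    model = ThreeBranchReID()
    img = torch.randn(2, 3, 256, 256)
    emb, logits = model(img, torch.tensor([0, 3]), torch.tensor([1, 5]))
    print(emb.shape, logits.shape)                 # (2, 1152) and (2, 576)
```

In this sketch the three descriptors are simply concatenated before an identity classifier; the paper's actual fusion strategy and training losses are not specified in the abstract and are therefore omitted.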
Journal introduction:
Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is cited in the Engineering Information Services (EIS) and the Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering