VERI-D: A new dataset and method for multi-camera vehicle re-identification of damaged cars under varying lighting conditions

Shao Liu, S. Agaian
{"title":"VERI-D: A new dataset and method for multi-camera vehicle re-identification of damaged cars under varying lighting conditions","authors":"Shao Liu, S. Agaian","doi":"10.1063/5.0183408","DOIUrl":null,"url":null,"abstract":"Vehicle re-identification (V-ReID) is a critical task that aims to match the same vehicle across images from different camera viewpoints. The previous studies have leveraged attribute clues, such as color, model, and license plate, to enhance the V-ReID performance. However, these methods often lack effective interaction between the global–local features and the final V-ReID objective. Moreover, they do not address the challenging issues in real-world scenarios, such as high viewpoint variations, extreme illumination conditions, and car appearance changes (e.g., due to damage or wrong driving). We propose a novel framework to tackle these problems and advance the research in V-ReID, which can handle various types of car appearance changes and achieve robust V-ReID under varying lighting conditions. Our main contributions are as follows: (i) we propose a new Re-ID architecture named global–local self-attention network, which integrates local information into the feature learning process and enhances the feature representation for V-ReID and (ii) we introduce a novel damaged vehicle Re-ID dataset called VERI-D, which is the first publicly available dataset that focuses on this challenging yet practical scenario. The dataset contains both natural and synthetic images of damaged vehicles captured from multiple camera viewpoints and under different lighting conditions. (iii) We conduct extensive experiments on the VERI-D dataset and demonstrate the effectiveness of our approach in addressing the challenges associated with damaged vehicle re-identification. We also compare our method to several state-of-the-art V-ReID methods and show its superiority.","PeriodicalId":502250,"journal":{"name":"APL Machine Learning","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"APL Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1063/5.0183408","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Vehicle re-identification (V-ReID) is a critical task that aims to match the same vehicle across images from different camera viewpoints. Previous studies have leveraged attribute clues, such as color, model, and license plate, to enhance V-ReID performance. However, these methods often lack effective interaction between the global–local features and the final V-ReID objective. Moreover, they do not address the challenging issues that arise in real-world scenarios, such as high viewpoint variations, extreme illumination conditions, and car appearance changes (e.g., due to damage or improper driving). We propose a novel framework that tackles these problems and advances V-ReID research: it handles various types of car appearance changes and achieves robust V-ReID under varying lighting conditions. Our main contributions are as follows: (i) we propose a new Re-ID architecture, the global–local self-attention network, which integrates local information into the feature learning process and enhances the feature representation for V-ReID; (ii) we introduce a novel damaged-vehicle Re-ID dataset, VERI-D, the first publicly available dataset focused on this challenging yet practical scenario, containing both natural and synthetic images of damaged vehicles captured from multiple camera viewpoints and under different lighting conditions; and (iii) we conduct extensive experiments on the VERI-D dataset that demonstrate the effectiveness of our approach in addressing the challenges associated with damaged-vehicle re-identification. We also compare our method to several state-of-the-art V-ReID methods and show its superiority.
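The abstract describes the global–local self-attention network only at a high level. Below is a minimal sketch, assuming a standard PyTorch setup, of how part-level (local) features might interact with a global descriptor through self-attention; the class name GlobalLocalSelfAttention, the horizontal-stripe partitioning, and all dimensions are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a global-local self-attention block for Re-ID.
# The exact architecture is not specified in the abstract; this only
# illustrates the general idea of letting local part features and a
# global feature interact via multi-head self-attention.
import torch
import torch.nn as nn


class GlobalLocalSelfAttention(nn.Module):
    """Fuses a global descriptor with part-level descriptors via
    multi-head self-attention, so local cues participate in learning
    the final Re-ID representation."""

    def __init__(self, dim: int = 512, num_parts: int = 4, num_heads: int = 8):
        super().__init__()
        self.num_parts = num_parts
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) feature map from a CNN backbone.
        b, c, h, w = feat_map.shape
        # Global token: average-pool the whole map.
        global_tok = feat_map.mean(dim=(2, 3)).unsqueeze(1)            # (B, 1, C)
        # Local tokens: pool horizontal stripes (one common part scheme).
        stripes = feat_map.reshape(b, c, self.num_parts, h // self.num_parts, w)
        local_toks = stripes.mean(dim=(3, 4)).transpose(1, 2)          # (B, P, C)
        # Self-attention over [global; local] tokens lets the parts
        # refine the global representation (and vice versa).
        tokens = torch.cat([global_tok, local_toks], dim=1)            # (B, 1+P, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)                          # residual
        # Use the refined global token as the Re-ID embedding.
        return tokens[:, 0]


if __name__ == "__main__":
    block = GlobalLocalSelfAttention(dim=512, num_parts=4)
    dummy = torch.randn(2, 512, 16, 16)  # e.g., stage-4 ResNet features
    print(block(dummy).shape)            # torch.Size([2, 512])
```

In this sketch the attention step is what couples local and global information during feature learning; a real implementation would add the Re-ID losses (e.g., identity classification and metric losses) on top of the returned embedding.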