Oriented Splits Network to Distill Background for Vehicle Re-Identification

A. Munir, N. Martinel, C. Micheloni
{"title":"面向分割网络提取车辆再识别背景","authors":"A. Munir, N. Martinel, C. Micheloni","doi":"10.1109/AVSS52988.2021.9663832","DOIUrl":null,"url":null,"abstract":"Vehicle re-identification (re-id) is a challenging task due to the presence of high intra-class and low inter-class variations in the visual data acquired from monitoring camera networks. Unique and discriminative feature representations are needed to overcome the existence of several variations including color, illumination, orientation, background and occlusion. The orientations of the vehicles in the images make the learned models unable to learn multiple parts of the vehicle and relationship between them. The combination of global and partial features is one of the solutions to improve the discriminative learning of deep learning models. Leveraging on such solutions, we propose an Oriented Splits Network (OSN) for an end to end learning of multiple features along with global features to form a strong descriptor for vehicle re-identification. To capture the orientation variability of the vehicles, the proposed network introduces a partition of the images into several oriented stripes to obtain local descriptors for each part/region. Such a scheme is therefore exploited by a camera based feature distillation (CBD) training strategy to remove the background features. These are filtered out from oriented vehicles representations which yield to a much stronger unique representation of the vehicles. We perform experiments on two benchmark vehicle re-id datasets to verify the performance of the proposed approach which show that the proposed solution achieves better result with respect to the state of the art with margin.","PeriodicalId":246327,"journal":{"name":"2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Oriented Splits Network to Distill Background for Vehicle Re-Identification\",\"authors\":\"A. Munir, N. Martinel, C. Micheloni\",\"doi\":\"10.1109/AVSS52988.2021.9663832\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Vehicle re-identification (re-id) is a challenging task due to the presence of high intra-class and low inter-class variations in the visual data acquired from monitoring camera networks. Unique and discriminative feature representations are needed to overcome the existence of several variations including color, illumination, orientation, background and occlusion. The orientations of the vehicles in the images make the learned models unable to learn multiple parts of the vehicle and relationship between them. The combination of global and partial features is one of the solutions to improve the discriminative learning of deep learning models. Leveraging on such solutions, we propose an Oriented Splits Network (OSN) for an end to end learning of multiple features along with global features to form a strong descriptor for vehicle re-identification. To capture the orientation variability of the vehicles, the proposed network introduces a partition of the images into several oriented stripes to obtain local descriptors for each part/region. Such a scheme is therefore exploited by a camera based feature distillation (CBD) training strategy to remove the background features. 
These are filtered out from oriented vehicles representations which yield to a much stronger unique representation of the vehicles. We perform experiments on two benchmark vehicle re-id datasets to verify the performance of the proposed approach which show that the proposed solution achieves better result with respect to the state of the art with margin.\",\"PeriodicalId\":246327,\"journal\":{\"name\":\"2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AVSS52988.2021.9663832\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 17th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS52988.2021.9663832","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Vehicle re-identification (re-id) is a challenging task due to the high intra-class and low inter-class variations in the visual data acquired from monitoring camera networks. Unique and discriminative feature representations are needed to overcome several sources of variation, including color, illumination, orientation, background and occlusion. The orientations of the vehicles in the images prevent the learned models from capturing the multiple parts of a vehicle and the relationships between them. Combining global and partial features is one way to improve the discriminative learning of deep models. Building on such solutions, we propose an Oriented Splits Network (OSN) for end-to-end learning of multiple local features alongside global features, which together form a strong descriptor for vehicle re-identification. To capture the orientation variability of the vehicles, the proposed network partitions the images into several oriented stripes to obtain a local descriptor for each part/region. This scheme is then exploited by a camera-based feature distillation (CBD) training strategy to remove background features: these are filtered out from the oriented vehicle representations, yielding a much stronger, unique representation of the vehicles. We perform experiments on two benchmark vehicle re-id datasets to verify the performance of the proposed approach; the results show that the proposed solution outperforms the state of the art by a margin.
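The abstract describes combining one global descriptor with several stripe-wise local descriptors pooled from a shared feature map. The sketch below illustrates that general idea in PyTorch; it is not the authors' code. The ResNet-50 backbone, the number of stripes, the embedding size, and the plain horizontal chunking (used here as a stand-in for the paper's orientation-aware partition) are all assumptions, and the camera-based feature distillation (CBD) training strategy is not modeled.

```python
# Minimal sketch of an oriented-splits style descriptor: a shared CNN backbone
# produces a feature map, from which one global descriptor and several
# stripe-wise local descriptors are pooled and concatenated.
import torch
import torch.nn as nn
import torchvision.models as models


class OrientedSplitsSketch(nn.Module):
    def __init__(self, num_stripes: int = 4, embed_dim: int = 256):
        super().__init__()
        # ResNet-50 backbone without its pooling/classification head (assumed choice).
        backbone = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        feat_channels = 2048

        # One embedding head for the global descriptor ...
        self.global_head = nn.Linear(feat_channels, embed_dim)
        # ... and one per stripe for the local descriptors.
        self.local_heads = nn.ModuleList(
            [nn.Linear(feat_channels, embed_dim) for _ in range(num_stripes)]
        )
        self.num_stripes = num_stripes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Feature map of shape (B, C, H, W).
        fmap = self.backbone(x)

        # Global descriptor: average over the whole spatial extent.
        g = fmap.mean(dim=(2, 3))
        feats = [self.global_head(g)]

        # Local descriptors: split the feature map into horizontal stripes
        # and average-pool each stripe separately.
        stripes = torch.chunk(fmap, self.num_stripes, dim=2)
        for stripe, head in zip(stripes, self.local_heads):
            feats.append(head(stripe.mean(dim=(2, 3))))

        # Concatenate global and local parts into one re-id descriptor.
        return torch.cat(feats, dim=1)


if __name__ == "__main__":
    model = OrientedSplitsSketch()
    images = torch.randn(2, 3, 256, 256)  # dummy batch
    descriptor = model(images)
    print(descriptor.shape)  # (2, embed_dim * (1 + num_stripes)) = (2, 1280)
```

In a re-id setting, such a concatenated descriptor would typically be trained with identity-classification and/or metric-learning losses and compared across cameras by cosine or Euclidean distance; those training details are outside this sketch.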