View adaptive multi-object tracking method based on depth relationship cues

IF 5.0 · Region 2 (Computer Science) · JCR Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Haoran Sun, Yang Li, Guanci Yang, Zhidong Su, Kexin Luo
{"title":"基于深度关系线索的视图自适应多目标跟踪方法","authors":"Haoran Sun, Yang Li, Guanci Yang, Zhidong Su, Kexin Luo","doi":"10.1007/s40747-024-01776-7","DOIUrl":null,"url":null,"abstract":"<p>Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data distribution characteristics, but the current MOT methods do not consider these differences and only adopt a unified association strategy to deal with various occlusion situations. This paper proposed View Adaptive Multi-Object Tracking Method Based on Depth Relationship Cues (ViewTrack) to enable MOT to adapt to the scene's dynamic changes. Firstly, based on exploiting the depth relationships between objects by using the position information of the bounding box, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive the changes of depth and view within the dynamic scene. Secondly, by adjusting the interval partitioning strategy to adapt to the changes in view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. Finally, the comparison results with 12 representative methods demonstrate that ViewTrack outperforms multiple metrics on the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"7 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"View adaptive multi-object tracking method based on depth relationship cues\",\"authors\":\"Haoran Sun, Yang Li, Guanci Yang, Zhidong Su, Kexin Luo\",\"doi\":\"10.1007/s40747-024-01776-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data distribution characteristics, but the current MOT methods do not consider these differences and only adopt a unified association strategy to deal with various occlusion situations. This paper proposed View Adaptive Multi-Object Tracking Method Based on Depth Relationship Cues (ViewTrack) to enable MOT to adapt to the scene's dynamic changes. Firstly, based on exploiting the depth relationships between objects by using the position information of the bounding box, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive the changes of depth and view within the dynamic scene. Secondly, by adjusting the interval partitioning strategy to adapt to the changes in view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. 
Finally, the comparison results with 12 representative methods demonstrate that ViewTrack outperforms multiple metrics on the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01776-7\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01776-7","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data-distribution characteristics, but current MOT methods do not account for these differences and apply a single, unified association strategy to all occlusion situations. This paper proposes a view adaptive multi-object tracking method based on depth relationship cues (ViewTrack) that enables MOT to adapt to a scene's dynamic changes. First, a view-type recognition method based on depth relationship cues (VTRM) is proposed: it exploits bounding-box position information to infer the depth relationships between objects and thereby perceives changes of depth and view within the dynamic scene. Second, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed; by adjusting the interval partitioning strategy to the current view characteristics, it achieves sparse decomposition in occluded scenes. Then, a displacement-modulated Intersection over Union method (DMIoU) is proposed, which combines pedestrian displacement with Intersection over Union (IoU) to improve the association accuracy between detection boxes and tracklet boxes. Finally, comparisons with 12 representative methods demonstrate that ViewTrack outperforms them on multiple metrics across the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.
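The abstract names two concrete geometric cues: depth relationships recovered from bounding-box positions (as in VTRM) and an IoU score modulated by pedestrian displacement (as in DMIoU). The sketch below illustrates both ideas in a minimal form; the depth proxy (ordering boxes by their bottom edge in a front view), the modulation formula, and all function names are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of two cues described in the abstract (illustrative only):
# (1) a depth-ordering cue read off bounding-box geometry, and
# (2) a DMIoU-style association score that discounts IoU by displacement.
import numpy as np


def box_iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def depth_rank_from_boxes(boxes: np.ndarray) -> np.ndarray:
    """Assumed front-view depth proxy: a box whose bottom edge (y2) sits lower
    in the image is treated as closer to the camera. Returns indices sorted
    from nearest to farthest; in a top view this ordering carries little
    depth information, which is the kind of difference VTRM is said to detect."""
    return np.argsort(-boxes[:, 3])


def displacement_modulated_iou(track_box, det_box, displacement, alpha=0.5):
    """Hypothetical DMIoU-style score: plain IoU discounted by the displacement
    between the predicted track position and the detection. `displacement` is
    the center shift in pixels, normalized by the track-box diagonal so the
    penalty is scale-invariant; `alpha` weights the penalty."""
    iou = box_iou(track_box, det_box)
    diag = np.hypot(track_box[2] - track_box[0], track_box[3] - track_box[1])
    penalty = alpha * min(displacement / max(diag, 1e-6), 1.0)
    return iou * (1.0 - penalty)


if __name__ == "__main__":
    tracks = np.array([[100, 100, 150, 220], [300, 120, 340, 200]], dtype=float)
    dets = np.array([[104, 102, 153, 223], [298, 118, 339, 202]], dtype=float)
    print("near-to-far order:", depth_rank_from_boxes(dets))
    for t, d in zip(tracks, dets):
        shift = np.hypot(*((t[:2] + t[2:]) / 2 - (d[:2] + d[2:]) / 2))
        print("DMIoU-style score:", round(displacement_modulated_iou(t, d, shift), 3))
```

Scores of this form can be dropped into a standard assignment step (e.g. Hungarian matching on the negated score matrix) in place of plain IoU, which is how a displacement-aware cost would typically be used in an association stage.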

Source journal
Complex & Intelligent Systems
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.