Pedestrian tracking method based on S-YOFEO framework in complex scene

Wenshun Sheng, Jiahui Shen, Qiming Huang, Zhixuan Liu, Zihao Ding
{"title":"Pedestrian tracking method based on S-YOFEO framework in complex scene","authors":"Wenshun Sheng, Jiahui Shen, Qiming Huang, Zhixuan Liu, Zihao Ding","doi":"10.3233/jifs-237208","DOIUrl":null,"url":null,"abstract":"A real-time stable multi-target tracking method based on the enhanced You Only Look Once-v8 (YOLOv8) and the optimized Simple Online and Realtime Tracking with a Deep association metric (DeepSORT) for multi-target tracking (S-YOFEO) is proposed with the aim of addressing the issue of target ID transformation and loss caused by the increase of practical background complexity. For the purpose of further enhancing the representation of small-scale features, a small target detection head is first introduced to the detection layer of YOLOv8 in this paper with the aim of collecting more detailed information by increasing the detection resolution of YOLOv8. Secondly, the Omni-Scale Network (OSNet) feature extraction network is implemented to enable accurate and efficient fusion of the extracted complex and comparable feature information, taking into account the restricted computational power of DeepSORT’s original feature extraction network. Again, a novel adaptive forgetting Kalman filter algorithm (FSA) is devised to enhance the precision of model prediction and the effectiveness of parameter updates to adjust to the uncertain movement speed and trajectory of pedestrians in real scenarios. Following that, an accurate and stable association matching process is obtained by substituting Efficient-Intersection over Union (EIOU) for Complete-Intersection over Union (CIOU) in DeepSORT to boost the convergence speed and matching effect during association matching. Last but not least, One-Shot Aggregation (OSA) is presented as the trajectory feature extractor to deal with the various noise interferences in the complex scene. 
OSA is highly sensitive to information of different scales, and its one-time aggregation property substantially decreases the computational overhead of the model. According to the trial results, S-YOFEO has made some developments as its precision can reach 78.2% and its speed can reach 56.0 frames per second (FPS).","PeriodicalId":509313,"journal":{"name":"Journal of Intelligent & Fuzzy Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent & Fuzzy Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/jifs-237208","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

A real-time, stable multi-target tracking method, S-YOFEO, is proposed based on an enhanced You Only Look Once-v8 (YOLOv8) detector and an optimized Simple Online and Realtime Tracking with a Deep association metric (DeepSORT) tracker, with the aim of addressing the target ID switches and losses caused by increasing background complexity in practical scenes. First, to strengthen the representation of small-scale features, a small-target detection head is added to the detection layer of YOLOv8, raising its detection resolution so that more fine-grained information is captured. Second, given the limited computational capacity of DeepSORT's original feature extraction network, the Omni-Scale Network (OSNet) is adopted to fuse the extracted complex, multi-scale feature information accurately and efficiently. Third, a novel adaptive forgetting Kalman filter algorithm (FSA) is devised to improve the precision of model prediction and the effectiveness of parameter updates, adapting to the uncertain movement speeds and trajectories of pedestrians in real scenarios. Fourth, Efficient-Intersection over Union (EIOU) replaces Complete-Intersection over Union (CIOU) in DeepSORT to boost convergence speed and matching quality during association matching, yielding a more accurate and stable association process. Finally, One-Shot Aggregation (OSA) is introduced as the trajectory feature extractor to cope with the various noise interferences of complex scenes; OSA is highly sensitive to information at different scales, and its one-shot aggregation property substantially reduces the model's computational overhead. Experimental results show that S-YOFEO achieves a precision of 78.2% at a speed of 56.0 frames per second (FPS).
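The abstract does not spell out the FSA adaptation rule, but the underlying idea is that of a fading-memory (forgetting-factor) Kalman filter: the predicted covariance is inflated by a factor lam >= 1 so that older measurements are progressively down-weighted, which helps when a pedestrian's speed changes abruptly. The following is a minimal sketch under that assumption; the 1-D constant-velocity state model and the fixed lam are illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def fading_memory_kf_step(x, P, z, lam=1.05, q=1e-2, r=1.0):
    """One predict/update step of a 1-D constant-velocity Kalman filter
    with a fading-memory (forgetting) factor lam >= 1.

    x: state estimate [position, velocity]; P: its covariance;
    z: scalar position measurement.
    """
    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance

    # Predict: the forgetting factor inflates the prior covariance,
    # down-weighting the influence of older measurements.
    x = A @ x
    P = lam * (A @ P @ A.T) + Q

    # Update with the standard Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a steadily moving target (measurements z = 1, 2, 3, ...), the filter converges to a velocity estimate near 1 while remaining responsive to later speed changes thanks to the inflated covariance.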
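For reference, EIOU extends CIOU by penalizing the width and height differences of the two boxes directly (each normalized by the smallest enclosing box) instead of CIOU's coupled aspect-ratio term, which is what speeds up convergence during association matching. A self-contained sketch of the published EIOU loss for axis-aligned boxes follows; this illustrates the standard definition, not code taken from the paper.

```python
def eiou_loss(box1, box2, eps=1e-9):
    """EIOU loss between two boxes given as (x1, y1, x2, y2).

    EIOU = 1 - IoU + center-distance term + width term + height term,
    normalized by the enclosing box's squared diagonal, width, and height.
    """
    x1, y1, x2, y2 = box1
    gx1, gy1, gx2, gy2 = box2

    # Intersection over union.
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + eps)

    # Smallest enclosing box and its squared diagonal.
    cw = max(x2, gx2) - min(x1, gx1)
    ch = max(y2, gy2) - min(y1, gy1)
    c2 = cw * cw + ch * ch + eps

    # Squared distance between box centers.
    dx = (x1 + x2) / 2 - (gx1 + gx2) / 2
    dy = (y1 + y2) / 2 - (gy1 + gy2) / 2
    rho2 = dx * dx + dy * dy

    # Direct width/height penalties: EIOU's replacement for CIOU's
    # aspect-ratio term.
    dw2 = ((x2 - x1) - (gx2 - gx1)) ** 2
    dh2 = ((y2 - y1) - (gy2 - gy1)) ** 2

    return 1 - iou + rho2 / c2 + dw2 / (cw * cw + eps) + dh2 / (ch * ch + eps)
```

Identical boxes give a loss of (near) zero, and the loss grows as the boxes' centers, widths, or heights diverge, giving smoother gradients than the arctan-based aspect term in CIOU.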