The application of target tracking algorithm in intelligent video system to flight support

Jianjun Peng, Jialei Zhai, Xiang Jin, Chengshuang Hu, Zaigang Li
{"title":"The application of target tracking algorithm in intelligent video system to flight support","authors":"Jianjun Peng, Jialei Zhai, Xiang Jin, Chengshuang Hu, Zaigang Li","doi":"10.1117/12.3014375","DOIUrl":null,"url":null,"abstract":"As the global pandemic gradually eases and the aviation transport industry continues to experience steady growth, highdensity flight operations are becoming the new normal. The intelligentization of flight support processes is a crucial avenue for enhancing both the safety and efficiency of flight operations. With the advancement of computer vision technology, video-based object tracking has shown significant potential in the context of flight support processes. However, in real airport environments, object tracking often encounters challenges such as occlusion, scale variations, rotation, and changes in lighting conditions, leading to a decrease in tracking accuracy and even target loss. In this paper, our focus is on overcoming tracking failures caused by occlusion, deformation, and lighting variations. We have conducted the following work, taking into consideration the unique characteristics of airport environments and the specific requirements of flight support processes: (i) We utilized features at three levels, namely, Histogram of Oriented Gradient (HOG), Color Names, and Convolutional Neural Networks (CNN), to describe the texture, color, and high-level semantics of video images, respectively. (ii) We employed a multi-feature fusion approach using a trilinear interpolation function to integrate information from various sources. (iii) We implemented improved ECO algorithms for the tracking of moving objects in the airport environment. Finally, we validated this object tracking system using real surveillance videos from the airport. Experimental results have demonstrated the effectiveness and practicality of the method under challenging conditions.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3014375","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As the global pandemic gradually eases and the air transport industry returns to steady growth, high-density flight operations are becoming the new normal. Intelligentizing flight support processes is a crucial avenue for enhancing both the safety and the efficiency of flight operations. With advances in computer vision, video-based object tracking has shown significant potential in the context of flight support. However, in real airport environments, object tracking often encounters occlusion, scale variation, rotation, and changes in lighting, which reduce tracking accuracy and can even cause target loss. In this paper, we focus on overcoming tracking failures caused by occlusion, deformation, and lighting variation. Taking into account the particular characteristics of airport environments and the specific requirements of flight support processes, we carried out the following work: (i) we used features at three levels, namely Histogram of Oriented Gradients (HOG), Color Names, and Convolutional Neural Network (CNN) features, to describe the texture, color, and high-level semantics of video frames, respectively; (ii) we fused these features with a trilinear interpolation function to integrate information from the different sources; and (iii) we applied an improved ECO (Efficient Convolution Operators) tracker to moving objects in the airport environment. Finally, we validated the resulting tracking system on real surveillance videos from the airport. Experimental results demonstrate the effectiveness and practicality of the method under challenging conditions.
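To make the feature pipeline concrete, the following is a minimal sketch of the three-level feature extraction and common-grid fusion described above. It assumes scikit-image for HOG, uses a random lookup table as a placeholder for the published RGB-to-Color-Names mapping, stubs the CNN activations, and approximates the continuous-domain (trilinear-interpolation) fusion with plain bilinear resizing; all parameter values are illustrative assumptions, and this is not the authors' implementation or the full ECO tracker.

```python
# Sketch of three-level feature extraction (HOG, Color Names, CNN) and
# common-grid fusion. Library choices and parameters are illustrative.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_features(patch_gray, cell=4):
    """Low-level texture: HOG over the search patch (H, W) -> (h, w, 9)."""
    f = hog(patch_gray, orientations=9,
            pixels_per_cell=(cell, cell), cells_per_block=(1, 1),
            feature_vector=False)
    # hog() returns (cells_y, cells_x, 1, 1, 9); squeeze the block dims.
    return f.reshape(f.shape[0], f.shape[1], -1)

def color_name_features(patch_rgb):
    """Mid-level color: map each pixel to an 11-D Color Names probability
    vector. A real tracker would use the published RGB->CN lookup table;
    a random table stands in here as a placeholder."""
    lut = np.random.default_rng(0).random((32, 32, 32, 11))  # placeholder
    idx = (patch_rgb // 8).astype(int)                        # quantize to 32 bins
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def cnn_features(patch_rgb):
    """High-level semantics: activations of an early conv layer of a
    pretrained CNN (e.g., VGG); stubbed to keep the sketch dependency-free."""
    h, w = patch_rgb.shape[0] // 2, patch_rgb.shape[1] // 2
    return np.zeros((h, w, 64))  # stand-in for real conv activations

def fuse(feature_maps, out_hw, weights):
    """Interpolate each feature map onto a common spatial grid and
    concatenate, weighting each source; a simple stand-in for the
    interpolation-based fusion used by ECO-style trackers."""
    resized = [w * resize(f, out_hw + (f.shape[-1],), order=1, anti_aliasing=True)
               for f, w in zip(feature_maps, weights)]
    return np.concatenate(resized, axis=-1)

# Example on a random 128x128 RGB search patch.
patch = (np.random.default_rng(1).random((128, 128, 3)) * 255).astype(np.uint8)
gray = patch.mean(axis=-1)
fused = fuse([hog_features(gray), color_name_features(patch), cnn_features(patch)],
             out_hw=(64, 64), weights=(1.0, 0.5, 1.0))
print(fused.shape)  # (64, 64, 9 + 11 + 64) = (64, 64, 84)
```

The fused map would then be fed to the correlation-filter learning and localization stages of the tracker; the per-source weights above merely illustrate that the three feature levels can be balanced before fusion.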