Research on Pedestrian Detection Based on Multimodal Information Fusion

IF 2.0 | CAS Tier 4 (Computer Science) | JCR Q3 (Automation & Control Systems)
Xiaoping Yang, Zhehong Li, Yuan Liu, Ran Huang, Kai Tan, Lin Huang
{"title":"Research on Pedestrian Detection Based on Multimodal Infor-mation Fusion","authors":"Xiaoping Yang, Zhehong Li, Yuan Liu, Ran Huang, Kai Tan, Lin Huang","doi":"10.5755/j01.itc.52.4.33766","DOIUrl":null,"url":null,"abstract":"Aiming at the matter that pedestrian detection in the autonomous driving system is vulnerable to the influence of the external environment and the detector supported single sensor modal detector has poor performance beneath the condition of enormous amendment of unrestricted light-weight, this paper proposes a fusion of light and thermal infrared dual mode pedestrian detection methodology. Firstly, 1 × 1 convolution and expanded convolution square measure are introduced within the residual network, and also the ROI Align methodology is employed to exchange the ROI Pooling method-ology to map the candidate box to the feature layer to optimize the Faster R-CNN. Secondly, the loss performance of the generalized intersection over union (GIoU) is employed because of the loss performance of the prediction box positioning regression; finally, supported by the improved Faster R-CNN, four forms of multimodal neural network structures square measure designed to fuse visible and thermal infrared pictures. According to experimental findings, the proposed technique outperforms current mainstream detection algorithms on the KAIST dataset. As compared to the conventional ACF + T + THOG pedestrian detector, the AP is 8.38 percentage points greater. Compared to the visible light pedestrian detector, the miss rate is 5.34 percentage points lower.","PeriodicalId":54982,"journal":{"name":"Information Technology and Control","volume":"17 3","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Technology and Control","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.5755/j01.itc.52.4.33766","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Pedestrian detection in autonomous driving systems is vulnerable to the external environment, and detectors based on a single sensor modality perform poorly under large changes in visible-light conditions. To address this, this paper proposes a pedestrian detection method that fuses visible-light and thermal infrared modalities. First, 1 × 1 convolution and dilated convolution are introduced into the residual network, and ROI Align replaces ROI Pooling when mapping candidate boxes onto the feature layer, thereby optimizing Faster R-CNN. Second, the generalized intersection over union (GIoU) loss is adopted as the loss function for prediction-box regression. Finally, based on the improved Faster R-CNN, four multimodal neural network structures are designed to fuse visible and thermal infrared images. Experimental results show that the proposed method outperforms current mainstream detection algorithms on the KAIST dataset: compared with the conventional ACF + T + THOG pedestrian detector, the AP is 8.38 percentage points higher, and compared with the visible-light-only pedestrian detector, the miss rate is 5.34 percentage points lower.
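The regression loss named in the abstract is the generalized intersection over union (GIoU). As a rough illustration only, and not the authors' implementation, below is a minimal PyTorch sketch of a GIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format; the function name `giou_loss`, the batched tensor layout, and the `eps` stabiliser are assumptions made for this example.

```python
import torch

def giou_loss(pred, target, eps=1e-7):
    """GIoU loss for boxes of shape (N, 4) in (x1, y1, x2, y2) format.

    GIoU = IoU - |C \\ (A ∪ B)| / |C|, where C is the smallest box
    enclosing both A and B; the loss is 1 - GIoU.
    """
    # Intersection of predicted and ground-truth boxes
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union of the two boxes
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest enclosing box C
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (area_c - union) / (area_c + eps)
    return (1.0 - giou).mean()
```

Unlike plain IoU, the enclosing-box penalty keeps the loss informative (and greater than 1) even when the predicted and ground-truth boxes do not overlap, which is the usual motivation for using GIoU in box regression.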
Source journal
Information Technology and Control (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 2.70
Self-citation rate: 9.10%
Articles published: 36
Review time: 12 months
Journal description: The journal covers a wide field of computer science and control systems related problems, including:
- Software and hardware engineering;
- Management systems engineering;
- Information systems and databases;
- Embedded systems;
- Physical systems modelling and application;
- Computer networks and cloud computing;
- Data visualization;
- Human-computer interface;
- Computer graphics, visual analytics, and multimedia systems.