Research on pedestrian target detection based on deep learning

Hansong Wang, Quan Liang
{"title":"Research on pedestrian targe detection based on deep learning","authors":"Hansong Wang, Quan Liang","doi":"10.1117/12.2667364","DOIUrl":null,"url":null,"abstract":"In the process of autonomous driving, there will be missed detections and false detections caused by dense crowds and occlusions during pedestrian target detection. This paper proposes a pedestrian object detection network model that combines Swin Transformer and YOLOv3. First use the lightweight Swin Transformer Tiny to replace the original Darknet53 as the backbone network of YOLOv3. The multi-scale detection is realized through the self-attention hierarchical network, which optimizes the detection effect in the case of dense pedestrians. Secondly, to deal with the occlusion in the crowd, Focal-EIoU Loss is used as a new loss function. I Introduce edge length loss and Focal L1 loss to increase the loss and gradient of IoU, thereby improving the regression accuracy. Finally, experiments are performed on the Caltech dataset. The experimental results show that the precision on the Caltech dataset reaches 95.23% and the recall rate reaches 89.57%. Compared with the original YOLOv3 algorithm, the precision is increased by 3.22%, and the recall rate is increased by 4.35%. The effectiveness of the algorithm is verified, and the performance of pedestrian detection is greatly improved.","PeriodicalId":345723,"journal":{"name":"Fifth International Conference on Computer Information Science and Artificial Intelligence","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fifth International Conference on Computer Information Science and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2667364","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In autonomous driving, pedestrian target detection suffers from missed detections and false detections caused by dense crowds and occlusion. This paper proposes a pedestrian object detection network model that combines Swin Transformer and YOLOv3. First, the lightweight Swin Transformer Tiny replaces the original Darknet53 as the backbone network of YOLOv3; multi-scale detection is realized through its hierarchical self-attention network, which improves detection in dense pedestrian scenes. Second, to handle occlusion within crowds, Focal-EIoU Loss is adopted as the new loss function: an edge-length loss and a Focal L1 loss are introduced to increase the loss and gradient of the IoU term, thereby improving regression accuracy. Finally, experiments are performed on the Caltech dataset. The results show that precision reaches 95.23% and recall reaches 89.57%, an improvement of 3.22% in precision and 4.35% in recall over the original YOLOv3 algorithm. This verifies the effectiveness of the algorithm and demonstrates a substantial improvement in pedestrian detection performance.
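For readers unfamiliar with it, the Focal-EIoU loss referenced above combines the EIoU regression loss, L_EIoU = 1 − IoU + ρ²(b, b_gt)/c² + (w − w_gt)²/c_w² + (h − h_gt)²/c_h², with a focal reweighting factor IoU^γ that emphasizes high-quality (high-IoU) boxes. The sketch below is a minimal PyTorch implementation of this general formulation, not the authors' code; the (x1, y1, x2, y2) box format, the γ = 0.5 default, and the mean reduction are assumptions made for illustration.

```python
import torch

def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Minimal Focal-EIoU sketch for axis-aligned boxes in (x1, y1, x2, y2) format.

    Illustrative only; hyperparameters and box format are assumptions,
    not taken from the paper.
    """
    # Intersection area
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (width, height, squared diagonal)
    cw = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    ch = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Center-distance term
    pcx = (pred[..., 0] + pred[..., 2]) / 2
    pcy = (pred[..., 1] + pred[..., 3]) / 2
    tcx = (target[..., 0] + target[..., 2]) / 2
    tcy = (target[..., 1] + target[..., 3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2

    # Edge-length (width/height) terms
    pw = pred[..., 2] - pred[..., 0]
    ph = pred[..., 3] - pred[..., 1]
    tw = target[..., 2] - target[..., 0]
    th = target[..., 3] - target[..., 1]
    eiou = (1 - iou
            + rho2 / c2
            + (pw - tw) ** 2 / (cw ** 2 + eps)
            + (ph - th) ** 2 / (ch ** 2 + eps))

    # Focal reweighting: IoU^gamma treated as a constant weight so only EIoU is backpropagated
    return (iou.detach() ** gamma * eiou).mean()
```

In a YOLOv3-style detector, a term of this kind would typically replace the original box-regression loss, with the objectness and classification losses left unchanged.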