FYCFNet: Vehicle and Pedestrian Detection Network based on Multi-model Fusion

Pengyu Dai
{"title":"FYCFNet: Vehicle and Pedestrian Detection Network based on Multi-model Fusion","authors":"Pnegyu Dai","doi":"10.1109/cvidliccea56201.2022.9825072","DOIUrl":null,"url":null,"abstract":"Vision-based solutions for target detection in autonomous driving are very much about the accuracy of detection. A correct or incorrect detection may cause or avoid a traffic accident. Therefore, in this paper, to further improve the detection accuracy of vision schemes, we propose a multi-model fusion network: Fusion Network with YoloV5 and CBNEet Faster-RCNN (FYCFNet) that fuses a one-stage target detection model and a two-stage model, which consists of three parts: the first part is a single-stage YOLOV5 [1] detection model, the second part is a Faster-RCNN [2] with CBNet-V2 [3] as the backbone, and the third part is the post-fusion head of weighted boxes fusion. We tested the performance of this network and compared it with other mainstream networks, and verified that the network achieves a very impressive accuracy improvement.","PeriodicalId":23649,"journal":{"name":"Vision","volume":"31 1","pages":"230-236"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/cvidliccea56201.2022.9825072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Vision-based solutions for object detection in autonomous driving depend critically on detection accuracy: a correct or incorrect detection can prevent or cause a traffic accident. To further improve the detection accuracy of vision-based schemes, this paper proposes a multi-model fusion network, the Fusion Network with YOLOv5 and CBNet Faster R-CNN (FYCFNet), which fuses a one-stage detection model with a two-stage model. The network consists of three parts: a single-stage YOLOv5 [1] detection model, a Faster R-CNN [2] with CBNetV2 [3] as its backbone, and a post-fusion head based on weighted boxes fusion. We evaluated the network against other mainstream detectors and verified that it achieves a substantial improvement in accuracy.
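The post-fusion head merges the boxes predicted by the two detectors using weighted boxes fusion (WBF), which, unlike non-maximum suppression, averages the coordinates of overlapping boxes weighted by their scores rather than discarding all but the top-scoring one. The paper does not include code; the sketch below is only an illustration of how such a fusion step could be written with the open-source `ensemble_boxes` implementation of WBF. The detections, per-model weights, and thresholds are assumptions, not values from the paper.

```python
# Minimal sketch of a WBF post-fusion head (illustrative, not the authors' code).
# Boxes are normalized [x1, y1, x2, y2] coordinates in [0, 1] for a single image.
from ensemble_boxes import weighted_boxes_fusion

# Hypothetical outputs of the one-stage model (YOLOv5).
yolo_boxes  = [[0.10, 0.20, 0.45, 0.60], [0.55, 0.30, 0.80, 0.70]]
yolo_scores = [0.92, 0.78]
yolo_labels = [0, 1]   # e.g. 0 = vehicle, 1 = pedestrian

# Hypothetical outputs of the two-stage model (CBNetV2 Faster R-CNN).
rcnn_boxes  = [[0.12, 0.22, 0.44, 0.58], [0.56, 0.31, 0.79, 0.69]]
rcnn_scores = [0.88, 0.83]
rcnn_labels = [0, 1]

# WBF clusters boxes of the same class whose IoU exceeds iou_thr and replaces
# each cluster with a single box whose coordinates are score-weighted averages.
boxes, scores, labels = weighted_boxes_fusion(
    [yolo_boxes, rcnn_boxes],
    [yolo_scores, rcnn_scores],
    [yolo_labels, rcnn_labels],
    weights=[1, 1],       # per-model weights (assumed equal here)
    iou_thr=0.55,         # boxes above this IoU are fused
    skip_box_thr=0.01,    # drop very low-confidence boxes before fusion
)
print(boxes, scores, labels)
```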