F3N: Full Feature Fusion Network for Object Detection

Gang Wang, Tang Kai, Kazushige Ouchi
DOI: 10.1145/3446132.3446152
Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence
Published: 2020-12-24
Citations: 0

Abstract

This paper proposes a powerful feature fusion method for object detection. A significant accuracy improvement is achieved by augmenting all multi-scale features at the cost of only a limited amount of extra computation. We build our detector on the fast SSD detector [1] and call it the Full Feature Fusion Network (F3N). Through several feature fusion modules, we fuse low-level and high-level features with a parallel low-high-level sub-network that repeatedly exchanges information across multi-scale features. All multi-scale features are fused using concatenation and interpolation within these modules. F3N achieves a new state-of-the-art result for one-stage object detection: with 512x512 input it reaches 82.5% mAP (mean Average Precision) and with 320x320 input 80.3% on the VOC2007 test set, while on the VOC2012 test set 512x512 input achieves 81.1% and 320x320 input 77.3%. On the MS COCO data set, 512x512 input obtains 33.9% and 320x320 input 30.4%. These accuracies are significantly higher than those of current mainstream approaches such as SSD [1], DSSD [8], FPN [11], and YOLO [6].
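The core operation the abstract describes is fusing a low-level, high-resolution feature map with a high-level, low-resolution one by interpolating the latter up to the former's resolution and concatenating along the channel axis. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the channel counts and spatial sizes are hypothetical stand-ins for SSD stages, and nearest-neighbor upsampling is assumed as the interpolation method.

```python
import numpy as np

def upsample_nearest(feat, scale):
    """Nearest-neighbor upsampling of a (C, H, W) feature map by an
    integer scale factor along both spatial axes."""
    return feat.repeat(scale, axis=1).repeat(scale, axis=2)

def fuse(low, high):
    """Fuse a low-level (C1, H, W) map with a high-level (C2, H/s, W/s)
    map: interpolate the high-level map up to the low-level resolution,
    then concatenate the two along the channel axis."""
    scale = low.shape[1] // high.shape[1]
    up = upsample_nearest(high, scale)
    return np.concatenate([low, up], axis=0)

# Hypothetical stage shapes, loosely modeled on an SSD-style backbone.
low = np.random.rand(64, 40, 40)    # early stage: fine resolution
high = np.random.rand(128, 20, 20)  # deeper stage: coarse resolution
fused = fuse(low, high)
print(fused.shape)  # (192, 40, 40): channels concatenated at fine resolution
```

In the paper's full design this fusion is applied repeatedly across all scales, so information flows in both directions between the low- and high-level sub-networks rather than only top-down as in a plain FPN.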