DenseYOLO: Yet Faster, Lighter and More Accurate YOLO

Solomon Negussie Tesema, E. Bourennane
{"title":"DenseYOLO: Yet Faster, Lighter and More Accurate YOLO","authors":"Solomon Negussie Tesema, E. Bourennane","doi":"10.1109/IEMCON51383.2020.9284923","DOIUrl":null,"url":null,"abstract":"As much as an object detector should be accurate, it should be light and fast as well. However, current object detectors tend to be either inaccurate when lightweight or very slow and heavy when accurate. Accordingly, determining tolerable tradeoff between speed and accuracy of an object detector is not a simple task. One of the object detectors that have commendable balance of speed and accuracy is YOLOv2. YOLOv2 performs detection by dividing an input image into grids and training each grid cell to predict certain number of objects. In this paper we propose a new approach to even make YOLOv2 more fast and accurate. We re-purpose YOLOv2 into a dense object detector by using fine-grained grids, where a cell predicts only one object and its corresponding class and objectness confidence score. Our approach also trains the system to learn to pick a best fitting anchor box instead of the fixed anchor assignment during ground-truth annotation as used by YOLOv2. We will also introduce a new loss function to balance the overwhelming imbalance between the number of grids responsible of detecting an object and those that should not.","PeriodicalId":6871,"journal":{"name":"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","volume":"57 1","pages":"0534-0539"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEMCON51383.2020.9284923","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

As much as an object detector should be accurate, it should be light and fast as well. However, current object detectors tend to be either inaccurate when lightweight or very slow and heavy when accurate. Accordingly, determining a tolerable tradeoff between the speed and accuracy of an object detector is not a simple task. One of the object detectors with a commendable balance of speed and accuracy is YOLOv2. YOLOv2 performs detection by dividing an input image into grids and training each grid cell to predict a certain number of objects. In this paper we propose a new approach that makes YOLOv2 even faster and more accurate. We re-purpose YOLOv2 into a dense object detector by using fine-grained grids, where each cell predicts only one object along with its corresponding class and objectness confidence score. Our approach also trains the system to learn to pick the best-fitting anchor box, instead of the fixed anchor assignment during ground-truth annotation used by YOLOv2. We also introduce a new loss function to counter the overwhelming imbalance between the number of grid cells responsible for detecting an object and those that are not.
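The abstract only sketches the dense detection head at a high level, so the snippet below is an illustrative reconstruction rather than the paper's implementation: the grid size, channel counts, and names (DenseDetectionHead, in_channels, num_classes) are assumptions. It shows the per-cell layout the abstract describes, where every cell of a fine-grained S×S grid predicts exactly one box, one objectness confidence, and one class distribution.

```python
# Minimal sketch, NOT the paper's code: layer names, grid size (26x26),
# in_channels=1024 and num_classes=20 are assumptions for illustration.
import torch
import torch.nn as nn

class DenseDetectionHead(nn.Module):
    def __init__(self, in_channels=1024, num_classes=20):
        super().__init__()
        # Per cell: 4 box offsets + 1 objectness score + num_classes class scores
        self.pred = nn.Conv2d(in_channels, 5 + num_classes, kernel_size=1)

    def forward(self, features):
        # features: (N, in_channels, S, S) over a fine-grained S x S grid
        out = self.pred(features)             # (N, 5 + C, S, S)
        out = out.permute(0, 2, 3, 1)         # (N, S, S, 5 + C)
        box = out[..., 0:4]                   # exactly one box per cell
        obj = torch.sigmoid(out[..., 4:5])    # objectness confidence per cell
        cls = out[..., 5:]                    # class logits per cell
        return box, obj, cls

# Usage on an assumed 26x26 feature map with 1024 channels
head = DenseDetectionHead()
feat = torch.randn(2, 1024, 26, 26)
box, obj, cls = head(feat)
print(box.shape, obj.shape, cls.shape)  # (2,26,26,4) (2,26,26,1) (2,26,26,20)
```

With such a fine-grained grid, most cells contain no object, which is the foreground/background imbalance the paper's new loss function is said to address; the abstract does not specify the loss, so none is shown here.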