Faster Bounding Box Annotation for Object Detection in Indoor Scenes

Bishwo Adhikari, Jukka Peltomäki, Jussi Puura, H. Huttunen
{"title":"室内场景中更快的目标检测边界框标注","authors":"Bishwo Adhikari, Jukka Peltomäki, Jussi Puura, H. Huttunen","doi":"10.1109/EUVIP.2018.8611732","DOIUrl":null,"url":null,"abstract":"This paper proposes an approach for rapid bounding box annotation for object detection datasets. The procedure consists of two stages: The first step is to annotate a part of the dataset manually, and the second step proposes annotations for the remaining samples using a model trained with the first stage annotations. We experimentally study which first/second stage split minimizes to total workload. In addition, we introduce a new fully labeled object detection dataset collected from indoor scenes. Compared to other indoor datasets, our collection has more class categories, diverse backgrounds, lighting conditions, occlusions and high intra-class differences. We train deep learning based object detectors with a number of state-of-the-art models and compare them in terms of speed and accuracy. The fully annotated dataset is released freely available for the research community.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"34","resultStr":"{\"title\":\"Faster Bounding Box Annotation for Object Detection in Indoor Scenes\",\"authors\":\"Bishwo Adhikari, Jukka Peltomäki, Jussi Puura, H. Huttunen\",\"doi\":\"10.1109/EUVIP.2018.8611732\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes an approach for rapid bounding box annotation for object detection datasets. The procedure consists of two stages: The first step is to annotate a part of the dataset manually, and the second step proposes annotations for the remaining samples using a model trained with the first stage annotations. We experimentally study which first/second stage split minimizes to total workload. In addition, we introduce a new fully labeled object detection dataset collected from indoor scenes. Compared to other indoor datasets, our collection has more class categories, diverse backgrounds, lighting conditions, occlusions and high intra-class differences. We train deep learning based object detectors with a number of state-of-the-art models and compare them in terms of speed and accuracy. 
The fully annotated dataset is released freely available for the research community.\",\"PeriodicalId\":252212,\"journal\":{\"name\":\"2018 7th European Workshop on Visual Information Processing (EUVIP)\",\"volume\":\"170 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"34\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 7th European Workshop on Visual Information Processing (EUVIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EUVIP.2018.8611732\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 7th European Workshop on Visual Information Processing (EUVIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EUVIP.2018.8611732","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 34

Abstract

This paper proposes an approach for rapid bounding box annotation for object detection datasets. The procedure consists of two stages: the first step is to annotate a part of the dataset manually, and the second step proposes annotations for the remaining samples using a model trained on the first-stage annotations. We experimentally study which first/second stage split minimizes the total workload. In addition, we introduce a new fully labeled object detection dataset collected from indoor scenes. Compared to other indoor datasets, our collection has more class categories, diverse backgrounds, lighting conditions, occlusions, and high intra-class differences. We train deep learning based object detectors with a number of state-of-the-art models and compare them in terms of speed and accuracy. The fully annotated dataset is released and made freely available to the research community.
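The two-stage procedure described in the abstract can be summarized as a short sketch. The code below is illustrative only and is not taken from the paper: the Detector class, annotate_manually, the 30% manual split, and the 0.5 confidence threshold are hypothetical placeholders standing in for whatever detector and annotation tool are actually used.

# Illustrative sketch of the two-stage semi-automatic annotation workflow.
# All names here (annotate_manually, Detector, manual_fraction, ...) are
# hypothetical placeholders, not the authors' code.

from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

@dataclass
class Annotation:
    image_path: str
    boxes: List[Box]
    labels: List[str]

def annotate_manually(image_paths: List[str]) -> List[Annotation]:
    """Stage 1: a human draws every bounding box by hand."""
    raise NotImplementedError("Replace with your manual annotation tool.")

class Detector:
    """Placeholder for any trainable object detector (e.g. SSD, Faster R-CNN)."""
    def train(self, annotations: List[Annotation]) -> None: ...
    def predict(self, image_path: str, conf_threshold: float) -> Annotation: ...

def semi_automatic_annotation(image_paths: List[str],
                              manual_fraction: float = 0.3,
                              conf_threshold: float = 0.5) -> List[Annotation]:
    """Annotate `manual_fraction` of the images by hand, train a detector on
    them, then let the detector propose boxes for the remaining images."""
    split = int(len(image_paths) * manual_fraction)
    manual_part, auto_part = image_paths[:split], image_paths[split:]

    # Stage 1: manual annotation of the first part of the dataset.
    annotations = annotate_manually(manual_part)

    # Train a detector on the manually annotated subset.
    detector = Detector()
    detector.train(annotations)

    # Stage 2: the detector proposes annotations for the remaining samples;
    # a human only confirms or corrects them, which is faster than drawing
    # every box from scratch.
    proposals = [detector.predict(p, conf_threshold) for p in auto_part]
    return annotations + proposals

The paper studies experimentally which first/second stage split minimizes the total annotation workload; the 30% default in the sketch is only a placeholder, not the reported optimum.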