A Cost-Effective, Fast, and Robust Annotation Tool

A. F. Said, Vinay Kashyap, Namrata Choudhury, F. Akhbari
DOI: 10.1109/AIPR.2017.8457958
Journal: 2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)
Published: 2017-10-01
Citations: 1

Abstract

Deep learning requires huge datasets for model training. In most cases, dataset generation is done manually or with conventional approaches that are time consuming, costly, and prone to inaccuracy. There is an urgent need for fast, robust, and effective techniques to annotate each image with labels and classes based on its contents. In this paper, we propose an in-house annotation tool that generates accurate datasets in a short time with minimal human interference. The tool reads a captured video, and the operator manually highlights the objects and their assigned classes in the initial frame. A robust and fast object tracking approach was developed to detect and track each highlighted object through subsequent frames. Experimental results show that our annotation tool is 50x to 200x faster than conventional annotation methods while achieving higher accuracy.
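The abstract describes a workflow in which the operator labels objects once in the initial frame and a tracker propagates each labeled box through the remaining frames. The paper does not publish code, so the sketch below is only an illustration of that workflow: the `SimpleTracker` class is a hypothetical stand-in (a fixed per-frame box shift), not the authors' tracking method, and all names are our own.

```python
# Sketch of tracker-assisted annotation: label frame 0 by hand,
# then let a tracker generate annotations for every later frame.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class Annotation:
    frame: int
    label: str
    box: Tuple[int, int, int, int]  # x, y, width, height

class SimpleTracker:
    """Stand-in tracker: shifts each box by a fixed per-frame velocity.
    A real system would update the box from image content instead."""
    def __init__(self, box, velocity=(0, 0)):
        self.box = box
        self.vx, self.vy = velocity

    def update(self):
        x, y, w, h = self.box
        self.box = (x + self.vx, y + self.vy, w, h)
        return self.box

def propagate(initial: List[Annotation], n_frames: int) -> List[Annotation]:
    """Expand frame-0 annotations into a full per-frame dataset."""
    trackers = {a.label: SimpleTracker(a.box) for a in initial}
    out = list(initial)
    for f in range(1, n_frames):
        for a in initial:
            out.append(replace(a, frame=f, box=trackers[a.label].update()))
    return out

# Operator highlights two objects in the initial frame only:
seed = [Annotation(0, "car", (10, 20, 40, 30)),
        Annotation(0, "person", (100, 50, 20, 60))]
dataset = propagate(seed, n_frames=5)
```

The labor saving reported in the paper comes from this asymmetry: manual effort is spent only on the first frame, while the tracker produces annotations for all subsequent frames automatically.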