A Cost-Effective, Fast, and Robust Annotation Tool
A. F. Said, Vinay Kashyap, Namrata Choudhury, F. Akhbari
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), October 2017
DOI: 10.1109/AIPR.2017.8457958
Abstract
Deep learning requires huge datasets for training its models. In most cases, dataset generation is done manually or with conventional approaches that are time consuming, costly, and prone to inaccuracy. There is an urgent need for fast, robust, and effective techniques to annotate each image with labels and classes based on its contents. In this paper, we propose an in-house annotation tool that generates accurate datasets in a short time with minimal human interference. The tool reads a captured video, and the operator manually highlights the objects and their assigned classes in the initial frame. A robust and fast object tracking approach was developed to detect and track each highlighted object through subsequent frames. Experimental results using our annotation tool show 50x to 200x faster processing compared to conventional annotation methods, in addition to better performance with higher accuracy.
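The abstract describes an initialize-then-track workflow: the operator draws boxes with class labels on the first frame, and a tracker propagates them through the remaining frames. The paper itself does not give implementation details, so the sketch below is only an illustration of that workflow, using OpenCV's off-the-shelf CSRT tracker as a stand-in for the authors' own tracking approach; the video path, class labels, and output format are assumptions.

```python
# Minimal sketch of the initialize-then-track annotation workflow described
# in the abstract. OpenCV's CSRT tracker stands in for the authors' own
# tracking method; the video path and class labels are illustrative only.
import csv
import cv2

VIDEO_PATH = "captured_video.mp4"   # assumed input file

# Boxes the operator would draw on the initial frame: (label, (x, y, w, h)).
# Hard-coded here; a real tool would use cv2.selectROI or a GUI instead.
initial_boxes = [
    ("car",        (120, 200, 80, 60)),
    ("pedestrian", (300, 180, 40, 90)),
]

cap = cv2.VideoCapture(VIDEO_PATH)
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read the first frame")

# One tracker per highlighted object, initialized on the first frame.
trackers = []
for label, bbox in initial_boxes:
    tracker = cv2.TrackerCSRT_create()  # cv2.legacy.TrackerCSRT_create in some builds
    tracker.init(frame, bbox)
    trackers.append((label, tracker))

# Propagate the boxes through subsequent frames and record the annotations.
with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "label", "x", "y", "w", "h"])
    frame_idx = 0
    for label, (x, y, w, h) in initial_boxes:
        writer.writerow([frame_idx, label, x, y, w, h])
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        for label, tracker in trackers:
            found, bbox = tracker.update(frame)
            if found:
                x, y, w, h = map(int, bbox)
                writer.writerow([frame_idx, label, x, y, w, h])

cap.release()
```

In this sketch the only manual step is specifying the initial boxes, which mirrors the paper's claim that human interference is limited to the first frame while tracking handles the rest.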