SAGTA: Semi-automatic Ground Truth Annotation in crowd scenes
Shuang Wu, Shibao Zheng, Hua Yang, Yawen Fan, Longfei Liang, Hang Su
2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), published 2014-07-14
DOI: 10.1109/ICMEW.2014.6890539
Citations: 5
Abstract
Ground truth is crucial for the performance evaluation of algorithms. Nevertheless, annotating ground truth manually is a tedious and time-consuming task, especially in crowd scenes. In this paper, we propose a novel semi-automatic tool called SAGTA (Semi-automatic Ground Truth Annotation Tool), which helps researchers annotate pedestrians easily and quickly in crowd scenes. First, users label pedestrians manually in a few key frames by drawing bounding boxes through SAGTA's friendly GUI. Then, the annotations in the remaining frames are coarsely estimated by automatic interpolation based on a 3D linear-motion assumption. Moreover, our tool refines these estimated annotations using ORB feature matching. This coarse-to-fine method makes the annotation process efficient. Afterwards, the refined annotations are manually verified and corrected to guarantee their accuracy. In addition, extra information (such as density, trajectories, and occlusion relationships) can be inferred automatically and visualized vividly. The proposed tool has been tested on the PETS and real surveillance data sets. Experimental results demonstrate that SAGTA achieves a lower time cost than ViPER-GT, a widely used annotation tool.
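To illustrate the coarse estimation step, the sketch below linearly interpolates a bounding box between two manually annotated key frames. This is a minimal sketch, not the authors' implementation: the paper interpolates under a 3D linear-motion assumption (i.e., in world coordinates, requiring camera calibration), whereas this example interpolates directly in the image plane, and the `interpolate_box` name and the `{frame_index: (x, y, w, h)}` data layout are hypothetical.

```python
def interpolate_box(keyframes, frame):
    """Coarsely estimate a bounding box for `frame` by linear interpolation
    between the two nearest annotated key frames.

    keyframes: dict mapping frame index -> (x, y, w, h) bounding box
               (hypothetical layout; image-plane coordinates only).
    """
    if frame in keyframes:
        return keyframes[frame]
    frames = sorted(keyframes)
    if frame < frames[0] or frame > frames[-1]:
        raise ValueError("frame lies outside the annotated key-frame range")
    # Find the key frames bracketing the target frame.
    prev = max(f for f in frames if f < frame)
    nxt = min(f for f in frames if f > frame)
    # Interpolation weight in [0, 1] along the assumed linear motion.
    t = (frame - prev) / (nxt - prev)
    b0, b1 = keyframes[prev], keyframes[nxt]
    return tuple(a + t * (b - a) for a, b in zip(b0, b1))

# A pedestrian annotated at frames 0 and 10; estimate frame 5.
boxes = {0: (0, 0, 10, 10), 10: (20, 10, 10, 10)}
print(interpolate_box(boxes, 5))  # → (10.0, 5.0, 10.0, 10.0)
```

In the full pipeline described by the abstract, such coarse estimates would then be refined by matching ORB features between the key frame and the interpolated frame, before a final manual verification pass.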