Toward crowdsourcing micro-level behavior annotations: the challenges of interface, training, and generalization

Sunghyun Park, Philippa Shoemark, Louis-Philippe Morency

Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI 2014)
Published: 2014-02-24
DOI: 10.1145/2557500.2557512 (https://doi.org/10.1145/2557500.2557512)
Citations: 25
Abstract
Research that involves human behavior analysis usually requires laborious and costly efforts to obtain micro-level behavior annotations on a large video corpus. With the emerging paradigm of crowdsourcing, however, these efforts can be considerably reduced. We first present OCTAB (Online Crowdsourcing Tool for Annotations of Behaviors), a web-based annotation tool that allows precise and convenient behavior annotation in videos, directly portable to popular crowdsourcing platforms. As part of OCTAB, we introduce a training module with specialized visualizations. The training module's design was inspired by an observational study of local experienced coders, and it enables an iterative procedure for effectively training crowd workers online. Finally, we present an extensive set of experiments that evaluates the feasibility of our crowdsourcing approach for obtaining micro-level behavior annotations in videos, showing the improvement in annotation accuracy and reliability when online crowd workers are properly trained. We also show the generalization of our training approach to a new, independent video corpus.