How clumpy is my image? Evaluating crowdsourced annotation tasks

Hugo Hutt, R. Everson, Murray Grant, John Love, George R. Littlejohn
{"title":"How clumpy is my image? Evaluating crowdsourced annotation tasks","authors":"Hugo Hutt, R. Everson, Murray Grant, John Love, George R. Littlejohn","doi":"10.1109/UKCI.2013.6651298","DOIUrl":null,"url":null,"abstract":"The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets in which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration which affects the quality of the resulting consensus estimates. In this paper, we examine three separate approaches to obtaining scores for instances rather than merely classifications. To obtain a consensus score annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described which implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in relation to the accuracy and agreement among the participants using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve the performance by reweighting the importance of annotators.","PeriodicalId":106191,"journal":{"name":"2013 13th UK Workshop on Computational Intelligence (UKCI)","volume":"243 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 13th UK Workshop on Computational Intelligence (UKCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UKCI.2013.6651298","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets in which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration that affects the quality of the resulting consensus estimates. In this paper, we examine three separate approaches to obtaining scores for instances rather than merely classifications. To obtain a consensus score, annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described that implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in terms of accuracy and agreement among the participants, using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve performance by reweighting the importance of annotators.
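The abstract's final point, that performance can be improved by reweighting annotators with a simple evolutionary optimiser, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the weighted-mean consensus rule, the synthetic per-annotator noise model, the (1+1)-style mutation loop, and the use of Spearman rank correlation as the fitness measure. The paper's actual aggregation rule and optimiser details are not given in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy setting: each of 8 annotators gives a "clumpiness" score to 50 images.
# A hidden reference score exists only so this sketch can measure accuracy;
# in practice a gold-standard subset or inter-annotator agreement would be used.
n_annotators, n_images = 8, 50
reference = rng.uniform(0.0, 1.0, n_images)
per_annotator_noise = rng.uniform(0.05, 0.5, n_annotators)   # assumed reliabilities
annotations = reference + rng.normal(
    0.0, per_annotator_noise[:, None], size=(n_annotators, n_images)
)

def consensus(weights, annotations):
    """Weighted-mean consensus score per image (assumed aggregation rule)."""
    w = np.clip(weights, 0.0, None)
    return (w / w.sum()) @ annotations

def fitness(weights):
    """Spearman rank correlation between the consensus and the reference ordering."""
    return spearmanr(consensus(weights, annotations), reference).correlation

# (1+1)-style evolutionary loop: mutate the annotator weights, keep improvements.
weights = np.ones(n_annotators)
best = fitness(weights)
for _ in range(500):
    candidate = weights + rng.normal(0.0, 0.1, n_annotators)
    if np.all(candidate <= 0):        # skip degenerate all-non-positive weight vectors
        continue
    score = fitness(candidate)
    if score >= best:
        weights, best = candidate, score

print(f"uniform weights:  rho = {fitness(np.ones(n_annotators)):.3f}")
print(f"evolved weights:  rho = {best:.3f}")
```

Because the loop only accepts weight vectors that do not decrease the fitness, the evolved correlation is never worse than the uniform-weight baseline; under the assumed noise model the optimiser tends to down-weight the noisier annotators. The sketch shows the mechanism only and does not reproduce the paper's results.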