The Three Sides of CrowdTruth

Lora Aroyo, Chris Welty
DOI: 10.15346/HC.V1I1.34
Journal: Human computation (Fairfax, Va.), vol. 1, no. 1, pp. 31-44
Published: 2014-09-07 (Journal Article)
Citations: 78

Abstract

Crowdsourcing is often used to gather annotated data for training and evaluating computational systems that attempt to solve cognitive problems, such as understanding Natural Language sentences. Crowd workers are asked to perform semantic interpretation of sentences to establish a ground truth. This has always been done under the assumption that each task unit, e.g. each sentence, has a single correct interpretation that is contained in the ground truth. We have countered this assumption with CrowdTruth, and have shown that it can be better suited to tasks for which semantic interpretation is subjective. In this paper we investigate the dependence of worker metrics for detecting spam on the quality of sentences in the dataset, and the quality of the target semantics. We show that worker quality metrics can improve significantly when the quality of these other aspects of semantic interpretation are considered.
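To make the notion of a worker quality metric concrete: CrowdTruth-style approaches represent each worker's judgments on a task unit as an annotation vector and score workers by their agreement with the crowd. The sketch below is a minimal, hypothetical illustration of that idea (the function names and data layout are assumptions, not the paper's implementation): a worker's quality is the mean cosine similarity between their annotation vector and the aggregate vector of all other workers, over the sentences they annotated. A consistently disagreeing (spam-like) worker scores near zero.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity; returns 0.0 when either vector is all-zero."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return float(np.dot(u, v) / (nu * nv))

def worker_quality(annotations, worker):
    """Mean cosine between `worker`'s annotation vector and the summed
    vectors of all *other* workers, averaged over shared sentences.

    annotations: dict mapping sentence_id -> {worker_id: binary vector},
    where each vector marks which candidate interpretations were chosen.
    """
    scores = []
    for sentence, by_worker in annotations.items():
        if worker not in by_worker:
            continue
        others = [np.asarray(v, dtype=float)
                  for w, v in by_worker.items() if w != worker]
        if not others:
            continue  # no one to compare against on this sentence
        aggregate = np.sum(others, axis=0)
        scores.append(cosine(np.asarray(by_worker[worker], dtype=float),
                             aggregate))
    return sum(scores) / len(scores) if scores else 0.0
```

Note that this simple form measures only worker-crowd agreement; the paper's point is that such a score is confounded by low-quality sentences and unclear target semantics, so those factors should be modeled alongside it rather than blamed on the worker.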