Association Between Logical Reasoning Ability and Quality of Relevance Judgments in Crowdsourcing

Parnia Samimi, Prabha Rajagopal, Sri Devi Ravana
{"title":"Association Between Logical Reasoning Ability and Quality of Relevance Judgments in Crowdsourcing","authors":"Parnia Samimi, Prabha Rajagopal, Sri Devi Ravana","doi":"10.1109/INFRKM.2018.8464689","DOIUrl":null,"url":null,"abstract":"Human assessors are in charge of creating relevance judgments set in a typical test collection. Nevertheless, this approach is not that efficient as it is expensive and time consuming while scales deficiently. Crowdsourcing as a recent technique for data acquisition is a low-cost and fast method for building relevance judgments. One of the most important issues for using crowdsourcing instead of human expert assessors is the quality of crowdsourcing in building relevance judgments. In order to assess this issue, factors that may have significant effects on the quality of crowdsourcing relevance judgments should be identified. The main objective of this study is to find out whether cognitive characteristics of crowdsourced workers significantly associated with quality of crowdsourced judgments, and to evaluate the effect(s) that each of those characteristics may have on judgment quality, as compared with the gold standard dataset (i.e. human assessment). Thus, the judgments of the crowdsourced workers is compared to that of a human judgment, as the overlap between relevance assessments, and by comparing the system effectiveness evaluation provided by human judgment and from worker assessors. In this study, we assess the effects of the cognitive ability namely logical reasoning ability on quality of relevance judgment. 
The experiment shows that logical reasoning ability of individuals is remarkably correlated with quality of relevance judgments.","PeriodicalId":196731,"journal":{"name":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFRKM.2018.8464689","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Human assessors typically create the relevance judgment sets in a test collection. However, this approach is inefficient: it is expensive and time consuming, and it scales poorly. Crowdsourcing, a recent technique for data acquisition, offers a low-cost and fast alternative for building relevance judgments. One of the most important issues in using crowdsourcing instead of expert assessors is the quality of the resulting relevance judgments. To assess this issue, factors that may significantly affect the quality of crowdsourced relevance judgments should be identified. The main objective of this study is to find out whether cognitive characteristics of crowdsourced workers are significantly associated with the quality of crowdsourced judgments, and to evaluate the effect each of those characteristics may have on judgment quality, as compared with a gold-standard dataset (i.e. expert human assessment). The judgments of the crowdsourced workers are therefore compared with the human judgments in two ways: as the overlap between relevance assessments, and by comparing the system effectiveness evaluations produced from the human judgments and from the worker assessments. In this study, we assess the effect of one cognitive ability, namely logical reasoning ability, on the quality of relevance judgments. The experiment shows that the logical reasoning ability of individuals is significantly correlated with the quality of their relevance judgments.
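The abstract measures judgment quality as the overlap between crowd and gold-standard relevance assessments, and reports a correlation between workers' logical reasoning scores and that quality. The sketch below illustrates both ideas in minimal form; the function names, the binary relevance labels, and the toy data are illustrative assumptions, not taken from the paper.

```python
import math


def judgment_overlap(gold, crowd):
    """Fraction of (topic, doc) pairs judged by both assessors where the
    relevance labels agree (1 = relevant, 0 = non-relevant)."""
    shared = set(gold) & set(crowd)
    if not shared:
        return 0.0
    agree = sum(1 for key in shared if gold[key] == crowd[key])
    return agree / len(shared)


def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences,
    e.g. reasoning-test scores vs. per-worker judgment quality."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Toy example (invented data): one worker agrees with the gold standard
# on 3 of 4 shared (topic, doc) pairs.
gold = {("t1", "d1"): 1, ("t1", "d2"): 0, ("t2", "d1"): 1, ("t2", "d3"): 0}
crowd = {("t1", "d1"): 1, ("t1", "d2"): 1, ("t2", "d1"): 1, ("t2", "d3"): 0}
print(judgment_overlap(gold, crowd))  # 3 of 4 shared pairs agree -> 0.75
```

In the study itself the association is tested statistically across many workers; this sketch only shows the shape of the computation, with overlap as the quality measure and Pearson correlation as one plausible choice of association statistic.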