{"title":"Association Between Logical Reasoning Ability and Quality of Relevance Judgments in Crowdsourcing","authors":"Parnia Samimi, Prabha Rajagopal, Sri Devi Ravana","doi":"10.1109/INFRKM.2018.8464689","DOIUrl":null,"url":null,"abstract":"Human assessors are in charge of creating relevance judgments set in a typical test collection. Nevertheless, this approach is not that efficient as it is expensive and time consuming while scales deficiently. Crowdsourcing as a recent technique for data acquisition is a low-cost and fast method for building relevance judgments. One of the most important issues for using crowdsourcing instead of human expert assessors is the quality of crowdsourcing in building relevance judgments. In order to assess this issue, factors that may have significant effects on the quality of crowdsourcing relevance judgments should be identified. The main objective of this study is to find out whether cognitive characteristics of crowdsourced workers significantly associated with quality of crowdsourced judgments, and to evaluate the effect(s) that each of those characteristics may have on judgment quality, as compared with the gold standard dataset (i.e. human assessment). Thus, the judgments of the crowdsourced workers is compared to that of a human judgment, as the overlap between relevance assessments, and by comparing the system effectiveness evaluation provided by human judgment and from worker assessors. In this study, we assess the effects of the cognitive ability namely logical reasoning ability on quality of relevance judgment. The experiment shows that logical reasoning ability of individuals is remarkably correlated with quality of relevance judgments.","PeriodicalId":196731,"journal":{"name":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFRKM.2018.8464689","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Human assessors are typically responsible for creating the relevance judgment sets in a test collection. However, this approach is inefficient: it is expensive, time-consuming, and scales poorly. Crowdsourcing, a more recent data-acquisition technique, offers a low-cost and fast way to build relevance judgments. One of the most important issues in using crowdsourcing instead of expert human assessors is the quality of the resulting relevance judgments. To assess this issue, the factors that may significantly affect the quality of crowdsourced relevance judgments should be identified. The main objective of this study is to determine whether cognitive characteristics of crowdsourced workers are significantly associated with the quality of their judgments, and to evaluate the effect each of those characteristics may have on judgment quality, compared against a gold-standard dataset (i.e., expert human assessments). To this end, the workers' judgments are compared with the human judgments in two ways: by measuring the overlap between the relevance assessments, and by comparing the system effectiveness evaluations produced from the human judgments with those produced from the worker assessments. In this study, we assess the effect of one cognitive ability, namely logical reasoning ability, on the quality of relevance judgments. The experiment shows that individuals' logical reasoning ability is significantly correlated with the quality of their relevance judgments.
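To make the two comparison criteria in the abstract concrete, below is a minimal sketch (not the authors' code) of how such measures are commonly computed: (1) the overlap between crowdsourced and gold-standard relevance labels, and (2) the agreement (Kendall's tau) between the system rankings each judgment set produces. All function names, variable names, and example values are illustrative assumptions.

```python
# Sketch of the two judgment-quality measures described in the abstract.
# Assumes judgments are dicts keyed by (topic, doc_id) with 0/1 labels,
# and per-system effectiveness scores (e.g., MAP) are dicts keyed by system.
from scipy.stats import kendalltau


def judgment_overlap(crowd: dict, gold: dict) -> float:
    """Fraction of shared (topic, doc_id) pairs given identical labels."""
    shared = set(crowd) & set(gold)
    if not shared:
        return 0.0
    agree = sum(1 for pair in shared if crowd[pair] == gold[pair])
    return agree / len(shared)


def ranking_agreement(scores_crowd: dict, scores_gold: dict) -> float:
    """Kendall's tau between system effectiveness scores computed with
    crowdsourced vs. gold-standard judgments."""
    systems = sorted(set(scores_crowd) & set(scores_gold))
    tau, _ = kendalltau([scores_crowd[s] for s in systems],
                        [scores_gold[s] for s in systems])
    return tau


# Hypothetical judgment sets: three documents, one label disagreement.
crowd = {("t1", "d1"): 1, ("t1", "d2"): 0, ("t2", "d3"): 1}
gold = {("t1", "d1"): 1, ("t1", "d2"): 1, ("t2", "d3"): 1}
print(judgment_overlap(crowd, gold))           # 0.666...

# Hypothetical MAP scores per retrieval system under each judgment set.
map_crowd = {"sysA": 0.42, "sysB": 0.35, "sysC": 0.28}
map_gold = {"sysA": 0.45, "sysB": 0.30, "sysC": 0.33}
print(ranking_agreement(map_crowd, map_gold))  # tau in [-1, 1]
```

A high overlap indicates that workers label documents the way experts do, while a high tau indicates that even imperfect worker judgments still rank retrieval systems in the same order as the gold standard; the two measures can diverge, which is why the study reports both.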