Is affective crowdsourcing reliable?
I. Hupont, Pierre R. Lebreton, T. Maki, E. Skodras, Matthias Hirth
2014 IEEE Fifth International Conference on Communications and Electronics (ICCE), October 7, 2014
DOI: 10.1109/CCE.2014.6916757
Citations: 5
Abstract
Affective content annotations are typically acquired from subjective manual assessments by experts in supervised laboratory tests. While easy to manage, such campaigns are expensive and time-consuming, and their results may not generalize to larger audiences. Crowdsourcing constitutes a promising approach for quickly collecting data with wide demographic scope at reasonable cost. Undeniably, affective crowdsourcing is particularly challenging in the sense that it attempts to collect subjective perceptions from humans with different cultures, languages, knowledge backgrounds, etc. In this study we analyze the validity of well-known affective user scales in a crowdsourcing context by comparing the results with those obtained in laboratory tests. Experimental results demonstrate that pictorial scales possess promising features for affective crowdsourcing.