Crowd-Sourcing for Data Science and Quantifiable Challenges: Optimal Contest Design
Authors: Milind Dawande, G. Janakiraman, Goutham Takasi
Journal: ERN: Other Econometrics: Econometric & Statistical Methods - Special Topics (Topic)
Publication date: 2020-11-30
DOI: 10.2139/ssrn.3740224 (https://doi.org/10.2139/ssrn.3740224)
Citations: 0
Abstract
We study the optimal design of a crowd-sourcing contest in settings where the output (from the contestants) is quantifiable -- for example, a data science challenge. This setting is in contrast to settings where the output is only qualitative and cannot be quantified in an objective manner -- for example, when the goal of the contest is to design a logo. The rapidly growing literature on the design of crowd-sourcing contests focuses largely on ordinal contests -- these are contests where contestants' outputs are ranked by the organizer and awards are based on the relative ranks. Such contests are ideally suited for the latter setting, where output is qualitative. For our setting (quantitative output), it is possible to design contests where awards are based on the actual outputs and not on their ranking alone -- thus, our space of contest designs includes ordinal contests but is significantly larger. We derive an easy-to-implement contest design for this setting and establish its optimality.
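The distinction the abstract draws can be made concrete with a small sketch: an ordinal contest pays fixed prizes determined only by the ranking of outputs, whereas with quantifiable output the award rule can depend on the output values themselves. The functions below are purely illustrative (the prize amounts, the threshold rule, and the proportional payment are hypothetical examples, not the mechanism derived in the paper).

```python
# Illustrative contrast, not the paper's optimal design: rank-based prizes
# versus awards computed from the actual (quantifiable) outputs.

def ordinal_awards(outputs, prizes):
    """Award fixed prizes by rank: the best output receives prizes[0],
    the second best prizes[1], and so on. Payments depend only on the
    relative ranking of outputs, never on their actual values."""
    ranked = sorted(range(len(outputs)), key=lambda i: outputs[i], reverse=True)
    awards = [0.0] * len(outputs)
    for rank, i in enumerate(ranked):
        if rank < len(prizes):
            awards[i] = prizes[rank]
    return awards

def output_based_awards(outputs, threshold, rate):
    """Pay each contestant in proportion to how far their measured output
    exceeds a threshold -- a rule that is only feasible when output is
    quantifiable, and that lies outside the space of ordinal contests."""
    return [max(0.0, (y - threshold) * rate) for y in outputs]

# e.g., accuracy scores submitted in a data science challenge
outputs = [0.92, 0.75, 0.81]
print(ordinal_awards(outputs, [100.0, 50.0]))     # [100.0, 0.0, 50.0]
print(output_based_awards(outputs, 0.8, 1000.0))  # payments track the scores themselves
```

Note that two very different score profiles with the same ranking yield identical payments under the ordinal rule, while the output-based rule distinguishes them; this is why the design space with quantifiable output strictly contains the ordinal contests.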