The Influence of Status on Evaluations: Evidence from Online Coding Contests

Impact Factor: 7.0 | CAS Tier 2 (Management) | JCR Q1 (Computer Science, Information Systems)
Swanand J. Deodhar, Yash Babar, Gordon Burtch
{"title":"状态对评价的影响:来自在线编码竞赛的证据","authors":"Swanand J. Deodhar, Yash Babar, Gordon Burtch","doi":"10.25300/misq/2022/16178","DOIUrl":null,"url":null,"abstract":"In many instances, online contest platforms rely on contestants to ensure submission quality. This scalable evaluation mechanism offers a collective benefit. However, contestants may also leverage it to achieve personal, competitive benefits. Our study examines this tension from a status-theoretic perspective, suggesting that the conflict between competitive and collective benefits, and the net implication for evaluation efficacy, is influenced by contestants’ status. On the one hand, contestants of lower status may be viewed as less skilled and hence more likely to make mistakes. Therefore, low-status contestants may attract more evaluations if said evaluations are driven predominantly by an interest in collective benefits. On the other hand, if evaluations are driven largely by an interest in personal, competitive benefits, a low-status contestant makes for a less attractive target and hence may attract fewer evaluations. We empirically test these competing possibilities using a dataset of coding contests from Codeforces. The platform allows contestants to assess others’ submissions and improve evaluations (a collective benefit) by devising test cases (hacks) in addition to those defined by the contest organizer. If a submission is successfully hacked, the hacker earns additional points, and the target submission is eliminated from the contest (a competitive benefit). We begin by providing qualitative evidence based on semi-structured interviews conducted with contestants spanning the status spectrum at Codeforces. Next, we present quantitative evidence exploiting a structural change at Codeforces wherein many contestants experienced an arbitrary status reduction unrelated to their performance because of sudden changes to the platform’s color-coding system around contestant ratings. We show that status-loser contestants received systematically more evaluations from other contestants, absent changes in their short-run submission quality. Finally, we show that the excess evaluations allocated toward affected contestants were less effective, indicating status-driven evaluations as potentially less efficacious. We discuss the implications of our findings for managing evaluation processes in online contests.","PeriodicalId":49807,"journal":{"name":"Mis Quarterly","volume":" ","pages":""},"PeriodicalIF":7.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Influence of Status on Evaluations: Evidence from Online Coding Contests\",\"authors\":\"Swanand J. Deodhar, Yash Babar, Gordon Burtch\",\"doi\":\"10.25300/misq/2022/16178\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In many instances, online contest platforms rely on contestants to ensure submission quality. This scalable evaluation mechanism offers a collective benefit. However, contestants may also leverage it to achieve personal, competitive benefits. Our study examines this tension from a status-theoretic perspective, suggesting that the conflict between competitive and collective benefits, and the net implication for evaluation efficacy, is influenced by contestants’ status. On the one hand, contestants of lower status may be viewed as less skilled and hence more likely to make mistakes. 
Therefore, low-status contestants may attract more evaluations if said evaluations are driven predominantly by an interest in collective benefits. On the other hand, if evaluations are driven largely by an interest in personal, competitive benefits, a low-status contestant makes for a less attractive target and hence may attract fewer evaluations. We empirically test these competing possibilities using a dataset of coding contests from Codeforces. The platform allows contestants to assess others’ submissions and improve evaluations (a collective benefit) by devising test cases (hacks) in addition to those defined by the contest organizer. If a submission is successfully hacked, the hacker earns additional points, and the target submission is eliminated from the contest (a competitive benefit). We begin by providing qualitative evidence based on semi-structured interviews conducted with contestants spanning the status spectrum at Codeforces. Next, we present quantitative evidence exploiting a structural change at Codeforces wherein many contestants experienced an arbitrary status reduction unrelated to their performance because of sudden changes to the platform’s color-coding system around contestant ratings. We show that status-loser contestants received systematically more evaluations from other contestants, absent changes in their short-run submission quality. Finally, we show that the excess evaluations allocated toward affected contestants were less effective, indicating status-driven evaluations as potentially less efficacious. We discuss the implications of our findings for managing evaluation processes in online contests.\",\"PeriodicalId\":49807,\"journal\":{\"name\":\"Mis Quarterly\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mis Quarterly\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.25300/misq/2022/16178\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mis Quarterly","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.25300/misq/2022/16178","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In many instances, online contest platforms rely on contestants to ensure submission quality. This scalable evaluation mechanism offers a collective benefit. However, contestants may also leverage it to achieve personal, competitive benefits. Our study examines this tension from a status-theoretic perspective, suggesting that the conflict between competitive and collective benefits, and the net implication for evaluation efficacy, is influenced by contestants’ status. On the one hand, contestants of lower status may be viewed as less skilled and hence more likely to make mistakes. Therefore, low-status contestants may attract more evaluations if said evaluations are driven predominantly by an interest in collective benefits. On the other hand, if evaluations are driven largely by an interest in personal, competitive benefits, a low-status contestant makes for a less attractive target and hence may attract fewer evaluations. We empirically test these competing possibilities using a dataset of coding contests from Codeforces. The platform allows contestants to assess others’ submissions and improve evaluations (a collective benefit) by devising test cases (hacks) in addition to those defined by the contest organizer. If a submission is successfully hacked, the hacker earns additional points, and the target submission is eliminated from the contest (a competitive benefit). We begin by providing qualitative evidence based on semi-structured interviews conducted with contestants spanning the status spectrum at Codeforces. Next, we present quantitative evidence exploiting a structural change at Codeforces wherein many contestants experienced an arbitrary status reduction unrelated to their performance because of sudden changes to the platform’s color-coding system around contestant ratings. We show that status-loser contestants received systematically more evaluations from other contestants, absent changes in their short-run submission quality. Finally, we show that the excess evaluations allocated toward affected contestants were less effective, indicating status-driven evaluations as potentially less efficacious. We discuss the implications of our findings for managing evaluation processes in online contests.
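The abstract describes the identification strategy only in words: contestants who arbitrarily lost status when the rating color-coding changed are compared with unaffected contestants, before and after the change. For concreteness, the sketch below writes out a generic two-way fixed-effects difference-in-differences specification consistent with that description; every symbol and variable name is an illustrative assumption, not notation taken from the paper.

$$
\text{Hacks}_{it} = \beta\,\bigl(\text{StatusDrop}_i \times \text{Post}_t\bigr) + \gamma' X_{it} + \alpha_i + \tau_t + \varepsilon_{it}
$$

Here $\text{Hacks}_{it}$ would count the evaluations (hack attempts) directed at contestant $i$'s submissions in contest $t$; $\text{StatusDrop}_i$ flags contestants whose displayed rating tier fell with the color-coding change; $\text{Post}_t$ marks contests after the change; $X_{it}$ collects controls such as short-run submission quality; and $\alpha_i$, $\tau_t$ are contestant and contest fixed effects. Under this reading, $\beta > 0$ would correspond to the reported finding that status losers attract systematically more evaluations despite unchanged submission quality.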
Source Journal
MIS Quarterly (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 13.30
Self-citation rate: 4.10%
Articles per year: 36
Review time: 6-12 weeks
About the journal: The editorial objective of MIS Quarterly is to enhance and communicate knowledge related to the development of IT-based services, the management of IT resources, and the use, impact, and economics of IT with managerial, organizational, and societal implications, and to address professional issues affecting the Information Systems (IS) field as a whole.