Nearly Optimal Committee Selection For Bias Minimization

Yang Cai, Eric Xue
Published in Proceedings of the 24th ACM Conference on Economics and Computation, June 20, 2023.
DOI: 10.1145/3580507.3597761

Abstract

We study the model of metric voting initially proposed by Feldman et al. [2020]. In this model, experts and candidates are located in a metric space, and each candidate possesses a quality that is independent of her location. An expert evaluates each candidate as the candidate's quality less the distance between the candidate and the expert in the metric space. The expert votes for her favorite candidate. Naturally, the expert prefers candidates that are "similar" to herself, i.e., close to her location in the metric space, thus creating bias in the vote. The goal is to select a voting rule and a committee of experts to mitigate the bias. More specifically, given m candidates, what is the minimum number of experts needed to ensure that the voting rule selects a candidate whose quality is at most ε worse than the best one? Our first main result is a new way to select the committee using exponentially fewer experts than the method proposed in Feldman et al. [2020]. Our second main result is a novel construction that substantially improves the lower bound on the committee size. Indeed, our upper and lower bounds match in terms of m, the number of candidates, and ε, the desired accuracy, for general convex normed spaces, and differ by a multiplicative factor that depends only on the dimension of the underlying normed space and is independent of the other parameters of the problem. We further extend the nearly matching upper and lower bounds to the setting in which each expert returns a ranking of her top k candidates and we wish to choose ℓ candidates with cumulative quality at most ε worse than that of the best set of ℓ candidates, settling an open problem of Feldman et al. [2020]. Finally, we consider the setting where there are multiple rounds of voting. We show that by introducing another round of voting, the number of experts needed to guarantee the selection of an ε-optimal candidate becomes independent of the number of candidates.
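The biased evaluation at the heart of the model (a candidate's perceived value is her quality minus her distance to the expert) can be illustrated with a toy simulation. The sketch below is not the paper's committee-selection rule or any of its bounds; it simply instantiates the model on the real line with uniformly random positions and qualities, uses plurality as an illustrative voting rule, and reports the quality gap between the true best candidate and the elected one — the "bias" the paper seeks to drive below ε.

```python
import random

def simulate_metric_vote(num_experts=5, num_candidates=4, seed=0):
    """Toy simulation of metric voting on the real line [0, 1].

    Each expert evaluates candidate c as quality[c] - |expert - c| and
    votes for her favorite; the committee elects the plurality winner.
    Returns the quality gap between the best and the elected candidate.
    """
    rng = random.Random(seed)
    cand_pos = [rng.uniform(0, 1) for _ in range(num_candidates)]
    cand_quality = [rng.uniform(0, 1) for _ in range(num_candidates)]
    expert_pos = [rng.uniform(0, 1) for _ in range(num_experts)]

    votes = [0] * num_candidates
    for x in expert_pos:
        # Biased evaluation: quality minus metric distance to the expert.
        favorite = max(range(num_candidates),
                       key=lambda c: cand_quality[c] - abs(x - cand_pos[c]))
        votes[favorite] += 1

    winner = max(range(num_candidates), key=lambda c: votes[c])
    best = max(range(num_candidates), key=lambda c: cand_quality[c])
    return cand_quality[best] - cand_quality[winner]

gap = simulate_metric_vote()
print(f"quality gap between best and elected candidate: {gap:.3f}")
```

With qualities in [0, 1], the gap is always between 0 and 1; the paper's question, in these terms, is how few experts suffice (and what voting rule to use) so that the gap is guaranteed to be at most ε.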