Automating Fairness Configurations for Machine Learning

Haipei Sun, Yiding Yang, Yanying Li, Huihui Liu, Xinchao Wang, Wendy Hui Wang
{"title":"Automating Fairness Configurations for Machine Learning","authors":"Haipei Sun, Yiding Yang, Yanying Li, Huihui Liu, Xinchao Wang, Wendy Hui Wang","doi":"10.1145/3442442.3452301","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed substantial efforts devoted to ensuring algorithmic fairness for machine learning (ML), spanning from formalizing fairness metrics to designing fairness-enhancing methods. These efforts lead to numerous possible choices in terms of fairness definitions and fairness-enhancing algorithms. However, finding the best fairness configuration (including both fairness definition and fairness-enhancing algorithms) for a specific ML task is extremely challenging in practice. The large design space of fairness configurations combined with the tremendous cost required for fairness deployment poses a major obstacle to this endeavor. This raises an important issue: can we enable automated fairness configurations for a new ML task on a potentially unseen dataset? To this point, we design Auto-Fair, a system that provides recommendations of fairness configurations by ranking all fairness configuration candidates based on their evaluations on prior ML tasks. At the core of Auto-Fair lies a meta-learning model that ranks all fairness configuration candidates by utilizing: (1) a set of meta-features that are derived from both datasets and fairness configurations that were used in prior evaluations; and (2) the knowledge accumulated from previous evaluations of fairness configurations on related ML tasks and datasets. The experimental results on 350 different fairness configurations and 1,500 data samples demonstrate the effectiveness of Auto-Fair.","PeriodicalId":129420,"journal":{"name":"Companion Proceedings of the Web Conference 2021","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Companion Proceedings of the Web Conference 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3442442.3452301","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Recent years have witnessed substantial efforts devoted to ensuring algorithmic fairness for machine learning (ML), spanning from formalizing fairness metrics to designing fairness-enhancing methods. These efforts have led to numerous possible choices of fairness definitions and fairness-enhancing algorithms. However, finding the best fairness configuration (comprising both a fairness definition and a fairness-enhancing algorithm) for a specific ML task is extremely challenging in practice. The large design space of fairness configurations, combined with the substantial cost of deploying and evaluating each one, poses a major obstacle to this endeavor. This raises an important question: can we automate the choice of fairness configuration for a new ML task on a potentially unseen dataset? To this end, we design Auto-Fair, a system that recommends fairness configurations by ranking all candidate configurations based on their evaluations on prior ML tasks. At the core of Auto-Fair lies a meta-learning model that ranks all fairness configuration candidates by utilizing: (1) a set of meta-features derived from both the datasets and the fairness configurations used in prior evaluations; and (2) the knowledge accumulated from previous evaluations of fairness configurations on related ML tasks and datasets. Experimental results on 350 different fairness configurations and 1,500 data samples demonstrate the effectiveness of Auto-Fair.
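The abstract does not spell out the internals of the meta-learning model, but the ranking idea it describes can be illustrated with a minimal sketch: derive meta-features from a dataset, pair them with an encoding of each candidate fairness configuration, fit a predictor on the scores observed in prior evaluations, and rank candidates for an unseen dataset by predicted score. The helper names, the specific meta-features, and the use of a random-forest regressor below are illustrative assumptions, not the Auto-Fair implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def dataset_meta_features(X, y, sensitive):
    """Toy dataset-level meta-features: size, dimensionality, positive rate,
    and the base-rate gap between two sensitive groups (a demographic-parity-
    style signal). Real systems would use a richer feature set."""
    rate_a = y[sensitive == 0].mean()
    rate_b = y[sensitive == 1].mean()
    return np.array([len(X), X.shape[1], y.mean(), abs(rate_a - rate_b)])


def fit_meta_ranker(prior_features, prior_scores):
    """Fit a predictor on prior evaluations. Each row of prior_features is a
    dataset meta-feature vector concatenated with an encoding of one fairness
    configuration; prior_scores holds the observed evaluation score."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(prior_features, prior_scores)
    return model


def rank_configs(model, new_dataset_mf, config_encodings):
    """Score every candidate configuration on the unseen dataset and return
    candidate indices ordered best-first."""
    rows = np.array([np.concatenate([new_dataset_mf, c]) for c in config_encodings])
    scores = model.predict(rows)
    return np.argsort(-scores)
```

In this sketch, recommending a configuration for a new task amounts to computing `dataset_meta_features` on the new dataset and calling `rank_configs` with the encodings of all candidate configurations; the quality of the ranking depends entirely on how informative the meta-features are and how related the prior evaluations are to the new task.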