Fairness warnings and fair-MAML: learning fairly with minimal data

Dylan Slack, Sorelle A. Friedler, Emile Givental
{"title":"Fairness warnings and fair-MAML: learning fairly with minimal data","authors":"Dylan Slack, Sorelle A. Friedler, Emile Givental","doi":"10.1145/3351095.3372839","DOIUrl":null,"url":null,"abstract":"Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain. The second is a fair meta-learning approach to train models that can be quickly fine-tuned to specific tasks from only a few number of sample instances while balancing fairness and accuracy. We demonstrate experimentally the individual utility of each model using relevant baselines and provide the first experiment to our knowledge of K-shot fairness, i.e. training a fair model on a new task with only K data points. Then, we illustrate the usefulness of both algorithms as a combined method for training models from a few data points on new tasks while using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.","PeriodicalId":377829,"journal":{"name":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"42","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3351095.3372839","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 42

Abstract

Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain. The second is a fair meta-learning approach to train models that can be quickly fine-tuned to specific tasks from only a small number of sample instances while balancing fairness and accuracy. We demonstrate experimentally the individual utility of each method against relevant baselines and provide the first experiment, to our knowledge, of K-shot fairness, i.e., training a fair model on a new task with only K data points. Then, we illustrate the usefulness of both algorithms as a combined method for training models from a few data points on new tasks while using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.
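To make the Fairness Warnings idea concrete, the snippet below is a minimal sketch (not the authors' implementation): it probes a fixed, black-box classifier under simulated distribution shift and fits an interpretable model over the shift parameters to predict when fairness fails. The mean-shift perturbation scheme, the demographic-parity metric, the threshold `tau`, and the choice of logistic regression as the warning model are all illustrative assumptions.

```python
# Sketch of a Fairness-Warnings-style procedure (illustrative assumptions
# throughout; not the paper's exact design).
import numpy as np
from sklearn.linear_model import LogisticRegression

def dp_gap(model, X, z):
    # Demographic-parity gap of a black-box model (sklearn-style .predict
    # assumed) between protected groups z == 1 and z == 0.
    yhat = model.predict(X)
    return abs(yhat[z == 1].mean() - yhat[z == 0].mean())

def fairness_warnings(model, X, z, n_shifts=500, max_shift=1.0, tau=0.1,
                      seed=0):
    rng = np.random.default_rng(seed)
    shifts, unfair = [], []
    for _ in range(n_shifts):
        # Simulated covariate shift: add a random offset to each feature mean.
        delta = rng.uniform(-max_shift, max_shift, size=X.shape[1])
        shifts.append(delta)
        unfair.append(dp_gap(model, X + delta, z) > tau)
    # Interpretable warning model over the shift parameters. Assumes the
    # sampled shifts produce both fair and unfair outcomes.
    return LogisticRegression().fit(np.array(shifts), np.array(unfair))
```

Inspecting the fitted coefficients (`warner.coef_`) then yields per-feature boundary conditions: features with large positive coefficients mark directions of shift under which the model is predicted to stop being fair.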
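For Fair-MAML, the following is a minimal sketch of a fairness-regularized meta-update for a linear classifier, assuming a demographic-parity penalty weighted by `gamma` added to each task's binary cross-entropy, with standard second-order MAML gradients. The task format (support/query splits with a binary protected attribute `z`) and all hyperparameters are illustrative assumptions rather than the authors' code.

```python
# Sketch of a Fair-MAML-style meta-training loop for a linear model.
import torch

def dp_gap(probs, z):
    # Soft demographic-parity gap between groups z == 1 and z == 0.
    # Assumes both groups appear in the batch.
    return (probs[z == 1].mean() - probs[z == 0].mean()).abs()

def fair_loss(w, b, x, y, z, gamma):
    # Binary cross-entropy plus a fairness penalty weighted by gamma.
    probs = torch.sigmoid(x @ w + b)
    bce = torch.nn.functional.binary_cross_entropy(probs, y)
    return bce + gamma * dp_gap(probs, z)

def fair_maml(tasks, dim, inner_lr=0.1, outer_lr=0.01, gamma=1.0, steps=100):
    # tasks: list of ((x_sup, y_sup, z_sup), (x_qry, y_qry, z_qry)) tuples,
    # one K-shot support/query split per task; y is float, z is bool.
    w = torch.zeros(dim, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=outer_lr)
    for _ in range(steps):
        opt.zero_grad()
        for (xs, ys, zs), (xq, yq, zq) in tasks:
            # Inner loop: one gradient step on the K support examples.
            gw, gb = torch.autograd.grad(
                fair_loss(w, b, xs, ys, zs, gamma), (w, b), create_graph=True)
            w_fast, b_fast = w - inner_lr * gw, b - inner_lr * gb
            # Outer loop: backprop the query loss of the adapted parameters
            # through the inner update (second-order MAML).
            fair_loss(w_fast, b_fast, xq, yq, zq, gamma).backward()
        opt.step()
    return w, b
```

Setting `create_graph=False` in the inner step would drop the second-order terms and recover the cheaper first-order MAML approximation; fine-tuning to a new task amounts to running the inner step alone on its K support points.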