Fairness in Algorithmic Decision Making

Abhijnan Chakraborty, K. Gummadi
DOI: 10.1145/3371158.3371234
Published in: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, 2020-01-05
Citations: 3

Abstract

Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking applicants), judiciary (offender profiling) and journalism (recommending news stories). Consequently, in recent times, multiple research works have attempted to identify (measure) bias or unfairness in algorithmic decisions and to propose mechanisms to control (mitigate) such biases. In this tutorial, we introduce the related literature to the CoDS-COMAD community. Moving beyond the more prevalent works on fairness in classification or regression tasks, we explore fairness issues in decision-making scenarios where we need to account for the preferences of multiple stakeholders. Specifically, we cover our own past and ongoing work on fairness in recommendation and matching systems. We discuss notions of fairness in these contexts and propose techniques to achieve them. Additionally, we briefly touch upon the possibility of utilizing the user interface of platforms (choice architecture) to achieve fair outcomes in certain scenarios. We conclude the tutorial with a list of open questions and directions for future work.
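To make the "identify (measure) bias" step concrete, here is a minimal sketch (not taken from the tutorial itself) of one widely used group-fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups of decision subjects. The function name and the loan-approval data below are illustrative assumptions, not material from the paper.

```python
# Hypothetical illustration: demographic parity difference, i.e. the gap in
# positive-prediction rates between two groups. A value near 0 means the
# classifier grants positive outcomes at similar rates to both groups.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between the two groups
    present in `groups`. `y_pred` holds binary decisions (1 = positive)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)  # positive rate within group g
    a, b = rates.values()
    return abs(a - b)

# Example: loan approvals (1 = approve) for applicants from groups "A" and "B".
# Group A is approved at rate 3/4, group B at rate 1/4, so the gap is 0.5.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

Many of the mitigation mechanisms the abstract alludes to work by constraining or penalizing exactly this kind of rate gap during training or post-processing.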