AI and Algorithmic Bias: Source, Detection, Mitigation and Implications

Runshan Fu, Yan Huang, Param Vir Singh
{"title":"人工智能和算法偏差:来源、检测、缓解和影响","authors":"Runshan Fu, Yan Huang, Param Vir Singh","doi":"10.2139/ssrn.3681517","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) and machine learning (ML) algorithms are widely used throughout our economy in making decisions that have far-reaching impacts on employment, education, access to credit, and other areas. Initially considered neutral and fair, ML algorithms have recently been found increasingly biased, creating and perpetuating structural inequalities in society. With the rising concerns about algorithmic bias, a growing body of literature attempts to understand and resolve the issue of algorithmic bias. In this tutorial, we discuss five important aspects of algorithmic bias. We start with its definition and the notions of fairness policy makers, practitioners, and academic researchers have used and proposed. Next, we note the challenges in identifying and detecting algorithmic bias given the observed decision outcome, and we describe methods for bias detection. We then explain the potential sources of algorithmic bias and review several bias-correction methods. Finally, we discuss how agents’ strategic behavior may lead to biased societal outcomes, even when the algorithm itself is unbiased. We conclude by discussing open questions and future research directions.","PeriodicalId":189628,"journal":{"name":"InfoSciRN: Machine Learning (Sub-Topic)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"AI and Algorithmic Bias: Source, Detection, Mitigation and Implications\",\"authors\":\"Runshan Fu, Yan Huang, Param Vir Singh\",\"doi\":\"10.2139/ssrn.3681517\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) and machine learning (ML) algorithms are widely used throughout our economy in making decisions that have far-reaching impacts on employment, education, access to credit, and other areas. Initially considered neutral and fair, ML algorithms have recently been found increasingly biased, creating and perpetuating structural inequalities in society. With the rising concerns about algorithmic bias, a growing body of literature attempts to understand and resolve the issue of algorithmic bias. In this tutorial, we discuss five important aspects of algorithmic bias. We start with its definition and the notions of fairness policy makers, practitioners, and academic researchers have used and proposed. Next, we note the challenges in identifying and detecting algorithmic bias given the observed decision outcome, and we describe methods for bias detection. We then explain the potential sources of algorithmic bias and review several bias-correction methods. Finally, we discuss how agents’ strategic behavior may lead to biased societal outcomes, even when the algorithm itself is unbiased. 
We conclude by discussing open questions and future research directions.\",\"PeriodicalId\":189628,\"journal\":{\"name\":\"InfoSciRN: Machine Learning (Sub-Topic)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"InfoSciRN: Machine Learning (Sub-Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3681517\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"InfoSciRN: Machine Learning (Sub-Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3681517","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Artificial intelligence (AI) and machine learning (ML) algorithms are widely used throughout our economy in making decisions that have far-reaching impacts on employment, education, access to credit, and other areas. Initially considered neutral and fair, ML algorithms have recently been found increasingly biased, creating and perpetuating structural inequalities in society. With the rising concerns about algorithmic bias, a growing body of literature attempts to understand and resolve the issue of algorithmic bias. In this tutorial, we discuss five important aspects of algorithmic bias. We start with its definition and the notions of fairness policy makers, practitioners, and academic researchers have used and proposed. Next, we note the challenges in identifying and detecting algorithmic bias given the observed decision outcome, and we describe methods for bias detection. We then explain the potential sources of algorithmic bias and review several bias-correction methods. Finally, we discuss how agents’ strategic behavior may lead to biased societal outcomes, even when the algorithm itself is unbiased. We conclude by discussing open questions and future research directions.
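To make the group-fairness notions and the bias-detection step mentioned in the abstract concrete, the sketch below (not taken from the paper) computes two commonly used fairness metrics, the demographic parity difference and the equal opportunity difference, on synthetic data. The metric choices, function names, and data are illustrative assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch of two common group-fairness metrics used in bias
# detection. The data is synthetic and the metric choices are assumptions
# for demonstration; they are not the paper's own detection methods.

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_a - tpr_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)    # protected attribute (0/1)
    y_true = rng.integers(0, 2, size=1000)   # actual outcomes
    # Synthetic predictions that slightly favor group 0, so a gap is visible.
    y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.5)).astype(int)

    print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))
```

In this kind of audit, a gap near zero is read as parity with respect to the protected attribute, while a large gap flags the prediction rule for closer inspection; the paper's detection methods address the harder setting in which bias must be inferred from observed decision outcomes alone.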