Fairness in AI applications

C. Shelley
DOI: 10.1109/istas52410.2021.9629140
Published in: 2021 IEEE International Symposium on Technology and Society (ISTAS)
Publication date: 2021-10-28
Citation count: 0

Abstract

Applications of Artificial Intelligence (AI) that have broad, social impact for many people have recently increased greatly in number. They will continue to increase in ubiquity and impact for some time to come. In conjunction with this increase, many scholars have studied the nature of these impacts, including problems of fairness. Here, fairness refers to conflicts of interest between social groups that result from the configuration of these AI systems. One focus of research has been to define these fairness problems and to quantify them in a way that lends itself to calculation of fair outcomes. The purpose of this presentation is to show that this issue of fairness in AI is consistent with fairness problems posed by technological design in general and that addressing these problems goes beyond what can be readily quantified and calculated. For example, many such problems may be best resolved by forms of public consultation. This point is clarified by presenting an analytical tool, the Fairness Impact Assessment, and examples from AI and elsewhere.