Revisiting Transparency and Fairness in Algorithmic Systems Through the Lens of Public Education and Engagement

Motahhare Eslami
{"title":"Revisiting Transparency and Fairness in Algorithmic Systems Through the Lens of Public Education and Engagement","authors":"Motahhare Eslami","doi":"10.1145/3430895.3462228","DOIUrl":null,"url":null,"abstract":"The power, opacity, and bias of algorithmic systems have opened up new research areas for bringing transparency, fairness, and accountability into these systems. In this talk, I will revisit these lines of work, and argue that while they are critical to making algorithmic systems responsible, fresh perspectives are needed when these efforts fall short. I particularly discuss the necessity of algorithmic literacy and public education about the shortcomings of existing transparency and fairness efforts in algorithmic systems in order to enable everyday users to make more informed decisions in interactions with these systems. First, I discuss how algorithmic transparency, when not designed carefully, can be more harmful than helpful, and that we need to inform users about the limitations of transparency mechanisms provided in algorithmic systems. Second, I will talk about the current approaches tackling algorithmic bias in algorithmic systems, including bias detection and bias mitigation, and their limitations. I particularly show that the current algorithm auditing techniques that mainly rely on experts, and are conducted outside of everyday use of algorithmic systems, fall short in detecting biases that emerge in real-world contexts of use, and in the presence of complex social dynamics over time. This leads to the idea of \"everyday algorithm auditing\" that involves educating and enabling everyday users to understand, detect and/or interrogate biased and harmful algorithmic behaviors via their day-to-day interactions with algorithmic systems. I then take a new perspective on the bias mitigation efforts that endeavor to bring fairness to algorithmic systems, and argue that there are many cases that mitigating algorithmic bias is quite challenging, if not impossible. I propose the concept of \"bias transparency\" that centers bias awareness in algorithmic systems, particularly in high-stakes decision-making systems, by educating the public about potential biases these systems can introduce to users' decisions. I will end by discussing the importance of educating youth and fostering literacy around algorithmic systems from K-12 to prepare everyday users in their interactions with algorithmic systems.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"177 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Eighth ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3430895.3462228","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The power, opacity, and bias of algorithmic systems have opened up new research areas for bringing transparency, fairness, and accountability into these systems. In this talk, I will revisit these lines of work and argue that while they are critical to making algorithmic systems responsible, fresh perspectives are needed when these efforts fall short. I particularly discuss the necessity of algorithmic literacy and public education about the shortcomings of existing transparency and fairness efforts in algorithmic systems, in order to enable everyday users to make more informed decisions in their interactions with these systems. First, I discuss how algorithmic transparency, when not designed carefully, can be more harmful than helpful, and argue that we need to inform users about the limitations of the transparency mechanisms provided in algorithmic systems. Second, I will talk about the current approaches to tackling algorithmic bias in algorithmic systems, including bias detection and bias mitigation, and their limitations. I particularly show that current algorithm auditing techniques, which mainly rely on experts and are conducted outside of the everyday use of algorithmic systems, fall short in detecting biases that emerge in real-world contexts of use and in the presence of complex social dynamics over time. This leads to the idea of "everyday algorithm auditing," which involves educating and enabling everyday users to understand, detect, and/or interrogate biased and harmful algorithmic behaviors through their day-to-day interactions with algorithmic systems. I then take a new perspective on the bias mitigation efforts that endeavor to bring fairness to algorithmic systems, and argue that there are many cases in which mitigating algorithmic bias is quite challenging, if not impossible. I propose the concept of "bias transparency," which centers bias awareness in algorithmic systems, particularly in high-stakes decision-making systems, by educating the public about the potential biases these systems can introduce into users' decisions. I will end by discussing the importance of educating youth and fostering literacy around algorithmic systems from K-12 onward to prepare everyday users for their interactions with algorithmic systems.
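
As a concrete illustration of the kind of simple bias-detection check that algorithm audits (expert-led or everyday) often build on, the sketch below compares positive-outcome rates across two groups and flags a large gap. This example is not from the talk: the group labels, the toy decision log, and the 0.8 "four-fifths rule" threshold are illustrative assumptions only.

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group, decision) pairs, with decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, group_a, group_b):
    """Ratio of the lower positive rate to the higher one; 1.0 means parity."""
    lo, hi = sorted((rates[group_a], rates[group_b]))
    return lo / hi if hi else 1.0

if __name__ == "__main__":
    # Hypothetical audit log of (group, decision) pairs; purely illustrative.
    decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = positive_rate_by_group(decisions)
    ratio = disparate_impact_ratio(rates, "A", "B")
    verdict = "potential disparity" if ratio < 0.8 else "within threshold"
    print(rates, round(ratio, 2), verdict)
```

A check like this captures only one narrow notion of disparity; as the talk argues, many biases surface only in real-world contexts of use and over time, which is precisely where everyday algorithm auditing and bias transparency come in.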