Governing Algorithmic Systems with Impact Assessments: Six Observations

E. A. Watkins, E. Moss, Jacob Metcalf, Ranjit Singh, M. C. Elish
{"title":"Governing Algorithmic Systems with Impact Assessments: Six Observations","authors":"E. A. Watkins, E. Moss, Jacob Metcalf, Ranjit Singh, M. C. Elish","doi":"10.1145/3461702.3462580","DOIUrl":null,"url":null,"abstract":"Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts of measuring and governing AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools-through Algorithmic Impact Assessments (AIA)-is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; they raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what type of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; how assessments are made publicly available, and what the outputs of the assessment might be; all shape the forms of accountability that AIAs engender. Because AlAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.","PeriodicalId":197336,"journal":{"name":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3461702.3462580","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts to measure and govern AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools, through Algorithmic Impact Assessments (AIAs), is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what type of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; and how assessments are made publicly available, and what the outputs of the assessment might be, all shape the forms of accountability that AIAs engender. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.