Design for operator contestability: control over autonomous systems by introducing defeaters

Herman Veluwenkamp, Stefan Buijsman
*AI and Ethics*, vol. 5, no. 4, pp. 3699–3711. Published 2025-02-07.
DOI: 10.1007/s43681-025-00657-0
Full text: https://link.springer.com/article/10.1007/s43681-025-00657-0
PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00657-0.pdf

Abstract

This paper introduces the concept of Operator Contestability in AI systems: the principle that those overseeing AI systems (operators) must have the necessary control to be accountable for the decisions made by these algorithms. We argue that designers have a duty to ensure operator contestability. We demonstrate how this duty can be fulfilled by applying the 'Design for Defeaters' framework, which provides strategies to embed tools within AI systems that enable operators to challenge decisions. Defeaters are designed to contest either the justification for the AI's data inputs (undercutting defeaters) or the validity of the conclusions drawn from that data (rebutting defeaters). To illustrate the necessity and application of this framework, we examine case studies such as AI-driven recruitment processes, where operators need tools and authority to uncover and address potential biases, and autonomous driving systems, where real-time decision-making is crucial. The paper argues that operator contestability requires ensuring that operators have (1) epistemic access to the relevant normative reasons and (2) the authority and cognitive capacity to act on these defeaters. By addressing these challenges, the paper emphasizes the importance of designing AI systems in a way that enables operators to effectively contest AI decisions, thereby ensuring that the appropriate individuals can take responsibility for the outcomes of human-AI interactions.
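The distinction between undercutting and rebutting defeaters can be made concrete in code. The following is a minimal, hypothetical sketch (not the authors' implementation; all class and field names are assumptions for illustration) of an AI decision record that lets an operator file either kind of defeater, suspending the decision pending review:

```python
from dataclasses import dataclass, field
from enum import Enum


class DefeaterKind(Enum):
    UNDERCUTTING = "undercutting"  # contests the justification for a data input
    REBUTTING = "rebutting"        # contests the conclusion drawn from the data


@dataclass
class Defeater:
    kind: DefeaterKind
    target: str  # the contested input name (undercutting) or conclusion (rebutting)
    reason: str  # the operator's normative reason for contesting


@dataclass
class Decision:
    inputs: dict
    conclusion: str
    defeaters: list = field(default_factory=list)

    def contest(self, defeater: Defeater) -> None:
        """Record an operator's defeater against this decision."""
        self.defeaters.append(defeater)

    @property
    def stands(self) -> bool:
        # A decision stands only while no defeater has been raised against it.
        return not self.defeaters


# Example from the recruitment case: an operator contests a rejection
# because one of its inputs comes from a model with a known bias.
decision = Decision(
    inputs={"cv_score": 0.91, "video_interview_score": 0.34},
    conclusion="reject",
)
decision.contest(Defeater(
    kind=DefeaterKind.UNDERCUTTING,
    target="video_interview_score",
    reason="scoring model shows known bias against non-native speakers",
))
print(decision.stands)  # False: the decision is suspended pending review
```

The design choice here mirrors the paper's two requirements: the `reason` field presumes the operator has epistemic access to the relevant normative reasons, while the `contest` method presumes the operator has the authority to act on them.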
