Unravelling responsibility for AI

Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, Paul Noordhof, John McDermid, Ibrahim Habli
{"title":"Unravelling responsibility for AI","authors":"Zoe Porter ,&nbsp;Philippa Ryan ,&nbsp;Phillip Morgan ,&nbsp;Joanna Al-Qaddoumi ,&nbsp;Bernard Twomey ,&nbsp;Paul Noordhof ,&nbsp;John McDermid ,&nbsp;Ibrahim Habli","doi":"10.1016/j.jrt.2025.100124","DOIUrl":null,"url":null,"abstract":"<div><div>It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what ‘responsibility’ means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied with a graphical notation and general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation ‘Actor A is responsible for Occurrence O,’ the framework unravels the concept of responsibility to clarify that there are different possibilities of <em>who</em> is responsible for AI, <em>senses</em> in which they are responsible, and <em>aspects of events</em> they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.</div></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"23 ","pages":"Article 100124"},"PeriodicalIF":0.0000,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of responsible technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666659625000204","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what ‘responsibility’ means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and a general methodology for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation ‘Actor A is responsible for Occurrence O,’ the framework unravels the concept of responsibility to clarify that there are different possibilities as to who is responsible for AI, the senses in which they are responsible, and the aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds in discussing and addressing complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
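The three-part formulation lends itself naturally to a data-structure reading. As a minimal sketch only (the paper defines its own graphical notation, which is not reproduced here), the Python fragment below represents responsibility attributions as actor–sense–occurrence triples and groups them by occurrence to trace who bears which sense of responsibility for which aspect of an event. All actor, sense, and occurrence names are hypothetical, loosely inspired by the maritime scenario described in the abstract.

```python
# A minimal sketch (assumed, not the paper's actual notation) of the
# three-part formulation "Actor A is responsible, in sense S, for
# Occurrence O" as a network of typed attribution edges.
from dataclasses import dataclass

@dataclass(frozen=True)
class Attribution:
    actor: str       # who is (or may be) responsible
    sense: str       # the sense of responsibility (e.g. causal, legal, moral)
    occurrence: str  # the aspect of the event they are responsible for

# Hypothetical attributions for the fictitious collision scenario.
network = [
    Attribution("remote operator", "role responsibility",
                "monitoring the autonomous vessel"),
    Attribution("software supplier", "causal responsibility",
                "collision-avoidance output"),
    Attribution("operating company", "legal liability",
                "compensation for the crewed vessel"),
    Attribution("regulator", "moral accountability",
                "approving autonomous operation at sea"),
]

# Group attributions by occurrence to trace the different permutations
# of who is responsible, and in what sense, for each aspect of the event.
for occ in sorted({a.occurrence for a in network}):
    print(occ)
    for a in network:
        if a.occurrence == occ:
            print(f"  {a.actor} -> {a.sense}")
```

Representing attributions as explicit triples, rather than as a single undifferentiated notion of "responsibility", is one way of making the framework's central point concrete: the same occurrence can carry several distinct responsibility relations, held by different actors in different senses.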
Source journal: Journal of Responsible Technology (Information Systems, Artificial Intelligence, Human-Computer Interaction)
CiteScore: 3.60
Self-citation rate: 0.00%
Articles published: 0
Review time: 168 days