Enabling affordances for AI Governance

Siri Padmanabhan Poti, Christopher J Stanton
{"title":"人工智能治理的赋能能力","authors":"Siri Padmanabhan Poti,&nbsp;Christopher J Stanton","doi":"10.1016/j.jrt.2024.100086","DOIUrl":null,"url":null,"abstract":"<div><p>Organizations dealing with mission-critical AI based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human overseeing and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability and explainability. This paper conceptually models transformation of SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transformation of SDLC are available through concerted interventions such as technical debt management, shift-left approach and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems, and in building trustworthiness incrementally and hierarchically.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665962400012X/pdfft?md5=9bf6cc548743ad7d2d5c0830773f5145&pid=1-s2.0-S266665962400012X-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Enabling affordances for AI Governance\",\"authors\":\"Siri Padmanabhan Poti,&nbsp;Christopher J Stanton\",\"doi\":\"10.1016/j.jrt.2024.100086\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Organizations dealing with mission-critical AI based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human overseeing and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability and explainability. This paper conceptually models transformation of SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transformation of SDLC are available through concerted interventions such as technical debt management, shift-left approach and non-ephemeral artifacts. 
This paper contributes to the theory and practice of governance of autonomous systems, and in building trustworthiness incrementally and hierarchically.</p></div>\",\"PeriodicalId\":73937,\"journal\":{\"name\":\"Journal of responsible technology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S266665962400012X/pdfft?md5=9bf6cc548743ad7d2d5c0830773f5145&pid=1-s2.0-S266665962400012X-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of responsible technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S266665962400012X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of responsible technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266665962400012X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Organizations dealing with mission-critical, AI-based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations are required to embed trustworthiness and transparency in these systems, with human oversight and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability, and explainability. This paper conceptually models a transformation of the SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transforming the SDLC arise through concerted interventions such as technical debt management, a shift-left approach, and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems and to building trustworthiness incrementally and hierarchically.
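
To make the abstract's interventions slightly more concrete, the sketch below illustrates one way a "shift-left" assurance check and a "non-ephemeral artifact" might look in practice. It is not from the paper, which models these affordances conceptually rather than prescribing an implementation; the choice of Python and the names `AssuranceRecord` and `run_assurance_gate` are hypothetical assumptions for illustration only.

```python
# Minimal, hypothetical sketch (not from the paper): a shift-left assurance
# gate that runs checks early in the pipeline and persists a durable
# (non-ephemeral) JSON artifact that later governance reviews can audit.
import hashlib
import json
import time
from dataclasses import asdict, dataclass
from pathlib import Path
from typing import Callable, Dict


@dataclass
class AssuranceRecord:
    """Non-ephemeral artifact: persisted evidence of pre-deployment checks."""
    model_sha256: str
    checks: Dict[str, bool]
    passed: bool
    created_at: float


def run_assurance_gate(model_path: Path,
                       checks: Dict[str, Callable[[Path], bool]],
                       artifact_dir: Path) -> AssuranceRecord:
    """Run assurance checks before deployment (shift-left) and write a
    durable record keyed to a hash of the exact model binary checked."""
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    results = {name: bool(check(model_path)) for name, check in checks.items()}
    record = AssuranceRecord(
        model_sha256=digest,
        checks=results,
        passed=all(results.values()),
        created_at=time.time(),
    )
    artifact_dir.mkdir(parents=True, exist_ok=True)
    out_file = artifact_dir / f"assurance_{digest[:12]}.json"
    out_file.write_text(json.dumps(asdict(record), indent=2))
    return record


if __name__ == "__main__":
    # Stand-in model file and trivial checks; real checks would exercise
    # behavior, robustness, drift, fairness, and so on.
    model = Path("model.bin")
    model.write_bytes(b"dummy model weights")
    checks = {
        "model_file_exists": lambda p: p.exists(),
        "model_file_nonempty": lambda p: p.stat().st_size > 0,
    }
    record = run_assurance_gate(model, checks, Path("assurance_artifacts"))
    print("assurance gate passed:", record.passed)
```

In this sketch, the persisted JSON record plays the role of the non-ephemeral artifact: it outlives the pipeline run, is tied to the exact model it describes, and can be inspected by later oversight processes, which is the kind of governance affordance the abstract argues the SDLC should be transformed to provide.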

Source journal
Journal of Responsible Technology (Information Systems, Artificial Intelligence, Human-Computer Interaction)
CiteScore: 3.60
Self-citation rate: 0.00%
Articles published: 0
Review time: 168 days