Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness

Jordan Richard Schoenherr;Roba Abbas;Katina Michael;Pablo Rivas;Theresa Dirndorfer Anderson
{"title":"使用以人为本的方法设计人工智能:可信度的可解释性和准确性","authors":"Jordan Richard Schoenherr;Roba Abbas;Katina Michael;Pablo Rivas;Theresa Dirndorfer Anderson","doi":"10.1109/TTS.2023.3257627","DOIUrl":null,"url":null,"abstract":"One of the major criticisms of Artificial Intelligence is its lack of explainability. A claim is made by many critics that without knowing how an AI may derive a result or come to a given conclusion, it is impossible to trust in its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human, thereby demonstrating the importance of human-centered AI (HCAI). The HCAI approach advocates for a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. This Editorial then presents a discussion on ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks and responsibilities associated with AI design. We conclude by presenting papers in the Special Issue and their contribution, pointing to future research endeavors.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"9-23"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8566059/10086685/10086944.pdf","citationCount":"1","resultStr":"{\"title\":\"Designing AI Using a Human-Centered Approach: Explainability and Accuracy Toward Trustworthiness\",\"authors\":\"Jordan Richard Schoenherr;Roba Abbas;Katina Michael;Pablo Rivas;Theresa Dirndorfer Anderson\",\"doi\":\"10.1109/TTS.2023.3257627\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"One of the major criticisms of Artificial Intelligence is its lack of explainability. A claim is made by many critics that without knowing how an AI may derive a result or come to a given conclusion, it is impossible to trust in its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human, thereby demonstrating the importance of human-centered AI (HCAI). The HCAI approach advocates for a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. This Editorial then presents a discussion on ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks and responsibilities associated with AI design. 
We conclude by presenting papers in the Special Issue and their contribution, pointing to future research endeavors.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"4 1\",\"pages\":\"9-23\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/iel7/8566059/10086685/10086944.pdf\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10086944/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10086944/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

One of the major criticisms of Artificial Intelligence is its lack of explainability. Many critics claim that without knowing how an AI derives a result or reaches a given conclusion, it is impossible to trust its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and on how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human and thereby demonstrates the importance of human-centered AI (HCAI). The HCAI approach advocates a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. The Editorial then discusses ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks, and responsibilities associated with AI design. We conclude by presenting the papers in the Special Issue and their contributions, pointing to future research endeavors.