Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank

Lars Sipos, Ulrike Schäfer, Katrin Glinka, C. Müller-Birn
{"title":"确定最终用户的解释需求:XAI题库的应用和扩展","authors":"Lars Sipos, Ulrike Schäfer, Katrin Glinka, C. Müller-Birn","doi":"10.1145/3603555.3608551","DOIUrl":null,"url":null,"abstract":"Explainable Artificial Intelligence (XAI) is concerned with making the decisions of AI systems interpretable to humans. Explanations are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts, i.e., art historians. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use. We hope that this extension will enable HCI researchers and practitioners to use the XAIQB in practice and may provide a basis for future studies on the identification of explanation needs in different contexts.","PeriodicalId":132553,"journal":{"name":"Proceedings of Mensch und Computer 2023","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Identifying Explanation Needs of End-users: Applying and Extending the XAI Question Bank\",\"authors\":\"Lars Sipos, Ulrike Schäfer, Katrin Glinka, C. Müller-Birn\",\"doi\":\"10.1145/3603555.3608551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainable Artificial Intelligence (XAI) is concerned with making the decisions of AI systems interpretable to humans. Explanations are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. 
We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts, i.e., art historians. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use. We hope that this extension will enable HCI researchers and practitioners to use the XAIQB in practice and may provide a basis for future studies on the identification of explanation needs in different contexts.\",\"PeriodicalId\":132553,\"journal\":{\"name\":\"Proceedings of Mensch und Computer 2023\",\"volume\":\"68 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of Mensch und Computer 2023\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3603555.3608551\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of Mensch und Computer 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3603555.3608551","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Explainable Artificial Intelligence (XAI) is concerned with making the decisions of AI systems interpretable to humans. Explanations are typically developed by AI experts and focus on algorithmic transparency and the inner workings of AI systems. Research has shown that such explanations do not meet the needs of users who do not have AI expertise. As a result, explanations are often ineffective in making system decisions interpretable and understandable. We aim to strengthen a socio-technical view of AI by following a Human-Centered Explainable Artificial Intelligence (HC-XAI) approach, which investigates the explanation needs of end-users (i.e., subject matter experts and lay users) in specific usage contexts. One of the most influential works in this area is the XAI Question Bank (XAIQB) by Liao et al. The authors propose a set of questions that end-users might ask when using an AI system, which in turn is intended to help developers and designers identify and address explanation needs. Although the XAIQB is widely referenced, there are few reports of its use in practice. In particular, it is unclear to what extent the XAIQB sufficiently captures the explanation needs of end-users and what potential problems exist in the practical application of the XAIQB. To explore these open questions, we used the XAIQB as the basis for analyzing 12 think-aloud software explorations with subject matter experts, i.e., art historians. We investigated the suitability of the XAIQB as a tool for identifying explanation needs in a specific usage context. Our analysis revealed a number of explanation needs that were missing from the question bank, but that emerged repeatedly as our study participants interacted with an AI system. We also found that some of the XAIQB questions were difficult to distinguish and required interpretation during use. Our contribution is an extension of the XAIQB with 11 new questions. In addition, we have expanded the descriptions of all new and existing questions to facilitate their use. We hope that this extension will enable HCI researchers and practitioners to use the XAIQB in practice and may provide a basis for future studies on the identification of explanation needs in different contexts.