User-centric evaluation of explainability of AI with and for humans: A comprehensive empirical study

IF 5.1 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, CYBERNETICS
Szymon Bobek, Paloma Korycińska, Monika Krakowska, Maciej Mozolewski, Dorota Rak, Magdalena Zych, Magdalena Wójcik, Grzegorz J. Nalepa
{"title":"以用户为中心的人工智能可解释性评价:一项全面的实证研究","authors":"Szymon Bobek ,&nbsp;Paloma Korycińska ,&nbsp;Monika Krakowska ,&nbsp;Maciej Mozolewski ,&nbsp;Dorota Rak ,&nbsp;Magdalena Zych ,&nbsp;Magdalena Wójcik ,&nbsp;Grzegorz J. Nalepa","doi":"10.1016/j.ijhcs.2025.103625","DOIUrl":null,"url":null,"abstract":"<div><div>This study is located in the Human-Centered Artificial Intelligence (HCAI) and focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms, specifically investigating how humans understand and interact with the explanations provided by these algorithms. To achieve this, we employed a multi-disciplinary approach that included state-of-the-art research methods from social sciences to measure the comprehensibility of explanations generated by a state-of-the-art machine learning model, specifically the Gradient Boosting Classifier (XGBClassifier). We conducted an extensive empirical user study involving interviews with 39 participants from three different groups, each with varying expertise in data science, data visualisation, and domain-specific knowledge related to the dataset used for training the machine learning model. Participants were asked a series of questions to assess their understanding of the model’s explanations. To ensure replicability, we built the model using a publicly available dataset from the University of California Irvine Machine Learning Repository, focusing on edible and non-edible mushrooms. Our findings reveal limitations in existing XAI methods and confirm the need for new design principles and evaluation techniques that address the specific information needs and user perspectives of different classes of artificial intelligence (AI) stakeholders. We believe that the results of our research and the cross-disciplinary methodology we developed can be successfully adapted to various data types and user profiles, thus promoting dialogue and address opportunities in HCAI research. To support this, we are making the data resulting from our study publicly available.</div></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"205 ","pages":"Article 103625"},"PeriodicalIF":5.1000,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"User-centric evaluation of explainability of AI with and for humans: A comprehensive empirical study\",\"authors\":\"Szymon Bobek ,&nbsp;Paloma Korycińska ,&nbsp;Monika Krakowska ,&nbsp;Maciej Mozolewski ,&nbsp;Dorota Rak ,&nbsp;Magdalena Zych ,&nbsp;Magdalena Wójcik ,&nbsp;Grzegorz J. Nalepa\",\"doi\":\"10.1016/j.ijhcs.2025.103625\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This study is located in the Human-Centered Artificial Intelligence (HCAI) and focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms, specifically investigating how humans understand and interact with the explanations provided by these algorithms. To achieve this, we employed a multi-disciplinary approach that included state-of-the-art research methods from social sciences to measure the comprehensibility of explanations generated by a state-of-the-art machine learning model, specifically the Gradient Boosting Classifier (XGBClassifier). 
We conducted an extensive empirical user study involving interviews with 39 participants from three different groups, each with varying expertise in data science, data visualisation, and domain-specific knowledge related to the dataset used for training the machine learning model. Participants were asked a series of questions to assess their understanding of the model’s explanations. To ensure replicability, we built the model using a publicly available dataset from the University of California Irvine Machine Learning Repository, focusing on edible and non-edible mushrooms. Our findings reveal limitations in existing XAI methods and confirm the need for new design principles and evaluation techniques that address the specific information needs and user perspectives of different classes of artificial intelligence (AI) stakeholders. We believe that the results of our research and the cross-disciplinary methodology we developed can be successfully adapted to various data types and user profiles, thus promoting dialogue and address opportunities in HCAI research. To support this, we are making the data resulting from our study publicly available.</div></div>\",\"PeriodicalId\":54955,\"journal\":{\"name\":\"International Journal of Human-Computer Studies\",\"volume\":\"205 \",\"pages\":\"Article 103625\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Human-Computer Studies\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S107158192500182X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S107158192500182X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0

Abstract

This study is situated in the field of Human-Centered Artificial Intelligence (HCAI) and focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms, specifically investigating how humans understand and interact with the explanations provided by these algorithms. To achieve this, we employed a multi-disciplinary approach that included state-of-the-art research methods from the social sciences to measure the comprehensibility of explanations generated by a state-of-the-art machine learning model, specifically the Gradient Boosting Classifier (XGBClassifier). We conducted an extensive empirical user study involving interviews with 39 participants from three different groups, each with varying expertise in data science, data visualisation, and domain-specific knowledge related to the dataset used for training the machine learning model. Participants were asked a series of questions to assess their understanding of the model’s explanations. To ensure replicability, we built the model using a publicly available dataset from the University of California Irvine Machine Learning Repository, focusing on edible and non-edible mushrooms. Our findings reveal limitations in existing XAI methods and confirm the need for new design principles and evaluation techniques that address the specific information needs and user perspectives of different classes of artificial intelligence (AI) stakeholders. We believe that the results of our research and the cross-disciplinary methodology we developed can be successfully adapted to various data types and user profiles, thus promoting dialogue and addressing opportunities in HCAI research. To support this, we are making the data resulting from our study publicly available.
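
For readers who want to approximate the experimental setup, the sketch below shows one plausible way to train a gradient boosting classifier on the UCI mushroom data and produce a local explanation for a single prediction. It is a minimal illustration, not the authors' code: SHAP is used here only as a stand-in for the unnamed "commonly used XAI algorithms", the dataset is fetched via its OpenML mirror rather than directly from the UCI repository, and all hyperparameters are illustrative defaults rather than the settings reported in the study.

```python
# Minimal sketch (assumptions noted above; not the authors' code).
# Requires: pandas, scikit-learn, xgboost, shap.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap

# UCI mushroom dataset (edible vs. poisonous), fetched from its OpenML mirror.
data = fetch_openml("mushroom", version=1, as_frame=True)
X = pd.get_dummies(data.data)            # one-hot encode the categorical attributes
y = (data.target == "p").astype(int)     # 1 = poisonous, 0 = edible

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Gradient boosting classifier, as named in the abstract; hyperparameters are
# illustrative defaults, not the settings used in the study.
model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Local explanation for one test instance: SHAP attributions indicate how much
# each one-hot feature pushes the prediction toward "poisonous" or "edible".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test.iloc[:1])
top_features = (
    pd.Series(shap_values[0], index=X_test.columns)
    .sort_values(key=abs, ascending=False)
    .head(10)
)
print(top_features)
```

In a user study of the kind described in the abstract, per-feature attributions like these would presumably be rendered for participants (for example as charts or textual summaries) before asking the comprehension questions.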
Source journal
International Journal of Human-Computer Studies (Engineering & Technology - Computer Science: Cybernetics)
CiteScore: 11.50
Self-citation rate: 5.60%
Articles published: 108
Review time: 3 months
Journal description: The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities. Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...