Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification

IF 3.2 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, MULTIDISCIPLINARY
Ruoxi Qi, Yueyuan Zheng, Yi Yang, Caleb Chen Cao, Janet H Hsiao
{"title":"人类与当前可解释人工智能的解释策略:图像分类的启示","authors":"Ruoxi Qi, Yueyuan Zheng, Yi Yang, Caleb Chen Cao, Janet H Hsiao","doi":"10.1111/bjop.12714","DOIUrl":null,"url":null,"abstract":"<p><p>Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than the classification task itself. Two representative explanation strategies were identified through clustering: One involved focused visual scanning on foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. Interestingly, XAI saliency map explanations had the highest similarity to the explorative attention strategy in humans, and explanations highlighting discriminative features from invoking observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class score. Thus, humans use both visual and conceptual information during explanation, which serve different purposes, and XAI methods that highlight features informing observable causality match better with human explanations, potentially more accessible to users.</p>","PeriodicalId":9300,"journal":{"name":"British journal of psychology","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification.\",\"authors\":\"Ruoxi Qi, Yueyuan Zheng, Yi Yang, Caleb Chen Cao, Janet H Hsiao\",\"doi\":\"10.1111/bjop.12714\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than the classification task itself. Two representative explanation strategies were identified through clustering: One involved focused visual scanning on foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. 
Interestingly, XAI saliency map explanations had the highest similarity to the explorative attention strategy in humans, and explanations highlighting discriminative features from invoking observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class score. Thus, humans use both visual and conceptual information during explanation, which serve different purposes, and XAI methods that highlight features informing observable causality match better with human explanations, potentially more accessible to users.</p>\",\"PeriodicalId\":9300,\"journal\":{\"name\":\"British journal of psychology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British journal of psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1111/bjop.12714\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British journal of psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1111/bjop.12714","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than the classification task itself. Two representative explanation strategies were identified through clustering: One involved focused visual scanning on foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. Interestingly, XAI saliency map explanations had the highest similarity to the explorative attention strategy in humans, and explanations highlighting discriminative features from invoking observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class score. Thus, humans use both visual and conceptual information during explanation, which serve different purposes, and XAI methods that highlight features informing observable causality match better with human explanations, potentially more accessible to users.
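
The abstract contrasts two families of saliency-based XAI explanations: those highlighting internal features associated with a higher class score, and those highlighting discriminative features by invoking observable causality through perturbation. The sketch below is one minimal way to make that contrast concrete; it is not the authors' pipeline. The tiny untrained CNN, the random input image, the occlusion patch size, and the random "fixation map" are placeholders, and Pearson correlation is only an assumed stand-in for whatever similarity measure the study used to compare saliency maps with human attention.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier: a tiny untrained CNN standing in for any image model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64)   # placeholder input image
target_class = 3                   # placeholder class label

# 1) Gradient-based saliency: sensitivity of the class score to each pixel
#    ("internal features associated with a higher class score").
x = image.clone().requires_grad_(True)
model(x)[0, target_class].backward()
grad_map = x.grad.abs().sum(dim=1)[0]              # shape (64, 64)

# 2) Occlusion-based saliency: drop in class score when a patch is grayed out
#    ("observable causality through perturbation").
patch = 8
occ_map = torch.zeros(64, 64)
with torch.no_grad():
    base_score = model(image)[0, target_class].item()
    for top in range(0, 64, patch):
        for left in range(0, 64, patch):
            occluded = image.clone()
            occluded[:, :, top:top + patch, left:left + patch] = 0.5
            drop = base_score - model(occluded)[0, target_class].item()
            occ_map[top:top + patch, left:left + patch] = drop

# Hypothetical human attention map (in the study this would come from eye tracking).
fixation_map = torch.rand(64, 64)

def pearson(a, b):
    """Pearson correlation between two flattened saliency maps."""
    a, b = a.flatten(), b.flatten()
    a, b = a - a.mean(), b - b.mean()
    return (a @ b / (a.norm() * b.norm())).item()

print("gradient map vs. fixations: ", pearson(grad_map, fixation_map))
print("occlusion map vs. fixations:", pearson(occ_map, fixation_map))
```

In this framing, the gradient map plays the role of a class-score-driven explanation, while the occlusion map plays the role of a perturbation-based explanation whose effect on the prediction is directly observable; comparing each against a fixation map mirrors, at a schematic level, the kind of similarity analysis the abstract describes.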

Source journal
British Journal of Psychology (PSYCHOLOGY, MULTIDISCIPLINARY)
CiteScore: 7.60
Self-citation rate: 2.50%
Publication volume: 67
Journal description: The British Journal of Psychology publishes original research on all aspects of general psychology including cognition; health and clinical psychology; developmental, social and occupational psychology. For information on specific requirements, please view Notes for Contributors. We attract a large number of international submissions each year which make major contributions across the range of psychology.