Evaluating the Effect of Time on Trust Calibration of Explainable Artificial Intelligence

Ezekiel Bernardo, R. Seva
{"title":"评估时间对可解释人工智能信任校准的影响","authors":"Ezekiel Bernardo, R. Seva","doi":"10.54941/ahfe1003280","DOIUrl":null,"url":null,"abstract":"Explainable Artificial Intelligence (XAI) has played a significant role in human-computer interaction. The cognitive resources it carries allow humans to understand the complex algorithm powering Artificial Intelligence (AI), virtually resolving the acceptance and adoption barrier from the lack of transparency. This resulted in more systems leveraging XAI and triggering interest and efforts to develop newer and more capable techniques. However, though the research stream is expanding, little is known about the extent of its effectiveness on end-users. Current works have only measured XAI effects on either moment time effect or compared it cross-sectionally on various types of users. Filling this out can improve the understanding of existing studies and provide practical limitations on its use for trust calibration. To address this gap, a multi-time research experiment was conducted with 103 participants to use and evaluate XAI in an image classification application for three days. Measurement that was considered is on perceived usefulness for its cognitive contribution, integral emotions for affective change, trust, and reliance, and was analyzed via covariance-based structural equation modelling. Results showed that time only moderates the path from cognitive to trust and reliance as well as trust to reliance, with its effect dampening through time. On the other hand, affective change has remained consistent in all interactions. This shows that if an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., things that will trigger emotional change) rather than purely on its cognitive purpose to maximize the positive effect of XAI.","PeriodicalId":405313,"journal":{"name":"Artificial Intelligence and Social Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Effect of Time on Trust Calibration of Explainable Artificial Intelligence\",\"authors\":\"Ezekiel Bernardo, R. Seva\",\"doi\":\"10.54941/ahfe1003280\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainable Artificial Intelligence (XAI) has played a significant role in human-computer interaction. The cognitive resources it carries allow humans to understand the complex algorithm powering Artificial Intelligence (AI), virtually resolving the acceptance and adoption barrier from the lack of transparency. This resulted in more systems leveraging XAI and triggering interest and efforts to develop newer and more capable techniques. However, though the research stream is expanding, little is known about the extent of its effectiveness on end-users. Current works have only measured XAI effects on either moment time effect or compared it cross-sectionally on various types of users. Filling this out can improve the understanding of existing studies and provide practical limitations on its use for trust calibration. To address this gap, a multi-time research experiment was conducted with 103 participants to use and evaluate XAI in an image classification application for three days. 
Measurement that was considered is on perceived usefulness for its cognitive contribution, integral emotions for affective change, trust, and reliance, and was analyzed via covariance-based structural equation modelling. Results showed that time only moderates the path from cognitive to trust and reliance as well as trust to reliance, with its effect dampening through time. On the other hand, affective change has remained consistent in all interactions. This shows that if an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., things that will trigger emotional change) rather than purely on its cognitive purpose to maximize the positive effect of XAI.\",\"PeriodicalId\":405313,\"journal\":{\"name\":\"Artificial Intelligence and Social Computing\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence and Social Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54941/ahfe1003280\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and Social Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1003280","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Explainable Artificial Intelligence (XAI) plays a significant role in human-computer interaction. The cognitive resources it provides allow humans to understand the complex algorithms powering Artificial Intelligence (AI), largely resolving the acceptance and adoption barriers caused by a lack of transparency. As a result, more systems leverage XAI, spurring interest and effort in developing newer and more capable techniques. However, although the research stream is expanding, little is known about the extent of XAI's effectiveness on end-users: current work has measured XAI effects only at a single moment in time, or compared them cross-sectionally across different types of users. Filling this gap can improve the interpretation of existing studies and clarify the practical limits of XAI for trust calibration. To address it, a multi-session experiment was conducted in which 103 participants used and evaluated XAI in an image classification application over three days. The measures were perceived usefulness (for cognitive contribution), integral emotions (for affective change), trust, and reliance, analyzed via covariance-based structural equation modelling. Results showed that time moderates only the paths from cognition to trust and reliance and from trust to reliance, with the moderation effect dampening over time; affective change remained consistent across all interactions. This suggests that when an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., elements that trigger emotional change) rather than purely on its cognitive purpose, so as to maximize the positive effect of XAI.
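The covariance-based structural equation modelling described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' actual analysis: it assumes the Python semopy library, hypothetical indicator columns (pu1-pu3 for perceived usefulness, ie1-ie3 for integral emotions, tr1-tr3 for trust, re1-re3 for reliance) and a day column, and probes the moderating role of time in the simplest possible way, by fitting the same model to each day's data and comparing the structural path estimates.

# Minimal CB-SEM sketch of the study's constructs (illustrative only;
# the column names, the 'day' grouping, and the model specification
# are assumptions, not the authors' published model).
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model: latent constructs and their indicators
PU =~ pu1 + pu2 + pu3
IE =~ ie1 + ie2 + ie3
Trust =~ tr1 + tr2 + tr3
Reliance =~ re1 + re2 + re3
# structural model: cognitive (PU) and affective (IE) paths
Trust ~ PU + IE
Reliance ~ Trust + PU + IE
"""

def fit_per_day(df: pd.DataFrame) -> dict:
    """Fit the same model on each day's subsample; comparing the
    resulting path estimates across days is a simple way to probe
    whether time moderates the structural paths."""
    estimates = {}
    for day, sub in df.groupby("day"):
        model = semopy.Model(MODEL_DESC)
        model.fit(sub)                    # maximum-likelihood estimation
        estimates[day] = model.inspect()  # loadings + structural paths
    return estimates

# Usage: a PU -> Trust estimate that shrinks from day 1 to day 3 while
# the IE -> Trust estimate stays stable would mirror the pattern the
# abstract reports (cognitive effects dampen; affective effects persist).
# results = fit_per_day(pd.read_csv("xai_study.csv"))

A multi-group SEM with equality constraints across days would be the more rigorous test of moderation; the per-day refit above is only the simplest way to inspect the same question.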