The Impact of Explanations on AI Competency Prediction in VQA

Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
{"title":"VQA中解释对人工智能能力预测的影响","authors":"Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas","doi":"10.1109/HCCAI49649.2020.00010","DOIUrl":null,"url":null,"abstract":"Explainability is one of the key elements for building trust in AI systems. Among numerous attempts to make AI explainable, quantifying the effect of explanations remains a challenge in conducting human-AI collaborative tasks. Aside from the ability to predict the overall behavior of AI, in many applications, users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency, based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation to rank the competencies of the VQA model. The proposed model is evaluated through between-subject experiments to probe explanations' impact on the user's perception of competency. The comparison between two VQA models shows BERT based explanations and the use of object features improve the user's prediction of the model's competencies.","PeriodicalId":444855,"journal":{"name":"2020 IEEE International Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"The Impact of Explanations on AI Competency Prediction in VQA\",\"authors\":\"Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas\",\"doi\":\"10.1109/HCCAI49649.2020.00010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainability is one of the key elements for building trust in AI systems. Among numerous attempts to make AI explainable, quantifying the effect of explanations remains a challenge in conducting human-AI collaborative tasks. Aside from the ability to predict the overall behavior of AI, in many applications, users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency, based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation to rank the competencies of the VQA model. The proposed model is evaluated through between-subject experiments to probe explanations' impact on the user's perception of competency. 
The comparison between two VQA models shows BERT based explanations and the use of object features improve the user's prediction of the model's competencies.\",\"PeriodicalId\":444855,\"journal\":{\"name\":\"2020 IEEE International Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI)\",\"volume\":\"63 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HCCAI49649.2020.00010\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HCCAI49649.2020.00010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Explainability is one of the key elements for building trust in AI systems. Among numerous attempts to make AI explainable, quantifying the effect of explanations remains a challenge in conducting human-AI collaborative tasks. Aside from the ability to predict the overall behavior of AI, in many applications users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation to rank the competencies of the VQA model. The proposed model is evaluated through between-subject experiments to probe the explanations' impact on the user's perception of competency. The comparison between two VQA models shows that BERT-based explanations and the use of object features improve the user's prediction of the model's competencies.
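The quantification described in the abstract compares a user's ranking of the model's competencies across task categories with the model's actual measured performance in those categories. As a minimal sketch of that idea, assuming a Spearman rank correlation and hypothetical category names and scores (the abstract does not specify the exact computation), one could compute:

```python
# A minimal sketch (not the authors' published code) of the competency-
# prediction measure described in the abstract: agreement between a user's
# ranking of a VQA model's competencies and its actual per-category accuracy.
# All category names and numbers below are hypothetical placeholders.
from scipy.stats import spearmanr

# Actual VQA accuracy per question category (hypothetical values).
actual_accuracy = {
    "counting": 0.41,
    "color": 0.78,
    "spatial": 0.55,
    "yes/no": 0.83,
}

# One user's ranking of the model's competency per category:
# 1 = judged strongest, 4 = judged weakest (hypothetical).
user_ranking = {
    "counting": 4,
    "color": 2,
    "spatial": 3,
    "yes/no": 1,
}

categories = list(actual_accuracy)
# Higher accuracy should pair with a better (numerically smaller) rank,
# so negate the ranks before correlating.
rho, p_value = spearmanr(
    [actual_accuracy[c] for c in categories],
    [-user_ranking[c] for c in categories],
)
print(f"rank correlation = {rho:.2f} (p = {p_value:.2f})")
```

A correlation near 1 would indicate that the user's mental model tracks the system's true competencies; the paper's between-subject design then compares this agreement across groups that each saw a different kind of explanation.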