Belief-based Agent Explanations to Encourage Behaviour Change

Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, S. Mascarenhas
DOI: 10.1145/3308532.3329444
Published in: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, July 2019
Citations: 11

Abstract

Explainable virtual agents provide insight into the agent's decision-making process, with the aim of improving the user's acceptance of the agent's actions or recommendations. However, explainable agents commonly ground their explanations in their own knowledge and goals rather than in the beliefs, plans or goals of the user. Little is known about how users perceive such tailored explanations and how those explanations affect behaviour change. In this paper, we explore the role of belief-based explanation by proposing a user-aware explainable agent that embeds a user model and an explanation engine within a cognitive agent architecture to provide tailored explanations. To draw a clear conclusion about the role of explanation in behaviour change intentions, we investigated whether the level of behaviour change intention is due to building agent-user rapport through empathic language or to trusting the agent's understanding through the explanations it provides. Hence, we designed two versions of a virtual advisor agent, empathic and neutral, intended to reduce study stress among university students, and measured students' rapport levels and intentions to change their behaviour. Our results showed that, with the help of the explanations, the agent could build a trusted relationship with the user regardless of the level of rapport. The results further showed that nearly all the recommendations provided by the agent highly significantly increased users' intentions to change their behaviour.
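The architecture the abstract describes, a cognitive agent extended with a user model and an explanation engine that tailors explanations to the user's own beliefs and goals, can be illustrated with a minimal sketch. This is not the authors' implementation; all class names, fields, and example beliefs below are hypothetical.

```python
# Illustrative sketch (hypothetical, not the paper's code): an advisor agent
# that pairs each recommendation with a belief-based explanation, grounding
# the explanation in the user's own beliefs and goals when the user model
# contains them, and falling back to the agent's own reasoning otherwise.

from dataclasses import dataclass, field


@dataclass
class UserModel:
    """What the agent believes about the user."""
    beliefs: dict = field(default_factory=dict)   # e.g. {"breaks_restore_focus": True}
    goals: list = field(default_factory=list)     # e.g. ["reduce_stress"]


class ExplainableAdvisor:
    """Generates recommendations together with tailored explanations."""

    def __init__(self, user_model: UserModel):
        self.user_model = user_model
        # Agent knowledge: recommendation -> (goal it serves, supporting belief)
        self.knowledge = {
            "take_regular_breaks": ("reduce_stress", "breaks_restore_focus"),
            "plan_study_schedule": ("reduce_stress", "planning_lowers_anxiety"),
        }

    def recommend(self, action: str) -> str:
        goal, belief = self.knowledge[action]
        act, goal_txt = action.replace("_", " "), goal.replace("_", " ")
        # Belief-based explanation: refer to the user's own goal and belief
        # when the user model confirms the agent holds them about the user...
        if goal in self.user_model.goals and self.user_model.beliefs.get(belief):
            return (f"I suggest you {act} because you want to {goal_txt} and "
                    f"you already believe that {belief.replace('_', ' ')}.")
        # ...otherwise explain from the agent's own knowledge and goals.
        return (f"I suggest you {act} because, in my experience, "
                f"it helps to {goal_txt}.")


user = UserModel(beliefs={"breaks_restore_focus": True}, goals=["reduce_stress"])
advisor = ExplainableAdvisor(user)
print(advisor.recommend("take_regular_breaks"))   # belief-based explanation
print(advisor.recommend("plan_study_schedule"))   # agent-centred fallback
```

The two branches mirror the contrast the paper draws: the first explanation appeals to the user's beliefs and goals, while the fallback is the conventional agent-centred explanation.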