Conversational Agents Trust Calibration: A User-Centred Perspective to Design
Mateusz Dubiel, Sylvain Daronnat, Luis A. Leiva
Proceedings of the 4th Conference on Conversational User Interfaces, published 2022-07-26
DOI: 10.1145/3543829.3544518 (https://doi.org/10.1145/3543829.3544518)
Citations: 3
Abstract
Previous work has identified trust as one of the key requirements for the adoption and continued use of conversational agents (CAs). Given recent advances in natural language processing and deep learning, it is now possible to execute simple goal-oriented tasks by voice. As CAs start to provide a gateway for purchasing products and booking services online, the question of trust and its impact on users’ reliance and agency becomes ever more pertinent. This paper collates trust-related literature and proposes four design suggestions, illustrated through example conversations. Our goal is to encourage discussion of ethical design practices for developing CAs that can employ trust-calibration techniques which, when relevant, reduce the user’s trust in the agent. We hope that our reflections, based on a synthesis of insights from the fields of human-agent interaction, explainable AI, and information retrieval, can serve as a reminder of the dangers of excessive trust in automation and contribute to more user-centred CA design.