Does a Conversational Robot Need to Have its own Values?: A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots
{"title":"Does a Conversational Robot Need to Have its own Values?: A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots","authors":"Takahisa Uchida, T. Minato, H. Ishiguro","doi":"10.1145/2974804.2974830","DOIUrl":null,"url":null,"abstract":"This work studies a dialogue strategy aimed at building people's motivation for talking with autonomous conversational robots. Even though spoken dialogue systems continue to develop rapidly, the existing systems are insufficient for continuous use because they fail to motivate users to talk with them. One reason is that users fail to realize that the intentions of the system's utterances are based on its values. Since people recognize the values of others and modify their own values in human-human conversations, we hypothesize that a dialogue strategy that makes users saliently feel the difference of their own values and those of the system will increase motivation for the dialogues. Our experiment, which evaluated human-human dialogues, supported our hypothesis. However, an experiment with human-android dialogues failed to produce identical results, suggesting that people did not attribute values to our android. For a conversational robot, we need additional techniques to convince people to believe a robot speaks based on its own values and opinions.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"93 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fourth International Conference on Human Agent Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2974804.2974830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Although spoken dialogue systems continue to develop rapidly, existing systems are insufficient for continuous use because they fail to motivate users to talk with them. One reason is that users do not realize that the intentions behind the system's utterances are based on its values. Since people recognize the values of others and modify their own values in human-human conversations, we hypothesize that a dialogue strategy that makes users clearly perceive the difference between their own values and those of the system will increase their motivation for the dialogue. Our experiment, which evaluated human-human dialogues, supported this hypothesis. However, an experiment with human-android dialogues did not produce the same results, suggesting that people did not attribute values to our android. For a conversational robot, additional techniques are needed to convince people that the robot speaks based on its own values and opinions.