Robot Behavior Design Expressing Confidence/Unconfidence based on Human Behavior Analysis

Haruka Sekino, Erina Kasano, Wei-Fen Hsieh, E. Sato-Shimokawara, Toru Yamaguchi

2020 17th International Conference on Ubiquitous Robots (UR), June 2020. DOI: 10.1109/UR49135.2020.9144862
Citations: 1
Abstract
Dialogue robots have been actively researched. Many of these robots rely on verbal information alone. However, human intention is conveyed through both verbal and nonverbal information, so a robot that is to convey intention as humans do must express it through both channels as well. This paper uses speech information and head motion information to express confidence or unconfidence, because these were found to be useful features for estimating a person’s confidence. First, behavior expressing the presence or absence of confidence was collected from 8 participants and recorded with a microphone and a video camera. To select the most understandable behavior, three evaluators rated each participant’s behavior for its apparent confidence level, and the data from the participants whose behavior was rated most understandable were selected. The selected behavior was used to define representative speech and motion features, and robot behavior was designed from these representative features. Finally, an experiment was conducted in which 5 participants rated the designed robot behavior. The results show that 3 participants correctly identified the confidence/unconfidence behavior based on the representative speech features: the time spent before answering, the effective (RMS) value of the sound pressure, and the utterance speed. Likewise, 3 participants correctly identified the unconfidence behavior based on the representative motion features, namely a longer time spent before answering and a larger head rotation.
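The paper does not publish its feature-extraction code, so the sketch below is only an illustration of the four features the abstract names (response latency, RMS sound pressure, utterance speed, and head rotation). The function names and exact definitions here are assumptions, not the authors' implementation.

```python
# Minimal sketch of the confidence-related features described in the abstract.
# All definitions are illustrative assumptions: latency as seconds between
# question end and answer onset, speed as syllables per second, and head
# rotation as the peak-to-peak yaw range over the answer.
import numpy as np

def rms_sound_pressure(samples: np.ndarray) -> float:
    """Effective (root-mean-square) value of the recorded audio signal."""
    return float(np.sqrt(np.mean(np.square(samples, dtype=np.float64))))

def utterance_speed(n_syllables: int, utterance_duration_s: float) -> float:
    """Syllables per second over the answer utterance."""
    return n_syllables / utterance_duration_s

def response_latency(question_end_s: float, answer_onset_s: float) -> float:
    """Time spent before answering, in seconds."""
    return answer_onset_s - question_end_s

def head_rotation_range(yaw_deg: np.ndarray) -> float:
    """Peak-to-peak head yaw in degrees, one crude proxy for 'bigger head rotation'."""
    return float(np.max(yaw_deg) - np.min(yaw_deg))

# Toy usage: a synthetic 1 s, 16 kHz tone stands in for recorded speech,
# and a short yaw trace stands in for tracked head motion.
fs = 16_000
t = np.arange(fs) / fs
signal = 0.1 * np.sin(2 * np.pi * 220 * t)
yaw = np.array([-2.0, 5.0, 12.0, 4.0])

features = {
    "rms": rms_sound_pressure(signal),
    "speed_syll_per_s": utterance_speed(n_syllables=12, utterance_duration_s=2.4),
    "latency_s": response_latency(question_end_s=3.0, answer_onset_s=4.1),
    "yaw_range_deg": head_rotation_range(yaw),
}
print(features)
```

Under the paper's findings, an unconfident answer would show a larger latency and yaw range and, per the speech features, a different RMS level and utterance speed than a confident one; the thresholds separating the two are not given in the abstract.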