Human-Robot Interface Using System Request Utterance Detection Based on Acoustic Features
T. Takiguchi, Tomoyuki Yamagata, Atsushi Sako, Nobuyuki Miyake, Jérôme Revaud, Y. Ariki
2008 International Conference on Multimedia and Ubiquitous Engineering (MUE 2008), published 2008-04-24. DOI: 10.1109/MUE.2008.87
Citations: 17
Abstract
For a mobile robot to serve people in real environments, such as a living room or a party room, it must be easy to control, because some users may not even be able to operate a computer keyboard. For nonexpert users, speech recognition is one of the most effective communication tools for a hands-free human-robot interface. This paper describes a new mobile robot with hands-free speech recognition. For a hands-free speech interface, it is important to detect commands addressed to the robot within spontaneous utterances. Our system determines whether a user's utterance is a command for the robot, discriminating commands from human-human conversation by acoustic features; the robot then moves according to the user's voice command. To capture only the user's voice, a robust voice detection system based on AdaBoost is also described.
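The paper itself gives no code. The sketch below is only a minimal illustration of the general idea described in the abstract: classifying utterance-level acoustic feature vectors with an AdaBoost classifier to separate system requests (robot commands) from human-human conversation. The feature representation (per-utterance MFCC statistics), scikit-learn's AdaBoostClassifier, and the random placeholder data are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: AdaBoost-based discrimination of robot commands
# from human-human conversation using acoustic features. Feature choice and
# classifier library are assumptions; the random data is a placeholder.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder acoustic feature vectors (e.g., per-utterance MFCC statistics).
# In practice these would be extracted from recorded utterances.
n_utterances, n_features = 200, 13
X = rng.normal(size=(n_utterances, n_features))
# Labels: 1 = system request (command to the robot), 0 = human-human conversation.
y = rng.integers(0, 2, size=n_utterances)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# AdaBoost with its default shallow decision-tree weak learners.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```

With real data, the same pipeline would be fed labeled utterances (command vs. conversation), and the boosted classifier's decision would gate whether the robot acts on the recognized speech.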