{"title":"Hierarchical database based on feature parameters for various multimodal expression generation of robot","authors":"W. Kim, J. Park, Won Hyong Lee, M. Chung","doi":"10.1109/ARSO.2010.5679627","DOIUrl":null,"url":null,"abstract":"In this paper, we propose reliable, diverse, expansible, and usable expression generation system. Proposed system is to generate synchronized multimodal expression automatically based on hierarchical database and context information such as robot's emotional state and sentence robot is trying to say. Compared to prior system, our system based on feature parameters is much easier to generate new expression and modify expressions according to the robot's emotion. In our system, there are sentence module, emotion module, and expression module. We focus on only robot's expression module. In order to generate expressions automatically, we use outputs of the sentence and emotion modules. We have classified robot sentence under 13 types and robot emotion under 3 types. About all 39 categories and body language, we have constructed behavior database with 128 expressions. For the reliability and the variety of expressions, a professional actor's expression data have been obtained and we requested a cartoonist to draw sketch of robot's expressions corresponding to defined categories.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARSO.2010.5679627","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In this paper, we propose a reliable, diverse, expandable, and usable expression generation system. The proposed system automatically generates synchronized multimodal expressions based on a hierarchical database and context information, such as the robot's emotional state and the sentence the robot is trying to say. Compared with prior systems, our feature-parameter-based system makes it much easier to generate new expressions and to modify expressions according to the robot's emotion. The system consists of a sentence module, an emotion module, and an expression module; this paper focuses on the expression module. To generate expressions automatically, we use the outputs of the sentence and emotion modules. We classify robot sentences into 13 types and robot emotions into 3 types. For all 39 resulting categories, together with body language, we have constructed a behavior database containing 128 expressions. To ensure the reliability and variety of the expressions, we recorded expression data from a professional actor and asked a cartoonist to sketch the robot's expressions for each defined category.
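As a rough illustration of the hierarchy the abstract describes (sentence type and emotion type indexing a behavior database of candidate expressions), the sketch below shows one possible lookup structure. It is not the authors' implementation; the class names (`BehaviorDB`, `Expression`), the feature-parameter keys, and the sample categories ("greeting", "happy", etc.) are hypothetical placeholders for the 13 sentence types and 3 emotion types mentioned in the paper.

```python
# Minimal sketch of a hierarchical behavior database keyed by sentence type and
# emotion. All names and sample entries are hypothetical illustrations, not the
# paper's actual data or API.
import random
from dataclasses import dataclass


@dataclass
class Expression:
    """One multimodal expression: facial feature parameters plus a gesture label."""
    face_params: dict  # e.g. {"eyebrow_raise": 0.6, "mouth_open": 0.4}
    gesture: str       # body-language label, e.g. "wave"


class BehaviorDB:
    """Hierarchical lookup: sentence type -> emotion -> list of candidate expressions."""

    def __init__(self):
        self._db = {}

    def add(self, sentence_type: str, emotion: str, expr: Expression) -> None:
        # Build the two-level hierarchy lazily as entries are registered.
        self._db.setdefault(sentence_type, {}).setdefault(emotion, []).append(expr)

    def generate(self, sentence_type: str, emotion: str) -> Expression:
        # Choose randomly among stored candidates so repeated queries stay varied.
        candidates = self._db[sentence_type][emotion]
        return random.choice(candidates)


# Hypothetical usage with two of the 39 (13 sentence x 3 emotion) categories.
db = BehaviorDB()
db.add("greeting", "happy",
       Expression({"eyebrow_raise": 0.6, "mouth_open": 0.4}, "wave"))
db.add("question", "neutral",
       Expression({"eyebrow_raise": 0.8, "mouth_open": 0.1}, "tilt_head"))
print(db.generate("greeting", "happy"))
```

In this reading, adding a new expression or adjusting one for a different emotion only requires inserting or editing an `Expression` entry, which is consistent with the abstract's claim that a feature-parameter-based database makes expressions easy to extend and modify.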