{"title":"Research and experiment of lip coordination with speech for the humanoid head robot-“H&Frobot-III”","authors":"Meng Qingmei, Wu Weiguo, Zhong Yusheng, Song Ce","doi":"10.1109/ICHR.2008.4756012","DOIUrl":null,"url":null,"abstract":"This paper proposes a method of lip shape coordination with speech in the facial expression robot system ldquoH&Frobot-IIIrdquo. The proposed method can model the talking robotpsilas lip shape using visual speech system, which includes three modules: speech recognition, lip shape recognition and lip pose actuator. In the lip shape recognition module, a viseme representation method is proposed for synthesising the human visual speech. To analyze the robotpsilas lip shape, lip shape model is developed based on the anatomy and facial action coding system (FACS). When robot speaking, the lip shape coordination with speech can be realized through basic lip shape or the combination of basic lip shape. In the ldquoH&Frobot-IIIrdquo system, the lip shape is realized through slide and guide slot mechanism, which implements the two-way movement of muscle in the lip. Finally, the result of the experiment, which is the lip coordination with speech, is shown. 
When speaking same word, the lip shape of robot is similarity to that of human.","PeriodicalId":402020,"journal":{"name":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICHR.2008.4756012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper proposes a method for coordinating lip shape with speech in the facial expression robot system "H&Frobot-III". The method models the talking robot's lip shape using a visual speech system comprising three modules: speech recognition, lip shape recognition, and a lip pose actuator. In the lip shape recognition module, a viseme representation method is proposed for synthesising human visual speech. To analyze the robot's lip shape, a lip shape model is developed based on anatomy and the Facial Action Coding System (FACS). When the robot speaks, lip shape coordination with speech is realized through a basic lip shape or a combination of basic lip shapes. In the "H&Frobot-III" system, lip shapes are produced by a slide and guide-slot mechanism, which implements the two-way movement of the lip muscles. Finally, experimental results on lip coordination with speech are presented: when speaking the same word, the robot's lip shape is similar to that of a human.
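The pipeline the abstract describes (recognized speech mapped to basic lip shapes, or combinations of them, which then drive the lip actuators) can be sketched roughly as below. This is an illustrative sketch only, not the paper's implementation: all phoneme symbols, viseme names, and the two-parameter (opening, width) lip pose are hypothetical assumptions.

```python
# Illustrative sketch (not from the paper): map recognized phonemes to basic
# lip shapes (visemes) and blend them, mirroring the abstract's idea of
# coordination "through a basic lip shape or a combination of basic lip shapes".
# All names, phoneme symbols, and pose parameters here are hypothetical.

# Each basic lip shape is parameterized by (mouth opening, lip width) in [0, 1].
BASIC_LIP_SHAPES = {
    "neutral": (0.0, 0.5),
    "open":    (1.0, 0.5),   # e.g. /a/
    "wide":    (0.2, 1.0),   # e.g. /i/
    "round":   (0.6, 0.1),   # e.g. /o/, /u/
    "closed":  (0.0, 0.4),   # e.g. /m/, /b/, /p/
}

# Hypothetical phoneme-to-viseme table; a real system would cover a full phone set.
PHONEME_TO_VISEME = {
    "a": ["open"],
    "i": ["wide"],
    "o": ["round"],
    "u": ["round"],
    "m": ["closed"],
    "w": ["round", "wide"],  # realized as a combination of basic shapes
}

def lip_pose_for_phoneme(phoneme):
    """Return (opening, width) targets by averaging the contributing basic shapes."""
    shapes = PHONEME_TO_VISEME.get(phoneme, ["neutral"])
    params = [BASIC_LIP_SHAPES[s] for s in shapes]
    opening = sum(p[0] for p in params) / len(params)
    width = sum(p[1] for p in params) / len(params)
    return opening, width

def lip_trajectory(phoneme_sequence):
    """Map a recognized phoneme sequence to a sequence of lip pose targets."""
    return [lip_pose_for_phoneme(p) for p in phoneme_sequence]
```

In the robot itself, each (opening, width) target would be translated into displacements of the slide and guide-slot mechanism; here the blending is a simple average purely for illustration.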