Mood avatar: automatic text-driven head motion synthesis
Kaihui Mu, J. Tao, Jianfeng Che, Minghao Yang
ICMI-MLMI '10, published 2010-11-08. DOI: 10.1145/1891903.1891951
Natural head motion is an indispensable part of realistic facial animation. This paper presents a novel approach to synthesizing natural head motion automatically from grammatical and prosodic features, which are extracted by the text analysis module of a Chinese Text-to-Speech (TTS) system. A two-layer clustering method is proposed to determine elementary head motion patterns from a multimodal database covering six emotional states. The mapping between textual information and elementary head motion patterns is modeled by Classification and Regression Trees (CART). Given an emotional state specified by the user, the results of text analysis drive the corresponding CART model to create an emotional head motion sequence. The generated sequence is then interpolated by spline and used to drive a Chinese text-driven avatar. A comparison experiment indicates that this approach produces more natural head motion and a more engaging human-computer interaction than random or no head motion.
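The pipeline the abstract describes — text features mapped to an elementary motion pattern by CART, then smoothed by spline interpolation — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature encoding, training rows, and the three-entry pattern library are all hypothetical, and scikit-learn's `DecisionTreeClassifier` (a CART implementation) plus SciPy's `CubicSpline` stand in for the paper's models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # scikit-learn trees use the CART algorithm
from scipy.interpolate import CubicSpline

# Hypothetical training data: each row encodes text-analysis output as
# (part-of-speech id, stress flag, phrase-boundary flag, emotion id);
# the label is the index of an elementary head motion pattern.
X = np.array([[0, 1, 0, 2], [1, 0, 1, 2], [0, 0, 0, 1],
              [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0]])
y = np.array([0, 1, 0, 2, 1, 2])

cart = DecisionTreeClassifier().fit(X, y)

# Hypothetical pattern library: sparse head-pitch keyframes (degrees)
# for each elementary pattern, as clustering might produce.
patterns = {0: [0.0, 4.0, 0.0],
            1: [0.0, -3.0, -6.0, 0.0],
            2: [0.0, 6.0, 3.0, 0.0]}

def synthesize(feats, fps=25, seconds=1.0):
    """Select a pattern with the CART model, then spline-interpolate
    its keyframes to a dense per-frame trajectory."""
    pattern_id = int(cart.predict([feats])[0])
    keys = patterns[pattern_id]
    t_key = np.linspace(0.0, seconds, len(keys))      # keyframe times
    t = np.linspace(0.0, seconds, int(fps * seconds))  # frame times
    return CubicSpline(t_key, keys)(t)

traj = synthesize([0, 1, 0, 2])  # one second of head-pitch values
```

In the paper the generated sequence would drive the avatar's head joint frame by frame; here `traj` is simply a dense array of pitch angles passing smoothly through the selected pattern's keyframes.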