{"title":"Real-time lip-synch face animation driven by human voice","authors":"Fu Jie Huang, Tsuhan Chen","doi":"10.1109/MMSP.1998.738959","DOIUrl":null,"url":null,"abstract":"In this demo, we present a technique for synthesizing the mouth movement from acoustic speech information. The algorithm maps the audio parameter set to the visual parameter set using the Gaussian mixture model and the hidden Markov model. With this technique, we can create smooth and realistic lip movements.","PeriodicalId":180426,"journal":{"name":"1998 IEEE Second Workshop on Multimedia Signal Processing (Cat. No.98EX175)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"45","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1998 IEEE Second Workshop on Multimedia Signal Processing (Cat. No.98EX175)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.1998.738959","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited 45 times
Abstract
In this demo, we present a technique for synthesizing mouth movements from acoustic speech information. The algorithm maps the audio parameter set to the visual parameter set using a Gaussian mixture model (GMM) and a hidden Markov model (HMM). With this technique, we can create smooth and realistic lip movements.
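The abstract does not give implementation details, but a common way to realize a GMM-based audio-to-visual mapping is to fit a joint GMM over concatenated audio and visual feature vectors and then estimate the visual parameters for a new audio frame as the conditional expectation under that model. The sketch below illustrates this idea only; the feature dimensions, component count, and function names are assumptions for illustration, and the HMM-based temporal smoothing mentioned in the abstract is not shown.

```python
# Minimal sketch of GMM-based audio-to-visual parameter mapping (assumed
# approach, not the paper's exact method). Audio features (e.g. MFCC-like
# vectors) and visual mouth-shape parameters are concatenated, a joint GMM
# is fit, and visual parameters are predicted as E[v | a].
import numpy as np
from sklearn.mixture import GaussianMixture

AUDIO_DIM, VISUAL_DIM, N_COMPONENTS = 13, 6, 8  # illustrative sizes

def fit_joint_gmm(audio_frames, visual_frames):
    """Fit a GMM on concatenated [audio, visual] training vectors."""
    joint = np.hstack([audio_frames, visual_frames])
    gmm = GaussianMixture(n_components=N_COMPONENTS,
                          covariance_type="full", random_state=0)
    gmm.fit(joint)
    return gmm

def audio_to_visual(gmm, audio_frame):
    """Estimate visual parameters for one audio frame as E[v | a].

    The estimate is a weighted sum of per-component conditional means,
    weighted by the component posteriors computed from the audio marginal.
    """
    a = np.asarray(audio_frame)
    log_resp = np.empty(N_COMPONENTS)
    cond_means = np.empty((N_COMPONENTS, VISUAL_DIM))
    for k in range(N_COMPONENTS):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_a, mu_v = mu[:AUDIO_DIM], mu[AUDIO_DIM:]
        cov_aa = cov[:AUDIO_DIM, :AUDIO_DIM]
        cov_va = cov[AUDIO_DIM:, :AUDIO_DIM]
        diff = a - mu_a
        # Log posterior weight of component k given the audio frame alone.
        _, logdet = np.linalg.slogdet(cov_aa)
        log_resp[k] = (np.log(gmm.weights_[k])
                       - 0.5 * (logdet + diff @ np.linalg.solve(cov_aa, diff)
                                + AUDIO_DIM * np.log(2 * np.pi)))
        # Conditional mean of the visual block given the audio block.
        cond_means[k] = mu_v + cov_va @ np.linalg.solve(cov_aa, diff)
    resp = np.exp(log_resp - log_resp.max())
    resp /= resp.sum()
    return resp @ cond_means
```

In practice the per-frame estimates would be smoothed over time (the abstract attributes this to an HMM) before driving the face model, so the lip motion does not jitter from frame to frame.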