{"title":"合成语音驱动的MPEG-4人脸动画系统","authors":"C. Lande, Gianluca Francini","doi":"10.1109/MULMM.1998.723005","DOIUrl":null,"url":null,"abstract":"The paper provides an overview of the activities that have led to the development of a prototype application of a \"talking head\" compliant with MPEG-4 specification. The work done so far fits into the context of one of the recent work-items defined by the ISO/IEC JTC1/SC29/WG11, known world-wide as MPEG. This ISO Working Group is now in the process of defining a unifying framework where natural and synthetic audiovisual objects can be combined and rendered as a unique combination of synchronised interactive media. The developed facial animation system implements the following features: animation of predefined/downloaded face models and animation of face models driven either by speech synthesis applications or by MPEG-4 animation parameters. To improve photo-realism, face models can be texture mapped and calibrated according to the countenances of real people.","PeriodicalId":305422,"journal":{"name":"Proceedings 1998 MultiMedia Modeling. MMM'98 (Cat. No.98EX200)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"An MPEG-4 facial animation system driven by synthetic speech\",\"authors\":\"C. Lande, Gianluca Francini\",\"doi\":\"10.1109/MULMM.1998.723005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper provides an overview of the activities that have led to the development of a prototype application of a \\\"talking head\\\" compliant with MPEG-4 specification. The work done so far fits into the context of one of the recent work-items defined by the ISO/IEC JTC1/SC29/WG11, known world-wide as MPEG. This ISO Working Group is now in the process of defining a unifying framework where natural and synthetic audiovisual objects can be combined and rendered as a unique combination of synchronised interactive media. The developed facial animation system implements the following features: animation of predefined/downloaded face models and animation of face models driven either by speech synthesis applications or by MPEG-4 animation parameters. To improve photo-realism, face models can be texture mapped and calibrated according to the countenances of real people.\",\"PeriodicalId\":305422,\"journal\":{\"name\":\"Proceedings 1998 MultiMedia Modeling. MMM'98 (Cat. No.98EX200)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1998-10-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 1998 MultiMedia Modeling. MMM'98 (Cat. No.98EX200)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MULMM.1998.723005\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 1998 MultiMedia Modeling. MMM'98 (Cat. 
No.98EX200)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MULMM.1998.723005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An MPEG-4 facial animation system driven by synthetic speech
The paper provides an overview of the activities that have led to the development of a prototype "talking head" application compliant with the MPEG-4 specification. The work done so far fits into the context of one of the recent work items defined by ISO/IEC JTC1/SC29/WG11, known worldwide as MPEG. This ISO working group is now defining a unifying framework in which natural and synthetic audiovisual objects can be combined and rendered together as synchronised, interactive media. The facial animation system developed implements the following features: animation of predefined or downloaded face models, and animation of face models driven either by speech-synthesis applications or by MPEG-4 facial animation parameters. To improve photorealism, face models can be texture-mapped and calibrated to the faces of real people.
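To make the animation pipeline described in the abstract more concrete, the following is a minimal sketch (not code from the paper) of how low-level MPEG-4 facial animation parameters (FAPs), expressed in facial animation parameter units (FAPUs) measured on the model's neutral face, could displace feature points of a face model, and how a speech synthesiser's phoneme output could be mapped to viseme-driven FAP sets. All class and function names, the tiny FAP table, and the phoneme-to-viseme map are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): driving a face model with
# MPEG-4-style facial animation parameters (FAPs). Low-level FAP amplitudes are
# expressed in FAPUs measured on the neutral face, so one FAP stream can animate
# differently proportioned heads consistently. The FAP table, sign conventions,
# and phoneme-to-viseme map below are illustrative only.

from dataclasses import dataclass


@dataclass
class FaceModel:
    # feature_points: name -> [x, y, z] position in model space (neutral pose)
    feature_points: dict
    # FAPUs derived from the neutral face, e.g. MNS0 (mouth-nose separation),
    # MW0 (mouth width); stored here already divided by 1024 as in MPEG-4
    fapu: dict


# Illustrative subset of low-level FAPs: (feature point, axis, FAPU, sign)
FAP_TABLE = {
    "open_jaw":            ("chin_bottom",  1, "MNS0", -1),  # move chin down
    "stretch_l_cornerlip": ("l_lip_corner", 0, "MW0",  -1),  # move corner outward
    "stretch_r_cornerlip": ("r_lip_corner", 0, "MW0",  +1),  # move corner outward
}


def apply_faps(model: FaceModel, faps: dict) -> dict:
    """Return displaced feature-point positions for one animation frame."""
    out = {name: list(pos) for name, pos in model.feature_points.items()}
    for fap_name, amplitude in faps.items():
        point, axis, fapu_name, sign = FAP_TABLE[fap_name]
        out[point][axis] += sign * amplitude * model.fapu[fapu_name]
    return out


# A speech synthesiser can drive the same pipeline: each phoneme it emits is
# mapped to a viseme, and each viseme to a mouth-shape FAP set (illustrative).
PHONEME_TO_VISEME = {"p": "viseme_pbm", "b": "viseme_pbm", "a": "viseme_a"}
VISEME_TO_FAPS = {
    "viseme_pbm": {"open_jaw": 0},    # closed lips
    "viseme_a":   {"open_jaw": 400},  # wide open mouth
}

if __name__ == "__main__":
    model = FaceModel(
        feature_points={"chin_bottom": [0.0, -60.0, 10.0],
                        "l_lip_corner": [-25.0, -40.0, 12.0],
                        "r_lip_corner": [25.0, -40.0, 12.0]},
        fapu={"MNS0": 30.0 / 1024, "MW0": 50.0 / 1024},
    )
    frame_faps = VISEME_TO_FAPS[PHONEME_TO_VISEME["a"]]
    print(apply_faps(model, frame_faps))
```

In this sketch the same `apply_faps` routine serves both input paths mentioned in the abstract: a decoded MPEG-4 FAP stream can be passed to it directly, while a text-to-speech front end first converts phonemes to visemes and then to FAP values.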