Yijie Zhou, Gang Wu, Qiang Lin, Dingguo Yu, Hui Wu
Title: Text-based Talking Facial Synthesis for Virtual Host System
DOI: 10.1109/cost57098.2022.00019 (https://doi.org/10.1109/cost57098.2022.00019)
Published in: 2022 International Conference on Culture-Oriented Science and Technology (CoST), August 2022
Citations: 0
Abstract
With the spread of deep learning technology, automatic virtual image synthesis has made great progress, and the popularity of virtual portraits has grown rapidly. Traditional virtual synthesis systems rely on computer-graphics methods driven by motion capture of a real person, which incur labor and equipment costs. In view of this, our paper proposes a text-driven virtual host synthesis method that generates lip shape and facial animation (including eye movement and head pose) from a single facial image of a virtual host. More precisely, we construct the virtual host synthesis system from three main modules: a speech synthesis module based on Tacotron2, a speech-to-landmark module that maps speech to mixed facial-landmark motion, and a video generation module based on a conditional generative adversarial network that generates video frames and realizes time-continuous automatic sports news reporting.
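The abstract describes a three-stage pipeline: text → speech features (Tacotron2), speech → landmark motion, and landmarks + a single reference image → video frames (conditional GAN). The sketch below shows only the data flow between those stages; every function name, feature shape, and placeholder computation is an illustrative assumption standing in for the paper's trained models, not the authors' actual code.

```python
from typing import List

# Hypothetical stand-ins for the paper's three modules. Real systems would
# return mel-spectrograms, 2D landmark coordinates, and image tensors; here
# each stage emits dummy values so the frame-per-landmark flow is visible.

def synthesize_speech(text: str) -> List[float]:
    """Stage 1 (Tacotron2 in the paper): text -> acoustic feature frames.
    Dummy version: one feature value per character."""
    return [float(ord(c) % 10) for c in text]

def speech_to_landmarks(features: List[float]) -> List[List[float]]:
    """Stage 2: acoustic features -> per-frame landmark motion, mixing
    lip shape with eye/head movement (one entry per audio frame)."""
    return [[f, f * 0.5] for f in features]  # [lip, head] per frame

def render_frames(ref_image: str, landmarks: List[List[float]]) -> List[str]:
    """Stage 3 (conditional GAN in the paper): single reference image +
    landmark sequence -> one video frame per landmark set."""
    return [f"{ref_image}:frame{i}" for i, _ in enumerate(landmarks)]

def text_to_talking_head(text: str, ref_image: str) -> List[str]:
    feats = synthesize_speech(text)
    lmks = speech_to_landmarks(feats)
    return render_frames(ref_image, lmks)

frames = text_to_talking_head("Hello", "host.png")
```

Because each stage produces one output element per input element, the video length is determined entirely by the synthesized speech, which is what makes fully text-driven, time-continuous reporting possible.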