{"title":"在移动平台上独立于说话人的连续语音到面部动画","authors":"G. Feldhoffer","doi":"10.1109/ELMAR.2007.4418820","DOIUrl":null,"url":null,"abstract":"In this paper a speaker independent training method is presented for continuous voice to facial animation systems. An audiovisual database with multiple voices and only one speaker's video information was created using dynamic time warping. The video information is aligned to more speakers' voice. The fit is measured with subjective and objective tests. Suitability of implementations on mobile devices is discussed.","PeriodicalId":170000,"journal":{"name":"ELMAR 2007","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Speaker independent continuous voice to facial animation on mobile platforms\",\"authors\":\"G. Feldhoffer\",\"doi\":\"10.1109/ELMAR.2007.4418820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper a speaker independent training method is presented for continuous voice to facial animation systems. An audiovisual database with multiple voices and only one speaker's video information was created using dynamic time warping. The video information is aligned to more speakers' voice. The fit is measured with subjective and objective tests. 
Suitability of implementations on mobile devices is discussed.\",\"PeriodicalId\":170000,\"journal\":{\"name\":\"ELMAR 2007\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ELMAR 2007\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ELMAR.2007.4418820\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ELMAR 2007","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ELMAR.2007.4418820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Speaker independent continuous voice to facial animation on mobile platforms
In this paper, a speaker-independent training method is presented for continuous voice-to-facial-animation systems. An audiovisual database containing multiple voices but only a single speaker's video information was created using dynamic time warping: the one speaker's video information is aligned to the other speakers' voices. The quality of the fit is measured with subjective and objective tests, and the suitability of implementations on mobile devices is discussed.
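The abstract's core idea is aligning one speaker's video track to other speakers' audio via dynamic time warping. As a minimal sketch of the standard DTW algorithm (not the paper's actual implementation; the feature representation and step costs here are illustrative assumptions), the following aligns two feature sequences and returns the warping path that would map reference frames to query frames:

```python
import numpy as np

def dtw_path(ref, query):
    """Classic dynamic time warping between two feature sequences.

    ref, query: 2-D arrays of shape (n_frames, n_features).
    Returns the accumulated alignment cost and the warping path
    as a list of (ref_index, query_index) pairs.
    """
    n, m = len(ref), len(query)
    # Pairwise Euclidean distances between all frame pairs.
    dist = np.linalg.norm(ref[:, None, :] - query[None, :, :], axis=2)

    # Accumulated-cost matrix with a border of infinities.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(
                acc[i - 1, j - 1],  # match
                acc[i - 1, j],      # insertion
                acc[i, j - 1],      # deletion
            )

    # Backtrack from the end to recover the optimal path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return acc[n, m], path[::-1]

# Toy example: a slow and a fast rendition of the same 1-D "utterance".
ref = np.array([[0.0], [1.0], [2.0], [3.0]])
query = np.array([[0.0], [0.0], [1.0], [2.0], [3.0], [3.0]])
cost, path = dtw_path(ref, query)
# path maps each reference frame to the query frames it covers,
# so video frames indexed by `ref` could be retimed to `query`.
```

In the database-building setting described above, `ref` would hold features of the recorded speaker's utterance and `query` features of another speaker's voice; the path then retimes the single speaker's video frames onto the new voice's timeline.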