{"title":"面部表情的自适应模拟","authors":"Yu Zhang, E. Sung, E. Prakash","doi":"10.1109/ICME.2001.1237859","DOIUrl":null,"url":null,"abstract":"This paper presents a space-time adaptive refinement approach to simulate facial expressions on the deformable 3D face model at interactive rate. This new computational model benefits from an adaptive sampling of both space and time to minimize computational cost. Using this technique, we achieve a guaranteed frame rate of the dynamic facial expression simulation at low computational expense.","PeriodicalId":405589,"journal":{"name":"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.","volume":"188 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Adaptive simulation of facial expressions\",\"authors\":\"Yu Zhang, E. Sung, E. Prakash\",\"doi\":\"10.1109/ICME.2001.1237859\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a space-time adaptive refinement approach to simulate facial expressions on the deformable 3D face model at interactive rate. This new computational model benefits from an adaptive sampling of both space and time to minimize computational cost. Using this technique, we achieve a guaranteed frame rate of the dynamic facial expression simulation at low computational expense.\",\"PeriodicalId\":405589,\"journal\":{\"name\":\"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.\",\"volume\":\"188 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICME.2001.1237859\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2001.1237859","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper presents a space-time adaptive refinement approach for simulating facial expressions on a deformable 3D face model at interactive rates. The computational model adaptively samples in both space and time to minimize computational cost. Using this technique, we achieve a guaranteed frame rate for dynamic facial expression simulation at low computational expense.
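To make the idea of space-time adaptivity concrete, the sketch below shows a minimal, hypothetical version of the temporal half of such a scheme: a small mass-spring patch (a stand-in for a deformable face region) is advanced toward a fixed per-frame budget, with the internal time step halved wherever a coarse-versus-fine comparison signals fast local dynamics and relaxed again when the motion is smooth. All names, thresholds, and the simple mass-spring model are illustrative assumptions rather than the paper's actual formulation, and the spatial half of the refinement (subdividing the mesh where deformation is large) is omitted for brevity.

```python
import numpy as np


class MassSpringPatch:
    """A tiny grid of masses joined by linear springs (a stand-in for a face region)."""

    def __init__(self, n=8, stiffness=50.0, damping=0.5, mass=1.0):
        xs, ys = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
        self.rest = np.stack([xs.ravel(), ys.ravel(), np.zeros(n * n)], axis=1)
        self.pos = self.rest.copy()
        self.vel = np.zeros_like(self.pos)
        self.k, self.c, self.m = stiffness, damping, mass

    def forces(self, pos, vel, muscle_force):
        # Spring pull toward the rest shape, viscous damping, and an external
        # "muscle" term standing in for facial muscle actuation.
        return -self.k * (pos - self.rest) - self.c * vel + muscle_force


def adaptive_step(patch, muscle_force, dt, err_tol=1e-3, max_splits=6):
    """Advance one display frame (duration dt) with adaptive time refinement:
    the internal step is halved when a coarse/fine comparison disagrees and
    relaxed again once the motion is smooth."""

    def explicit_euler(pos, vel, h):
        acc = patch.forces(pos, vel, muscle_force) / patch.m
        return pos + h * vel, vel + h * acc

    h, t = dt, 0.0
    while t < dt - 1e-12:
        h = min(h, dt - t)
        # One step of size h versus two steps of size h/2; their difference
        # serves as a cheap local error estimate.
        p_coarse, _ = explicit_euler(patch.pos, patch.vel, h)
        p_half, v_half = explicit_euler(patch.pos, patch.vel, 0.5 * h)
        p_fine, v_fine = explicit_euler(p_half, v_half, 0.5 * h)
        if np.max(np.abs(p_fine - p_coarse)) > err_tol and max_splits > 0:
            h *= 0.5           # refine in time where the dynamics are fast
            max_splits -= 1
        else:
            patch.pos, patch.vel = p_fine, v_fine
            t += h
            h *= 2.0           # coarsen again where the motion is quiescent


if __name__ == "__main__":
    patch = MassSpringPatch()
    # A localized upward pull along one edge of the patch, mimicking a muscle contraction.
    pull = np.zeros_like(patch.pos)
    pull[:8, 2] = 2.0
    for _ in range(30):                      # 30 frames at a 30 Hz target rate
        adaptive_step(patch, pull, dt=1.0 / 30.0)
    print("max displacement:", float(np.max(np.abs(patch.pos - patch.rest))))
```

The point of the sketch is the control structure, not the physics: simulation effort is concentrated in the frames (and, in the full scheme, the mesh regions) where the expression is actually changing, which is how a frame-rate guarantee can be met at low average cost.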