Customizable MPEG-4 face player using real-time 2D image sequence
M. Shin, Dmitry Goldgof, Carlos Kim, J. Zhong, Dongbai Guo
Proceedings Fifth IEEE Workshop on Applications of Computer Vision (WACV 2000), December 4, 2000
DOI: 10.1109/WACV.2000.895408
Citations: 2
Abstract
This paper presents a framework for a customizable MPEG-4 face player driven by a sequence of FAPs (Facial Animation Parameters) recovered from a real-time 2D image sequence. First, the 3D nonrigid motion and structure of the facial features are recovered from a 2D image sequence and a "person-specific" model of the face, which consists of an intensity image and a range image of the face. The FAPs are then computed from the recovered 3D structure, and the customizable MPEG-4 face animation is generated from the FAP sequence and a model of a specific person. A dataset of four face image sequences, captured from three different face orientations and accompanied by range images, is used. Ground-truth (GT) results are generated from the 3D structure provided by the range images. The results are evaluated quantitatively by comparing the recovered FAP values against the GT values, and qualitatively by comparing the generated MPEG-4 animations. The FAPs are recovered to within 8% relative and 16 FAP-unit absolute accuracy, and the resulting animation is nearly identical to the MPEG-4 animation driven by the GT FAPs.
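As a rough illustration of the FAP representation the paper builds on, the sketch below converts recovered 3D feature-point displacements into MPEG-4 FAP values. MPEG-4 expresses each FAP in face-specific FAP units (FAPUs) derived from distances on the neutral face, so the same FAP stream can animate differently proportioned models. This is a minimal sketch under stated assumptions: the landmark names and the fapu_from_neutral / displacement_to_fap helpers are illustrative placeholders, not the authors' code or the exact MPEG-4 feature-point IDs.

    import numpy as np

    def fapu_from_neutral(neutral):
        # neutral: dict mapping (illustrative) landmark names to 3D coordinates
        # of the person-specific neutral face model.
        def dist(a, b):
            return float(np.linalg.norm(np.asarray(neutral[a]) - np.asarray(neutral[b])))
        # MPEG-4 defines each FAPU as 1/1024 of a key neutral-face distance.
        return {
            "ES":  dist("left_eye_center", "right_eye_center") / 1024.0,     # eye separation
            "ENS": dist("eye_midpoint", "nose_tip") / 1024.0,                # eye-nose separation
            "MNS": dist("nose_tip", "mouth_midpoint") / 1024.0,              # mouth-nose separation
            "MW":  dist("mouth_left_corner", "mouth_right_corner") / 1024.0, # mouth width
        }

    def displacement_to_fap(displacement, fapu):
        # Convert a recovered feature-point displacement (same length unit as the
        # neutral model, e.g. millimetres from the range image) into FAP units.
        return displacement / fapu

    if __name__ == "__main__":
        # Hypothetical neutral-face landmarks in millimetres.
        neutral = {
            "left_eye_center":    (-33.0,  35.0,  0.0),
            "right_eye_center":   ( 33.0,  35.0,  0.0),
            "eye_midpoint":       (  0.0,  35.0,  0.0),
            "nose_tip":           (  0.0,   0.0, 18.0),
            "mouth_midpoint":     (  0.0, -30.0,  5.0),
            "mouth_left_corner":  (-26.0, -30.0,  2.0),
            "mouth_right_corner": ( 26.0, -30.0,  2.0),
        }
        fapu = fapu_from_neutral(neutral)
        # A 2.5 mm lower-lip motion expressed in mouth-nose-separation FAP units.
        print("example FAP value:", displacement_to_fap(2.5, fapu["MNS"]))

In this view, the reported 16 FAP-unit absolute error corresponds to a small fraction of the relevant neutral-face distance, which is consistent with the animations being visually close to those driven by the GT FAPs.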