{"title":"基于纹理的实时结构特征混合","authors":"T. Jebara, Kenneth B. Russell, A. Pentland","doi":"10.1109/ICCV.1998.710710","DOIUrl":null,"url":null,"abstract":"We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face trading algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's abilities to filter and estimate absent facial components are then demonstrated over incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information front stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"74","resultStr":"{\"title\":\"Mixtures of eigenfeatures for real-time structure from texture\",\"authors\":\"T. Jebara, Kenneth B. Russell, A. Pentland\",\"doi\":\"10.1109/ICCV.1998.710710\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face trading algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's abilities to filter and estimate absent facial components are then demonstrated over incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information front stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.\",\"PeriodicalId\":270671,\"journal\":{\"name\":\"Sixth International Conference on Computer Vision (IEEE Cat. 
No.98CH36271)\",\"volume\":\"136 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1998-01-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"74\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCV.1998.710710\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV.1998.710710","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mixtures of eigenfeatures for real-time structure from texture
We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face tracking algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser-scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's abilities to filter and estimate absent facial components are then demonstrated over incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information from stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.
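The abstract's core idea of regressing missing pixels and 3D structure from a learned eigenfeature basis can be illustrated with a minimal sketch. This is not the authors' algorithm: it uses a single eigenspace rather than their soft mixture of eigenfeature selectors, and all function names, shapes, and the toy data are assumptions for illustration only.

```python
# Minimal sketch (single eigenspace, not the paper's mixture model):
# estimate missing components of a face vector by fitting eigenfeature
# coefficients to the observed entries only, then reconstructing all entries.
import numpy as np

def fit_eigenfeatures(X, k):
    """PCA on rows of X (n_samples x d); returns the mean and top-k eigenvectors."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives principal directions directly.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]                # basis: k x d

def reconstruct_missing(x, observed_mask, mean, basis):
    """Estimate the full vector from its observed entries.

    Solves a least-squares fit of the eigenfeature coefficients over the
    observed dimensions, then reconstructs every dimension (including the
    missing ones) from those coefficients.
    """
    A = basis[:, observed_mask].T      # (n_observed x k) restricted basis
    b = x[observed_mask] - mean[observed_mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ basis       # full-length estimate

# Toy usage: 200 training "faces" of dimension 50, 10 eigenfeatures,
# with the last 20 dimensions of a test vector treated as missing.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))
mean, basis = fit_eigenfeatures(X_train, k=10)
x = rng.normal(size=50)
mask = np.ones(50, dtype=bool)
mask[30:] = False
x_hat = reconstruct_missing(x, mask, mean, basis)
```

In the paper, a soft mixture of such eigenfeature selectors is used instead of one global basis, with an iterative procedure choosing how dimensions are partitioned among the eigenfeatures to maximize generalization on a cross-validation set.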