A fast and accurate facial expression synthesis system for color face images using face graph and deep belief network
Maryam Sabzevari, S. Toosizadeh, S. R. Quchani, Vahid Abrishami
2010 International Conference on Electronics and Information Engineering
Published: 2010-09-02 · DOI: 10.1109/ICEIE.2010.5559797
Citations: 10
Abstract
Synthesizing facial expressions quickly and accurately for a given animated avatar or face is a challenging problem. This paper presents a multi-cue methodology for generating facial expressions in real time. The proposed approach first extracts the face graph using a constrained local model (CLM) and generates a shape-based feature vector. It then uses this feature vector to train a three-layer deep belief network. After training, the deep belief network can generate the shape of an ideal facial expression for an input face graph. A post-processing step is then applied to produce the wrinkles and illumination changes associated with that particular facial expression. Employing a small feature vector, rather than a vector containing every pixel of the face image, speeds up both the training and generation phases of the deep belief network and makes it intrinsically suitable for real-time use. In addition, the approach is independent of the input image format and can be applied to various types of images, including color images. The experimental results demonstrate the accuracy of our algorithm.
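The pipeline described above can be sketched in miniature: a shape feature vector (flattened face-graph landmark coordinates) is used to greedily pre-train a stack of three restricted Boltzmann machines, the standard layer-wise construction of a deep belief network. This is an illustrative sketch only, not the authors' implementation; the landmark count (66 points → 132 features), layer sizes, learning rate, and the random stand-in data are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine layer, trained with CD-1."""
    def __init__(self, n_visible, n_hidden, rng):
        self.rng = rng
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.05):
        # One step of contrastive divergence (CD-1)
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)

# Hypothetical shape features: 66 landmarks -> 132-dim vectors,
# here replaced by random values normalized to [0, 1].
rng = np.random.default_rng(0)
n_samples, n_features = 200, 132
X = rng.random((n_samples, n_features))

# Greedy layer-wise pre-training of a 3-layer DBN (sizes are assumptions)
layer_sizes = [n_features, 100, 80, 60]
dbn = []
inp = X
for n_v, n_h in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_v, n_h, rng)
    for _ in range(20):
        rbm.cd1_step(inp)
    dbn.append(rbm)
    inp = rbm.hidden_probs(inp)   # feed activations to the next layer

print(inp.shape)  # (200, 60)
```

Because the feature vector has ~130 dimensions instead of tens of thousands of pixels, each CD-1 update is a small matrix product, which is the speed argument the abstract makes for real-time use.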