{"title":"Fine-grained Micro-Expression Generation based on Thin-Plate Spline and Relative AU Constraint","authors":"Sirui Zhao, Shukang Yin, Huaying Tang, Rijin Jin, Yifan Xu, Tong Xu, Enhong Chen","doi":"10.1145/3503161.3551597","DOIUrl":null,"url":null,"abstract":"As a typical psychological stress reaction, micro-expression (ME) is usually quickly leaked on a human face and can reveal the true feeling and emotional cognition. Therefore,automatic ME analysis (MEA) has essential applications in safety, clinical and other fields. However, the lack of adequate ME data has severely hindered MEA research. To overcome this dilemma and encouraged by current image generation techniques, this paper proposes a fine-grained ME generation method to enhance ME data in terms of data volume and diversity. Specifically, we first estimate non-linear ME motion using thin-plate spline transformation with a dense motion network. Then, the estimated ME motion transformations, including optical flow and occlusion masks, are sent to the generation network to synthesize the target facial micro-expression. In particular, we obtain the relative action units (AUs) of the source ME to the target face as a constraint to encourage the network to ignore expression-irrelevant movements, thereby generating fine-grained MEs. Through comparative experiments on CASME II, SMIC and SAMM datasets, we demonstrate the effectiveness and superiority of our method. Source code is provided in https://github.com/MEA-LAB-421/MEGC2022-Generation.","PeriodicalId":412792,"journal":{"name":"Proceedings of the 30th ACM International Conference on Multimedia","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3503161.3551597","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
As a typical psychological stress reaction, a micro-expression (ME) is usually leaked quickly on the human face and can reveal a person's true feelings and emotional cognition. Therefore, automatic ME analysis (MEA) has important applications in fields such as public safety and clinical practice. However, the lack of adequate ME data has severely hindered MEA research. To overcome this dilemma, and encouraged by recent advances in image generation, this paper proposes a fine-grained ME generation method that augments ME data in both volume and diversity. Specifically, we first estimate non-linear ME motion using thin-plate spline (TPS) transformations predicted by a dense motion network. The estimated ME motion representations, including optical flow and occlusion masks, are then fed to a generation network to synthesize the target facial micro-expression. In particular, we use the action units (AUs) of the source ME relative to the target face as a constraint, encouraging the network to ignore expression-irrelevant movements and thereby generate fine-grained MEs. Through comparative experiments on the CASME II, SMIC, and SAMM datasets, we demonstrate the effectiveness and superiority of our method. Source code is available at https://github.com/MEA-LAB-421/MEGC2022-Generation.
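To make the TPS motion model concrete, the sketch below fits a classical thin-plate spline between two sets of matched facial control points and evaluates the resulting dense displacement field. This is a minimal, standalone NumPy illustration of the underlying transformation, not the authors' implementation: in the paper, TPS parameters are estimated per keypoint group by a dense motion network, and all function and variable names here (fit_tps, tps_warp, etc.) are hypothetical.

```python
# Minimal sketch: classical thin-plate spline (TPS) warping between two sets
# of 2-D control points, as an illustration of the non-linear motion model
# the paper's dense motion network builds on. Illustrative only.
import numpy as np

def tps_radial_basis(r2):
    """TPS kernel U(r) = r^2 log(r^2), with U(0) = 0."""
    return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

def fit_tps(src_pts, dst_pts):
    """Solve for TPS parameters mapping src control points onto dst points.

    src_pts, dst_pts: (K, 2) arrays of matched 2-D control points.
    Returns (w, a): radial weights (K, 2) and affine part (3, 2).
    """
    K = src_pts.shape[0]
    # Pairwise squared distances between source control points.
    d2 = np.sum((src_pts[:, None] - src_pts[None, :]) ** 2, axis=-1)
    U = tps_radial_basis(d2)                     # (K, K) kernel matrix
    P = np.hstack([np.ones((K, 1)), src_pts])    # (K, 3) affine terms [1, x, y]
    # Standard TPS linear system [[U, P], [P^T, 0]] @ sol = [dst; 0].
    A = np.zeros((K + 3, K + 3))
    A[:K, :K] = U
    A[:K, K:] = P
    A[K:, :K] = P.T
    b = np.zeros((K + 3, 2))
    b[:K] = dst_pts
    sol = np.linalg.solve(A, b)
    return sol[:K], sol[K:]                      # w: (K, 2), a: (3, 2)

def tps_warp(points, src_pts, w, a):
    """Apply the fitted TPS to arbitrary query points (N, 2)."""
    d2 = np.sum((points[:, None] - src_pts[None, :]) ** 2, axis=-1)
    U = tps_radial_basis(d2)                     # (N, K)
    affine = a[0] + points @ a[1:]               # constant + linear terms
    return affine + U @ w

# Toy usage: nudge keypoints near the mouth by a few pixels (normalized
# coordinates) to mimic a subtle, micro-expression-scale deformation, then
# read off the dense displacement field over an 8x8 grid.
src = np.array([[0.3, 0.7], [0.7, 0.7], [0.5, 0.8], [0.2, 0.2], [0.8, 0.2]])
dst = src + np.array([[0.01, 0.0], [-0.01, 0.0], [0.0, 0.02], [0, 0], [0, 0]])
w, a = fit_tps(src, dst)
ys, xs = np.mgrid[0:1:8j, 0:1:8j]
grid = np.stack([xs.ravel(), ys.ravel()], axis=-1)
flow = tps_warp(grid, src, w, a) - grid          # dense displacement field
print(flow.shape, np.abs(flow).max())
```

A TPS interpolates the control-point displacements exactly while minimizing bending energy, so it captures the small, smooth, non-linear deformations characteristic of micro-expressions better than a single affine warp; in the paper such dense flow, together with occlusion masks, is what the generation network consumes.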