Generation of Compound Emotions Expressions with Emotion Generative Adversarial Networks (EmoGANs)

Authors: Win Shwe Sin Khine, Prarinya Siritanawan, K. Kotani
Venue: 2020 59th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)
Published: 2020-09-23
DOI: https://doi.org/10.23919/SICE48898.2020.9240306
Citations: 2
Abstract
Facial expressions of human emotions play an essential role in gaining insights into human cognition and are crucial for designing human-computer interaction models. Although human emotional states are not limited to the basic emotions of happiness, sadness, anger, fear, disgust, and surprise, most current research focuses on those basic emotions. In this study, we propose a new methodology to create facial expressions of compound emotions by combining those of basic emotions. In our experiments, we train our proposed model, the Emotion Generative Adversarial Network (EmoGANs), in both unsupervised and supervised manners to improve the quality of the generated images. To demonstrate the effectiveness of the proposed method, we use the Extended Cohn-Kanade Dataset (CK+) and the Japanese Female Facial Expressions Dataset (JAFFE) as inputs and visualize the images generated by our proposed EmoGANs. In the experiments, the proposed methodology can manipulate basic facial expressions to create facial expressions of compound emotions.
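The abstract describes conditioning a generator on combinations of basic-emotion labels to produce compound expressions. The paper's implementation is not reproduced here; the following is only a minimal toy sketch of that conditioning idea, in which the label-blending scheme, network shape, and all function names are illustrative assumptions rather than the authors' actual EmoGANs architecture.

```python
import numpy as np

# Hypothetical sketch -- not the authors' code. A compound-emotion label is
# assumed here to be an equal blend of two basic-emotion one-hot vectors,
# and the "generator" is a single random linear map with a tanh output.
BASIC = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def compound_label(a: str, b: str) -> np.ndarray:
    """Blend two basic emotions into one compound label vector (assumed scheme)."""
    v = np.zeros(len(BASIC))
    v[BASIC.index(a)] = 0.5
    v[BASIC.index(b)] = 0.5
    return v

rng = np.random.default_rng(0)
LATENT_DIM = 16
# Toy "generator" weights mapping (label + noise) to a 64x64 grayscale image.
W = rng.standard_normal((64 * 64, len(BASIC) + LATENT_DIM))

def generate(label: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Map latent noise conditioned on an emotion label to an image in [-1, 1]."""
    z = rng.standard_normal(LATENT_DIM)   # latent noise vector
    x = np.concatenate([label, z])        # condition generation on the label
    return np.tanh(W @ x).reshape(64, 64)

# e.g. a "happily surprised" compound expression:
img = generate(compound_label("happiness", "surprise"), rng)
```

In a real conditional GAN this linear map would be a trained deconvolutional generator paired with a discriminator, but the label-concatenation step above is the part that corresponds to steering generation toward a compound emotion.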