Foreground Extraction Based Facial Emotion Recognition Using Deep Learning Xception Model

Alwin Poulose, Chinthala Sreya Reddy, Jung Hwan Kim, Dong Seog Han
{"title":"Foreground Extraction Based Facial Emotion Recognition Using Deep Learning Xception Model","authors":"Alwin Poulose, Chinthala Sreya Reddy, Jung Hwan Kim, Dong Seog Han","doi":"10.1109/ICUFN49451.2021.9528706","DOIUrl":null,"url":null,"abstract":"The facial emotion recognition (FER) system has a very significant role in the autonomous driving system (ADS). In ADS, the FER system identifies the driver's emotions and provides the current driver's mental status for safe driving. The driver's mental status determines the safety of the vehicle and prevents the chances of road accidents. In FER, the system identifies the driver's emotions such as happy, sad, angry, surprise, disgust, fear, and neutral. To identify these emotions, the FER system needs to train with large FER datasets and the system's performance completely depends on the type of the FER dataset used in the model training. The recent FER system uses publicly available datasets such as FER 2013, extended Cohn-Kanade (CK+), AffectNet, JAFFE, etc. for model training. However, the model trained with these datasets has some major flaws when the system tries to extract the FER features from the datasets. To address the feature extraction problem in the FER system, in this paper, we propose a foreground extraction technique to identify the user emotions. The proposed foreground extraction-based FER approach accurately extracts the FER features and the deep learning model used in the system effectively utilizes these features for model training. The model training with our FER approach shows accurate classification results than the conventional FER approach. To validate our proposed FER approach, we collected user emotions from 9 people and used the Xception architecture as the deep learning model. From the FER experiment and result analysis, the proposed foreground extraction-based approach reduces the classification error that exists in the conventional FER approach. The FER results from the proposed approach show a 3.33% model accuracy improvement than the conventional FER approach.","PeriodicalId":318542,"journal":{"name":"2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUFN49451.2021.9528706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

The facial emotion recognition (FER) system plays a significant role in the autonomous driving system (ADS). In an ADS, the FER system identifies the driver's emotions and reports the driver's current mental status to support safe driving. The driver's mental status affects the safety of the vehicle and can help prevent road accidents. In FER, the system identifies driver emotions such as happy, sad, angry, surprise, disgust, fear, and neutral. To identify these emotions, the FER system needs to be trained on large FER datasets, and its performance depends heavily on the dataset used for model training. Recent FER systems use publicly available datasets such as FER2013, the extended Cohn-Kanade (CK+) dataset, AffectNet, and JAFFE for model training. However, models trained on these datasets have major flaws when the system tries to extract FER features from the data. To address this feature extraction problem, in this paper we propose a foreground extraction technique to identify user emotions. The proposed foreground extraction-based FER approach accurately extracts the FER features, and the deep learning model used in the system effectively exploits these features during training. Model training with our FER approach yields more accurate classification results than the conventional FER approach. To validate the proposed approach, we collected emotion data from 9 people and used the Xception architecture as the deep learning model. The FER experiments and result analysis show that the proposed foreground extraction-based approach reduces the classification error present in the conventional FER approach and improves model accuracy by 3.33% over the conventional FER approach.
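
The paper does not publish its implementation, but the pipeline it describes (foreground extraction on each face image, followed by a seven-class Xception classifier) can be sketched as follows. This is a minimal illustration under stated assumptions: the GrabCut-based foreground step, the assumed region-of-interest rectangle, the 299x299 input size, and the training hyperparameters are illustrative choices, not the authors' code.

```python
# Illustrative sketch: foreground extraction + Xception-based 7-class FER.
# The foreground step (GrabCut) and all hyperparameters are assumptions.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["happy", "sad", "angry", "surprise", "disgust", "fear", "neutral"]
IMG_SIZE = 299  # Xception's default input resolution


def extract_foreground(bgr_image):
    """Suppress the background with GrabCut so only the foreground (face)
    region is passed to the classifier; one plausible reading of the
    paper's foreground extraction step, not the authors' exact method."""
    h, w = bgr_image.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    rect = (int(0.1 * w), int(0.1 * h), int(0.8 * w), int(0.8 * h))  # assumed ROI
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       1, 0).astype("uint8")
    return bgr_image * fg_mask[:, :, None]  # zero out background pixels


def preprocess(bgr_image):
    """Foreground extraction + Xception preprocessing for one frame."""
    fg = extract_foreground(bgr_image)
    rgb = cv2.cvtColor(cv2.resize(fg, (IMG_SIZE, IMG_SIZE)), cv2.COLOR_BGR2RGB)
    return tf.keras.applications.xception.preprocess_input(rgb.astype("float32"))


def build_model(num_classes=len(EMOTIONS)):
    """Xception backbone (ImageNet weights) with a small classification head."""
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=(IMG_SIZE, IMG_SIZE, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In use, each collected frame would be passed through preprocess() and labeled with one of the seven emotion classes before calling model.fit(); the GrabCut step could equally be replaced by a face-detector crop or another segmentation method, since the paper only specifies that the background is removed before feature extraction.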