Enhancing Feature Extraction Technique Through Spatial Deep Learning Model for Facial Emotion Detection

Nizamuddin Khan, A. Singh, Rajeev Agrawal

DOI: 10.33166/aetic.2023.02.002
Journal: Annals of Emerging Technologies in Computing (JCR Q2, Computer Science)
Published: 1 April 2023
Citations: 3

Abstract

Automatic facial expression analysis is a fascinating and difficult subject with implications for a wide range of fields, including human–computer interaction and data-driven applications. A variety of techniques are employed to identify emotions from facial traits. This article examines recent explorations of automatic data-driven and handcrafted approaches for recognising facial emotions. These approaches offer computationally complex solutions that achieve good accuracy when training and testing are conducted on the same dataset, but they perform less well on the most difficult realistic dataset, FER-2013. The article's goal is to present a robust model with lower computational complexity that predicts emotion classes more accurately than current methods and helps move toward a realistic, general-purpose facial expression system. A crucial step in accurate facial expression identification is extracting appropriate features from face images. In this paper, we examine how well-known deep learning techniques perform at facial expression recognition and propose a convolutional neural network-based, enhanced spatial deep learning model that extracts the most relevant features with lower computational complexity. The model gives a significant improvement on the most challenging dataset, FER-2013, which suffers from occlusion, scale, and illumination variations, yielding strong feature extraction and classification and an accuracy of 74.92%. It also achieves 99.47% and 98.5% correct emotion prediction on large numbers of samples from the CK+ and FERG datasets, respectively. The model is able to focus on the major features of the face and achieves greater accuracy than previous approaches.
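For readers who want to experiment with this kind of pipeline, the sketch below shows a minimal Keras convolutional network for FER-2013-style input (48x48 grayscale images, seven emotion classes). It is an illustrative baseline only: the layer sizes, the `build_fer_cnn` helper, and the training configuration are assumptions for demonstration, not the authors' enhanced spatial model, whose architecture is not detailed in the abstract.

```python
# Minimal sketch (not the authors' architecture): a compact CNN for
# FER-2013-style input -- 48x48 grayscale images, 7 emotion classes.
# Layer sizes and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional blocks extract spatial facial features
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Dropout(0.4),
        # Classifier head maps the pooled features to emotion classes
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Print the layer summary; training on FER-2013 would use
    # one-hot labels and model.fit on the prepared image arrays.
    build_fer_cnn().summary()
```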
Source Journal

Annals of Emerging Technologies in Computing (Computer Science, all)
CiteScore: 3.50
Self-citation rate: 0.00%
Articles published: 26