Facial Expression Recognition based on Image Gradient and Deep Convolutional Neural Network

M. R. Fallahzadeh, F. Farokhi, A. Harimi, R. Sabbaghi‐Nadooshan
{"title":"Facial Expression Recognition based on Image Gradient and Deep Convolutional Neural Network","authors":"M. R. Fallahzadeh, F. Farokhi, A. Harimi, R. Sabbaghi‐Nadooshan","doi":"10.22044/JADM.2021.9898.2121","DOIUrl":null,"url":null,"abstract":"Facial Expression Recognition (FER) is one of the basic ways of interacting with machines and has been getting more attention in recent years. In this paper, a novel FER system based on a deep convolutional neural network (DCNN) is presented. Motivated by the powerful ability of DCNN to learn features and image classification, the goal of this research is to design a compatible and discriminative input for pre-trained AlexNet-DCNN. The proposed method consists of 4 steps: first, extracting three channels of the image including the original gray-level image, in addition to horizontal and vertical gradients of the image similar to the red, green, and blue color channels of an RGB image as the DCNN input. Second, data augmentation including scale, rotation, width shift, height shift, zoom, horizontal flip, and vertical flip of the images are prepared in addition to the original images for training the DCNN. Then, the AlexNet-DCNN model is applied to learn high-level features corresponding to different emotion classes. Finally, transfer learning is implemented on the proposed model and the presented model is fine-tuned on target datasets. The average recognition accuracy of 92.41% and 93.66% were achieved for JAFEE and CK+ datasets, respectively. Experimental results on two benchmark emotional datasets show promising performance of the proposed model that can improve the performance of current FER systems.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence and Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22044/JADM.2021.9898.2121","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Facial Expression Recognition (FER) is one of the basic ways of interacting with machines and has received increasing attention in recent years. In this paper, a novel FER system based on a deep convolutional neural network (DCNN) is presented. Motivated by the powerful feature-learning and image-classification ability of DCNNs, the goal of this research is to design a compatible and discriminative input for the pre-trained AlexNet-DCNN. The proposed method consists of four steps. First, three channels are extracted from the image: the original gray-level image together with its horizontal and vertical gradients, analogous to the red, green, and blue channels of an RGB image, and used as the DCNN input. Second, augmented images (scaling, rotation, width shift, height shift, zoom, horizontal flip, and vertical flip) are prepared in addition to the original images for training the DCNN. Then, the AlexNet-DCNN model is applied to learn high-level features corresponding to the different emotion classes. Finally, transfer learning is applied and the presented model is fine-tuned on the target datasets. Average recognition accuracies of 92.41% and 93.66% were achieved on the JAFFE and CK+ datasets, respectively. Experimental results on these two benchmark emotion datasets show the promising performance of the proposed model, which can improve on current FER systems.
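
As an illustration of the input construction, augmentation, and transfer-learning setup described above, the sketch below builds the three-channel gradient input and attaches a new classification head to an ImageNet-pretrained AlexNet. This is a minimal sketch assuming OpenCV, PyTorch, and torchvision; the Sobel gradient operator, kernel size, augmentation ranges, 224x224 input size, and seven-class output layer are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch under the assumptions stated above, not the authors' released code.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

def gradient_channels(gray_face: np.ndarray) -> np.ndarray:
    """Stack the gray-level image with its horizontal and vertical gradients
    as three channels, analogous to the R, G, B channels of an RGB image."""
    gx = cv2.Sobel(gray_face, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient (Sobel assumed)
    gy = cv2.Sobel(gray_face, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    chans = []
    for c in (gray_face.astype(np.float32), gx, gy):
        c = c - c.min()                       # normalize each channel to [0, 255]
        c = 255.0 * c / (c.max() + 1e-8)
        chans.append(c)
    img = np.stack(chans, axis=-1).astype(np.uint8)
    return cv2.resize(img, (224, 224))        # torchvision's AlexNet expects 224x224 input

# Data augmentation along the lines listed in the abstract (rotation, width/height
# shift, zoom/scale, horizontal and vertical flips); parameter ranges are assumed.
augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])

# Transfer learning: start from ImageNet-pretrained AlexNet and replace the final
# fully connected layer with a classifier for the emotion classes (7 assumed here),
# then fine-tune on the target FER dataset (JAFFE or CK+).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 7)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

In a fine-tuning setup of this kind, the pre-trained convolutional layers can be frozen for the first epochs and only the new classifier trained, before unfreezing the full network with a small learning rate on the target dataset.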