Analyzing the Influence of Different Activation Functions Based on Deep Learning Model for Facial Expression Recognition

Tian Xia
DOI: 10.1109/AIAM57466.2022.00143
Published in: 2022 4th International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM), October 2022
Citations: 0

Abstract

Facial expressions are an important channel through which people communicate their emotions. To recognize facial expressions more accurately, researchers continue to explore the possibilities of convolutional neural networks. For convolutional neural network models, many factors, including structure and parameters, can significantly affect performance. In this paper, we analyze the impact of different activation functions on a deep learning model for facial expression recognition using the FER-2013 dataset, compare the advantages and disadvantages of traditional and newer activation functions, and finally build a facial expression recognition model with better performance. In addition to the baseline CNN model, the paper also analyzes the performance of well-known deep learning models such as ResNet, VGG and Inception, from which the best-performing baseline CNN model is selected to explore the impact of different activation functions. The results show that among the activation functions considered (ReLU, Leaky ReLU/PReLU, Swish, etc.), the GELU-based facial expression model achieves the best performance and the highest recognition accuracy. Compared with a model using the traditional ReLU activation function, the GELU-based model constructed in this paper improves accuracy by approximately 1%.
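The abstract compares ReLU, Leaky ReLU/PReLU, Swish, and GELU. As a minimal sketch (not the paper's implementation), the scalar definitions of these activations can be written in plain Python; the GELU here uses its exact form via the Gaussian CDF, and the `alpha`/`beta` defaults are common conventions, not values taken from the paper:

```python
import math

def relu(x):
    # ReLU: max(0, x); outputs zero (and zero gradient) for negative inputs
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: keeps a small slope alpha on the negative side,
    # avoiding "dead" units; PReLU learns alpha instead of fixing it
    return x if x > 0 else alpha * x

def gelu(x):
    # GELU: x * Phi(x), where Phi is the standard normal CDF,
    # giving a smooth, non-monotonic alternative to ReLU
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x), another smooth ReLU-like function
    return x / (1.0 + math.exp(-beta * x))
```

Unlike ReLU, which zeroes all negative inputs, GELU and Swish pass small negative values through with a smooth attenuation, which is one commonly cited reason such activations can improve accuracy in deep CNNs.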