Advanced Deep Convolution Based Jellyfish VGG-19 Model for Face Emotion Recognition

IF 2.5 · CAS Zone 4 (Computer Science) · JCR Q3 (Telecommunications)
P. V. V. S. Srinivas, Gayathri Kota, Bhavitha Kola, Jahnavi Durga Tirumani, Dunti Sarath Sai Chowdary Kantamneni
Transactions on Emerging Telecommunications Technologies, Vol. 36, No. 6. Published 2025-06-04. DOI: 10.1002/ett.70176 (https://onlinelibrary.wiley.com/doi/10.1002/ett.70176)
Citations: 0

Abstract

For many applications, facial emotion recognition (FER) is an essential yet unresolved task. Traditionally, artificial intelligence methods such as convolutional neural networks have been used to recognize emotions; however, this approach is costly in terms of complexity and processing time. To address this issue, an optimization-based deep convolutional network that uses an attention-based DenseNet-264 for feature extraction is presented. In the first step, the images are pre-processed with image resizing and equalized joint histogram-based contrast enhancement (Eq-JH-CE) to improve image quality. Next, an enhanced attention-based DenseNet-264 architecture is developed for feature extraction, which helps improve classification accuracy. Finally, the extracted features are fed to the Advanced Deep Convolution based Jellyfish VGG-19 model (DeepCon_JVGG-19) to classify facial emotions: anger, disgust, fear, happiness, neutral, sadness, and surprise. Jellyfish Optimization is used to fine-tune the model's parameters and increase classification performance. The model is implemented in Python, and its performance is evaluated on the JAFFE and FER-2013 datasets. The experimental analysis demonstrates the strength of the proposed approach, which attains 98.5% accuracy.
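The abstract does not reproduce the paper's code or its exact hyperparameter-tuning setup. As an illustrative sketch only, the NumPy snippet below implements the Jellyfish Search Optimizer (the metaheuristic behind "Jellyfish Optimization", Chou & Truong, 2021) in its commonly published form — time-control function, ocean-current motion, and passive/active swarm motions with β = 3 and γ = 0.1 — applied to a toy sphere function rather than VGG-19 parameters. The function name `jellyfish_search` and the greedy-replacement rule are assumptions for this sketch, not necessarily the paper's variant.

```python
import numpy as np

def jellyfish_search(f, lb, ub, dim=2, pop=30, iters=200, seed=0):
    """Minimal Jellyfish Search Optimizer sketch: minimizes f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((pop, dim))      # initial swarm positions
    fit = np.apply_along_axis(f, 1, X)
    best_i = int(fit.argmin())
    best, best_fit = X[best_i].copy(), fit[best_i]

    for t in range(iters):
        # Time-control function: large early (ocean current), small late (swarm motion)
        c = abs((1 - t / iters) * (2 * rng.random() - 1))
        for i in range(pop):
            if c >= 0.5:
                # Ocean-current motion toward the best, away from the swarm mean (beta = 3)
                trend = best - 3.0 * rng.random() * X.mean(axis=0)
                Xn = X[i] + rng.random(dim) * trend
            elif rng.random() > 1 - c:
                # Passive (type A) motion: small random step scaled by gamma = 0.1
                Xn = X[i] + 0.1 * rng.random(dim) * (ub - lb)
            else:
                # Active (type B) motion: move toward a fitter neighbor, away from a worse one
                j = rng.integers(pop)
                step = X[j] - X[i] if fit[j] < fit[i] else X[i] - X[j]
                Xn = X[i] + rng.random(dim) * step
            Xn = np.clip(Xn, lb, ub)                 # keep candidates inside the box
            fn = f(Xn)
            if fn < fit[i]:                          # greedy replacement
                X[i], fit[i] = Xn, fn
                if fn < best_fit:
                    best, best_fit = Xn.copy(), fn
    return best, best_fit

# Toy check on the sphere function (global minimum 0 at the origin)
best, best_fit = jellyfish_search(lambda x: float((x ** 2).sum()), -5.0, 5.0)
```

In the paper's setting the objective would instead score a candidate VGG-19 configuration (e.g. by validation accuracy), with each jellyfish encoding one parameter vector.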

Source journal: Transactions on Emerging Telecommunications Technologies. CiteScore: 8.90 · Self-citation rate: 13.90% · Articles published: 249
Journal description: Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT), has the following aims: - to attract cutting-edge publications from leading researchers and research groups around the world - to become a highly cited source of timely research findings in emerging fields of telecommunications - to limit revision and publication cycles to a few months and thus significantly increase attractiveness to publish - to become the leading journal for publishing the latest developments in telecommunications