Deep Learning-Based Human Emotion Detection Framework Using Facial Expressions

Jie Hou
{"title":"Deep Learning-Based Human Emotion Detection Framework Using Facial Expressions","authors":"Jie Hou","doi":"10.1142/s0219265921410188","DOIUrl":null,"url":null,"abstract":"Automatic recognition of facial expression is an emerging study in the recognition of emotions. Emotion plays a significant role in understanding people and is usually related to sound decisions, behaviors, human activities, and intellect. The scientific community needs accurate and deployable technologies to understand human beings’ emotional states to establish practical and emotional interactions between human beings and machines. In the paper, a deep learning-based human emotion detection framework (DL-HEDF) has been proposed to evaluate the probability of digital representation, identification, and estimation of feelings. The proposed DL-HEDF analyzes the impact of emotional models on multimodal identification. The paper introduces emerging works that use existing methods like convolutional neural networks (CNN) for human emotion identification based on language, sound, image, video, and physiological signals. The proposed emphasis on the province study illustrates the shape and display of sample size emotional stimulation. While the findings obtained are not a province, the evidence collected indicates that deep learning could be sufficient to classify face emotion. Deep learning can enhance interaction with people because it allows computers to acquire perception by learning characteristics. And by perception, robots can offer better responses, enhancing the user experience dramatically. Six basic emotional levels have been successfully classified. The suggested way of recognizing emotions has then proven effective. The output results are obtained as an analysis of the ratio of the facial expression of 87.16%, accuracy evaluation ratio being 88.7%, improving facial recognition ratio is 84.5%, and the expression intensity ratio is 82.2%. The emotional simulation ratio is 93.0%.","PeriodicalId":153590,"journal":{"name":"J. Interconnect. Networks","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Interconnect. Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s0219265921410188","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Automatic recognition of facial expressions is an emerging area of emotion recognition research. Emotion plays a significant role in understanding people and is usually related to sound decisions, behaviors, human activities, and intellect. The scientific community needs accurate and deployable technologies for understanding human emotional states in order to establish practical and emotionally aware interaction between humans and machines. In this paper, a deep learning-based human emotion detection framework (DL-HEDF) is proposed to evaluate the feasibility of digitally representing, identifying, and estimating emotional states. The proposed DL-HEDF analyzes the impact of emotional models on multimodal identification. The paper reviews emerging work that applies existing methods such as convolutional neural networks (CNNs) to human emotion identification from language, sound, images, video, and physiological signals. The study also illustrates how the form and presentation of the emotional stimuli relate to the sample size. While the findings obtained are not definitive, the evidence collected indicates that deep learning could be sufficient to classify facial emotion. Deep learning can enhance interaction with people because it allows computers to acquire perception by learning features; with this perception, robots can offer better responses, substantially improving the user experience. Six basic emotion classes are successfully distinguished, and the proposed approach to emotion recognition proves effective. The reported results are a facial expression analysis ratio of 87.16%, an accuracy evaluation ratio of 88.7%, an improved facial recognition ratio of 84.5%, an expression intensity ratio of 82.2%, and an emotional simulation ratio of 93.0%.
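The abstract does not specify the exact network used in DL-HEDF, so the sketch below is only an illustrative example of the general technique it describes: a small convolutional neural network that maps a face crop to one of six basic emotion classes. The class names, the 48x48 grayscale input size, the layer widths, and the name EmotionCNN are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch, NOT the paper's DL-HEDF architecture (which the abstract does
# not describe): a small CNN classifying 48x48 grayscale face crops into six
# basic emotion classes. Layer sizes and the label set are assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]  # assumed label set

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Usage: classify a batch of face crops already normalised to [0, 1].
model = EmotionCNN()
faces = torch.rand(4, 1, 48, 48)            # placeholder batch of 4 grayscale faces
probs = torch.softmax(model(faces), dim=1)  # per-class probabilities
print(probs.argmax(dim=1))                  # predicted emotion index per face
```

In practice such a classifier would be trained with a cross-entropy loss on labeled face crops produced by a face detector; the multimodal aspects mentioned in the abstract (sound, language, physiological signals) would require additional, modality-specific branches not shown here.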