Multimodal information fusion method in emotion recognition in the background of artificial intelligence

IF 0.9 Q4 TELECOMMUNICATIONS
Zhen Dai, Hongxiao Fei, Chunyan Lian
{"title":"Multimodal information fusion method in emotion recognition in the background of artificial intelligence","authors":"Zhen Dai,&nbsp;Hongxiao Fei,&nbsp;Chunyan Lian","doi":"10.1002/itl2.520","DOIUrl":null,"url":null,"abstract":"<p>Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, formed through innate learning and communication, are often revealed through speech and facial expressions. In response, this study proposes a hidden Markov model-based multimodal fusion emotion detection system, combining speech recognition with facial expressions to enhance emotion recognition rates. The integration of such emotion recognition systems with Semantic IoT data can offer unprecedented insights into human behavior and sentiment analysis, contributing to the advancement of data integration techniques in the context of the Internet of Things. Experimental findings indicate that in single-modal emotion detection, speech recognition achieves a 76% accuracy rate, while facial expression recognition achieves 78%. However, when state information fusion is applied, the recognition rate increases to 95%, surpassing the national average by 19% and 17% for speech and facial expressions, respectively. 
This demonstrates the effectiveness of multimodal fusion in emotion recognition, leading to higher recognition rates and reduced workload compared to single-modal approaches.</p>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":null,"pages":null},"PeriodicalIF":0.9000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Technology Letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/itl2.520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, shaped by innate disposition, learning, and communication, are often revealed through speech and facial expressions. Accordingly, this study proposes a hidden-Markov-model-based multimodal fusion emotion detection system that combines speech recognition with facial-expression recognition to improve emotion recognition rates. Integrating such emotion recognition systems with Semantic IoT data can offer unprecedented insight into human behavior and sentiment analysis, advancing data-integration techniques for the Internet of Things. Experimental findings indicate that in single-modal emotion detection, speech recognition achieves 76% accuracy and facial-expression recognition achieves 78%. When state-information fusion is applied, the recognition rate rises to 95%, exceeding the single-modal speech and facial-expression rates by 19 and 17 percentage points, respectively. This demonstrates the effectiveness of multimodal fusion in emotion recognition, yielding higher recognition rates and reduced workload compared with single-modal approaches.
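The abstract does not specify the paper's exact fusion rule, but the described approach (per-modality recognizers whose outputs are combined at the decision level) can be sketched as follows. This is a minimal illustration, assuming each modality's recognizer (e.g. an HMM scored on speech features or on a facial-expression sequence) emits a log-likelihood per emotion class; the emotion labels, scores, and equal weights below are hypothetical, not taken from the paper.

```python
# Hedged sketch: decision-level fusion of per-modality emotion scores.
# Assumption: each modality yields a log-likelihood per emotion class,
# e.g. from an HMM run over that modality's feature sequence.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

def fuse(speech_loglik, face_loglik, w_speech=0.5, w_face=0.5):
    """Combine per-emotion log-likelihoods from two modalities via a
    weighted sum, then pick the highest-scoring emotion."""
    fused = {
        e: w_speech * speech_loglik[e] + w_face * face_loglik[e]
        for e in EMOTIONS
    }
    return max(fused, key=fused.get), fused

# Illustrative scores (made-up numbers, not results from the paper):
speech = {"happy": -12.1, "sad": -15.3, "angry": -11.8, "neutral": -13.0}
face = {"happy": -9.5, "sad": -14.2, "angry": -12.7, "neutral": -10.1}

label, scores = fuse(speech, face)
print(label)  # -> happy (highest fused score: -10.8)
```

A weighted sum of log-likelihoods is one common fusion choice; it lets a more reliable modality dominate by adjusting the weights, which is consistent with the abstract's finding that fusing the two modalities outperforms either one alone.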
