{"title":"Multimodal information fusion method in emotion recognition in the background of artificial intelligence","authors":"Zhen Dai, Hongxiao Fei, Chunyan Lian","doi":"10.1002/itl2.520","DOIUrl":null,"url":null,"abstract":"<p>Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, formed through innate learning and communication, are often revealed through speech and facial expressions. In response, this study proposes a hidden Markov model-based multimodal fusion emotion detection system, combining speech recognition with facial expressions to enhance emotion recognition rates. The integration of such emotion recognition systems with Semantic IoT data can offer unprecedented insights into human behavior and sentiment analysis, contributing to the advancement of data integration techniques in the context of the Internet of Things. Experimental findings indicate that in single-modal emotion detection, speech recognition achieves a 76% accuracy rate, while facial expression recognition achieves 78%. However, when state information fusion is applied, the recognition rate increases to 95%, surpassing the national average by 19% and 17% for speech and facial expressions, respectively. This demonstrates the effectiveness of multimodal fusion in emotion recognition, leading to higher recognition rates and reduced workload compared to single-modal approaches.</p>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":null,"pages":null},"PeriodicalIF":0.9000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Technology Letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/itl2.520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Abstract
Recent advances in Semantic IoT data integration have highlighted the importance of multimodal fusion in emotion recognition systems. Human emotions, shaped by innate disposition as well as learning and communication, are often revealed through speech and facial expressions. Accordingly, this study proposes a hidden Markov model-based multimodal fusion emotion detection system that combines speech-based emotion recognition with facial expression recognition to raise the overall recognition rate. Integrating such emotion recognition systems with Semantic IoT data can offer new insights into human behavior and sentiment analysis, contributing to the advancement of data integration techniques in the context of the Internet of Things. Experimental findings indicate that in single-modal emotion detection, speech recognition achieves a 76% accuracy rate, while facial expression recognition achieves 78%. When state information fusion is applied, the recognition rate increases to 95%, exceeding the single-modal speech and facial expression rates by 19 and 17 percentage points, respectively. This demonstrates the effectiveness of multimodal fusion in emotion recognition, yielding higher recognition rates and reduced workload compared to single-modal approaches.
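To make the fusion idea concrete, below is a minimal sketch (not the authors' implementation) of HMM-based multimodal emotion recognition: one Gaussian HMM is trained per emotion and per modality, and the per-modality log-likelihoods are combined by a weighted sum (a simple decision-level fusion rule standing in for the paper's state information fusion) before selecting the most likely emotion. The emotion label set, feature shapes, fusion weight, and helper names are all illustrative assumptions, and the hmmlearn library is assumed to be available.

```python
# Illustrative sketch of HMM-based multimodal emotion fusion.
# Assumptions: features are precomputed per frame for each modality,
# and log-likelihood fusion approximates the paper's fusion scheme.
import numpy as np
from hmmlearn.hmm import GaussianHMM

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def train_models(train_data, n_states=4):
    """train_data: {emotion: {"speech": X, "face": X}} where each X is
    an array of shape (n_frames, n_features).
    Returns {emotion: {modality: fitted GaussianHMM}}."""
    models = {}
    for emo, modalities in train_data.items():
        models[emo] = {}
        for name, X in modalities.items():
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag")
            hmm.fit(X)  # one HMM per (emotion, modality) pair
            models[emo][name] = hmm
    return models

def fuse_and_classify(models, speech_feats, face_feats, w_speech=0.5):
    """Weighted sum of per-modality log-likelihoods (decision-level
    fusion); returns the argmax emotion and all fused scores."""
    scores = {}
    for emo, m in models.items():
        ll_speech = m["speech"].score(speech_feats)
        ll_face = m["face"].score(face_feats)
        scores[emo] = w_speech * ll_speech + (1.0 - w_speech) * ll_face
    return max(scores, key=scores.get), scores
```

In this sketch the fusion weight w_speech would be tuned on a validation set; the intuition matching the reported results is that when one modality is ambiguous (e.g., a neutral face with agitated speech), the other modality's likelihood can still pull the fused score toward the correct emotion.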