Multimodal Sentiment Sensing and Emotion Recognition Based on Cognitive Computing Using Hidden Markov Model with Extreme Learning Machine

Diksha Verma, Sweta Kumari Barnwal, Amit Barve, M. J. Kannan, Rajesh Gupta, R. Swaminathan
DOI: 10.17762/ijcnis.v14i2.5496
Journal: Int. J. Commun. Networks Inf. Secur.
Published: 2022-09-10
Citations: 1

Abstract

In today's competitive business environment, the exponential growth of multimodal content produces massive amounts of unstructured data. Unstructured big data has no specific format or organisation and can take any form, including text, audio, photos, and video. According to the literature, recognising different emotions generally requires many assumptions and algorithms, and most emotion-recognition work focuses on a single modality, such as voice, facial expression, or biosignals. This paper proposes a novel technique for multimodal sentiment sensing and emotion recognition using artificial intelligence. Audio and visual data were collected from social media reviews and classified using a hidden Markov model based extreme learning machine (HMM_ExLM), which is also used to train the features. Simultaneously, the speech emotion features are suitably maximised. For expression photographs, the study employs a region-splitting strategy, assigning a different weight to each facial region for feature extraction. Speech and facial-expression data are then merged by decision-level fusion, and the speech properties associated with each expression in a facial region are used for classification. Experimental results show that combining speech and expression features greatly improves performance compared with using either modality alone. A parametric comparison was made in terms of accuracy, recall, precision, and optimisation level.
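The decision-level fusion described in the abstract can be sketched as a weighted combination of per-modality class scores. The following is a minimal illustration, not the paper's implementation: the emotion labels, example scores, and equal modality weights are all assumptions made for the sketch.

```python
def decision_level_fusion(speech_probs, face_probs, w_speech=0.5, w_face=0.5):
    """Fuse per-class scores from two modality classifiers with a
    weighted sum and return (winning class index, fused scores)."""
    fused = [w_speech * s + w_face * f
             for s, f in zip(speech_probs, face_probs)]
    return fused.index(max(fused)), fused

# Hypothetical per-class scores for labels ["happy", "sad", "angry"]
speech = [0.2, 0.5, 0.3]   # output of the audio (speech) classifier
face   = [0.6, 0.1, 0.3]   # output of the facial-expression classifier

label, fused = decision_level_fusion(speech, face)
# With equal weights, fused = [0.4, 0.3, 0.3], so class 0 ("happy") wins.
```

In practice the modality weights would be tuned on validation data, which is one way such a fusion can outperform either single-modality classifier alone.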