Multimodal Visual Question Answering Model Enhanced with Image Emotional Information

Q3 Arts and Humanities
Pub Date: 2023-03-01 | DOI: 10.1109/ICNLP58431.2023.00056
Jin Cai, Guoyong Cai
{"title":"图像情感信息增强的多模态视觉问答模型","authors":"Jin Cai, Guoyong Cai","doi":"10.1109/ICNLP58431.2023.00056","DOIUrl":null,"url":null,"abstract":"Visual Question Answering is a multimedia understanding task that gives an image and natural language questions related to its content and allows the computer to answer them correctly. The early visual question answering models often ignore the emotional information in the image, resulting in insufficient performance in answering emotional-related questions; on the other hand, the existing visual question answering models that integrate emotional information do not make full use of the key areas of the image and text keywords, and do not understand fine-grained questions deeply enough, resulting in low accuracy. In order to fully integrate image emotional information into the visual question answering model and enhance the ability of the model to answer questions, a multimodal visual question answering model (IEMVQA) enhanced by image emotional information is proposed, and experiments are carried out on the visual question answering benchmark dataset. The final experiment shows that the IEMVQA model performs better than other comparison methods in comprehensive indicators, and verifies the effectiveness of using emotional information to assist visual question answering model.","PeriodicalId":53637,"journal":{"name":"Icon","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal Visual Question Answering Model Enhanced with Image Emotional Information\",\"authors\":\"Jin Cai, Guoyong Cai\",\"doi\":\"10.1109/ICNLP58431.2023.00056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visual Question Answering is a multimedia understanding task that gives an image and natural language questions related to its content and allows the computer to answer them correctly. The early visual question answering models often ignore the emotional information in the image, resulting in insufficient performance in answering emotional-related questions; on the other hand, the existing visual question answering models that integrate emotional information do not make full use of the key areas of the image and text keywords, and do not understand fine-grained questions deeply enough, resulting in low accuracy. In order to fully integrate image emotional information into the visual question answering model and enhance the ability of the model to answer questions, a multimodal visual question answering model (IEMVQA) enhanced by image emotional information is proposed, and experiments are carried out on the visual question answering benchmark dataset. 
The final experiment shows that the IEMVQA model performs better than other comparison methods in comprehensive indicators, and verifies the effectiveness of using emotional information to assist visual question answering model.\",\"PeriodicalId\":53637,\"journal\":{\"name\":\"Icon\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Icon\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICNLP58431.2023.00056\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Icon","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNLP58431.2023.00056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0

Abstract

Visual question answering is a multimedia understanding task in which a computer is given an image and natural language questions about its content and is expected to answer them correctly. Early visual question answering models often ignore the emotional information in the image, which leads to poor performance on emotion-related questions; on the other hand, existing visual question answering models that do integrate emotional information make insufficient use of the key regions of the image and the keywords of the question text, and do not understand fine-grained questions deeply enough, resulting in low accuracy. To fully integrate image emotional information into visual question answering and strengthen the model's ability to answer questions, a multimodal visual question answering model enhanced with image emotional information (IEMVQA) is proposed, and experiments are carried out on a visual question answering benchmark dataset. The experiments show that IEMVQA outperforms the comparison methods on comprehensive indicators, verifying the effectiveness of using emotional information to assist a visual question answering model.
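
The abstract does not describe the IEMVQA architecture in detail, so the following is only a minimal sketch of the general idea it names: feeding an image emotion signal into a multimodal VQA model alongside the usual visual and question features. The module names, feature dimensions, the mean-pooling of region features, and the simple multiplicative fusion below are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: NOT the IEMVQA architecture from the paper.
# All dimensions and the fusion scheme are assumptions for demonstration.
import torch
import torch.nn as nn


class EmotionAwareVQA(nn.Module):
    def __init__(self, vocab_size, num_answers,
                 img_dim=2048, emo_dim=8, q_dim=512, hidden=1024):
        super().__init__()
        # Question encoder: word embedding + GRU over the question tokens (assumed design).
        self.embed = nn.Embedding(vocab_size, 300)
        self.gru = nn.GRU(300, q_dim, batch_first=True)
        # Project region features, emotion features, and the question into a shared space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.emo_proj = nn.Linear(emo_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        # Answer classifier over the fused representation.
        self.classifier = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_answers))

    def forward(self, img_feats, emo_feats, question_ids):
        # img_feats: (B, R, img_dim) region features from a visual backbone.
        # emo_feats: (B, emo_dim) image emotion distribution, e.g. from an emotion classifier.
        # question_ids: (B, T) token ids of the question.
        _, q_h = self.gru(self.embed(question_ids))   # q_h: (1, B, q_dim)
        q = self.q_proj(q_h.squeeze(0))               # (B, hidden)
        v = self.img_proj(img_feats.mean(dim=1))      # (B, hidden), mean-pooled regions
        e = self.emo_proj(emo_feats)                  # (B, hidden)
        # Simple multiplicative fusion of question, visual, and emotion cues (assumption).
        fused = q * v + q * e
        return self.classifier(fused)                 # (B, num_answers) answer logits


if __name__ == "__main__":
    model = EmotionAwareVQA(vocab_size=10000, num_answers=3000)
    logits = model(torch.randn(2, 36, 2048),
                   torch.softmax(torch.randn(2, 8), dim=-1),
                   torch.randint(0, 10000, (2, 14)))
    print(logits.shape)  # torch.Size([2, 3000])
```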