Enhancing visual question answering with a two-way co-attention mechanism and integrated multimodal features

Impact Factor: 1.8 · CAS Zone 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence)
Mayank Agrawal, Anand Singh Jalal, Himanshu Sharma
{"title":"Enhancing visual question answering with a two-way co-attention mechanism and integrated multimodal features","authors":"Mayank Agrawal,&nbsp;Anand Singh Jalal,&nbsp;Himanshu Sharma","doi":"10.1111/coin.12624","DOIUrl":null,"url":null,"abstract":"<p>In Visual question answering (VQA), a natural language answer is generated for a given image and a question related to that image. There is a significant growth in the VQA task by applying an efficient attention mechanism. However, current VQA models use region features or object features that are not adequate to improve the accuracy of generated answers. To deal with this issue, we have used a Two-way Co-Attention Mechanism (TCAM), which is capable enough to fuse different visual features (region, object, and concept) from diverse perspectives. These diverse features lead to different sets of answers, and also, there is an inherent relationship between these visual features. We have developed a powerful attention mechanism that uses these two critical aspects by using both bottom-up and top-down TCAM to extract discriminative feature information. We have proposed a Collective Feature Integration Module (CFIM) to combine multimodal attention features and thus capture the valuable information from these visual features by employing a TCAM. Further, we have formulated a Vertical CFIM for fusing the features belonging to the same class and a Horizontal CFIM for combining the features belonging to different types, thus balancing the influence of top-down and bottom-up co-attention. The experiments are conducted on two significant datasets, VQA 1.0 and VQA 2.0. On VQA 1.0, the overall accuracy of our proposed method is 71.23 on the test-dev set and 71.94 on the test-std set. On VQA 2.0, the overall accuracy of our proposed method is 75.89 on the test-dev set and 76.32 on the test-std set. The above overall accuracy clearly reflecting the superiority of our proposed TCAM based approach over the existing methods.</p>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"40 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/coin.12624","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In visual question answering (VQA), a natural-language answer is generated for a given image and a question about that image. The VQA task has seen significant progress through the use of efficient attention mechanisms. However, current VQA models rely on region features or object features alone, which is not sufficient to improve the accuracy of the generated answers. To address this issue, we use a Two-way Co-Attention Mechanism (TCAM), which fuses different visual features (region, object, and concept) from diverse perspectives. These diverse features lead to different sets of answers, and there is also an inherent relationship between them. We develop a powerful attention mechanism that exploits these two critical aspects by applying both bottom-up and top-down TCAM to extract discriminative feature information. We propose a Collective Feature Integration Module (CFIM) to combine the multimodal attention features produced by the TCAM and thereby capture the valuable information in these visual features. Further, we formulate a Vertical CFIM for fusing features belonging to the same class and a Horizontal CFIM for combining features belonging to different types, thus balancing the influence of top-down and bottom-up co-attention. Experiments are conducted on two significant datasets, VQA 1.0 and VQA 2.0. On VQA 1.0, the overall accuracy of the proposed method is 71.23 on the test-dev set and 71.94 on the test-std set. On VQA 2.0, the overall accuracy is 75.89 on the test-dev set and 76.32 on the test-std set. These results clearly reflect the superiority of the proposed TCAM-based approach over existing methods.
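
For readers unfamiliar with co-attention, the sketch below illustrates the general idea behind a two-way (question-to-image and image-to-question) co-attention step followed by a simple concatenation-based fusion of the attended features. It is a minimal illustration only: the module names (TwoWayCoAttention, CollectiveFeatureIntegration), feature dimensions, dot-product affinity, and mean pooling are all assumptions made for this sketch, not the authors' TCAM/CFIM implementation, whose exact architecture is not specified in the abstract.

```python
# Minimal sketch of two-way co-attention + feature integration for VQA.
# Assumes PyTorch; all names and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoWayCoAttention(nn.Module):
    """Attends question->visual and visual->question, returning both attended vectors."""

    def __init__(self, q_dim: int, v_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.q_proj = nn.Linear(q_dim, hidden_dim)
        self.v_proj = nn.Linear(v_dim, hidden_dim)

    def forward(self, q_feats: torch.Tensor, v_feats: torch.Tensor):
        # q_feats: (B, num_words, q_dim); v_feats: (B, num_regions, v_dim)
        q = self.q_proj(q_feats)                              # (B, Nq, H)
        v = self.v_proj(v_feats)                              # (B, Nv, H)
        affinity = torch.bmm(q, v.transpose(1, 2))            # (B, Nq, Nv)

        # Question -> visual direction: attend over image regions for each word.
        attn_v = F.softmax(affinity, dim=2)
        v_attended = torch.bmm(attn_v, v)                     # (B, Nq, H)

        # Visual -> question direction: attend over words for each region.
        attn_q = F.softmax(affinity.transpose(1, 2), dim=2)
        q_attended = torch.bmm(attn_q, q)                     # (B, Nv, H)

        # Pool each direction to one vector per example.
        return q_attended.mean(dim=1), v_attended.mean(dim=1)


class CollectiveFeatureIntegration(nn.Module):
    """Fuses several attended feature vectors by concatenation and projection."""

    def __init__(self, hidden_dim: int, num_streams: int, fused_dim: int = 1024):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(hidden_dim * num_streams, fused_dim),
            nn.ReLU(),
        )

    def forward(self, streams):
        return self.fuse(torch.cat(streams, dim=-1))


if __name__ == "__main__":
    B, Nq, Nv = 2, 14, 36
    q_feats = torch.randn(B, Nq, 300)        # e.g., word embeddings
    region_feats = torch.randn(B, Nv, 2048)  # e.g., CNN region features

    coattn = TwoWayCoAttention(q_dim=300, v_dim=2048, hidden_dim=512)
    q_vec, v_vec = coattn(q_feats, region_feats)

    cfim = CollectiveFeatureIntegration(hidden_dim=512, num_streams=2)
    fused = cfim([q_vec, v_vec])             # (B, 1024) joint representation
    print(fused.shape)
```

In the paper's setting, the same kind of co-attention would presumably be applied per feature type (region, object, concept), with same-class outputs fused by the Vertical CFIM and cross-type outputs by the Horizontal CFIM; the concatenation-plus-projection fusion above is only a stand-in for that step.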

Source Journal

Computational Intelligence (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 6.90
Self-citation rate: 3.60%
Articles published per year: 65
Review time: >12 weeks
About the journal: This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI workers in academic and industrial research.