{"title":"Context Relation Fusion Model for Visual Question Answering","authors":"Haotian Zhang, Wei Wu","doi":"10.1109/ICIP46576.2022.9897563","DOIUrl":null,"url":null,"abstract":"Traditional VQA models tend to rely on language priors as a shortcut to answer questions and neglect visual information. To solve this problem, the latest approaches divide language priors into \"good\" language context and \"bad\" language bias through global features to benefit the language context and suppress the language bias. However, language priors cannot be meticulously divided by global features. In this paper, we propose a novel Context Relation Fusion Model (CRFM), which produces comprehensive contextual features forcing the VQA model to more carefully distinguish language priors into \"good\" language context and \"bad\" language bias. Specifically, we utilize the Visual Relation Fusion Model (VRFM) and Question Relation Fusion Model (QRFM) to learn local critical contextual information and then perform information enhancement through the Attended Features Fusion Model (AFFM). Experiments show that our CRFM achieves state-of-the-art performance on the VQA-CP v2 dataset.","PeriodicalId":387035,"journal":{"name":"2022 IEEE International Conference on Image Processing (ICIP)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP46576.2022.9897563","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Traditional VQA models tend to rely on language priors as a shortcut to answer questions while neglecting visual information. To address this problem, recent approaches use global features to divide language priors into "good" language context and "bad" language bias, so that the context is exploited and the bias is suppressed. However, language priors cannot be divided precisely using only global features. In this paper, we propose a novel Context Relation Fusion Model (CRFM) that produces comprehensive contextual features, forcing the VQA model to separate language priors into "good" language context and "bad" language bias more carefully. Specifically, we use a Visual Relation Fusion Model (VRFM) and a Question Relation Fusion Model (QRFM) to learn local, critical contextual information, and then enhance this information through an Attended Features Fusion Model (AFFM). Experiments show that our CRFM achieves state-of-the-art performance on the VQA-CP v2 dataset.
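To make the described pipeline concrete, below is a minimal illustrative sketch of the VRFM → QRFM → AFFM flow in a PyTorch style. The abstract does not specify the internals of these modules, so the self-attention blocks, the gated cross-modal fusion, and all dimensions and names here are assumptions introduced only for illustration, not the authors' actual implementation.

```python
# Minimal sketch of a CRFM-like pipeline (assumed structure, not the paper's code).
# VRFM/QRFM are approximated by self-attention over region/word features;
# AFFM is approximated by cross-attention plus a gated fusion of the two modalities.
import torch
import torch.nn as nn


class RelationFusion(nn.Module):
    """Stand-in for VRFM (over image regions) or QRFM (over question words)."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_items, dim); each item attends to all others,
        # yielding locally contextualized features.
        ctx, _ = self.attn(x, x, x)
        return self.norm(x + ctx)


class AttendedFeaturesFusion(nn.Module):
    """Stand-in for AFFM: cross-attend question context to visual context,
    then pool and gate the two modalities into one joint representation."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, v: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        attended, _ = self.cross(q, v, v)   # question words attend to regions
        q_vec = attended.mean(dim=1)        # pooled attended question features
        v_vec = v.mean(dim=1)               # pooled visual features
        g = self.gate(torch.cat([q_vec, v_vec], dim=-1))
        return g * q_vec + (1 - g) * v_vec  # gated joint representation


class CRFM(nn.Module):
    def __init__(self, dim: int = 512, num_answers: int = 3129):
        super().__init__()
        self.vrfm = RelationFusion(dim)
        self.qrfm = RelationFusion(dim)
        self.affm = AttendedFeaturesFusion(dim)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, region_feats: torch.Tensor, word_feats: torch.Tensor):
        v = self.vrfm(region_feats)   # contextualized visual features
        q = self.qrfm(word_feats)     # contextualized question features
        return self.classifier(self.affm(v, q))


# Example: 36 region features and a 14-word question, both projected to 512-d.
model = CRFM(dim=512)
logits = model(torch.randn(2, 36, 512), torch.randn(2, 14, 512))
print(logits.shape)  # torch.Size([2, 3129])
```

The sketch only illustrates the data flow the abstract describes: relation-aware contextualization within each modality, followed by an attended fusion step before answer classification.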