A fine-grained deconfounding study for knowledge-based visual dialog
An-An Liu, Quanhan Wu, Chenxi Huang, Chao Xue, Xianzhu Liu, Ning Xu
Visual Informatics, Volume 8, Issue 4, December 2024, Pages 36–47
DOI: 10.1016/j.visinf.2024.09.007
URL: https://www.sciencedirect.com/science/article/pii/S2468502X24000482
Citations: 0
Abstract
Knowledge-based Visual Dialog is a challenging vision-language task in which an agent engages in dialog to answer human questions based on an input image and the corresponding commonsense knowledge. Debiasing methods based on causal graphs have attracted growing attention in the field of Visual Dialog (VD), yielding impressive results. However, existing studies focus on coarse-grained deconfounding and lack a principled analysis of the bias. In this paper, we present a fine-grained deconfounding study: (1) We define the confounder from two perspectives. The first is user preference (denoted as U_h), derived from human-annotated dialog history, which may introduce spurious correlations between questions and answers. The second is commonsense language bias (denoted as U_c), where certain words appear so frequently in the retrieved commonsense knowledge that the model tends to memorize these patterns, thereby establishing spurious correlations between the commonsense knowledge and the answers. (2) Given that the current question directly influences answer generation, we further decompose the confounders into U_h1, U_h2 and U_c1, U_c2, based on their relevance to the current question. Specifically, U_h1 and U_c1 represent dialog history and high-frequency words that are highly correlated with the current question, while U_h2 and U_c2 are sampled from dialog history and words with low relevance to the current question. Through a comprehensive evaluation and comparison of all components, we demonstrate the necessity of jointly considering both U_h and U_c. Fine-grained deconfounding, particularly with respect to the current question, proves to be more effective. Ablation studies, quantitative results, and visualizations further confirm the effectiveness of the proposed method.
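The question-relevance split described in point (2) — partitioning confounder candidates (dialog-history turns or knowledge words) into a high-relevance set (U_h1 / U_c1) and a low-relevance set (U_h2 / U_c2) — can be sketched as a similarity threshold over embeddings. This is a minimal illustration, not the paper's implementation: the function name, the use of cosine similarity, and the threshold value are all assumptions for the sketch.

```python
import numpy as np

def split_by_relevance(question_vec, candidate_vecs, threshold=0.5):
    """Partition candidates into high-relevance (U_*1) and low-relevance (U_*2)
    index sets by cosine similarity to the current question's embedding.

    NOTE: illustrative sketch only; the paper does not specify this exact
    similarity measure or threshold.
    """
    q = np.asarray(question_vec, dtype=float)
    q = q / np.linalg.norm(q)  # unit-normalize the question embedding
    high, low = [], []
    for i, v in enumerate(candidate_vecs):
        v = np.asarray(v, dtype=float)
        sim = float(np.dot(q, v / np.linalg.norm(v)))  # cosine similarity
        (high if sim >= threshold else low).append(i)
    return high, low

# Toy example: candidate 1 is orthogonal to the question, so it lands
# in the low-relevance partition (standing in for U_h2 / U_c2).
high, low = split_by_relevance([1.0, 0.0],
                               [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
```

Deconfounding would then treat the two partitions separately, e.g. applying a stronger causal intervention on the high-relevance set, since those items are the more likely source of spurious question–answer correlations.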