Rui Hu, Yahan Tu, Shuyu Wei, Dongyuan Lu, Jitao Sang
{"title":"开出正确的药方:通过有针对性的指令调整来减轻大型视觉语言模型中的幻觉","authors":"Rui Hu , Yahan Tu , Shuyu Wei , Dongyuan Lu , Jitao Sang","doi":"10.1016/j.ins.2025.122361","DOIUrl":null,"url":null,"abstract":"<div><div>Despite achieving outstanding performance on various cross-modal tasks, current large vision-language models (LVLMs) still suffer from hallucination issues, which manifest as inconsistencies between their generated responses and the corresponding images. Prior research has indicated that the low quality of instruction data, especially the skewed balance between positive and negative samples, is a significant contributor to model hallucinations. Recently, researchers have developed high-quality instruction datasets, such as LRV-Instruction, to mitigate model hallucinations. Nonetheless, our investigation reveals that hallucinatory concepts from different LVLMs exhibit specificity, i.e. the distribution of hallucinatory concepts varies significantly across models. Existing datasets did not consider the hallucination specificity of different models in the design process, thus limiting their efficacy in mitigating model hallucination. In this paper, we propose a targeted instruction data generation framework named <span>DFTG</span> that tailored for the hallucination specificity of different models. Concretely, <span>DFTG</span> consists of two stages: hallucination diagnosis, which extracts the necessary information from the model's responses and images for hallucination diagnosis; and targeted data generation, which generates targeted instruction data based on diagnostic results. The experimental results on hallucination benchmarks demonstrate that the targeted instruction data generated by our method are more effective in mitigating hallucinations compared to previous datasets.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"718 ","pages":"Article 122361"},"PeriodicalIF":6.8000,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prescribing the right remedy: Mitigating hallucinations in large vision-language models via targeted instruction tuning\",\"authors\":\"Rui Hu , Yahan Tu , Shuyu Wei , Dongyuan Lu , Jitao Sang\",\"doi\":\"10.1016/j.ins.2025.122361\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Despite achieving outstanding performance on various cross-modal tasks, current large vision-language models (LVLMs) still suffer from hallucination issues, which manifest as inconsistencies between their generated responses and the corresponding images. Prior research has indicated that the low quality of instruction data, especially the skewed balance between positive and negative samples, is a significant contributor to model hallucinations. Recently, researchers have developed high-quality instruction datasets, such as LRV-Instruction, to mitigate model hallucinations. Nonetheless, our investigation reveals that hallucinatory concepts from different LVLMs exhibit specificity, i.e. the distribution of hallucinatory concepts varies significantly across models. Existing datasets did not consider the hallucination specificity of different models in the design process, thus limiting their efficacy in mitigating model hallucination. In this paper, we propose a targeted instruction data generation framework named <span>DFTG</span> that tailored for the hallucination specificity of different models. 
Concretely, <span>DFTG</span> consists of two stages: hallucination diagnosis, which extracts the necessary information from the model's responses and images for hallucination diagnosis; and targeted data generation, which generates targeted instruction data based on diagnostic results. The experimental results on hallucination benchmarks demonstrate that the targeted instruction data generated by our method are more effective in mitigating hallucinations compared to previous datasets.</div></div>\",\"PeriodicalId\":51063,\"journal\":{\"name\":\"Information Sciences\",\"volume\":\"718 \",\"pages\":\"Article 122361\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2025-06-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0020025525004931\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020025525004931","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Prescribing the right remedy: Mitigating hallucinations in large vision-language models via targeted instruction tuning
Despite achieving outstanding performance on various cross-modal tasks, current large vision-language models (LVLMs) still suffer from hallucination issues, which manifest as inconsistencies between their generated responses and the corresponding images. Prior research has indicated that the low quality of instruction data, especially the skewed balance between positive and negative samples, is a significant contributor to model hallucinations. Recently, researchers have developed high-quality instruction datasets, such as LRV-Instruction, to mitigate model hallucinations. Nonetheless, our investigation reveals that hallucinatory concepts from different LVLMs exhibit specificity, i.e., the distribution of hallucinatory concepts varies significantly across models. Existing datasets do not account for this hallucination specificity in their design, which limits their efficacy in mitigating model hallucination. In this paper, we propose a targeted instruction data generation framework, named DFTG, that is tailored to the hallucination specificity of individual models. Concretely, DFTG consists of two stages: hallucination diagnosis, which extracts from a model's responses and the corresponding images the information needed to identify its hallucinatory concepts; and targeted data generation, which produces instruction data targeting the diagnosed hallucinations. Experimental results on hallucination benchmarks demonstrate that the targeted instruction data generated by our method are more effective at mitigating hallucinations than previous datasets.
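The abstract outlines the two DFTG stages without implementation details. The following is a minimal sketch of how such a diagnose-then-generate pipeline could be wired together, assuming an object-level notion of hallucination (an object mentioned in a response but absent from the image annotations); all names here (`Sample`, `diagnose`, `generate_targeted_data`) are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of a DFTG-style pipeline; the paper's real interfaces
# and prompt formats are not given in the abstract.

@dataclass
class Sample:
    image_id: str
    ground_truth_objects: set  # objects annotated as present in the image
    response_objects: set      # objects mentioned in the model's response

def diagnose(samples):
    """Stage 1 (hallucination diagnosis): count how often the model mentions
    an object that is absent from the corresponding image."""
    counts = {}
    for s in samples:
        for obj in s.response_objects - s.ground_truth_objects:
            counts[obj] = counts.get(obj, 0) + 1
    return counts

def generate_targeted_data(samples, hallucination_counts, top_k=2):
    """Stage 2 (targeted data generation): build balanced positive/negative
    instruction pairs focused on the model's most frequent hallucinations."""
    frequent = [c for c, _ in sorted(hallucination_counts.items(),
                                     key=lambda kv: -kv[1])[:top_k]]
    data = []
    for s in samples:
        for concept in frequent:
            answer = "Yes." if concept in s.ground_truth_objects else "No."
            data.append((s.image_id, f"Is there a {concept} in the image?", answer))
    return data

if __name__ == "__main__":
    samples = [
        Sample("img_001", {"dog", "ball"}, {"dog", "ball", "person"}),
        Sample("img_002", {"car"}, {"car", "person"}),
    ]
    counts = diagnose(samples)  # e.g. {"person": 2}
    for row in generate_targeted_data(samples, counts):
        print(row)
```

The yes/no format keeps positive and negative samples balanced per concept, which is exactly the data-quality issue the abstract highlights; a real pipeline would extract the object sets from free-form model captions rather than assume they are given.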
Journal introduction:
Information Sciences is an esteemed international journal, spanning informatics, computer science, and intelligent systems applications, that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and survey contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.