Enhanced Dual-Pattern Matching with Vision-Language Representation for Out-of-Distribution Detection

IF 20.8 · CAS Region 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Xiang Xiang, Zhuo Xu, Zihan Zhang, Zhigang Zeng, Xilin Chen
DOI: 10.1109/tpami.2025.3590717
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication date: 2025-07-18 (Journal Article)
Citations: 0

Abstract

Out-of-distribution (OOD) detection presents a significant challenge in deploying pattern recognition and machine learning models, as they frequently fail to generalize to data from unseen distributions. Recent advancements in vision-language models (VLMs), particularly CLIP, have demonstrated promising results in OOD detection through their rich multimodal representations. However, current CLIP-based OOD detection methods predominantly rely on single-modality in-distribution (ID) data (e.g., textual cues), overlooking the valuable information contained in ID visual cues. In this work, we demonstrate that incorporating ID visual information is crucial for unlocking CLIP's full potential in OOD detection. We propose a novel approach, Dual-Pattern Matching (DPM), which effectively adapts CLIP for OOD detection by jointly exploiting both textual and visual ID patterns. Specifically, DPM refines visual and textual features through the proposed Domain-Specific Feature Aggregation (DSFA) and Prompt Enhancement (PE) modules. Subsequently, DPM stores class-wise textual features as textual patterns and aggregates ID visual features as visual patterns. During inference, DPM calculates similarity scores relative to both patterns to identify OOD data. Furthermore, we enhance DPM with lightweight adaptation mechanisms to further boost OOD detection performance. Comprehensive experiments demonstrate that DPM surpasses state-of-the-art methods on multiple benchmarks, highlighting the effectiveness of leveraging multimodal information for OOD detection. The proposed dual-pattern approach provides a simple yet robust framework for leveraging vision-language representations in OOD detection tasks.
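The inference procedure described above — matching a test image against stored textual patterns (class-wise text embeddings) and visual patterns (aggregated ID image embeddings) — can be sketched as follows. This is a minimal, hypothetical illustration using max cosine similarity as the matching score; the paper's DSFA and PE refinement modules and its exact scoring rule are not reproduced here, and all function and variable names are assumptions for illustration.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length, as CLIP embeddings typically are."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def dual_pattern_score(img_feat, text_patterns, visual_patterns):
    """Score a test image against both ID pattern banks (illustrative only).

    img_feat:        (d,)   image embedding of the test sample
    text_patterns:   (C, d) one stored text embedding per ID class
    visual_patterns: (C, d) aggregated ID image embedding per class
    Returns a scalar ID score; lower values suggest the sample is OOD.
    """
    f = l2_normalize(img_feat)
    s_text = (l2_normalize(text_patterns) @ f).max()    # best textual match
    s_vis = (l2_normalize(visual_patterns) @ f).max()   # best visual match
    return s_text + s_vis                               # combine modalities

# Toy example: 3 ID classes, 8-dimensional features
rng = np.random.default_rng(0)
text_p = rng.normal(size=(3, 8))
vis_p = text_p + 0.1 * rng.normal(size=(3, 8))   # visual patterns near text
id_img = text_p[1] + 0.05 * rng.normal(size=8)   # ID sample: close to class 1
ood_img = rng.normal(size=8)                     # OOD sample: random direction
print(dual_pattern_score(id_img, text_p, vis_p) >
      dual_pattern_score(ood_img, text_p, vis_p))
```

A threshold on this score would then separate ID from OOD inputs; an OOD sample aligns well with neither the textual nor the visual pattern bank, so both similarity terms stay low.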
Source journal metrics:
CiteScore: 28.40
Self-citation rate: 3.00%
Articles per year: 885
Review time: 8.5 months
Journal description: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.