{"title":"Enhanced Dual-Pattern Matching with Vision-Language Representation for out-of-Distribution Detection.","authors":"Xiang Xiang,Zhuo Xu,Zihan Zhang,Zhigang Zeng,Xilin Chen","doi":"10.1109/tpami.2025.3590717","DOIUrl":null,"url":null,"abstract":"Out-of-distribution (OOD) detection presents a significant challenge in deploying pattern recognition and machine learning models, as they frequently fail to generalize to data from unseen distributions. Recent advancements in vision-language models (VLMs), particularly CLIP, have demonstrated promising results in OOD detection through their rich multimodal representations. However, current CLIP-based OOD detection methods predominantly rely on single-modality in-distribution (ID) data (e.g., textual cues), overlooking the valuable information contained in ID visual cues. In this work, we demonstrate that incorporating ID visual information is crucial for unlocking CLIP's full potential in OOD detection. We propose a novel approach, Dual-Pattern Matching (DPM), which effectively adapts CLIP for OOD detection by jointly exploiting both textual and visual ID patterns. Specifically, DPM refines visual and textual features through the proposed Domain-Specific Feature Aggregation (DSFA) and Prompt Enhancement (PE) modules. Subsequently, DPM stores class-wise textual features as textual patterns and aggregates ID visual features as visual patterns. During inference, DPM calculates similarity scores relative to both patterns to identify OOD data. Furthermore, we enhance DPM with lightweight adaptation mechanisms to further boost OOD detection performance. Comprehensive experiments demonstrate that DPM surpasses state-of-the-art methods on multiple benchmarks, highlighting the effectiveness of leveraging multimodal information for OOD detection. The proposed dual-pattern approach provides a simple yet robust framework for leveraging vision-language representations in OOD detection tasks.","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"109 1","pages":""},"PeriodicalIF":20.8000,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Pattern Analysis and Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tpami.2025.3590717","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Out-of-distribution (OOD) detection presents a significant challenge in deploying pattern recognition and machine learning models, as they frequently fail to generalize to data from unseen distributions. Recent advancements in vision-language models (VLMs), particularly CLIP, have demonstrated promising results in OOD detection through their rich multimodal representations. However, current CLIP-based OOD detection methods predominantly rely on single-modality in-distribution (ID) data (e.g., textual cues), overlooking the valuable information contained in ID visual cues. In this work, we demonstrate that incorporating ID visual information is crucial for unlocking CLIP's full potential in OOD detection. We propose a novel approach, Dual-Pattern Matching (DPM), which effectively adapts CLIP for OOD detection by jointly exploiting both textual and visual ID patterns. Specifically, DPM refines visual and textual features through the proposed Domain-Specific Feature Aggregation (DSFA) and Prompt Enhancement (PE) modules. Subsequently, DPM stores class-wise textual features as textual patterns and aggregates ID visual features as visual patterns. During inference, DPM calculates similarity scores relative to both patterns to identify OOD data. Furthermore, we enhance DPM with lightweight adaptation mechanisms to further boost OOD detection performance. Comprehensive experiments demonstrate that DPM surpasses state-of-the-art methods on multiple benchmarks, highlighting the effectiveness of leveraging multimodal information for OOD detection. The proposed dual-pattern approach provides a simple yet robust framework for leveraging vision-language representations in OOD detection tasks.
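The abstract describes inference as scoring a test image against both stored textual patterns (class-wise text features) and visual patterns (aggregated ID image features). Below is a minimal sketch of that dual-pattern scoring idea, not the authors' released implementation: the function name `dpm_ood_score`, the fusion weight `alpha`, the softmax temperature, and the max-over-classes rule are illustrative assumptions, since the abstract does not specify the exact scoring formula.

```python
# Sketch of dual-pattern OOD scoring with CLIP-style features (assumptions noted above).
import torch
import torch.nn.functional as F

def dpm_ood_score(image_feat: torch.Tensor,
                  text_patterns: torch.Tensor,
                  visual_patterns: torch.Tensor,
                  temperature: float = 0.01,
                  alpha: float = 0.5) -> torch.Tensor:
    """Return a confidence score per image; low scores suggest OOD.

    image_feat:      (B, D) CLIP image embeddings
    text_patterns:   (C, D) one stored textual pattern per ID class
    visual_patterns: (C, D) one aggregated visual pattern per ID class
    """
    # Unit-normalize so dot products are cosine similarities.
    img = F.normalize(image_feat, dim=-1)
    txt = F.normalize(text_patterns, dim=-1)
    vis = F.normalize(visual_patterns, dim=-1)

    # Similarity of each image to the textual and visual patterns of every class.
    sim_text = img @ txt.t() / temperature   # (B, C)
    sim_vis = img @ vis.t() / temperature    # (B, C)

    # Turn each modality's similarities into class probabilities and keep the
    # maximum, in the spirit of maximum-softmax-probability scoring.
    conf_text = sim_text.softmax(dim=-1).max(dim=-1).values
    conf_vis = sim_vis.softmax(dim=-1).max(dim=-1).values

    # Fuse the two confidences; images falling below a chosen threshold
    # would be flagged as OOD.
    return alpha * conf_text + (1.0 - alpha) * conf_vis
```

In practice the textual patterns would come from encoding class prompts with CLIP's text encoder and the visual patterns from aggregating ID training-image features per class; the fusion weight and threshold would be tuned on held-out ID data.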
Journal Description:
The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition and relevant specialized hardware and/or software architectures are also covered.