SCAD: A self-constrained solution to automate context-guided zero-shot image anomaly detection

IF 6.3 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Neural Networks · Pub Date: 2026-07-01 · Epub Date: 2026-01-19 · DOI: 10.1016/j.neunet.2026.108577
Siqi Wang, Guangpu Wang, Xinwang Liu, Jie Liu, Jiyuan Liu, Siwei Wang
Neural Networks, Volume 199, Article 108577 (2026).
Citations: 0

Abstract

Image anomaly detection (IAD) usually requires a separate training set to build an inductive model, which then performs inference on the test set. However, the cost of collecting and labeling training images has inspired zero-shot IAD (ZS-IAD), which directly processes the test set without any training set. Most ZS-IAD methods resort to pre-trained foundation models (e.g., CLIP), which rely on external prompts and lack adaptation to the target IAD scene. By contrast, context-guided ZS-IAD methods have recently attracted growing interest: they not only avoid external prompts by exploiting scene-specific context clues within unlabeled images, but also outperform prior ZS-IAD counterparts. Unfortunately, existing context-guided ZS-IAD methods suffer from two vital flaws: the absence of a training set forces them to set key hyperparameters blindly, which leads to unreliable performance, and they do not actively handle the anomalies mixed into the data that disturb the learning process. To this end, we propose to automate context-guided ZS-IAD with a novel Self-Constrained Anomaly Detector (SCAD), which makes the following contributions: (1) We propose a novel self-constrained mechanism that automatically determines proper values for key hyperparameters. (2) We design a new online self-constrained sampler that terminates the time-consuming sampling process at a proper stopping point, significantly reducing the computational cost. (3) We develop self-constrained normality refinement strategies that actively constrain the impact of anomalies and automatically rectify the stopping threshold. To the best of our knowledge, this is also the first work that addresses hyperparameter selection in the IAD realm. Experiments show that SCAD not only yields performance comparable to classic IAD solutions, but also matches ZS-IAD solutions enhanced by hindsight knowledge (i.e., hyperparameters validated on the test set).
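The abstract does not spell out the sampler's algorithm. As a purely illustrative sketch of the general idea of a sampler that stops itself at a data-derived point rather than at a preset sample count, one could imagine a greedy coreset (k-center) sampler whose stopping threshold is estimated from the data itself; the function name, the `ratio` parameter, and the threshold rule below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def self_stopping_coreset(features, ratio=0.25, rng=None):
    """Greedy k-center (coreset) sampling with a data-derived stopping point.

    Hypothetical sketch: instead of fixing the number of samples in advance,
    sampling stops once the worst-case coverage radius drops below a
    threshold estimated from the data itself (here: a fraction of the mean
    distance to a random seed point, an assumed rule for illustration).
    """
    rng = np.random.default_rng(rng)
    n = len(features)
    # Start from a random seed point.
    selected = [int(rng.integers(n))]
    # Distance of every point to its nearest selected sample so far.
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    # Data-derived stopping threshold (assumption, not the paper's rule).
    threshold = ratio * dists.mean()
    while dists.max() > threshold and len(selected) < n:
        idx = int(np.argmax(dists))  # farthest uncovered point
        selected.append(idx)
        dists = np.minimum(dists,
                           np.linalg.norm(features - features[idx], axis=1))
    return np.array(selected)
```

The design choice this illustrates is the self-constrained flavor described in contribution (2): the stopping criterion is derived from the unlabeled data rather than chosen blindly by the user, so no hyperparameter has to be validated on held-out labels.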
Source journal: Neural Networks (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Annual articles: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.