Siqi Wang, Guangpu Wang, Xinwang Liu, Jie Liu, Jiyuan Liu, Siwei Wang
SCAD: A self-constrained solution to automate context-guided zero-shot image anomaly detection
Journal: Neural Networks, Volume 199, Article 108577
DOI: 10.1016/j.neunet.2026.108577
Publication date: 2026-07-01 (Epub 2026-01-19)
Citations: 0
Abstract
Image anomaly detection (IAD) usually requires a separate training set to build an inductive model, which then infers on the test set. However, the cost of collecting and labeling training images has inspired zero-shot IAD (ZS-IAD), which processes the test set directly without any training set. Most ZS-IAD methods resort to pre-trained foundation models (e.g., CLIP), which rely on external prompts and lack adaptation to the target IAD scene. By contrast, context-guided ZS-IAD methods have recently attracted growing interest: they not only avoid external prompts by exploiting scene-specific context clues within the unlabeled images, but also outperform prior ZS-IAD counterparts. Unfortunately, existing context-guided ZS-IAD methods suffer from two vital flaws: the absence of a training set forces them to set key hyperparameters blindly, which leads to unreliable performance, and they do not actively handle mixed anomalies that disturb the learning process. To this end, we propose to automate context-guided ZS-IAD with a novel Self-Constrained Anomaly Detector (SCAD), which makes the following contributions: (1) We propose a novel self-constrained mechanism that can automatically determine proper values for key hyperparameters. (2) We design a new online self-constrained sampler that terminates the time-consuming sampling process at a proper stopping point, significantly reducing the computational cost. (3) We develop self-constrained normality refinement strategies that can actively constrain anomalies' impact and automatically rectify the stopping threshold. To the best of our knowledge, this is also the first work that addresses hyperparameter selection in the IAD realm. Experiments show that SCAD not only yields performance comparable to classic IAD solutions, but also matches ZS-IAD solutions enhanced by hindsight knowledge (i.e., hyperparameters validated on the test set).
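To make the "context-guided" idea concrete: such methods score each unlabeled image against context clues drawn from the rest of the same test set, with no training data involved. The snippet below is a minimal generic illustration of this principle (not SCAD itself, whose self-constrained mechanism is more involved): each sample's anomaly score is its mean feature distance to its k nearest neighbors within the same unlabeled set, so samples that deviate from the scene's dominant normality stand out. The feature vectors, the value of k, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def knn_anomaly_scores(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Score each sample by its mean distance to its k nearest
    neighbors within the same unlabeled set (no training set used)."""
    # Pairwise Euclidean distances between all feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # exclude each sample's self-distance
    # Mean distance to the k closest neighbors; samples far from the
    # scene's dominant cluster receive high scores.
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

# Illustrative synthetic features: a dense "normal" cluster plus two outliers.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(50, 8))
anomaly = rng.normal(6.0, 1.0, size=(2, 8))
scores = knn_anomaly_scores(np.vstack([normal, anomaly]), k=5)
print(sorted(scores.argsort()[-2:]))  # indices of the two highest-scoring samples
```

Note that k here is exactly the kind of key hyperparameter the abstract says context-guided methods must otherwise set blindly; SCAD's contribution is to determine such values automatically rather than fix them by hand.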
Journal description:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.