{"title":"Zero-Shot Defect Detection With Anomaly Attribute Awareness via Textual Domain Bridge","authors":"Zhe Zhang;Shu Chen;Jian Huang;Jie Ma","doi":"10.1109/JSEN.2025.3544407","DOIUrl":null,"url":null,"abstract":"Visual defect detection is crucial for industrial quality control in intelligent manufacturing. Previous research requires target-specific data to train the model for each inspection task. However, due to the challenges of collecting proprietary data and model-training time costs, zero-shot defect detection (ZSDD) has become an emerging topic in the field. ZSDD, which requires models trained with auxiliary data, can detect defects on different products without target-data training. Recently, large pretrained vision-language models (VLMs), such as contrastive language-image pre-training model (CLIP), have demonstrated revolutionary generality with competitive zero-shot performance across various downstream tasks. However, VLMs have limitations in defect detection, which are designed to focus on identifying category semantics of the objects rather than sensing object attributes (defective/nondefective). The current VLMs-based ZSDD methods require manually crafted text prompts to guide the discovery of anomaly attributes. In this article, we propose a novel ZSDD method, namely attribute-aware CLIP, to adapt CLIP for anomaly attribute discovery without designing specific textual prompts. The core is designing a textual domain bridge, which transforms simple general textual prompt features into prompt embeddings better aligned with the attribute awareness. This enables the model to perceive the attributes of objects by text-image feature matching, bridging the gap between object semantic recognition and attribute discovery. Additionally, we perform component clustering on the images to break down the overall object semantics, encouraging the model to focus on attribute awareness. Extensive experiments on 16 real-world defect datasets demonstrate that our method achieves state-of-the-art (SOTA) ZSDD performance in diverse class-semantic datasets.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 7","pages":"11759-11771"},"PeriodicalIF":4.3000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Journal","FirstCategoryId":"103","ListUrlMain":"https://ieeexplore.ieee.org/document/10909027/","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Visual defect detection is crucial for industrial quality control in intelligent manufacturing. Previous research requires target-specific data to train a model for each inspection task. However, because proprietary data are difficult to collect and model training is time-consuming, zero-shot defect detection (ZSDD) has become an emerging topic in the field. ZSDD models, trained only on auxiliary data, can detect defects on different products without any target-data training. Recently, large pretrained vision-language models (VLMs), such as the contrastive language-image pretraining (CLIP) model, have demonstrated remarkable generality with competitive zero-shot performance across various downstream tasks. However, VLMs are designed to identify the category semantics of objects rather than to sense object attributes (defective/nondefective), which limits their effectiveness in defect detection. Current VLM-based ZSDD methods require manually crafted text prompts to guide the discovery of anomaly attributes. In this article, we propose a novel ZSDD method, attribute-aware CLIP, that adapts CLIP for anomaly attribute discovery without handcrafted textual prompts. Its core is a textual domain bridge that transforms simple, general textual prompt features into prompt embeddings better aligned with attribute awareness. This enables the model to perceive object attributes through text-image feature matching, bridging the gap between object semantic recognition and attribute discovery. Additionally, we perform component clustering on the images to break down the overall object semantics, encouraging the model to focus on attribute awareness. Extensive experiments on 16 real-world defect datasets demonstrate that our method achieves state-of-the-art (SOTA) ZSDD performance across datasets with diverse class semantics.
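To ground the description, the sketch below shows one way the pieces named in the abstract could fit together in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the residual-MLP bridge architecture, the 512-dimensional features, the two-prompt softmax scoring, the temperature value, and the naive k-means component clustering are all hypothetical choices made here for illustration; only the overall pipeline (bridge general text prompts, match text to image features, cluster image components) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextualDomainBridge(nn.Module):
    """Hypothetical residual-MLP bridge: maps generic CLIP text embeddings
    (e.g., for "a photo of a normal object" / "a photo of a damaged object")
    into attribute-aware prompt embeddings. Layer sizes are assumptions."""

    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # A residual shift keeps the bridged embedding close to CLIP's
        # original text space while nudging it toward attribute semantics.
        return F.normalize(text_emb + self.mlp(text_emb), dim=-1)


def anomaly_scores(image_feats: torch.Tensor,
                   normal_emb: torch.Tensor,
                   abnormal_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Text-image matching: softmax over similarities to the bridged
    normal/abnormal prompts; returns the 'abnormal' probability per feature."""
    image_feats = F.normalize(image_feats, dim=-1)
    sims = torch.stack([image_feats @ normal_emb,
                        image_feats @ abnormal_emb], dim=-1)
    return sims.div(temperature).softmax(dim=-1)[..., 1]


def cluster_components(feats: torch.Tensor, k: int = 3, iters: int = 10):
    """Naive k-means over patch features, standing in for the component
    clustering that breaks whole-object semantics into parts."""
    centers = feats[torch.randperm(feats.size(0))[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(feats, centers).argmin(dim=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = feats[mask].mean(dim=0)
    return assign


# Toy usage with random stand-ins for CLIP features (512-dim assumed):
bridge = TextualDomainBridge()
normal = bridge(F.normalize(torch.randn(512), dim=-1))
abnormal = bridge(F.normalize(torch.randn(512), dim=-1))
patches = torch.randn(14 * 14, 512)                 # e.g., ViT patch tokens
scores = anomaly_scores(patches, normal, abnormal)  # (196,) anomaly map
parts = cluster_components(patches)                 # per-patch component ids
```

If the bridge works roughly this way, the residual connection is a natural design choice: it keeps the bridged embeddings anchored in CLIP's original text space, so general prompts retain their semantics while the learned shift adds attribute sensitivity.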
Journal Introduction
The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensor-actuators. IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion; processing of wave (e.g., electromagnetic and acoustic) and non-wave (e.g., chemical, gravity, particle, thermal, radiative and non-radiative) sensor data; detection, estimation, and classification based on sensor data)
-Sensors in Industrial Practice