Title: OneN: Guided attention for natively-explainable anomaly detection
Authors: Pasquale Coscia, Angelo Genovese, Vincenzo Piuri, Fabio Scotti
Journal: Image and Vision Computing, Volume 163, Article 105741
DOI: 10.1016/j.imavis.2025.105741
Publication date: 2025-09-30
URL: https://www.sciencedirect.com/science/article/pii/S0262885625003294
Citations: 0
Abstract
In industrial computer vision applications, anomaly detection (AD) is a critical task for ensuring product quality and system reliability. However, many existing AD systems follow a modular design that decouples classification from detection and localization tasks. Although this separation simplifies model development, it often limits generalizability and reduces practical effectiveness in real-world scenarios. Deep neural networks offer strong potential for unified solutions. Nonetheless, most current approaches still treat detection, localization, and classification as separate components, hindering the development of more integrated and efficient AD pipelines. To bridge this gap, we propose OneN (One Network), a unified architecture that performs detection, localization, and classification within a single framework. Our approach distills knowledge from a high-capacity convolutional neural network (CNN) into an attention-based architecture trained under varying levels of supervision. The resulting attention maps act as interpretable pseudo-segmentation masks, enabling accurate localization of anomalous regions. To further enhance localization quality, we introduce a progressive focal loss that guides attention maps at each layer to focus on critical features. We validate our method through extensive experiments on both standardized and custom-defined industrial benchmarks. Even under weak supervision, OneN improves performance, reduces annotation effort, and facilitates scalable deployment in industrial environments.
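The abstract names two training ingredients: knowledge distillation from a high-capacity CNN teacher into an attention-based student, and a focal loss applied to per-layer attention maps so they behave like pseudo-segmentation masks. A minimal NumPy sketch of those two components follows; the function names, temperature, layer-weighting scheme, and hyperparameters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax over the last axis
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Standard KD term: KL divergence between the teacher's and the
    # student's temperature-softened class distributions (assumed form).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * temperature ** 2)

def focal_loss(attn_map, mask, gamma=2.0, eps=1e-6):
    # Binary focal loss treating each attention value in [0, 1] as the
    # predicted probability that the pixel is anomalous.
    a = np.clip(attn_map, eps, 1.0 - eps)
    pt = np.where(mask == 1, a, 1.0 - a)  # prob. assigned to the true class
    return float((-(1.0 - pt) ** gamma * np.log(pt)).mean())

def progressive_focal_loss(attn_maps, mask, gamma=2.0):
    # One attention map per layer; later (deeper) layers are weighted more
    # heavily so attention sharpens progressively toward the anomaly mask.
    weights = np.linspace(0.5, 1.0, num=len(attn_maps))
    return sum(w * focal_loss(a, mask, gamma) for w, a in zip(weights, attn_maps))
```

As a sanity check on the focal term, an attention map aligned with the anomaly mask (e.g. 0.9 inside the anomalous region, 0.1 outside) should incur a much lower loss than a uniform 0.5 map, which is what drives the maps toward usable pseudo-segmentation masks.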
About the journal:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.