{"title":"Mixed Distillation for Unsupervised Anomaly Detection","authors":"Fuzhen Cai, Siyu Xia","doi":"10.23919/MVA57639.2023.10215597","DOIUrl":null,"url":null,"abstract":"Anomaly detection is typically a class of unsupervised learning problems in which the model is trained with only normal samples. Knowledge distillation (KD) has shown promising results in the field of image anomaly detection, especially for texture images. However, the knowledge of the classical KD model is step-by-step transferred from the shallow layers to the deep, which causes the deep layers not to be well-fitted due to an incomplete match of the shallow layers of the student network. For this problem, we propose a skip distillation method, which allows the deep layers of the student network to learn directly from the shallow of the teacher, avoiding a worse deep fit. We also design a symmetric path that allows the shallow layers of the student network to learn directly from the deep of the teacher. These two paths encode sufficient information for the student network. We have done thorough experiments on the anomaly detection benchmark dataset MvtecAD, and the experimental results show that our model exceeds the current state-of-the-art anomaly detection methods in terms of texture classes.","PeriodicalId":338734,"journal":{"name":"2023 18th International Conference on Machine Vision and Applications (MVA)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 18th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA57639.2023.10215597","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Anomaly detection is typically formulated as an unsupervised learning problem in which the model is trained with only normal samples. Knowledge distillation (KD) has shown promising results in image anomaly detection, especially for texture images. However, in the classical KD model, knowledge is transferred step by step from the shallow layers to the deep ones, so the deep layers of the student network fit poorly whenever its shallow layers match the teacher only incompletely. To address this problem, we propose a skip distillation method, which allows the deep layers of the student network to learn directly from the shallow layers of the teacher, avoiding a degraded fit in the deep layers. We also design a symmetric path that allows the shallow layers of the student network to learn directly from the deep layers of the teacher. These two paths encode sufficient information for the student network. We conduct thorough experiments on the MVTec AD anomaly detection benchmark, and the results show that our model outperforms current state-of-the-art anomaly detection methods on the texture classes.
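The cross-layer paths described in the abstract lend themselves to a short sketch. The following PyTorch snippet is a minimal illustration of the idea, not the authors' implementation: the ResNet-18 backbone, the 1x1 projection adapters, the choice of stages 1 and 4 as the "shallow" and "deep" endpoints, and the normalized-MSE feature loss are all assumptions made here for illustration.

```python
# Minimal sketch of the skip / symmetric distillation paths (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def extract_features(model, x):
    # Run the ResNet stem, then collect the outputs of the four residual stages.
    x = model.conv1(x)
    x = model.bn1(x)
    x = model.relu(x)
    x = model.maxpool(x)
    feats = []
    for stage in (model.layer1, model.layer2, model.layer3, model.layer4):
        x = stage(x)
        feats.append(x)
    return feats

class Projector(torch.nn.Module):
    # Hypothetical 1x1 conv adapter mapping student channels onto teacher channels;
    # the paper's exact adapter design is not specified here.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.proj = torch.nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x):
        return self.proj(x)

def path_loss(f_s, f_t, projector):
    # Project the student feature to the teacher's channel count, resize it to
    # the teacher's spatial resolution, and compare normalized feature maps.
    f_s = projector(f_s)
    f_s = F.interpolate(f_s, size=f_t.shape[-2:], mode="bilinear", align_corners=False)
    return ((F.normalize(f_s, dim=1) - F.normalize(f_t, dim=1)) ** 2).mean()

# Frozen, pretrained teacher; randomly initialized student trained on normal images only.
teacher = resnet18(weights="IMAGENET1K_V1").eval()
student = resnet18(weights=None)
for p in teacher.parameters():
    p.requires_grad_(False)

# The two paths from the abstract, using ResNet-18 stage channel counts (64, 512):
#   skip path:      deep student (stage 4) learns from shallow teacher (stage 1);
#   symmetric path: shallow student (stage 1) learns from deep teacher (stage 4).
proj_deep_to_shallow = Projector(512, 64)
proj_shallow_to_deep = Projector(64, 512)

def mixed_distillation_loss(x):
    with torch.no_grad():
        t_feats = extract_features(teacher, x)
    s_feats = extract_features(student, x)
    skip = path_loss(s_feats[3], t_feats[0], proj_deep_to_shallow)
    symmetric = path_loss(s_feats[0], t_feats[3], proj_shallow_to_deep)
    return skip + symmetric

# Example: one training step on a batch of (placeholder) normal images.
images = torch.randn(8, 3, 256, 256)
params = (list(student.parameters())
          + list(proj_deep_to_shallow.parameters())
          + list(proj_shallow_to_deep.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss = mixed_distillation_loss(images)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At test time, as is common in KD-based anomaly detection, per-pixel anomaly scores would be derived from the same teacher-student feature discrepancies: since the student only learns to imitate the teacher on normal data, large discrepancies indicate anomalous regions.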