{"title":"一种改进的反蒸馏模型用于无监督异常检测","authors":"Van-Duc Nguyen, Hoang Huu Bach, L. Trang","doi":"10.1109/IMCOM56909.2023.10035610","DOIUrl":null,"url":null,"abstract":"Using knowledge distillation for unsupervised anomaly detection problems is more efficient. Recently, a reverse distillation (RD) model has been presented a novel teacher-student (T-S) model for the problem [7]. In the model, the student network uses the one-class embedding from the teacher model as input with the goal of restoring the teacher's rep-resentations. The knowledge distillation starts with high-level abstract presentations and moves down to low-level aspects using a model called one-class bottleneck embedding (OCBE). Although its performance is expressive, it still leverages the power of transforming input images before applying this architecture. Instead of only using raw images, in this paper, we transform them using augmentation techniques. The teacher will encode raw and transformed inputs to get raw representation (encoded from raw inputs) and transformed representation (encoded from transformed inputs). The student must restore the transformed representation from the bottleneck to the raw representation. Testing results obtained on benchmarks for AD and one-class novelty detection showed that our proposed model outperforms the SOTA ones, proving the utility and applicability of the suggested strategy.","PeriodicalId":230213,"journal":{"name":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Improved Reverse Distillation Model for Unsupervised Anomaly Detection\",\"authors\":\"Van-Duc Nguyen, Hoang Huu Bach, L. Trang\",\"doi\":\"10.1109/IMCOM56909.2023.10035610\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Using knowledge distillation for unsupervised anomaly detection problems is more efficient. Recently, a reverse distillation (RD) model has been presented a novel teacher-student (T-S) model for the problem [7]. In the model, the student network uses the one-class embedding from the teacher model as input with the goal of restoring the teacher's rep-resentations. The knowledge distillation starts with high-level abstract presentations and moves down to low-level aspects using a model called one-class bottleneck embedding (OCBE). Although its performance is expressive, it still leverages the power of transforming input images before applying this architecture. Instead of only using raw images, in this paper, we transform them using augmentation techniques. The teacher will encode raw and transformed inputs to get raw representation (encoded from raw inputs) and transformed representation (encoded from transformed inputs). The student must restore the transformed representation from the bottleneck to the raw representation. 
Testing results obtained on benchmarks for AD and one-class novelty detection showed that our proposed model outperforms the SOTA ones, proving the utility and applicability of the suggested strategy.\",\"PeriodicalId\":230213,\"journal\":{\"name\":\"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-01-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IMCOM56909.2023.10035610\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IMCOM56909.2023.10035610","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Improved Reverse Distillation Model for Unsupervised Anomaly Detection
Knowledge distillation is an efficient approach to unsupervised anomaly detection. Recently, a reverse distillation (RD) model was presented as a novel teacher-student (T-S) model for this problem [7]. In that model, the student network takes the one-class embedding produced by the teacher as input, with the goal of restoring the teacher's representations. The knowledge distillation starts from high-level abstract representations and moves down to low-level ones through a module called one-class bottleneck embedding (OCBE). Although its performance is impressive, the model does not yet exploit the benefit of transforming the input images before they enter this architecture. In this paper, instead of using only raw images, we also transform them with augmentation techniques. The teacher encodes both the raw and the transformed inputs, yielding a raw representation (encoded from the raw inputs) and a transformed representation (encoded from the transformed inputs). The student must then restore the raw representation from the bottleneck embedding of the transformed representation. Test results on benchmarks for anomaly detection (AD) and one-class novelty detection show that the proposed model outperforms state-of-the-art (SOTA) ones, demonstrating the utility and applicability of the suggested strategy.
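
To make the training objective concrete, below is a minimal PyTorch-style sketch of the augmented reverse-distillation loss described in the abstract. The names teacher_encoder, bottleneck (the OCBE module), student_decoder, and augment are hypothetical placeholders, not the authors' code; the multi-scale cosine-similarity loss follows the formulation of the original RD model, while the raw/transformed pairing reflects the change proposed in this paper.

import torch
import torch.nn.functional as F

def rd_loss(teacher_encoder, bottleneck, student_decoder, raw, augment):
    # One training step of the augmented reverse-distillation objective.
    # raw:     batch of raw images, shape (B, C, H, W)
    # augment: callable applying data-augmentation transforms to the batch
    transformed = augment(raw)

    with torch.no_grad():                          # the teacher is frozen
        raw_feats = teacher_encoder(raw)           # raw representation (multi-scale)
        trans_feats = teacher_encoder(transformed) # transformed representation

    # The student decodes the one-class bottleneck embedding of the
    # transformed representation back towards the raw representation.
    student_feats = student_decoder(bottleneck(trans_feats))

    loss = 0.0
    for f_t, f_s in zip(raw_feats, student_feats):
        # Cosine distance along the channel axis between teacher (raw) and
        # student features, averaged over all spatial locations at each scale.
        cos = F.cosine_similarity(f_t.flatten(2), f_s.flatten(2), dim=1)
        loss = loss + (1.0 - cos).mean()
    return loss

At inference, the same per-location cosine distances between the teacher's representation of a test image and the student's reconstruction can serve as an anomaly map, as in the original RD model; that scoring step is omitted from this sketch.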