An Improved Reverse Distillation Model for Unsupervised Anomaly Detection

Van-Duc Nguyen, Hoang Huu Bach, L. Trang
DOI: 10.1109/IMCOM56909.2023.10035610
Published in: 2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM)
Publication date: 2023-01-03
Citation count: 0

Abstract

Using knowledge distillation for unsupervised anomaly detection problems is efficient. Recently, a reverse distillation (RD) model was presented as a novel teacher-student (T-S) model for the problem [7]. In that model, the student network uses the one-class embedding from the teacher model as input, with the goal of restoring the teacher's representations. The knowledge distillation starts from high-level abstract representations and moves down to low-level aspects, using a module called one-class bottleneck embedding (OCBE). Although its performance is impressive, it can still be improved by transforming the input images before applying this architecture. Instead of using only raw images, in this paper we transform them using augmentation techniques. The teacher encodes both raw and transformed inputs to obtain a raw representation (encoded from raw inputs) and a transformed representation (encoded from transformed inputs). The student must restore the transformed representation from the bottleneck back to the raw representation. Testing results obtained on benchmarks for anomaly detection (AD) and one-class novelty detection show that our proposed model outperforms the SOTA ones, proving the utility and applicability of the suggested strategy.
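The abstract does not give implementation details, but reverse-distillation AD models typically score anomalies by the per-location cosine distance between corresponding teacher and student feature maps: where the student fails to restore the teacher's representation, the distance is high. The sketch below is a minimal illustration of that scoring step only (the function name, shapes, and NumPy formulation are our assumptions, not the authors' code):

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats, eps=1e-8):
    """Per-location cosine distance between a teacher and a student
    feature map of shape (C, H, W), as commonly used in
    reverse-distillation anomaly detection. Returns an (H, W) map
    where larger values indicate poorer restoration (more anomalous).
    NOTE: illustrative sketch, not the paper's implementation."""
    c = teacher_feats.shape[0]
    t = teacher_feats.reshape(c, -1)          # (C, H*W)
    s = student_feats.reshape(c, -1)          # (C, H*W)
    cos = (t * s).sum(axis=0) / (
        np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + eps)
    return (1.0 - cos).reshape(teacher_feats.shape[1:])

# Sanity check with random features: a student that perfectly
# restores the teacher's representation yields a near-zero map.
f = np.random.rand(8, 4, 4)
m = anomaly_map(f, f)
```

In the proposed variant, the teacher would encode the augmented image while the student is trained to restore the raw image's representation, so this same distance serves as both training loss (averaged over locations) and test-time anomaly score.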