Conditional generative data-free knowledge distillation
Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou
Comput. Vis. Image Underst., 2021. DOI: 10.2139/ssrn.4039886
Abstract
Knowledge distillation has achieved remarkable results in model compression. However, most existing methods require the original training data, which is often unavailable due to privacy and security concerns. In this paper, we propose a conditional generative data-free knowledge distillation (CGDD) framework for training lightweight networks without any training data. The method performs efficient knowledge distillation via conditional image generation: we treat preset labels as ground truth to train a conditional generator in a semi-supervised manner, and the trained generator can then produce training images of specified classes. To train the student network, we force it to extract the knowledge hidden in the teacher's feature maps, which provide crucial cues for learning. Moreover, we construct an adversarial training framework with several loss functions to promote distillation performance; it helps the student model explore a larger data space. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on different datasets. Compared with other data-free works, our method achieves state-of-the-art results on CIFAR100, Caltech101, and several versions of ImageNet. The code will be released.
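For a concrete picture of the training scheme the abstract describes, the loop below is a minimal PyTorch sketch: a conditional generator synthesizes images for preset class labels while adversarially challenging the student, and the student is distilled on those images. All architectures, loss terms, and hyperparameters here are illustrative assumptions, not the authors' released implementation; for brevity the student matches the teacher's logits rather than intermediate feature maps.

```python
# Minimal sketch of a CGDD-style adversarial data-free distillation round.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM, IMG_SIZE = 10, 100, 32  # assumed toy sizes

class ConditionalGenerator(nn.Module):
    """Maps (noise, preset class label) to a synthetic training image."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * IMG_SIZE * IMG_SIZE),
            nn.Tanh(),
        )

    def forward(self, z, y):
        h = torch.cat([z, self.embed(y)], dim=1)
        return self.net(h).view(-1, 3, IMG_SIZE, IMG_SIZE)

def distill_step(generator, teacher, student, g_opt, s_opt,
                 batch_size=64, device="cpu"):
    """One adversarial round: update the generator, then the student.

    The teacher is assumed frozen (p.requires_grad_(False)) and in eval mode.
    """
    z = torch.randn(batch_size, LATENT_DIM, device=device)
    # Preset labels treated as ground truth for the generator.
    y = torch.randint(0, NUM_CLASSES, (batch_size,), device=device)

    # Generator step: produce images the teacher classifies as the preset
    # labels, while maximizing teacher/student disagreement (adversarial term).
    x = generator(z, y)
    t_logits = teacher(x)
    g_loss = F.cross_entropy(t_logits, y) \
        - F.l1_loss(student(x), t_logits.detach())
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Student step: on freshly generated images, imitate the teacher's outputs.
    x = generator(z, y).detach()
    s_loss = F.l1_loss(student(x), teacher(x).detach())
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return g_loss.item(), s_loss.item()
```

Any CIFAR-style classifiers can stand in for the teacher and student (e.g. two torchvision ResNets), with g_opt and s_opt being optimizers over the generator's and student's parameters, respectively; the alternating generator/student updates are what let the student explore a larger data space than a fixed synthetic set would.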