{"title":"BERT Model Compression With Decoupled Knowledge Distillation And Representation Learning","authors":"Linna Zhang, Yuehui Chen, Yi Cao, Ya-ou Zhao","doi":"10.1145/3573834.3574482","DOIUrl":null,"url":null,"abstract":"Pre-trained language models such as BERT have proven essential in natural language processing(NLP). However, their huge number of parameters and training cost make them very limited in practical deployment. To overcome BERT’s lack of computing resources, we propose a BERT compression method by applying decoupled knowledge distillation and representation learning, compressing the large model(teacher) into a lightweight network(student). Decoupled knowledge distillation divides the classical distillation loss into target related knowledge distillation(TRKD) and non-target related knowledge distillation(NRKD). Representation learning pools the Transformer output of each two layers, and the student network learns the intermediate features of the teacher network. It has better results on tasks of Sentiment Classification and Paraphrase Similarity Matching, retaining 98.9% performance of the large model.","PeriodicalId":345434,"journal":{"name":"Proceedings of the 4th International Conference on Advanced Information Science and System","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Conference on Advanced Information Science and System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3573834.3574482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Pre-trained language models such as BERT have proven essential in natural language processing (NLP). However, their huge parameter counts and training costs severely limit practical deployment. To reduce BERT's demand for computing resources, we propose a BERT compression method that applies decoupled knowledge distillation and representation learning, compressing the large model (teacher) into a lightweight network (student). Decoupled knowledge distillation divides the classical distillation loss into target-related knowledge distillation (TRKD) and non-target-related knowledge distillation (NRKD). Representation learning pools the Transformer outputs of every two layers, so that the student network learns the intermediate features of the teacher network. The compressed model achieves strong results on Sentiment Classification and Paraphrase Similarity Matching tasks, retaining 98.9% of the large model's performance.
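
The sketch below illustrates, in PyTorch, how the two components described in the abstract might look: a distillation loss split into a target-related term and a non-target-related term, and an intermediate-feature loss that pools every two teacher layers onto a shallower student. The function names, temperature, loss weights, and the choice of average pooling and MSE are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of decoupled KD (TRKD + NRKD) and pooled feature matching.
# All hyperparameters and the pairwise pooling scheme are assumptions.
import torch
import torch.nn.functional as F


def decoupled_kd_loss(student_logits, teacher_logits, target, T=4.0, alpha=1.0, beta=1.0):
    """Split the classical KD loss into a target-related term (TRKD) and a
    non-target-related term (NRKD), weighted separately."""
    B, C = student_logits.shape
    mask = F.one_hot(target, num_classes=C).bool()

    # TRKD: KL divergence between binary distributions [p(target), p(non-target)].
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    b_s = torch.stack([p_s[mask], 1.0 - p_s[mask]], dim=1)
    b_t = torch.stack([p_t[mask], 1.0 - p_t[mask]], dim=1)
    trkd = F.kl_div(b_s.clamp_min(1e-8).log(), b_t, reduction="batchmean") * (T ** 2)

    # NRKD: KL divergence over the remaining (non-target) classes only.
    nt_s = student_logits[~mask].view(B, C - 1)
    nt_t = teacher_logits[~mask].view(B, C - 1)
    nrkd = F.kl_div(F.log_softmax(nt_s / T, dim=1),
                    F.softmax(nt_t / T, dim=1),
                    reduction="batchmean") * (T ** 2)

    return alpha * trkd + beta * nrkd


def representation_loss(student_hidden, teacher_hidden):
    """Assumed feature-matching term: average-pool every two teacher layers so a
    deep teacher aligns with a student of half the depth, then match with MSE."""
    losses = []
    for i, h_s in enumerate(student_hidden):
        h_t = 0.5 * (teacher_hidden[2 * i] + teacher_hidden[2 * i + 1])
        losses.append(F.mse_loss(h_s, h_t))
    return torch.stack(losses).mean()
```

In training, these two terms would typically be added to the student's task cross-entropy loss with tunable weights; the paper's actual weighting and pooling details should be taken from the full text.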