Distilling Knowledge from BERT into Simple Fully Connected Neural Networks for Efficient Vertical Retrieval
Peiyang Liu, Xi Wang, Lin Wang, Wei Ye, Xiangyu Xi, Shikun Zhang
Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM '21), October 2021. DOI: 10.1145/3459637.3481909
Abstract
With fewer parameters and faster inference than BERT, distilled BERT models are better suited to efficient vertical retrieval in online sponsored vertical search, where latency requirements are strict. Unfortunately, most of these models still fall far short of the ideal inference speed. This paper presents a novel and effective method for distilling knowledge from BERT into simple fully connected neural networks (FNN). Extensive experiments on English and Chinese datasets demonstrate that our method achieves results comparable to existing distilled BERT models while accelerating inference by more than ten times. We have successfully applied our method to our online sponsored vertical search engine and obtained remarkable improvements.
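To make the distillation setup concrete, the sketch below shows a generic response-based distillation of a BERT relevance teacher into a small fully connected student. It is a minimal illustration, not the paper's method: the student architecture, input features, loss weighting (alpha), and the logit-matching soft loss are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: distilling a BERT ranker's relevance scores into a
# small fully connected student. Layer sizes, feature dimensions, and the
# MSE-based soft loss are illustrative choices, not the paper's recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FNNStudent(nn.Module):
    """Small fully connected network that scores a query-document pair."""

    def __init__(self, input_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # One relevance logit per query-document pair.
        return self.net(features).squeeze(-1)


def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    # Soft target: match the teacher's relevance scores (logit matching).
    soft = F.mse_loss(student_logits, teacher_logits)
    # Hard target: fit the ground-truth click/relevance labels directly.
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


# Toy usage with random tensors; in practice `teacher_logits` would come from
# a fine-tuned BERT ranker and `features` from a cheap query-document encoder.
student = FNNStudent()
features = torch.randn(32, 256)
teacher_logits = torch.randn(32)
labels = torch.randint(0, 2, (32,)).float()

loss = distillation_loss(student(features), teacher_logits, labels)
loss.backward()
```

Because the student is a shallow FNN rather than a Transformer, its forward pass is a handful of matrix multiplications, which is what makes the order-of-magnitude inference speedup reported in the abstract plausible.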