{"title":"基于BERT嵌入的DCNN-BiGRU文本分类模型","authors":"He Huang, Xiaoyuan Jing, Fei Wu, Yong-Fang Yao, Xinyu Zhang, Xiwei Dong","doi":"10.1109/IUCC/DSCI/SmartCNS.2019.00132","DOIUrl":null,"url":null,"abstract":"Text Classification is a hot topic in natural language processing. In view of the strong correlation the structure of natural language, direct translation the text into vector will lead to too high dimension. In addition, traditional word vector usually maps words with a single vector, which cannot represent the polyseme of words and affect the accuracy of the final classification. In this paper, we propose a novel DCNN-BiGRU (Deep Convolutional Neural Network Bidirection Gated Recurrent) text classification model based on BERT(Bidirectional Encoder Representations from Transformer) embedding. The model adopts the BERT to train the language model of word semantic representation. The semantic vector is generated dynamically according to the context of the word, and then it is put into the DCNN-BiGRU hybrid model. By doing so, the semantic vector not only contains the local features of text but also the context features of text. 
Experiments on CCERT Chinese email sample set and movie comment data set verify the validity of this model.","PeriodicalId":410905,"journal":{"name":"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"DCNN-BiGRU Text Classification Model Based on BERT Embedding\",\"authors\":\"He Huang, Xiaoyuan Jing, Fei Wu, Yong-Fang Yao, Xinyu Zhang, Xiwei Dong\",\"doi\":\"10.1109/IUCC/DSCI/SmartCNS.2019.00132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Text Classification is a hot topic in natural language processing. In view of the strong correlation the structure of natural language, direct translation the text into vector will lead to too high dimension. In addition, traditional word vector usually maps words with a single vector, which cannot represent the polyseme of words and affect the accuracy of the final classification. In this paper, we propose a novel DCNN-BiGRU (Deep Convolutional Neural Network Bidirection Gated Recurrent) text classification model based on BERT(Bidirectional Encoder Representations from Transformer) embedding. The model adopts the BERT to train the language model of word semantic representation. The semantic vector is generated dynamically according to the context of the word, and then it is put into the DCNN-BiGRU hybrid model. By doing so, the semantic vector not only contains the local features of text but also the context features of text. 
Experiments on CCERT Chinese email sample set and movie comment data set verify the validity of this model.\",\"PeriodicalId\":410905,\"journal\":{\"name\":\"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IUCC/DSCI/SmartCNS.2019.00132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IUCC/DSCI/SmartCNS.2019.00132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DCNN-BiGRU Text Classification Model Based on BERT Embedding
Text classification is a hot topic in natural language processing. Because the structure of natural language is strongly correlated, translating text directly into vectors leads to excessively high dimensionality. In addition, traditional word embeddings map each word to a single vector, which cannot represent the polysemy of words and reduces the accuracy of the final classification. In this paper, we propose a novel DCNN-BiGRU (Deep Convolutional Neural Network with Bidirectional Gated Recurrent Units) text classification model based on BERT (Bidirectional Encoder Representations from Transformers) embeddings. The model uses BERT to train the language model for word semantic representation: a semantic vector is generated dynamically according to each word's context and is then fed into the DCNN-BiGRU hybrid model. As a result, the learned representation captures not only the local features of the text but also its contextual features. Experiments on the CCERT Chinese email sample set and a movie review data set verify the validity of this model.
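The architecture described above — contextual embeddings (such as BERT's token outputs) fed in parallel to a convolutional branch for local features and a bidirectional GRU branch for context features, with the two feature vectors concatenated for classification — can be sketched as a minimal NumPy forward pass. All dimensions and the random weights and input (standing in for BERT token embeddings) are illustrative assumptions; the abstract does not specify the model's actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_maxpool(X, kernels, width=3):
    """Convolutional branch: slide a window over the sequence,
    apply ReLU, then max-pool over time -> local feature vector."""
    seq_len, d = X.shape
    windows = np.stack([X[i:i + width].ravel()
                        for i in range(seq_len - width + 1)])
    feats = np.maximum(windows @ kernels.T, 0.0)   # ReLU activation
    return feats.max(axis=0)                       # max-over-time pooling

def gru(X, Wz, Uz, Wr, Ur, Wh, Uh):
    """One direction of the GRU: standard update/reset-gate recurrence,
    returning the final hidden state."""
    h = np.zeros(Uz.shape[0])
    for x in X:
        z = sigmoid(Wz @ x + Uz @ h)               # update gate
        r = sigmoid(Wr @ x + Ur @ h)               # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

# Illustrative sizes (not from the paper).
d, hidden, n_filters, n_classes, seq_len = 8, 6, 4, 2, 10
X = rng.standard_normal((seq_len, d))   # stand-in for BERT token embeddings

kernels = rng.standard_normal((n_filters, 3 * d)) * 0.1
def gru_params():
    # [Wz, Uz, Wr, Ur, Wh, Uh]: input weights (hidden, d),
    # recurrent weights (hidden, hidden), alternating.
    return [rng.standard_normal((hidden, d if i % 2 == 0 else hidden)) * 0.1
            for i in range(6)]
fwd, bwd = gru_params(), gru_params()

local = conv_maxpool(X, kernels)                        # DCNN branch
context = np.concatenate([gru(X, *fwd), gru(X[::-1], *bwd)])  # BiGRU branch
features = np.concatenate([local, context])             # fused representation

W_out = rng.standard_normal((n_classes, features.size)) * 0.1
logits = W_out @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                    # softmax over classes
```

The key design point the abstract emphasizes is the concatenation step: `features` carries both the convolutional (local) and recurrent (contextual) views of the same BERT-embedded sequence before the final classifier.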