{"title":"一种基于区域嵌入和LSTM的文本分类模型","authors":"Ying Li, Ming Ye","doi":"10.1145/3404555.3404643","DOIUrl":null,"url":null,"abstract":"In the field of natural language processing, recurrent neural networks are good at capturing long-range dependent information and can effectively complete text classification tasks. However, Recurrent neural network is model the entire sentence in the process of text feature extraction, which easily ignores the deep semantic information of the local phrase of the text. To further enhance the expressiveness of text features, we propose a text classification model base on region embedding and LSTM (RELSTM). RELSTM first divides regions for text and then generates region embedding. We introduce the learnable local context unit(LCU) to calculate the relative position information of the middle word and its influence on the context words in the region, and obtain a region matrix representation. In order to reduce the complexity of the model, the max pooling operation is applied to the region matrix and we obtain a dense region embedding. Then, we use LSTM's long-term memory of text information to extract the global characteristics. The model is verified on public data sets, and the results are compared using 5 benchmark models. Experimental results on three dataset show that RELSTM has better overall performance and is effective in improving the accuracy of text classification compared with traditional deep learning models.","PeriodicalId":220526,"journal":{"name":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A Text Classification Model Base On Region Embedding AND LSTM\",\"authors\":\"Ying Li, Ming Ye\",\"doi\":\"10.1145/3404555.3404643\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of natural language processing, recurrent neural networks are good at capturing long-range dependent information and can effectively complete text classification tasks. However, Recurrent neural network is model the entire sentence in the process of text feature extraction, which easily ignores the deep semantic information of the local phrase of the text. To further enhance the expressiveness of text features, we propose a text classification model base on region embedding and LSTM (RELSTM). RELSTM first divides regions for text and then generates region embedding. We introduce the learnable local context unit(LCU) to calculate the relative position information of the middle word and its influence on the context words in the region, and obtain a region matrix representation. In order to reduce the complexity of the model, the max pooling operation is applied to the region matrix and we obtain a dense region embedding. Then, we use LSTM's long-term memory of text information to extract the global characteristics. The model is verified on public data sets, and the results are compared using 5 benchmark models. 
Experimental results on three dataset show that RELSTM has better overall performance and is effective in improving the accuracy of text classification compared with traditional deep learning models.\",\"PeriodicalId\":220526,\"journal\":{\"name\":\"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3404555.3404643\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3404555.3404643","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Text Classification Model Base On Region Embedding AND LSTM
In the field of natural language processing, recurrent neural networks are good at capturing long-range dependencies and can effectively perform text classification. However, a recurrent neural network models the entire sentence during text feature extraction, which easily overlooks the deep semantic information carried by local phrases. To further enhance the expressiveness of text features, we propose a text classification model based on region embedding and LSTM (RELSTM). RELSTM first divides the text into regions and then generates region embeddings. We introduce a learnable local context unit (LCU) to capture the relative position information of the middle word and its influence on the context words in the region, yielding a region matrix representation. To reduce model complexity, a max pooling operation is applied to the region matrix to obtain a dense region embedding. The LSTM's long-term memory of textual information is then used to extract global features. The model is verified on public datasets, and the results are compared against five benchmark models. Experimental results on three datasets show that RELSTM achieves better overall performance and effectively improves text classification accuracy compared with traditional deep learning models.
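The abstract describes the RELSTM pipeline only at a high level, so the following is a minimal PyTorch-style sketch of how such a model could be assembled: per-word regions are built, a learnable local context unit (LCU) reweights each word embedding in the region according to its position relative to the middle word, max pooling compresses the resulting region matrix into a dense region embedding, and an LSTM plus a linear layer produce class logits. The class name RELSTM, all dimensions, the use of token id 0 as padding, and the exact LCU interaction (element-wise product followed by max pooling, in the spirit of prior region-embedding work) are assumptions for illustration, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RELSTM(nn.Module):
    """Illustrative sketch of a region-embedding + LSTM classifier (not the authors' code)."""

    def __init__(self, vocab_size, embed_dim=128, region_size=5,
                 hidden_dim=256, num_classes=4):
        super().__init__()
        assert region_size % 2 == 1, "odd region size so each word is the middle word of its region"
        self.region_size = region_size
        self.embed = nn.Embedding(vocab_size, embed_dim)              # word embeddings
        # Learnable local context unit: one (region_size x embed_dim) matrix per word,
        # encoding how the middle word weights each relative position in its region.
        self.lcu = nn.Embedding(vocab_size, region_size * embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # global, long-range features
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):                                  # tokens: (batch, seq_len) token ids
        b, n = tokens.shape
        c, d = self.region_size, self.embed.embedding_dim
        half = c // 2
        # Pad with id 0 (assumed padding token) so every position has a full region of c words.
        padded = F.pad(tokens, (half, half))                    # (b, n + c - 1)
        regions = padded.unfold(1, c, 1)                        # (b, n, c) word ids per region
        region_emb = self.embed(regions)                        # (b, n, c, d)
        # LCU of the middle word, one vector per relative position in its region.
        lcu = self.lcu(tokens).view(b, n, c, d)                 # (b, n, c, d)
        projected = lcu * region_emb                            # position-wise interaction -> region matrix
        region_vec = projected.max(dim=2).values                # max pooling -> dense region embedding (b, n, d)
        # LSTM over the sequence of region embeddings extracts global features.
        _, (h_n, _) = self.lstm(region_vec)
        return self.fc(h_n[-1])                                 # class logits (b, num_classes)

A forward pass takes a (batch, seq_len) tensor of token ids and returns (batch, num_classes) logits, e.g. RELSTM(vocab_size=30000)(torch.randint(1, 30000, (2, 50))). Because the max pooling keeps each region's representation at the word-embedding dimension, the LSTM's input size is independent of the region size.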