Full Attention-Based Bi-GRU Neural Network for News Text Classification

Qinting Tang, Jian Li, Jiayu Chen, Hengtong Lu, Yu Du, Kehan Yang

2019 IEEE 5th International Conference on Computer and Communications (ICCC), December 2019, pp. 1970-1974. DOI: 10.1109/ICCC47050.2019.9064061
This paper proposes a novel attention-based approach to text classification. Recent deep learning models with the traditional attention mechanism mainly learn the weights of the steps over the entire text. However, the information at each step is filtered by the encoder, and the same information affects different steps differently. This paper proposes a full attention-based bidirectional GRU (Bi-GRU) neural network, called FABG. FABG uses a Bi-GRU to learn the semantic information of the text, and uses a full attention mechanism to learn the weights of the previous and current outputs of the Bi-GRU at each step, which lets the representation at each step retain the important information and ignore the irrelevant information. Finally, a pooling layer produces the representation of the text. FABG can thereby learn more information, which improves the effect of text classification. Experiments on the English news dataset agnews and the Chinese news dataset chnews show that FABG achieves better performance than the baselines.
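The abstract describes the architecture only at a high level. As a concrete illustration, the following is a minimal PyTorch sketch of the FABG pipeline (embedding, Bi-GRU encoder, step-wise attention over previous and current outputs, pooling, classifier). The class name FullAttentionBiGRU, the scaled dot-product scoring function, the causal mask, the max pooling, and all hyperparameter values are assumptions chosen for illustration; the paper's exact attention formulation is not given in the abstract.

```python
# Illustrative sketch of the FABG idea, NOT the authors' released code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullAttentionBiGRU(nn.Module):
    """Bi-GRU encoder + attention over previous/current outputs + pooling.
    Scoring function and hyperparameters are assumptions for illustration."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> Bi-GRU outputs: (batch, seq, 2*hidden)
        h, _ = self.bigru(self.embedding(token_ids))
        seq_len = h.size(1)
        # Pairwise attention scores between steps (scaled dot product is an
        # assumed scoring function; the abstract does not specify one).
        scores = torch.bmm(h, h.transpose(1, 2)) / math.sqrt(h.size(-1))
        # Each step weighs only its previous and current Bi-GRU outputs,
        # so mask out future positions before the softmax.
        causal = torch.tril(torch.ones(seq_len, seq_len,
                                       dtype=torch.bool, device=h.device))
        scores = scores.masked_fill(~causal, float('-inf'))
        attended = torch.bmm(F.softmax(scores, dim=-1), h)  # (batch, seq, 2*hidden)
        # Pooling over the steps yields the text representation.
        text_repr = attended.max(dim=1).values              # (batch, 2*hidden)
        return self.classifier(text_repr)

# Usage sketch: classify a batch of two 10-token texts into 4 classes.
model = FullAttentionBiGRU(vocab_size=30000)
logits = model(torch.randint(0, 30000, (2, 10)))            # shape (2, 4)
```

The causal mask is the sketch's reading of "weights of previous and current outputs at each step": position t attends to positions 1..t, while the bidirectional GRU still lets each output summarize both directions of context.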