{"title":"具有方向性自关注的BiGRU文本分类","authors":"Tiantian Jiang, Zhanguo Wang","doi":"10.1109/ICTech55460.2022.00085","DOIUrl":null,"url":null,"abstract":"In the field of natural language processing, text classification is a key daily task. The main goal is to obtain effective features from text information, find the correspondence between feature representations and category labels, so as to classify the text. From the perspective of data flow, it is mainly divided into five stages: text preprocessing, vector representation of text, feature extraction, classifier classification and model training to complete text classification tasks. Among them, feature extraction is a very important stage, and it is also the focus of this article. GRU can learn long-term dependencies from learned local features, and bidirectional GRU can learn hidden features in sentences. The self-attention mechanism exhibits superior performance in many fields in natural language processing. It can mine the autocorrelation of data and highlight key information by adjusting the weight of keywords. Therefore, in view of the shortcomings of existing models in text global information modeling, this paper combines bidirectional GRU and self-attention mechanism, and proposes a hybrid model BiGRU-MA for text classification, which can extract deep semantic features and solve the problem of classification performance degradation due to the lack of semantic information. 
This article uses text classification related technology to model, describes the modeling ideas, and introduces the technology used, and finally compares experiments with existing models to verify the effectiveness of the model.","PeriodicalId":290836,"journal":{"name":"2022 11th International Conference of Information and Communication Technology (ICTech))","volume":"180 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Text Classification Using BiGRU with Directional Self-Attention\",\"authors\":\"Tiantian Jiang, Zhanguo Wang\",\"doi\":\"10.1109/ICTech55460.2022.00085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the field of natural language processing, text classification is a key daily task. The main goal is to obtain effective features from text information, find the correspondence between feature representations and category labels, so as to classify the text. From the perspective of data flow, it is mainly divided into five stages: text preprocessing, vector representation of text, feature extraction, classifier classification and model training to complete text classification tasks. Among them, feature extraction is a very important stage, and it is also the focus of this article. GRU can learn long-term dependencies from learned local features, and bidirectional GRU can learn hidden features in sentences. The self-attention mechanism exhibits superior performance in many fields in natural language processing. It can mine the autocorrelation of data and highlight key information by adjusting the weight of keywords. 
Therefore, in view of the shortcomings of existing models in text global information modeling, this paper combines bidirectional GRU and self-attention mechanism, and proposes a hybrid model BiGRU-MA for text classification, which can extract deep semantic features and solve the problem of classification performance degradation due to the lack of semantic information. This article uses text classification related technology to model, describes the modeling ideas, and introduces the technology used, and finally compares experiments with existing models to verify the effectiveness of the model.\",\"PeriodicalId\":290836,\"journal\":{\"name\":\"2022 11th International Conference of Information and Communication Technology (ICTech))\",\"volume\":\"180 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 11th International Conference of Information and Communication Technology (ICTech))\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTech55460.2022.00085\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 11th International Conference of Information and Communication Technology (ICTech))","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTech55460.2022.00085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Text Classification Using BiGRU with Directional Self-Attention
In natural language processing, text classification is a core task. Its main goal is to extract effective features from text and to learn the correspondence between feature representations and category labels, so that texts can be assigned to classes. Viewed as a data pipeline, the task divides into five stages: text preprocessing, vector representation of the text, feature extraction, classification, and model training. Among these, feature extraction is a crucial stage and the focus of this article. A GRU can learn long-term dependencies from local features, and a bidirectional GRU can further capture hidden features of a sentence from both directions. The self-attention mechanism has shown superior performance in many areas of natural language processing: it can mine the autocorrelation within the data and highlight key information by adjusting the weights of keywords. Therefore, to address the shortcomings of existing models in modeling global text information, this paper combines a bidirectional GRU with a self-attention mechanism and proposes a hybrid model, BiGRU-MA, for text classification. The model extracts deep semantic features and mitigates the degradation of classification performance caused by missing semantic information. The article describes the modeling ideas, introduces the techniques used, and finally verifies the effectiveness of the model through comparative experiments against existing models.
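The architecture described above — bidirectional recurrent states pooled by self-attention into a fixed-size sentence vector — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the GRU itself is elided (random arrays stand in for the forward and backward hidden states), and the projection matrices `W_q` and `W_k` are hypothetical names for learnable parameters.

```python
# Minimal sketch of self-attention pooling over bidirectional hidden states,
# in the spirit of a BiGRU + self-attention text classifier. Not the paper's
# code: the BiGRU is elided and random arrays stand in for its outputs.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_pool(H, W_q, W_k):
    """H: (T, d) hidden states -> (d,) attended sentence summary."""
    Q, K = H @ W_q, H @ W_k                   # project to query/key space
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product scores
    A = softmax(scores, axis=-1)              # (T, T) attention weights
    return (A @ H).mean(axis=0)               # reweight states, pool over time

rng = np.random.default_rng(0)
T, d = 6, 8                                   # toy sequence length, state size
H_fwd = rng.normal(size=(T, d // 2))          # stand-in for forward GRU states
H_bwd = rng.normal(size=(T, d // 2))          # stand-in for backward GRU states
H = np.concatenate([H_fwd, H_bwd], axis=-1)   # (T, d) bidirectional states

W_q = rng.normal(size=(d, d))                 # hypothetical learned projections
W_k = rng.normal(size=(d, d))
summary = self_attention_pool(H, W_q, W_k)    # fixed-size sentence vector
```

The attention weights `A` are what "adjusting the weight of keywords" refers to: each row is a distribution over time steps, so informative positions contribute more to the pooled summary that a downstream softmax classifier would consume.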