{"title":"基于Bi-GRU的中文实体关系抽取方法","authors":"Jian-qiong Xiao, Zhi-yong Zhou, Xingrong Luo","doi":"10.2991/MASTA-19.2019.9","DOIUrl":null,"url":null,"abstract":"In order to solve some defects of single deep neural network in Chinese entity Relationship Extraction task, a hybrid neural network entity relationship extraction model is designed and implemented in this paper. The model combines convolution network and bidirectional GRU model with a unified architecture, by defining varisized regional list embedding, it produces nonobjective feature representations of word vectors in distinction positions, and it has only Chinese character vectors and Chinese character word vectors, without position embedding. The laboratory findings show that our method is very effective on the Chinese corpus ACE2005 dataset about entities extraction task. Introduction Aiming at the problem that the traditional CNN model ignores the text context and leads to the lack of text semantics, a convolution layer improvement algorithm is proposed in this paper. By stripping the convolution layer from the CNN model, the convolution layer structure is improved, by defining a varisized sizes regional list embedding, and producing nonobjective feature representation of word vectors at imparity locations, which results in more accurate feature representation by fusing multiple local features, and position vectors aren’t required. Related Work Many researchers have putted forward to a number of solid relational extraction methods based on deep neural networks. Liu et al. [1] is first team who used convolution neural network to automatically learn sentence representation for relational classification tasks, the characteristics of lexical features, lexicality and so on are added to the model, their model get F1 value in the corpus ACE2005 dataset, and exceed the kernel function method 9%. Dong-xu Zhang et al. [2] used RNN to get varisized location feature by training corpus. 
Zhang [3] proposed building a whole-sentence extraction model with a bidirectional long short-term memory network (Bi-LSTM). The model drew on many features, including NLP tools and lexical resources such as POS and NER tags, and achieved state-of-the-art results. To make full use of the complementary strengths of existing neural networks, Sun Ziyang et al. [4] used a Bi-LSTM to model the shortest dependency path of a sentence, feeding the output of a CNN into the LSTM during training. This method combines the advantages of both models: Bi-LSTM is good at capturing long-distance relationships, while CNN captures the local, flat characteristics of sentences. Building on this research, we propose a Chinese entity relationship extraction method based on a bidirectional Gated Recurrent Unit (RLE-BiGRU); its novelty lies in defining a new regional list embedding (RLE) vector. The method achieves an F1-score of 85.3% on the Chinese ACE2005 corpus, which demonstrates the validity of this work for Chinese entity relation extraction.

Model. In this section we introduce the RLE-BiGRU model in detail. Figure 1 shows the model architecture. Our model has six constituent parts:

International Conference on Modeling, Analysis, Simulation Technologies and Applications (MASTA 2019). Copyright © 2019, the Authors. Published by Atlantis Press. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
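The core dataflow described above — character embeddings, variable-sized region features, and a bidirectional GRU — can be sketched as follows. This is a minimal illustration with random weights and toy dimensions, not the authors' implementation; the convolution over several window sizes stands in for the regional list embedding, and every function name, dimension, and region size here is an assumption:

```python
# Illustrative sketch of an RLE-BiGRU-style pipeline: windows of several
# region sizes over character embeddings are projected and fused (summed),
# then a bidirectional GRU reads the fused features. Weights are random.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def region_features(embeds, region_sizes, out_dim):
    """Project each sliding window of each region size to out_dim,
    then sum the resulting local features per position."""
    seq_len, emb_dim = embeds.shape
    fused = np.zeros((seq_len, out_dim))
    for k in region_sizes:
        W = rng.standard_normal((k * emb_dim, out_dim)) * 0.1
        left = (k - 1) // 2                       # zero-pad so output
        padded = np.vstack([np.zeros((left, emb_dim)),        # length
                            embeds,                           # matches
                            np.zeros((k - 1 - left, emb_dim))])  # input
        for t in range(seq_len):
            fused[t] += np.tanh(padded[t:t + k].reshape(-1) @ W)
    return fused

def make_gru(in_dim, hid):
    # One weight matrix per gate: update (z), reset (r), candidate (h).
    return {g: rng.standard_normal((hid, in_dim + hid)) * 0.1
            for g in ("z", "r", "h")}

def gru_step(p, x, h):
    xh = np.concatenate([x, h])
    z = sigmoid(p["z"] @ xh)                      # update gate
    r = sigmoid(p["r"] @ xh)                      # reset gate
    h_new = np.tanh(p["h"] @ np.concatenate([x, r * h]))
    return (1 - z) * h + z * h_new

def bigru(xs, hid):
    """Run one GRU forward and one backward; concatenate per-step states."""
    fwd, bwd = make_gru(xs.shape[1], hid), make_gru(xs.shape[1], hid)
    hf, hb, out_f, out_b = np.zeros(hid), np.zeros(hid), [], []
    for x in xs:                                  # left-to-right pass
        hf = gru_step(fwd, x, hf)
        out_f.append(hf)
    for x in xs[::-1]:                            # right-to-left pass
        hb = gru_step(bwd, x, hb)
        out_b.append(hb)
    return np.hstack([np.array(out_f), np.array(out_b[::-1])])

# Toy run: 7 characters, 8-dim embeddings, region sizes {1, 3, 5}.
chars = rng.standard_normal((7, 8))
feats = region_features(chars, (1, 3, 5), out_dim=16)
states = bigru(feats, hid=12)
print(feats.shape, states.shape)                  # (7, 16) (7, 24)
```

Because the region features are position-aligned with the input, the BiGRU sees one fused vector per character, which is why the architecture can dispense with separate position embeddings.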
Advances in Intelligent Systems Research, volume 168.