APIE: An information extraction module designed based on the pipeline method
Xu Jiang, Yurong Cheng, Siyi Zhang, Juan Wang, Baoquan Ma
Array, Volume 21, Article 100331, December 2023. DOI: 10.1016/j.array.2023.100331
Abstract
Information extraction (IE) aims to discover and extract valuable information from unstructured text. The problem can be decomposed into two subtasks: named entity recognition (NER) and relation extraction (RE). Although IE has been studied for years, most efforts have focused on jointly modeling these two subtasks, either by casting them into a structured prediction framework or by performing multitask learning through shared representations. However, since the contextual representations needed by entity and relation models inherently capture different feature information, sharing a single encoder to capture the information required by both subtasks in the same space harms model accuracy. Recent research (Zhong and Chen, 2020) has also shown that using two separate encoders for the NER and RE tasks in a pipeline is effective, with the resulting model surpassing all previous joint models in accuracy. In this paper, we therefore design A Pipeline Information Extraction module called APIE. APIE combines the advantages of pipeline and joint methods, demonstrating higher accuracy and strong reasoning ability. Specifically, we design a multi-level-feature NER model based on an attention mechanism and a document-level RE model based on local context pooling. To demonstrate the effectiveness of the proposed approach, we conducted tests on multiple datasets. Extensive experimental results show that our model outperforms state-of-the-art methods and improves both accuracy and reasoning ability.
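To make the pipeline idea concrete, the sketch below shows the structural point the abstract makes: the NER and RE subtasks each get their own encoder instead of sharing one representation space, and the RE model consumes the entity positions produced by the NER stage. This is a minimal illustrative sketch only, not the authors' APIE implementation; all module names, layer sizes, label counts, and the simple entity-pair pooling are hypothetical stand-ins.

```python
# Illustrative pipeline sketch: two *separate* encoders, one for NER and one for RE.
# Hypothetical sizes and modules; not the APIE architecture from the paper.
import torch
import torch.nn as nn


class NEREncoder(nn.Module):
    """Token-level tagger with its own encoder (stand-in for the NER stage)."""

    def __init__(self, vocab_size: int, hidden: int = 256, num_labels: int = 9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(token_ids))   # (B, T, H)
        return self.classifier(h)                 # per-token entity-label logits


class REEncoder(nn.Module):
    """Separate encoder for relation extraction over a predicted entity pair."""

    def __init__(self, vocab_size: int, hidden: int = 256, num_relations: int = 5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids, head_idx, tail_idx) -> torch.Tensor:
        h = self.encoder(self.embed(token_ids))                   # (B, T, H)
        batch = torch.arange(h.size(0))
        head, tail = h[batch, head_idx], h[batch, tail_idx]       # entity representations
        return self.classifier(torch.cat([head, tail], dim=-1))   # relation logits


# Pipeline usage: run NER first, then pass predicted entity positions to the RE model.
if __name__ == "__main__":
    tokens = torch.randint(0, 1000, (1, 12))
    ner, re = NEREncoder(1000), REEncoder(1000)
    entity_logits = ner(tokens)
    relation_logits = re(tokens, head_idx=torch.tensor([2]), tail_idx=torch.tensor([7]))
    print(entity_logits.shape, relation_logits.shape)
```

Because the two encoders are trained and applied separately, each can specialize in the feature information its subtask needs, which is the motivation the abstract gives for preferring the pipeline design over a single shared encoder.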