{"title":"Domain Specific Text Preprocessing for Open Information Extraction","authors":"Chandan Prakash, Pavan Kumar Chittimalli, Ravindra Naik","doi":"10.1145/3511430.3511456","DOIUrl":null,"url":null,"abstract":"Preprocessing is an integral part of Natural Language Processing (NLP) based applications. Standard preprocessing steps consist of removal of irrelevant, unwanted characters or parts of the text based on several observed patterns, while preserving the original intent of the text. We introduce domain-specific preprocessing to filter domain-irrelevant parts of the text while preserving the intended, semantically relevant meaning and syntactic correctness of the text. For this, we define multiple patterns using the dependency tree that represents the Natural Language text based on its dependency grammar. We applied this technique and the patterns to the United States retirement domain documents for open information extraction task as a pre-cursor for mining business product information and rules, and were able to reduce the document data aka information for analysis and mining by at least 13%, which enhanced the F1-score of relation extraction by a minimum of 16%.","PeriodicalId":138760,"journal":{"name":"15th Innovations in Software Engineering Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"15th Innovations in Software Engineering Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3511430.3511456","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Preprocessing is an integral part of Natural Language Processing (NLP) based applications. Standard preprocessing steps remove irrelevant or unwanted characters and parts of the text based on several observed patterns, while preserving the original intent of the text. We introduce domain-specific preprocessing to filter out domain-irrelevant parts of the text while preserving its intended, semantically relevant meaning and its syntactic correctness. For this, we define multiple patterns over the dependency tree that represents the natural-language text according to its dependency grammar. We applied this technique and the patterns to United States retirement-domain documents for an open information extraction task, as a precursor to mining business product information and rules. We were able to reduce the document data to be analysed and mined by at least 13%, which improved the F1-score of relation extraction by a minimum of 16%.
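The core idea of pruning domain-irrelevant text via patterns over a dependency tree can be sketched as follows. This is a minimal, self-contained illustration, not the authors' actual patterns: the drop labels (`appos`, `parataxis`), the toy sentence, and its hand-written parse are all assumptions made for demonstration. In practice the parse would come from a dependency parser.

```python
# Hypothetical sketch: prune subtrees of a dependency parse whose root
# matches a "domain-irrelevant" pattern, keeping the remaining tokens in
# order so the sentence stays syntactically well formed.

def collect_subtree(root, children):
    """Return the set of token indices in the subtree rooted at `root`."""
    stack, seen = [root], set()
    while stack:
        i = stack.pop()
        if i not in seen:
            seen.add(i)
            stack.extend(children.get(i, ()))
    return seen

def prune(parse, drop_labels=frozenset({"appos", "parataxis"})):
    """parse: list of (index, token, dep_label, head_index) tuples.
    Removes every subtree whose root carries a label in drop_labels."""
    children = {}
    for i, _tok, _dep, head in parse:
        if head != i:  # the sentence root points to itself; skip that edge
            children.setdefault(head, []).append(i)
    dropped = set()
    for i, _tok, dep, _head in parse:
        if dep in drop_labels:
            dropped |= collect_subtree(i, children)
    return " ".join(tok for i, tok, _dep, _head in parse if i not in dropped)

# Toy parse of "The plan , a 401(k) , allows rollovers"
# (labels and head indices are illustrative, not from the paper):
parse = [
    (0, "The",       "det",   1),
    (1, "plan",      "nsubj", 6),
    (2, ",",         "punct", 4),
    (3, "a",         "det",   4),
    (4, "401(k)",    "appos", 1),  # appositive: an example drop pattern
    (5, ",",         "punct", 4),
    (6, "allows",    "ROOT",  6),
    (7, "rollovers", "dobj",  6),
]

print(prune(parse))  # the appositive subtree is removed
```

Dropping the subtree (rather than single tokens) is what keeps the output grammatical: the appositive's determiner and surrounding commas go with it, yielding "The plan allows rollovers".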