Dual lexical chaining for context based text classification

S. Chakraverty, Bhawna Juneja, U. Pandey, Ashima Arora

2015 International Conference on Advances in Computer Engineering and Applications, 19 March 2015. DOI: 10.1109/ICACEA.2015.7164744
Text classification enhances the accessibility and systematic organization of the vast reserves of data populating the World Wide Web. Despite great strides in the field, the domain of context-driven text classification offers fresh opportunities to develop more efficient context-oriented techniques with refined metrics. In this paper, we propose a novel approach to categorizing text documents using a dual lexical chaining technique. The algorithm first prepares a cohesive category-keyword matrix by feeding category names into the WordNet and Wikipedia ontologies, extracting lexically and semantically related keywords from them, and then expanding the keyword set through a keyword enrichment process. Next, WordNet is consulted again to find the degree of lexical cohesiveness between the tokens of a document. Strongly related terms are woven together into two separate lexical chains, one for their noun senses and another for their verb senses, which together represent the document's feature set. This segregation expresses word cohesiveness better because concept terms and action terms are treated distinctly. We propose a new metric to calculate the strength of a lexical chain. It includes a statistical part given by Term Frequency-Inverse Document Frequency-Relative Category Frequency (TF-IDF-RCF), itself an improvement on the conventional TF-IDF measure. The chain's contextual strength is determined by the degree of its lexical matching with the category-keyword matrix as well as by the relative positions of its constituent terms. Results indicate the efficacy of our approach: we obtained an average accuracy of 90% on six categories derived from the 20 Newsgroups and Reuters corpora. Lexical chaining has been applied successfully to text summarization; our results point to its usefulness for text classification as well.
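The abstract outlines the chaining step but not its exact cohesion criterion. Below is a minimal illustrative sketch in Python, assuming NLTK's WordNet interface (requires nltk and the 'wordnet' corpus); the relatedness test used here, shared synsets or a direct hypernym/hyponym link, is a stand-in assumption, not the paper's actual measure.

# Sketch: weave tokens into separate noun-sense and verb-sense chains.
# Assumes: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def related(word_a, word_b, pos):
    # Illustrative heuristic: two words are related if they share a
    # synset or one synset is a direct hypernym/hyponym of the other's.
    syns_a = set(wn.synsets(word_a, pos=pos))
    syns_b = set(wn.synsets(word_b, pos=pos))
    if syns_a & syns_b:
        return True
    neighbours = {h for s in syns_a for h in s.hypernyms() + s.hyponyms()}
    return bool(neighbours & syns_b)

def build_chains(tokens, pos):
    # Greedily add each token to the first chain containing a related
    # term; otherwise start a new chain.
    chains = []
    for tok in tokens:
        if not wn.synsets(tok, pos=pos):
            continue  # token has no sense for this part of speech
        for chain in chains:
            if any(related(tok, member, pos) for member in chain):
                chain.append(tok)
                break
        else:
            chains.append([tok])
    return chains

tokens = ["bank", "loan", "deposit", "run", "walk", "economy"]
noun_chains = build_chains(tokens, wn.NOUN)  # concept terms
verb_chains = build_chains(tokens, wn.VERB)  # action terms

Chaining noun and verb senses separately, as the paper does, keeps concept vocabulary ("bank", "loan") from being conflated with action vocabulary ("run", "walk") when the two carry different contextual signals.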
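The abstract names TF-IDF-RCF but does not define it. One plausible reading, offered here purely as an assumption, multiplies the conventional TF-IDF weight by a relative category frequency that measures how concentrated a term is within a candidate category:

w(t, d, c) = \mathrm{tf}(t, d) \cdot \log\frac{N}{\mathrm{df}(t)} \cdot \frac{\mathrm{df}_c(t)}{|D_c|}

where N is the number of training documents, df(t) the number of them containing term t, D_c the set of documents in category c, and df_c(t) the number of those containing t. Under this reading, the RCF factor boosts terms that are frequent within a category relative to its size, a signal that standard TF-IDF ignores.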
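Similarly, how lexical matching and positional weighting combine into a chain's contextual strength is not spelled out in the abstract. The following hypothetical Python sketch shows one way the two factors could interact; both the overlap score and the linear position decay are assumptions.

# Hypothetical sketch: score one chain against one category's keyword set.
def chain_strength(chain_terms, chain_positions, category_keywords, doc_len):
    # Lexical matching: fraction of chain terms present in the
    # category-keyword matrix row for this category.
    match_score = sum(t in category_keywords for t in chain_terms) / len(chain_terms)
    # Positional factor (assumed): earlier occurrences weigh more.
    pos_score = sum(1.0 - p / doc_len for p in chain_positions) / len(chain_positions)
    return match_score * pos_score

keywords = {"economy", "market", "bank", "loan"}  # one matrix row
print(chain_strength(["bank", "loan", "deposit"], [3, 10, 42], keywords, 200))

A document would then be assigned to the category whose keyword row yields the highest aggregate chain strength.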