Improved Text Classification using Long Short-Term Memory and Word Embedding Technique
A. Adamuthe
International Journal of Hybrid Information Technology, published 2020-03-30
DOI: 10.21742/ijhit.2020.13.1.03 (https://doi.org/10.21742/ijhit.2020.13.1.03)
Citations: 3
Abstract
Text classification is an important problem for spam filtering, sentiment analysis, news filtering, document organization, document retrieval, and many other applications. The complexity of text classification increases with the number of classes and training samples. The main objective of this paper is to improve the accuracy of text classification using long short-term memory (LSTM) with word embedding. Experiments are conducted on seven benchmark datasets, namely IMDB, Amazon review full score, Amazon review polarity, Yelp review polarity, AG news topic classification, Yahoo! Answers topic classification, and DBpedia ontology classification, which differ in the number of classes and training samples. Different experiments are conducted to evaluate the effect of each parameter on the LSTM. Results show that a batch size of 100, 50 epochs, the Adagrad optimizer, 5 hidden nodes, a word vector length of 100, 2 LSTM layers, L2 regularization of 0.001, and a learning rate of 0.001 give the highest accuracy. The results of the LSTM are compared with those in the literature. For the IMDB, Amazon review full score, and Yahoo! Answers topic classification datasets, the results obtained are better than those reported in the literature. Results of the LSTM for Amazon review polarity, Yelp review polarity, and AG news topic classification are close to the best-known results. For the DBpedia ontology classification dataset, the accuracy is more than 91% but below the best-known result.
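To make the reported architecture concrete, the following is a minimal NumPy sketch of a 2-layer LSTM forward pass with the dimensions stated in the abstract (100-dimensional word vectors, 5 hidden nodes, 2 LSTM layers, classification from the final hidden state). It is an illustration only, not the authors' implementation: the weights are random placeholders, the sequence length and 2-class softmax head are assumed for the example, and training details (Adagrad, L2 regularization of 0.001, learning rate 0.001, batch size 100, 50 epochs) are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(inputs, W, U, b, hidden=5):
    """Run one LSTM layer over a sequence of input vectors.

    W: (4*hidden, input_dim), U: (4*hidden, hidden), b: (4*hidden,).
    Gate order in the stacked weights: input, forget, candidate, output.
    Returns the sequence of hidden states, shape (seq_len, hidden)."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for x in inputs:
        z = W @ x + U @ h + b
        i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
        f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])   # candidate cell state
        o = sigmoid(z[3 * hidden:4 * hidden])   # output gate
        c = f * c + i * g                       # update cell state
        h = o * np.tanh(c)                      # update hidden state
        outputs.append(h)
    return np.array(outputs)

rng = np.random.default_rng(0)
embed_dim, hidden, seq_len = 100, 5, 20         # 100-d vectors, 5 hidden nodes
x = rng.normal(size=(seq_len, embed_dim))       # stand-in for embedded tokens

# Layer 1 reads the word embeddings; layer 2 reads layer 1's hidden states.
params1 = (rng.normal(scale=0.1, size=(4 * hidden, embed_dim)),
           rng.normal(scale=0.1, size=(4 * hidden, hidden)),
           np.zeros(4 * hidden))
params2 = (rng.normal(scale=0.1, size=(4 * hidden, hidden)),
           rng.normal(scale=0.1, size=(4 * hidden, hidden)),
           np.zeros(4 * hidden))

h1 = lstm_layer(x, *params1, hidden=hidden)
h2 = lstm_layer(h1, *params2, hidden=hidden)

# Classify from the final hidden state (2 classes, e.g. IMDB polarity).
num_classes = 2
Wc = rng.normal(scale=0.1, size=(num_classes, hidden))
logits = Wc @ h2[-1]
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over class scores
print(h2.shape, probs.shape)
```

Stacking the second LSTM on the first's hidden-state sequence is what makes this a "2 LSTM layers" model; with only 5 hidden nodes per layer, the classifier head stays very small, which is consistent with the compact configuration the paper reports as best.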