{"title":"Evaluating the machine learning models based on natural language processing tasks","authors":"Meeradevi Meeradevi, S. B. J., Swetha B. N.","doi":"10.11591/ijai.v13.i2.pp1954-1968","DOIUrl":null,"url":null,"abstract":"In the realm of natural language processing (NLP), a diverse array of language models has emerged, catering to a wide spectrum of tasks, ranging from speaker recognition and auto-correction to sentiment analysis and stock prediction. The significance of language models in enabling the execution of these NLP tasks cannot be overstated. This study proposes an approach to enhance accuracy by leveraging a hybrid language model, combining the strengths of long short-term memory (LSTM) and gated recurrent unit (GRU). LSTM excels in preserving long-term dependencies in data, while GRU's simpler gating mechanism expedites the training process. The research endeavors to evaluate four variations of this hybrid model: LSTM, GRU, bidirectional long short-term memory (Bi-LSTM), and a combination of LSTM with GRU. These models are subjected to rigorous testing on two distinct datasets: one focused on IBM stock price prediction, and the other on Jigsaw toxic comment classification (sentiment analysis). This work represents a significant stride towards democratizing NLP capabilities, ensuring that even in resource-constrained settings, NLP models can exhibit improved performance. The anticipated implications of these findings span a wide spectrum of real-world applications and hold the potential to stimulate further research in the field of NLP. ","PeriodicalId":507934,"journal":{"name":"IAES International Journal of Artificial Intelligence (IJ-AI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IAES International Journal of Artificial Intelligence (IJ-AI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11591/ijai.v13.i2.pp1954-1968","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In natural language processing (NLP), a diverse array of language models has emerged to serve tasks ranging from speaker recognition and auto-correction to sentiment analysis and stock prediction, and language models are central to all of these tasks. This study proposes an approach to improving accuracy with a hybrid language model that combines the strengths of long short-term memory (LSTM) and gated recurrent unit (GRU) networks: LSTM excels at preserving long-term dependencies in data, while GRU's simpler gating mechanism speeds up training. The research evaluates four recurrent model variants: LSTM, GRU, bidirectional long short-term memory (Bi-LSTM), and the hybrid combination of LSTM with GRU. These models are tested on two distinct datasets: one for IBM stock price prediction and one for Jigsaw toxic comment classification (sentiment analysis). By improving performance even in resource-constrained settings, this work is a step toward making NLP capabilities more broadly accessible; the findings have implications for a wide range of real-world applications and may stimulate further research in NLP.
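The abstract names the hybrid LSTM-GRU architecture but not its configuration. As a minimal sketch, assuming a stacked arrangement in which an LSTM layer feeds a GRU layer (a common way to combine the two), a Keras model for the toxic-comment classification task might look like the following. All layer sizes, the vocabulary size, and other hyperparameters here are illustrative assumptions, not the paper's reported settings.

```python
# Minimal sketch of a stacked hybrid LSTM+GRU classifier in Keras.
# Assumed setup: tokenized comments mapped to integer sequences, binary
# toxic/non-toxic labels. Sizes below are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, GRU, Dense

VOCAB_SIZE = 20_000  # assumed vocabulary size
EMBED_DIM = 128      # assumed embedding width

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),
    # LSTM first: its cell state helps preserve long-term dependencies
    # across the token sequence; return_sequences feeds every timestep
    # to the next recurrent layer.
    LSTM(64, return_sequences=True),
    # GRU second: its simpler gating (no separate cell state) has fewer
    # parameters, which speeds up training.
    GRU(32),
    # Single sigmoid unit for binary toxic/non-toxic classification.
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For the IBM stock price task, the same stacked LSTM-GRU body would presumably be paired with a linear output unit and a regression loss (e.g. mean squared error) instead of the sigmoid classifier, though the abstract does not specify this detail.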