{"title":"Linear Regression Tree and Homogenized Attention Recurrent Neural Network for Online Training Classification","authors":"Yadhunandan K K A, Sujatha Arun Kokatnoor","doi":"10.1109/TEECCON54414.2022.9854833","DOIUrl":null,"url":null,"abstract":"Internet has become a vital part in people’s life with the swift development of Information Technology (IT). Predominantly the customers share their opinions concerning numerous entities like, products, services on numerous platforms. These platforms comprises of valuable information concerning different types of domains ranging from commercial to political and social applications. Analysis of this immeasurable amount of data is both laborious and cumbersome to manipulate manually. In this work, a method called, Linear Regression Tree-based Homogenized Attention Recurrent Neural Network (LRT-HRNN) for online training is proposed. In the first step, a dataset consisting of student’s reactions on E-learning is provided as input. A Linear Regression Decision Tree (LRT) - based feature (i.e., student’s reactions and posts) selection model is applied in the second step. The feature selection model initially selects the commonly dispensed features. In the last step, HRNN sentiment analysis is employed for aggregating characterizations from prior and succeeding posts based on student’s reactions for online training. 
During the experimentation process, LRT-HRNN method when compared with existing methods such as Attention Emotion-enhanced Convolutional Long Short Term Memory (AEC-LSTM) and Adaptive Particle Swarm Optimization based Long Short Term Memory (APSO-LSTM, performed better in terms of accuracy(increased by 6%), false positive rate (decreased by 22%), true positive rate (increased by 7%) and computational time (reduced by 21%).","PeriodicalId":251455,"journal":{"name":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TEECCON54414.2022.9854833","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The Internet has become a vital part of people’s lives with the swift development of Information Technology (IT). Customers predominantly share their opinions about numerous entities, such as products and services, on many platforms. These platforms contain valuable information spanning domains ranging from commercial to political and social applications. Analyzing this immense amount of data manually is both laborious and cumbersome. In this work, a method called Linear Regression Tree-based Homogenized Attention Recurrent Neural Network (LRT-HRNN) for online training is proposed. In the first step, a dataset consisting of students’ reactions to E-learning is provided as input. In the second step, a Linear Regression Tree (LRT)-based feature selection model is applied to the features (i.e., students’ reactions and posts); this model initially selects the most commonly distributed features. In the last step, HRNN sentiment analysis is employed to aggregate characterizations from prior and succeeding posts based on students’ reactions for online training. During experimentation, the LRT-HRNN method, when compared with existing methods such as Attention Emotion-enhanced Convolutional Long Short-Term Memory (AEC-LSTM) and Adaptive Particle Swarm Optimization-based Long Short-Term Memory (APSO-LSTM), performed better in terms of accuracy (increased by 6%), false positive rate (decreased by 22%), true positive rate (increased by 7%), and computational time (reduced by 21%).
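The abstract does not give implementation details, but the two core steps it names (tree-based feature selection over student reactions, then attention-weighted aggregation of hidden states from prior and succeeding posts) can be sketched roughly as below. This is a minimal illustration, not the authors' method: `DecisionTreeRegressor` stands in for the LRT selector, and a simple softmax attention over a matrix of post representations stands in for the HRNN's aggregation; all function names, shapes, and the synthetic data are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def lrt_feature_selection(X, y, k):
    """Stand-in for LRT selection: fit a regression tree on the
    feature matrix and keep the k features with highest importance."""
    tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, y)
    top = np.argsort(tree.feature_importances_)[::-1][:k]
    return np.sort(top)

def attention_aggregate(H):
    """Stand-in for HRNN aggregation: H holds one representation per
    prior/succeeding post (timesteps x dim); combine them with softmax
    attention scored against the mean representation."""
    scores = H @ H.mean(axis=0)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H  # attention-weighted context vector

# Synthetic "student reaction" features: the label depends mainly on
# features 2 and 7, so the tree should rank those highly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(float)

selected = lrt_feature_selection(X, y, k=3)
context = attention_aggregate(rng.normal(size=(5, 4)))
```

In this sketch the selected feature indices would feed the sequence model, and `context` is the aggregated representation a classifier head would consume; the paper's actual HRNN presumably learns its attention scores rather than using the fixed mean-based scoring shown here.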