DIWS-LCR-Rot-hop++: A Domain-Independent Word Selector for Cross-Domain Aspect-Based Sentiment Classification
Junhee Lee, Flavius Frasincar, Maria Mihaela Truşcă
Applied Computing Review, September 2023. DOI: 10.1145/3626307.3626309
Abstract
Aspect-Based Sentiment Classification (ABSC) models often suffer from a lack of training data in some domains. To exploit the abundant data available in another domain, this work extends the state-of-the-art LCR-Rot-hop++ model, a neural network with a rotatory attention mechanism, to a cross-domain setting. More specifically, we propose a Domain-Independent Word Selector (DIWS) model that is used in combination with the LCR-Rot-hop++ model (DIWS-LCR-Rot-hop++). DIWS-LCR-Rot-hop++ uses attention weights from a domain classification task to determine whether a word is domain-specific or domain-independent, and discards domain-specific words when training and testing the LCR-Rot-hop++ model for cross-domain ABSC. Overall, our results confirm that DIWS-LCR-Rot-hop++ outperforms the original LCR-Rot-hop++ model in a cross-domain setting, provided that we impose an optimal domain-dependent attention threshold for deciding whether a word is domain-specific or domain-independent. For a target domain that is highly similar to the source domain, we find that a moderate restriction on classifying words as domain-independent yields the best performance. In contrast, a dissimilar target domain requires a strict restriction that classifies only a small proportion of words as domain-independent. We also observe information loss that deteriorates the performance of DIWS-LCR-Rot-hop++ when an excessive number of words are categorized as domain-specific and discarded.
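To make the filtering step concrete, below is a minimal sketch of the thresholding the abstract describes. It assumes per-word attention weights have already been produced by a trained domain classifier (not shown here), and that words whose domain attention falls below the threshold are treated as domain-independent; the function name, token list, and weight values are hypothetical illustrations, not the authors' implementation.

```python
from typing import List

def filter_domain_independent(
    tokens: List[str],
    domain_attention: List[float],
    threshold: float,
) -> List[str]:
    """Keep tokens whose domain-classification attention is below
    `threshold` (treated as domain-independent); discard the rest
    (treated as domain-specific) before training/testing the
    downstream cross-domain ABSC model."""
    return [
        tok
        for tok, attn in zip(tokens, domain_attention)
        if attn < threshold
    ]

# Hypothetical example: "pasta" receives high domain attention in a
# restaurant-domain classifier, so it is discarded as domain-specific.
tokens = ["the", "pasta", "was", "delicious", "and", "cheap"]
attention = [0.02, 0.35, 0.03, 0.12, 0.02, 0.10]  # assumed weights

print(filter_domain_independent(tokens, attention, threshold=0.15))
# -> ['the', 'was', 'delicious', 'and', 'cheap']
```

Under this reading, the threshold plays the role of the domain-dependent restriction discussed in the abstract: a moderate value (keeping more words) suits a target domain similar to the source, while a strict, low value (keeping only a small proportion of words) suits a dissimilar target domain, at the risk of information loss when too many words are discarded.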