Fine-Tuning for Cross-Domain Aspect-Based Sentiment Classification

S. V. Berkum, Sophia van Megen, Max Savelkoul, Pim Weterman, F. Frasincar
Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology
Published: 2021-12-14 · DOI: 10.1145/3486622.3494003
Citations: 5

Abstract

Aspect-Based Sentiment Classification (ABSC) is a subfield of sentiment analysis concerned with classifying sentiment attributed to pre-identified aspects. A problem in ABSC nowadays is the limited availability of labeled data for certain domains. This study aims to improve sentiment classification accuracy for these domains where labeled data is scarce. Our proposed approach is to apply cross-domain fine-tuning to a state-of-the-art deep learning method designed for ABSC: LCR-Rot-hop++. For this purpose, we initially train the model on a domain that has a lot of labeled data available and consecutively fine-tune the upper layers with training data of the target domain. The performance of the fine-tuning method is evaluated relative to a model that is trained from scratch for each target domain. For the initial training, restaurant review data is used. For the fine-tuning and from-scratch training we use review data for laptops, books, hotels, and electronics. Our results show that when comparing the fine-tuning with the from-scratch method (for the same training set), the fine-tuning method on average outperforms the from-scratch method when the training set is small for all considered domains and is considerably faster.
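The training scheme described in the abstract (full training on a data-rich source domain, then fine-tuning only the upper layers on the target domain) can be sketched as follows. This is a toy two-layer logistic model in plain Python, not the actual LCR-Rot-hop++ architecture; all names, data, and hyperparameters are illustrative assumptions.

```python
import math

def forward(params, x):
    # "Lower" layer: scalar affine transform; "upper" layer: logistic output.
    h = params["w_lower"] * x + params["b_lower"]
    z = params["w_upper"] * h + params["b_upper"]
    return 1.0 / (1.0 + math.exp(-z))

def train(params, data, trainable, lr=0.1, epochs=200):
    """Gradient descent on log loss, updating only the `trainable` params."""
    for _ in range(epochs):
        for x, y in data:
            p = forward(params, x)
            err = p - y  # d(loss)/dz for log loss with a sigmoid output
            h = params["w_lower"] * x + params["b_lower"]
            grads = {
                "w_upper": err * h,
                "b_upper": err,
                "w_lower": err * params["w_upper"] * x,
                "b_lower": err * params["w_upper"],
            }
            for name in trainable:
                params[name] -= lr * grads[name]
    return params

params = {"w_lower": 0.5, "b_lower": 0.0, "w_upper": 0.5, "b_upper": 0.0}

# Phase 1: train all layers on a (large) source domain, e.g. restaurant reviews.
source = [(x, 1 if x > 0 else 0) for x in [-2, -1, 1, 2] * 10]
train(params, source, trainable=list(params.keys()))

# Phase 2: freeze the lower layer and fine-tune only the upper layer on a
# (small) target domain, e.g. laptop reviews.
frozen = dict(params)
target = [(-1.5, 0), (1.5, 1)]
train(params, target, trainable=["w_upper", "b_upper"])

assert params["w_lower"] == frozen["w_lower"]  # lower layer untouched
```

Fine-tuning only the upper layers reuses the source-domain representations and updates far fewer parameters, which matches the paper's finding that the approach is both more accurate on small target training sets and considerably faster than training from scratch.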