BERT for Sentiment Classification in Software Engineering

Junfang Wu, Chunyang Ye, Hui Zhou
{"title":"BERT for Sentiment Classification in Software Engineering","authors":"Junfang Wu, Chunyang Ye, Hui Zhou","doi":"10.1109/ICSS53362.2021.00026","DOIUrl":null,"url":null,"abstract":"Sentiment analysis (SA) has been applied to various fields of software engineering (SE), such as app reviews, stack overflow Q&A website and API comments. General SA tools are trained based on movie or product review data. Research has shown that these SA tools can produce negative results when applied to the field of SE. In order to overcome the above limitations, developers need to customize tools (e.g., SentiStrength-SE, SentiCR, Senti4SD). In recent years, the pre-trained transformer-based models have brought great breakthroughs in the field of natural language processing. Therefore, we intend to fine-tune the pre-trained model BERT for downstream text classification tasks. We compare the performance of SE-specific tools. Meanwhile, we also studied the performance of SE-specific tools in a cross-platform setting. Experimental results show that our approach (BERT-FT) outperforms the existing state-of-the-art models in terms of F1-scores.","PeriodicalId":284026,"journal":{"name":"2021 International Conference on Service Science (ICSS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Service Science (ICSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSS53362.2021.00026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Sentiment analysis (SA) has been applied to various areas of software engineering (SE), such as app reviews, the Stack Overflow Q&A website, and API comments. General-purpose SA tools are trained on movie or product review data, and research has shown that they can yield unreliable results when applied to SE text. To overcome this limitation, SE-specific tools have been developed (e.g., SentiStrength-SE, SentiCR, Senti4SD). In recent years, pre-trained transformer-based models have brought major breakthroughs in natural language processing. We therefore fine-tune the pre-trained BERT model for the downstream text classification task and compare its performance with that of SE-specific tools. We also study the performance of these tools in a cross-platform setting. Experimental results show that our approach (BERT-FT) outperforms the existing state-of-the-art models in terms of F1-score.
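The approach described in the abstract attaches a classification head to a pre-trained BERT encoder and fine-tunes it on SE sentiment data. Below is a minimal sketch of such fine-tuning with the Hugging Face Transformers library; the checkpoint (bert-base-uncased), the three-way label mapping, and the hyperparameters are illustrative assumptions rather than the paper's exact setup.

# Minimal sketch of fine-tuning BERT for sentiment classification (assumed setup).
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizerFast, BertForSequenceClassification

# Hypothetical SE sentiment examples (e.g., Stack Overflow posts or API comments).
texts = ["This API is a pleasure to use.", "The docs are confusing and outdated."]
labels = [2, 0]  # assumed mapping: 0 = negative, 1 = neutral, 2 = positive

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

# Tokenize the corpus once and wrap it in a standard PyTorch dataset/loader.
enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs is typical when fine-tuning BERT
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()  # cross-entropy loss over the classification head
        optimizer.step()

BertForSequenceClassification feeds the pooled [CLS] representation through a linear layer, so fine-tuning updates both the classification head and the underlying encoder weights.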