On the Robustness of Transformer-Based Models to Different Linguistic Perturbations: A Case of Study in Irony Detection

Expert Systems · Impact Factor 3.0 · CAS Zone 4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Pub Date: 2025-04-29 · DOI: 10.1111/exsy.70062
Reynier Ortega-Bueno, Elisabetta Fersini, Paolo Rosso
{"title":"On the Robustness of Transformer-Based Models to Different Linguistic Perturbations: A Case of Study in Irony Detection","authors":"Reynier Ortega-Bueno,&nbsp;Elisabetta Fersini,&nbsp;Paolo Rosso","doi":"10.1111/exsy.70062","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>This study investigates the robustness of Transformer models in irony detection addressing various textual perturbations, revealing potential biases in training data concerning ironic and non-ironic classes. The perturbations involve three distinct approaches, each progressively increasing in complexity. The first approach is word masking, which employs wild-card characters or utilises BERT-specific masking through the mask token provided by BERT models. The second approach is word substitution, replacing the bias word with a contextually appropriate alternative. Lastly, paraphrasing generates a new phrase while preserving the original semantic meaning. We leverage Large Language Models (GPT 3.5 Turbo) and human inspection to ensure linguistic correctness and contextual coherence for word substitutions and paraphrasing. The results indicate that models are susceptible to these perturbations, and paraphrasing and word substitution demonstrate the most significant impact on model predictions. The irony class appears to be particularly challenging for models when subjected to these perturbations. The SHAP and LIME methods are used to correlate variations in attribution scores with prediction errors. A notable difference in the Total Variation of attribution scores is observed between original examples and cases involving bias word substitution or masking. Among the corpora used, <i>TwSemEval2018</i> emerges as the most challenging. Regarding model performance, Transformer-based models such as RoBERTa and BERTweet demonstrate superior overall performance addressing these perturbations. This research contributes to understanding the robustness and limitations of irony detection models, highlighting areas for improvement in model design and training data curation.</p>\n </div>","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":"42 6","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/exsy.70062","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

This study investigates the robustness of Transformer models for irony detection under various textual perturbations, revealing potential biases in the training data concerning the ironic and non-ironic classes. The perturbations follow three distinct approaches of progressively increasing complexity. The first approach is word masking, which employs wild-card characters or BERT-specific masking via the mask token provided by BERT models. The second approach is word substitution, which replaces the bias word with a contextually appropriate alternative. Lastly, paraphrasing generates a new phrasing while preserving the original semantic meaning. We leverage a Large Language Model (GPT 3.5 Turbo) together with human inspection to ensure the linguistic correctness and contextual coherence of word substitutions and paraphrases. The results indicate that models are susceptible to these perturbations, with paraphrasing and word substitution having the most significant impact on model predictions. The irony class appears to be particularly challenging for models when subjected to these perturbations. The SHAP and LIME methods are used to correlate variations in attribution scores with prediction errors. A notable difference in the Total Variation of attribution scores is observed between original examples and cases involving bias-word substitution or masking. Among the corpora used, TwSemEval2018 emerges as the most challenging. Among the models evaluated, Transformer-based architectures such as RoBERTa and BERTweet handle these perturbations best overall. This research contributes to understanding the robustness and limitations of irony detection models, highlighting areas for improvement in model design and training data curation.
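The masking and substitution perturbations lend themselves to a short illustration. The sketch below is ours, not code from the paper: it uses Hugging Face's `transformers` fill-mask pipeline with `bert-base-uncased` as a lightweight stand-in (the paper generates substitutions with GPT 3.5 Turbo plus human inspection), and the function names and example tweet are invented for illustration.

```python
# Minimal sketch (our assumption, not the authors' code) of two of the
# perturbations described in the abstract: masking a target ("bias")
# word, and substituting it with a contextually appropriate alternative
# proposed by a masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
MASK = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT models


def mask_word(text: str, target: str, use_bert_mask: bool = True) -> str:
    """Replace `target` with the BERT mask token or a wild-card string."""
    placeholder = MASK if use_bert_mask else "***"
    return text.replace(target, placeholder, 1)


def substitute_word(text: str, target: str, top_k: int = 5) -> str:
    """Replace `target` with the highest-scoring in-context alternative
    suggested by the masked LM, skipping the original word itself."""
    masked = text.replace(target, MASK, 1)
    for candidate in fill_mask(masked, top_k=top_k):
        word = candidate["token_str"].strip()
        if word.lower() != target.lower():
            return masked.replace(MASK, word, 1)
    return text  # no alternative found; leave the example unperturbed


tweet = "What a great day to be stuck in traffic for three hours."
print(mask_word(tweet, "great"))        # wild-card / BERT-mask perturbation
print(substitute_word(tweet, "great"))  # e.g. "wonderful" (model-dependent)
```
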

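For the paraphrasing perturbation, the abstract only states that GPT 3.5 Turbo is used together with human inspection. A plausible minimal sketch through the OpenAI chat API might look as follows; the prompt wording and the `paraphrase` helper are our assumptions, not the authors' setup.

```python
# Sketch of requesting a meaning-preserving paraphrase from GPT 3.5 Turbo.
# The prompt below is illustrative; the paper additionally uses human
# inspection to verify correctness and contextual coherence.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def paraphrase(text: str) -> str:
    """Ask the model for a rewrite that preserves meaning and tone."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Paraphrase the user's text. Preserve its meaning "
                        "and tone (including irony); change the wording."},
            {"role": "user", "content": text},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()


print(paraphrase("What a great day to be stuck in traffic for three hours."))
```
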
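The abstract does not spell out how the Total Variation of attribution scores is computed. One natural reading, assumed in the sketch below, is the total-variation distance between the normalised absolute attribution masses of an original example and its perturbed counterpart, aligned over the same token positions; the toy attribution vectors are invented.

```python
# Sketch (our assumption) of comparing SHAP/LIME attribution scores
# before and after a perturbation via total-variation distance.
import numpy as np


def total_variation(attr_a: np.ndarray, attr_b: np.ndarray) -> float:
    """Total-variation distance between two attribution vectors,
    treating |scores| as probability mass over token positions."""
    p = np.abs(attr_a) / np.abs(attr_a).sum()
    q = np.abs(attr_b) / np.abs(attr_b).sum()
    return 0.5 * np.abs(p - q).sum()  # lies in [0, 1]


# Hypothetical token attributions for an original tweet and its
# bias-word-substituted counterpart, aligned position by position.
orig_attr = np.array([0.05, 0.60, 0.10, 0.25])
pert_attr = np.array([0.10, 0.15, 0.45, 0.30])
print(f"TV = {total_variation(orig_attr, pert_attr):.3f}")
```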