Causal keyword driven reliable text classification with large language model feedback

IF 7.4 · CAS Tier 1 (Management Science) · JCR Q1 (Computer Science, Information Systems)
Rui Song , Yingji Li , Mingjie Tian , Hanwen Wang , Fausto Giunchiglia , Hao Xu
DOI: 10.1016/j.ipm.2024.103964
Journal: Information Processing & Management, Volume 62, Issue 2, Article 103964
Published: 2024-11-18 (Journal Article)
Citations: 0

Abstract

Recent studies show that Pre-trained Language Models (PLMs) tend to rely on shortcut learning, which reduces their effectiveness on Out-Of-Distribution (OOD) samples and has prompted research into identifying shortcuts and robust causal features through interpretable methods for text classification. However, current approaches face two primary challenges. First, black-box interpretable methods often yield incorrect causal keywords. Second, existing methods do not differentiate between shortcuts and causal keywords, often handling both with a single unified approach. To address the first challenge, we propose a framework that incorporates a Large Language Model's feedback into the process of identifying shortcuts and causal keywords. Specifically, we transform causal feature extraction into a word-level binary labeling task with the aid of ChatGPT. For the second challenge, we introduce a multi-grained shortcut mitigation framework comprising two auxiliary tasks that address shortcuts and causal features separately: shortcut reconstruction and counterfactual contrastive learning. These tasks enhance PLMs at the token and sample granularity levels, respectively. Experimental results show that, with four different language models as backbones, the proposed method achieves an average performance improvement of more than 1% over the most recent baseline methods on sentiment classification and toxicity detection tasks across 8 datasets.
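The abstract describes turning causal feature extraction into a word-level binary labeling task driven by LLM feedback, then using those labels to build inputs for the two auxiliary tasks. The sketch below is a minimal, hypothetical illustration of that data flow only, not the paper's actual implementation: the prompt wording, function names, example reply, and the choice of which words to mask are all assumptions.

```python
# Hypothetical sketch: ask an LLM to mark each word of a labeled sentence
# as causal (1) or non-causal/shortcut (0), then use the per-word labels
# to separate keyword sets and build a counterfactual view in which the
# causal keywords are masked (a candidate input for contrastive learning).
from typing import List, Tuple


def build_labeling_prompt(words: List[str], label: str) -> str:
    """Format a word-level binary labeling request for the LLM."""
    return (
        f"The sentence below has the label '{label}'. For each word, answer 1 "
        "if the word causally supports that label and 0 otherwise. "
        "Reply with space-separated 0/1 values only.\n"
        "Sentence: " + " ".join(words)
    )


def parse_feedback(words: List[str], reply: str) -> List[int]:
    """Parse the LLM's '0 1 0 ...' reply into per-word binary labels."""
    bits = [int(tok) for tok in reply.split()]
    if len(bits) != len(words):
        raise ValueError("feedback length does not match word count")
    return bits


def split_keywords(words: List[int], bits: List[int]) -> Tuple[List[str], List[str]]:
    """Separate causal keywords (1) from potential shortcut words (0)."""
    causal = [w for w, b in zip(words, bits) if b == 1]
    shortcut = [w for w, b in zip(words, bits) if b == 0]
    return causal, shortcut


def counterfactual_view(words: List[str], bits: List[int],
                        mask_token: str = "[MASK]") -> List[str]:
    """Mask the causal keywords so only shortcut words remain visible."""
    return [mask_token if b == 1 else w for w, b in zip(words, bits)]


words = "the director spielberg crafts a moving story".split()
reply = "0 0 0 1 0 1 1"  # stand-in for an actual LLM response
bits = parse_feedback(words, reply)
causal, shortcut = split_keywords(words, bits)
```

In this toy example, "crafts", "moving", and "story" would be treated as causal sentiment keywords, while the director's name is a potential shortcut the mitigation tasks should down-weight.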
Source Journal

Information Processing & Management (Engineering & Technology: Computer Science, Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles per year: 276
Review time: 39 days
Journal Description: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.