TIPS: A text interaction evaluation metric for learning model interpretation

IF 7.5 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhenyu Nie, Zheng Xiao, Tao Wang, Anthony Theodore Chronopoulos, Răzvan Andonie, Amir Mosavi
{"title":"提示:用于学习模型解释的文本交互评价指标","authors":"Zhenyu Nie ,&nbsp;Zheng Xiao ,&nbsp;Tao Wang ,&nbsp;Anthony Theodore Chronopoulos ,&nbsp;Răzvan Andonie ,&nbsp;Amir Mosavi","doi":"10.1016/j.eswa.2025.128184","DOIUrl":null,"url":null,"abstract":"<div><div>Explaining the decision-making behavior of deep neural networks (DNNs) can increase their trustworthiness in real-world applications. For natural language processing (NLP) tasks, many existing interpretation methods split the text according to the interactions between words. Also, the evaluation of explanation capability focuses on justifying the importance of the divided text spans from the perspective of interaction contribution. However, the prior evaluations are misled by extra interactions, making the evaluation unable to acquire accurate interactions within the text spans. Besides, existing research considers only absolute interaction contribution, which causes the evaluation to underestimate the important text spans with lower absolute interaction contribution and to overestimate the unimportant text spans with higher absolute interaction contribution. In this work, we propose a metric called Text Interaction Proportional Score (TIPS) to evaluate faithful interpretation methods. More specifically, we use a pick scheme to acquire the interactions within the divided text span and eliminate the influence of the extra interactions. Meanwhile, we utilize the relative interaction contribution between the divided text span and whole text to measure the importance of the acquired interactions. The proposed metric is validated using two interpretation methods in explaining three neural text classifiers (LSTM, CNN and BERT) on six benchmark datasets. Experiments show that TIPS outperforms a baseline method in three ways consistently and significantly (i.e., acquiring interactions within the text span, measuring importance of interaction, and distinguishing the important and unimportant text spans).</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"287 ","pages":"Article 128184"},"PeriodicalIF":7.5000,"publicationDate":"2025-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TIPS: A text interaction evaluation metric for learning model interpretation\",\"authors\":\"Zhenyu Nie ,&nbsp;Zheng Xiao ,&nbsp;Tao Wang ,&nbsp;Anthony Theodore Chronopoulos ,&nbsp;Răzvan Andonie ,&nbsp;Amir Mosavi\",\"doi\":\"10.1016/j.eswa.2025.128184\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Explaining the decision-making behavior of deep neural networks (DNNs) can increase their trustworthiness in real-world applications. For natural language processing (NLP) tasks, many existing interpretation methods split the text according to the interactions between words. Also, the evaluation of explanation capability focuses on justifying the importance of the divided text spans from the perspective of interaction contribution. However, the prior evaluations are misled by extra interactions, making the evaluation unable to acquire accurate interactions within the text spans. Besides, existing research considers only absolute interaction contribution, which causes the evaluation to underestimate the important text spans with lower absolute interaction contribution and to overestimate the unimportant text spans with higher absolute interaction contribution. 
In this work, we propose a metric called Text Interaction Proportional Score (TIPS) to evaluate faithful interpretation methods. More specifically, we use a pick scheme to acquire the interactions within the divided text span and eliminate the influence of the extra interactions. Meanwhile, we utilize the relative interaction contribution between the divided text span and whole text to measure the importance of the acquired interactions. The proposed metric is validated using two interpretation methods in explaining three neural text classifiers (LSTM, CNN and BERT) on six benchmark datasets. Experiments show that TIPS outperforms a baseline method in three ways consistently and significantly (i.e., acquiring interactions within the text span, measuring importance of interaction, and distinguishing the important and unimportant text spans).</div></div>\",\"PeriodicalId\":50461,\"journal\":{\"name\":\"Expert Systems with Applications\",\"volume\":\"287 \",\"pages\":\"Article 128184\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2025-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems with Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0957417425018044\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425018044","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Explaining the decision-making behavior of deep neural networks (DNNs) can increase their trustworthiness in real-world applications. For natural language processing (NLP) tasks, many existing interpretation methods split the text according to the interactions between words. Also, the evaluation of explanation capability focuses on justifying the importance of the divided text spans from the perspective of interaction contribution. However, the prior evaluations are misled by extra interactions, making the evaluation unable to acquire accurate interactions within the text spans. Besides, existing research considers only absolute interaction contribution, which causes the evaluation to underestimate the important text spans with lower absolute interaction contribution and to overestimate the unimportant text spans with higher absolute interaction contribution. In this work, we propose a metric called Text Interaction Proportional Score (TIPS) to evaluate faithful interpretation methods. More specifically, we use a pick scheme to acquire the interactions within the divided text span and eliminate the influence of the extra interactions. Meanwhile, we utilize the relative interaction contribution between the divided text span and whole text to measure the importance of the acquired interactions. The proposed metric is validated using two interpretation methods in explaining three neural text classifiers (LSTM, CNN and BERT) on six benchmark datasets. Experiments show that TIPS outperforms a baseline method in three ways consistently and significantly (i.e., acquiring interactions within the text span, measuring importance of interaction, and distinguishing the important and unimportant text spans).
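The abstract does not give the exact TIPS formula, but the two ideas it names (a pick scheme that keeps only the interactions inside a divided text span, and a relative rather than absolute interaction contribution) can be illustrated with a toy sketch. Everything below is an assumption made for illustration: the pairwise interaction matrix, the function names, and the summation/normalization are hypothetical placeholders, not the authors' actual method.

```python
# Illustrative sketch only: the abstract does not specify how TIPS computes
# interaction scores, so the matrix, the "pick" rule, and the normalization
# here are hypothetical stand-ins for the general idea.

import numpy as np


def span_interaction(interactions: np.ndarray, span: range) -> float:
    """Sum pairwise interaction scores picked strictly inside a text span.

    Restricting the sum to word pairs (i, j) that both lie in the span is one
    way to read the abstract's "pick scheme": interactions that cross the span
    boundary (the "extra interactions") are excluded.
    """
    idx = list(span)
    return float(np.abs(interactions[np.ix_(idx, idx)]).sum())


def relative_contribution(interactions: np.ndarray, span: range) -> float:
    """Relative interaction contribution: span interactions / whole-text interactions."""
    total = float(np.abs(interactions).sum())
    return span_interaction(interactions, span) / total if total > 0 else 0.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_words = 8
    # Hypothetical symmetric word-word interaction matrix for an 8-word text,
    # e.g. as produced by some interpretation method.
    A = rng.normal(size=(n_words, n_words))
    A = (A + A.T) / 2

    span = range(2, 5)  # a divided text span covering words 2..4
    print("absolute contribution:", span_interaction(A, span))
    print("relative contribution:", relative_contribution(A, span))
```

Under this reading, a span's importance is judged by the share of the whole text's interaction mass it carries, so a span with a small absolute score can still rank as important if the rest of the text interacts even less, which is the imbalance the abstract says absolute-contribution evaluation gets wrong.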
Source journal: Expert Systems with Applications
Category: Engineering Technology - Electrical & Electronic Engineering
CiteScore: 13.80
Self-citation rate: 10.60%
Annual articles: 2045
Review time: 8.7 months
Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.