Assessing the use of attention weights to interpret BERT-based stance classification

Carlos Abel Córdova Sáenz, Karin Becker
DOI: 10.1145/3486622.3493966
Published in: Proceedings. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2021-12-14
Citations: 1

Abstract

BERT models are currently state-of-the-art solutions for various tasks, including stance classification. However, these models are a black box for their users. Some proposals have leveraged the weights assigned by the internal attention mechanisms of these models for interpretability purposes. However, whether attention weights help the interpretability of the model is still a matter of debate, with positions in favor and against. This work proposes an attention-based interpretability mechanism to identify the most influential words for stances predicted using BERT-based models. We target stances expressed on Twitter in the Portuguese language and assess the proposed mechanism using a case study of stances on COVID-19 vaccination in the Brazilian context. The interpretation mechanism traces token attention back to words, assigning a newly proposed metric referred to as absolute word attention. Through this metric, we assess several aspects to determine whether we can find words that are important for the classification and meaningful for the domain. We developed a broad experimental setting that involved three datasets of tweets in Brazilian Portuguese and three BERT models with support for this language. Our results are encouraging, as we were able to identify 52-82% of words with high absolute attention contributing positively to stance classification. The interpretability mechanism proved helpful for understanding the influence of words in the classification, and it revealed intrinsic properties of the domain and representative arguments of the stances.
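The abstract's core mechanism is tracing subword-token attention back to whole words. The paper's exact definition of absolute word attention is not reproduced here; the following is a minimal sketch of one plausible reading, assuming each word's score is the summed absolute attention mass of its subword tokens (the function name, toy weights, and word-alignment scheme are illustrative, not taken from the paper):

```python
# Hedged sketch: pool per-subword attention scores back onto source words.
# `attentions` stands in for attention weights received by each token
# (e.g., from [CLS], averaged over heads); `word_ids` maps each subword
# to the index of the word it came from (None for special tokens).

def absolute_word_attention(tokens, attentions, word_ids):
    scores = {}
    for att, wid in zip(attentions, word_ids):
        if wid is None:  # skip special tokens like [CLS]/[SEP]
            continue
        # accumulate absolute attention mass per source word
        scores[wid] = scores.get(wid, 0.0) + abs(att)
    return scores

# Toy example: the word "vacina" is split into "vac" + "##ina"
tokens     = ["[CLS]", "vac", "##ina", "boa", "[SEP]"]
attentions = [0.4,     0.25,  0.25,    0.125, 0.0]
word_ids   = [None,    0,     0,       1,     None]
print(absolute_word_attention(tokens, attentions, word_ids))
# {0: 0.5, 1: 0.125}
```

With a real BERT tokenizer, the `word_ids` alignment can typically be obtained from the tokenizer's fast-encoding output rather than built by hand.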