Context-Sensitive Visualization of Deep Learning Natural Language Processing Models

A. Dunn, D. Inkpen, Razvan Andonie
{"title":"Context-Sensitive Visualization of Deep Learning Natural Language Processing Models","authors":"A. Dunn, D. Inkpen, Razvan Andonie","doi":"10.1109/IV53921.2021.00035","DOIUrl":null,"url":null,"abstract":"The introduction of Transformer neural networks has changed the landscape of Natural Language Processing (NLP) during the last years. So far, none of the visualization systems has yet managed to examine all the facets of the Transformers. This gave us the motivation of the current work. We propose a novel NLP Transformer context-sensitive visualization method that leverages existing NLP tools to find the most significant groups of tokens (words) that have the greatest effect on the output, thus preserving some context from the original text. The original contribution is a context-aware visualization method of the most influential word combinations with respect to a classifier. This context-sensitive approach leads to heatmaps that include more of the relevant information pertaining to the classification, as well as more accurately highlighting the most important words from the input text. The proposed method uses a dependency parser, a BERT model, and the leave-n-out technique. Experimental results suggest that improved visualizations increase the understanding of the model, and help design models that perform closer to the human level of understanding for these problems.","PeriodicalId":380260,"journal":{"name":"2021 25th International Conference Information Visualisation (IV)","volume":"90 11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 25th International Conference Information Visualisation (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IV53921.2021.00035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

The introduction of Transformer neural networks has changed the landscape of Natural Language Processing (NLP) in recent years, yet no visualization system has so far managed to examine all facets of these models. This motivated the current work. We propose a novel context-sensitive visualization method for NLP Transformers that leverages existing NLP tools to find the groups of tokens (words) with the greatest effect on the output, thus preserving some context from the original text. The original contribution is a context-aware visualization of the word combinations most influential with respect to a classifier. This context-sensitive approach leads to heatmaps that include more of the information relevant to the classification and more accurately highlight the most important words in the input text. The proposed method uses a dependency parser, a BERT model, and the leave-n-out technique. Experimental results suggest that the improved visualizations increase understanding of the model and help design models that perform closer to human-level understanding of these problems.
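The pipeline described in the abstract (a dependency parser to form syntactically coherent word groups, then leave-n-out scoring against a BERT classifier) can be illustrated compactly. The following is a minimal sketch, not the authors' implementation: the checkpoint name (`textattack/bert-base-uncased-SST-2`), the use of spaCy dependency subtrees as the token groups, and scoring each group by the drop in predicted class probability are all assumptions made for illustration.

```python
# Minimal leave-n-out sketch: remove each dependency subtree and measure
# how much the classifier's confidence in its original prediction drops.
# Model checkpoint and grouping strategy are illustrative assumptions.
import spacy
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "textattack/bert-base-uncased-SST-2"  # assumed sentiment checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()
nlp = spacy.load("en_core_web_sm")  # dependency parser

def class_prob(text: str, label: int) -> float:
    """Probability the classifier assigns to `label` for `text`."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, label].item()

def leave_n_out(text: str):
    """Score each dependency subtree by the confidence drop when removed."""
    doc = nlp(text)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = int(logits.argmax())               # predicted class on full input
    base = torch.softmax(logits, dim=-1)[0, label].item()
    scores = []
    for token in doc:
        group = {t.i for t in token.subtree}   # token group = dependency subtree
        reduced = " ".join(t.text for t in doc if t.i not in group)
        if not reduced:                        # skip if removal empties the text
            continue
        drop = base - class_prob(reduced, label)
        scores.append((" ".join(t.text for t in token.subtree), drop))
    return sorted(scores, key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    for phrase, drop in leave_n_out("The plot is dull but the acting is superb.")[:5]:
        print(f"{drop:+.3f}  {phrase!r}")
```

The groups whose removal causes the largest probability drop are the ones a heatmap of this kind would highlight most strongly; grouping by subtree rather than scoring words in isolation is what preserves context from the original text.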