{"title":"用于解释递归神经网络隐藏状态的可视化分析工具。","authors":"Rafael Garcia, Tanja Munz, Daniel Weiskopf","doi":"10.1186/s42492-021-00090-0","DOIUrl":null,"url":null,"abstract":"<p><p>In this paper, we introduce a visual analytics approach aimed at helping machine learning experts analyze the hidden states of layers in recurrent neural networks. Our technique allows the user to interactively inspect how hidden states store and process information throughout the feeding of an input sequence into the network. The technique can help answer questions, such as which parts of the input data have a higher impact on the prediction and how the model correlates each hidden state configuration with a certain output. Our visual analytics approach comprises several components: First, our input visualization shows the input sequence and how it relates to the output (using color coding). In addition, hidden states are visualized through a nonlinear projection into a 2-D visualization space using t-distributed stochastic neighbor embedding to understand the shape of the space of the hidden states. Trajectories are also employed to show the details of the evolution of the hidden state configurations. Finally, a time-multi-class heatmap matrix visualizes the evolution of the expected predictions for multi-class classifiers, and a histogram indicates the distances between the hidden states within the original space. The different visualizations are shown simultaneously in multiple views and support brushing-and-linking to facilitate the analysis of the classifications and debugging for misclassified input sequences. To demonstrate the capability of our approach, we discuss two typical use cases for long short-term memory models applied to two widely used natural language processing datasets.</p>","PeriodicalId":52384,"journal":{"name":"Visual Computing for Industry, Biomedicine, and Art","volume":"4 1","pages":"24"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8479019/pdf/","citationCount":"0","resultStr":"{\"title\":\"Visual analytics tool for the interpretation of hidden states in recurrent neural networks.\",\"authors\":\"Rafael Garcia, Tanja Munz, Daniel Weiskopf\",\"doi\":\"10.1186/s42492-021-00090-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In this paper, we introduce a visual analytics approach aimed at helping machine learning experts analyze the hidden states of layers in recurrent neural networks. Our technique allows the user to interactively inspect how hidden states store and process information throughout the feeding of an input sequence into the network. The technique can help answer questions, such as which parts of the input data have a higher impact on the prediction and how the model correlates each hidden state configuration with a certain output. Our visual analytics approach comprises several components: First, our input visualization shows the input sequence and how it relates to the output (using color coding). In addition, hidden states are visualized through a nonlinear projection into a 2-D visualization space using t-distributed stochastic neighbor embedding to understand the shape of the space of the hidden states. Trajectories are also employed to show the details of the evolution of the hidden state configurations. 
Finally, a time-multi-class heatmap matrix visualizes the evolution of the expected predictions for multi-class classifiers, and a histogram indicates the distances between the hidden states within the original space. The different visualizations are shown simultaneously in multiple views and support brushing-and-linking to facilitate the analysis of the classifications and debugging for misclassified input sequences. To demonstrate the capability of our approach, we discuss two typical use cases for long short-term memory models applied to two widely used natural language processing datasets.</p>\",\"PeriodicalId\":52384,\"journal\":{\"name\":\"Visual Computing for Industry, Biomedicine, and Art\",\"volume\":\"4 1\",\"pages\":\"24\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8479019/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Visual Computing for Industry, Biomedicine, and Art\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://doi.org/10.1186/s42492-021-00090-0\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Computing for Industry, Biomedicine, and Art","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.1186/s42492-021-00090-0","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we introduce a visual analytics approach aimed at helping machine learning experts analyze the hidden states of layers in recurrent neural networks. Our technique allows the user to interactively inspect how hidden states store and process information throughout the feeding of an input sequence into the network. The technique can help answer questions, such as which parts of the input data have a higher impact on the prediction and how the model correlates each hidden state configuration with a certain output. Our visual analytics approach comprises several components: First, our input visualization shows the input sequence and how it relates to the output (using color coding). In addition, hidden states are visualized through a nonlinear projection into a 2-D visualization space using t-distributed stochastic neighbor embedding to understand the shape of the space of the hidden states. Trajectories are also employed to show the details of the evolution of the hidden state configurations. Finally, a time-multi-class heatmap matrix visualizes the evolution of the expected predictions for multi-class classifiers, and a histogram indicates the distances between the hidden states within the original space. The different visualizations are shown simultaneously in multiple views and support brushing-and-linking to facilitate the analysis of the classifications and debugging for misclassified input sequences. To demonstrate the capability of our approach, we discuss two typical use cases for long short-term memory models applied to two widely used natural language processing datasets.
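To make the pipeline described in the abstract more concrete, here is a minimal sketch of how per-timestep hidden states of an LSTM classifier could be extracted, projected into 2-D with t-distributed stochastic neighbor embedding, and summarized by per-timestep class probabilities and pairwise distances. This is not the authors' implementation: the PyTorch toy model, vocabulary size, layer dimensions, and t-SNE parameters are hypothetical placeholders chosen only for illustration.

```python
# Hedged sketch: inspecting LSTM hidden states along one input sequence.
# Model, data, and dimensions are hypothetical, not taken from the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import TSNE
from scipy.spatial.distance import pdist

# Toy LSTM classifier; in the paper, hidden states come from trained
# long short-term memory models on natural language processing datasets.
vocab_size, embed_dim, hidden_dim, num_classes = 1000, 64, 128, 4
embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
classifier = nn.Linear(hidden_dim, num_classes)

# One input sequence of 30 token ids (randomly generated here).
tokens = torch.randint(0, vocab_size, (1, 30))

with torch.no_grad():
    embedded = embedding(tokens)                    # (1, T, embed_dim)
    hidden_seq, _ = lstm(embedded)                  # (1, T, hidden_dim)
    hidden_states = hidden_seq.squeeze(0).numpy()   # (T, hidden_dim)

    # Expected class probabilities per timestep: the kind of data a
    # time-multi-class heatmap matrix could visualize.
    probs = torch.softmax(classifier(hidden_seq.squeeze(0)), dim=-1).numpy()

# Nonlinear projection of the hidden-state trajectory into 2-D.
# Note: t-SNE perplexity must be smaller than the number of samples.
projection = TSNE(n_components=2, perplexity=5, init="pca",
                  random_state=0).fit_transform(hidden_states)

# Pairwise distances between hidden states in the original space,
# which could feed the distance histogram mentioned in the abstract.
distances = pdist(hidden_states, metric="euclidean")

print(projection.shape, probs.shape, distances.shape)
```

In an interactive tool such as the one described, the 2-D projection, the per-timestep probabilities, and the distance histogram would be rendered as linked views rather than printed; the sketch only shows where those quantities could come from.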