Hao Yuan, Fan Yang, Mengnan Du, Shuiwang Ji, Xia Hu
Applied AI letters, published 26 November 2021. DOI: 10.1002/ail2.58. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.58
Towards structured NLP interpretation via graph explainers
Natural language processing (NLP) models are increasingly deployed in real-world applications, and the interpretation of textual data has recently attracted considerable attention. Most existing methods generate feature-importance interpretations, which indicate the contribution of each word to a specific model prediction. However, text data typically possess highly structured characteristics, and feature-importance explanations cannot fully reveal the rich information contained in text. To bridge this gap, we propose to generate structured interpretations for textual data. Specifically, we pre-process the original text using dependency parsing, which transforms the text from a sequence into a graph. Graph neural networks (GNNs) are then used to classify the transformed graphs. In particular, we explore two kinds of structured interpretation for pre-trained GNNs: edge-level interpretation and subgraph-level interpretation. Experimental results on three text datasets demonstrate that structured interpretations better reveal the structured knowledge encoded in the text. The experimental analysis further indicates that the proposed interpretations faithfully reflect the decision-making process of the GNN model.
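The pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: a real system would use a dependency parser (e.g., spaCy) to build the graph and a trained GNN to score it, whereas here a toy parse is hard-coded and a stand-in scoring function replaces the GNN, purely to show the perturbation idea behind edge-level interpretation. All names and values below are hypothetical.

```python
# A hand-coded dependency parse of "the movie was great",
# as (head, dependent) edges over token indices.
tokens = ["the", "movie", "was", "great"]
edges = [(1, 0), (2, 1), (2, 3)]  # movie->the, was->movie, was->great


def toy_graph_score(edge_subset):
    """Stand-in for a trained GNN's positive-class probability.

    A real model would run message passing over the graph; this toy
    version just rewards keeping the sentiment-bearing edge
    "was" -> "great" (2, 3) in the graph.
    """
    return 0.9 if (2, 3) in edge_subset else 0.4


def edge_importance(edge_list, score_fn):
    """Edge-level interpretation via leave-one-edge-out perturbation:
    an edge's importance is the drop in the model score when that
    single edge is removed from the graph."""
    full_score = score_fn(edge_list)
    return {
        e: full_score - score_fn([x for x in edge_list if x != e])
        for e in edge_list
    }


scores = edge_importance(edges, toy_graph_score)
# The edge connecting "great" to the root dominates; the determiner
# edge "movie" -> "the" contributes nothing under this toy scorer.
```

Under this setup, removing the edge (2, 3) drops the score from 0.9 to 0.4, so it receives the largest importance, which is the qualitative behavior an edge-level explainer aims to surface.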