{"title":"外语义感知与内语义卷积增强内隐语篇关系分类","authors":"Zujun Dou, Yu Hong, Yu Sun, Xiao Li, Guodong Zhou","doi":"10.1109/ICTAI56018.2022.00080","DOIUrl":null,"url":null,"abstract":"Implicit discourse relation classification refers to a task of automatically determining relationships between arguments. It has been widely proven that, in a neural classification architecture, decoding discourse relations heavily relies on the reliable semantic representations of arguments. In addition, our previous survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure appears as the crucial feature for synthesizing word senses of the entire words in arguments. Accordingly, we propose a novel method to enhance the current representation learning of pairwise arguments, which takes into consideration both external semantic information and internal dependency structure. In particular, we inject external semantic information into the Long-Short Term Memory (LSTM) unit of Recurrent Neural Network (RNN) through the input and forget gates. Different from the existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at each encoding step. On the basis, we apply the parser-based Graph Convolutional Networks (GCN) over the semantic presentations of words, so as to accumulate the closely-related semantic information in terms of dependency structures. We conduct experiments on Penn Discourse TreeBank Corpus of version 2.0 (PDTB 2.0). The test results illustrate that the proposed method enhances the baseline significantly, and it obtains comparable performance compared to the state of the art.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing Implicit Discourse Relation Classification by Perceiving External Semantics and Convolving Internal Semantics\",\"authors\":\"Zujun Dou, Yu Hong, Yu Sun, Xiao Li, Guodong Zhou\",\"doi\":\"10.1109/ICTAI56018.2022.00080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Implicit discourse relation classification refers to a task of automatically determining relationships between arguments. It has been widely proven that, in a neural classification architecture, decoding discourse relations heavily relies on the reliable semantic representations of arguments. In addition, our previous survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure appears as the crucial feature for synthesizing word senses of the entire words in arguments. Accordingly, we propose a novel method to enhance the current representation learning of pairwise arguments, which takes into consideration both external semantic information and internal dependency structure. In particular, we inject external semantic information into the Long-Short Term Memory (LSTM) unit of Recurrent Neural Network (RNN) through the input and forget gates. 
Different from the existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at each encoding step. On the basis, we apply the parser-based Graph Convolutional Networks (GCN) over the semantic presentations of words, so as to accumulate the closely-related semantic information in terms of dependency structures. We conduct experiments on Penn Discourse TreeBank Corpus of version 2.0 (PDTB 2.0). The test results illustrate that the proposed method enhances the baseline significantly, and it obtains comparable performance compared to the state of the art.\",\"PeriodicalId\":354314,\"journal\":{\"name\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"volume\":\"63 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI56018.2022.00080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Enhancing Implicit Discourse Relation Classification by Perceiving External Semantics and Convolving Internal Semantics
Implicit discourse relation classification is the task of automatically determining the relationship that holds between a pair of arguments. It has been widely shown that, in a neural classification architecture, decoding discourse relations relies heavily on reliable semantic representations of the arguments. In addition, our previous survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure emerges as a crucial feature for synthesizing the senses of all words in an argument. Accordingly, we propose a novel method that enhances the current representation learning of paired arguments by taking into account both external semantic information and internal dependency structure. Specifically, we inject external semantic information into the Long Short-Term Memory (LSTM) unit of a Recurrent Neural Network (RNN) through the input and forget gates. Unlike existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at every encoding step. On this basis, we apply a parser-based Graph Convolutional Network (GCN) over the semantic representations of words, so as to accumulate closely related semantic information along dependency structures. We conduct experiments on the Penn Discourse TreeBank 2.0 (PDTB 2.0) corpus. The results show that the proposed method significantly improves over the baseline and achieves performance comparable to the state of the art.
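The abstract does not include code, but the gate-level injection it describes can be made concrete. Below is a minimal sketch, not the authors' released implementation: a PyTorch LSTM cell in which an external context vector (assumed here to be a pooled encoding of the accompanying argument) additively modulates the input and forget gates at every time step, so that external semantics influence what is written to and retained in the cell memory throughout encoding. All class and parameter names are hypothetical.

```python
# Sketch of an LSTM cell whose input and forget gates are conditioned, at
# every step, on an external semantic vector `ext` (e.g., a pooled encoding
# of the accompanying argument). Assumed names; not the paper's code.
import torch
import torch.nn as nn

class ExternalGateLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, ext_size: int):
        super().__init__()
        # Standard LSTM affine maps for the input x and previous hidden state h.
        self.x2h = nn.Linear(input_size, 4 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        # Extra terms injecting external semantics into the input and forget
        # gates only, mirroring the injection points named in the abstract.
        self.ext2i = nn.Linear(ext_size, hidden_size, bias=False)
        self.ext2f = nn.Linear(ext_size, hidden_size, bias=False)

    def forward(self, x, state, ext):
        h, c = state
        gates = self.x2h(x) + self.h2h(h)
        i, f, g, o = gates.chunk(4, dim=-1)
        i = torch.sigmoid(i + self.ext2i(ext))  # input gate, externally modulated
        f = torch.sigmoid(f + self.ext2f(ext))  # forget gate, externally modulated
        g = torch.tanh(g)                       # candidate cell update
        o = torch.sigmoid(o)                    # output gate (unchanged)
        c = f * c + i * g                       # external info shapes memory here
        h = o * torch.tanh(c)
        return h, c
```

Because `ext` enters the cell at every step rather than once before or after encoding, this realizes the contrast the abstract draws with one-off interactive learning models.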
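Likewise, the parser-based graph convolution over dependency structures can be sketched as a single GCN layer H' = ReLU(A_hat H W), where the adjacency matrix is built from dependency arcs (plus self-loops) produced by an off-the-shelf parser. This is an illustrative sketch under those assumptions, with hypothetical names, not the paper's exact formulation.

```python
# Sketch of one graph convolution over a dependency parse: hidden states of
# syntactically linked words are aggregated, so closely related semantic
# information accumulates along dependency arcs. Hypothetical names.
import torch
import torch.nn as nn

class DepGCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, edges: list) -> torch.Tensor:
        # h: (n, dim) per-word states; edges: (head, dependent) index pairs.
        n = h.size(0)
        adj = torch.eye(n, device=h.device, dtype=h.dtype)  # self-loops
        for head, dep in edges:                 # treat arcs as undirected
            adj[head, dep] = adj[dep, head] = 1.0
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize: A_hat
        return torch.relu(adj @ self.linear(h))   # H' = ReLU(A_hat H W)

# Usage with a toy argument "The market fell sharply" (hypothetical arcs):
# states = torch.randn(4, 128)       # e.g., LSTM hidden states per word
# arcs = [(1, 0), (2, 1), (2, 3)]    # market->The, fell->market, fell->sharply
# out = DepGCNLayer(128)(states, arcs)
```

Feeding the externally informed LSTM states into such a layer matches the pipeline order the abstract describes: perceive external semantics first, then convolve internal semantics along the parse.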