Enhancing Implicit Discourse Relation Classification by Perceiving External Semantics and Convolving Internal Semantics

Zujun Dou, Yu Hong, Yu Sun, Xiao Li, Guodong Zhou
{"title":"外语义感知与内语义卷积增强内隐语篇关系分类","authors":"Zujun Dou, Yu Hong, Yu Sun, Xiao Li, Guodong Zhou","doi":"10.1109/ICTAI56018.2022.00080","DOIUrl":null,"url":null,"abstract":"Implicit discourse relation classification refers to a task of automatically determining relationships between arguments. It has been widely proven that, in a neural classification architecture, decoding discourse relations heavily relies on the reliable semantic representations of arguments. In addition, our previous survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure appears as the crucial feature for synthesizing word senses of the entire words in arguments. Accordingly, we propose a novel method to enhance the current representation learning of pairwise arguments, which takes into consideration both external semantic information and internal dependency structure. In particular, we inject external semantic information into the Long-Short Term Memory (LSTM) unit of Recurrent Neural Network (RNN) through the input and forget gates. Different from the existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at each encoding step. On the basis, we apply the parser-based Graph Convolutional Networks (GCN) over the semantic presentations of words, so as to accumulate the closely-related semantic information in terms of dependency structures. We conduct experiments on Penn Discourse TreeBank Corpus of version 2.0 (PDTB 2.0). The test results illustrate that the proposed method enhances the baseline significantly, and it obtains comparable performance compared to the state of the art.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing Implicit Discourse Relation Classification by Perceiving External Semantics and Convolving Internal Semantics\",\"authors\":\"Zujun Dou, Yu Hong, Yu Sun, Xiao Li, Guodong Zhou\",\"doi\":\"10.1109/ICTAI56018.2022.00080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Implicit discourse relation classification refers to a task of automatically determining relationships between arguments. It has been widely proven that, in a neural classification architecture, decoding discourse relations heavily relies on the reliable semantic representations of arguments. In addition, our previous survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure appears as the crucial feature for synthesizing word senses of the entire words in arguments. Accordingly, we propose a novel method to enhance the current representation learning of pairwise arguments, which takes into consideration both external semantic information and internal dependency structure. In particular, we inject external semantic information into the Long-Short Term Memory (LSTM) unit of Recurrent Neural Network (RNN) through the input and forget gates. 
Different from the existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at each encoding step. On the basis, we apply the parser-based Graph Convolutional Networks (GCN) over the semantic presentations of words, so as to accumulate the closely-related semantic information in terms of dependency structures. We conduct experiments on Penn Discourse TreeBank Corpus of version 2.0 (PDTB 2.0). The test results illustrate that the proposed method enhances the baseline significantly, and it obtains comparable performance compared to the state of the art.\",\"PeriodicalId\":354314,\"journal\":{\"name\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"volume\":\"63 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTAI56018.2022.00080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Implicit discourse relation classification is the task of automatically determining the relationship that holds between two arguments. It has been widely shown that, in a neural classification architecture, decoding discourse relations relies heavily on reliable semantic representations of the arguments. In addition, our earlier survey shows that, for a target argument, the external semantic information hidden in the accompanying argument benefits the encoding of the target, either wholly or partially. Moreover, dependency structure emerges as a crucial feature for synthesizing the word senses of all the words in an argument. Accordingly, we propose a novel method that enhances the representation learning of paired arguments by taking into account both external semantic information and internal dependency structure. Specifically, we inject external semantic information into the Long Short-Term Memory (LSTM) unit of a Recurrent Neural Network (RNN) through the input and forget gates. Unlike existing one-off interactive learning models, our method allows the neuronal memory of internal argument semantics to be affected by external information at every encoding step. On this basis, we apply parser-based Graph Convolutional Networks (GCN) over the semantic representations of words, so as to accumulate closely related semantic information along dependency structures. We conduct experiments on the Penn Discourse TreeBank 2.0 corpus (PDTB 2.0). The test results show that the proposed method improves significantly over the baseline and achieves performance comparable to the state of the art.
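
The abstract describes the gate-level injection only in prose. As an illustrative reading, here is a minimal PyTorch sketch of an LSTM cell whose input and forget gates additionally perceive an external semantic vector; the class name, the parameter names, and the choice to pool the accompanying argument into a single vector "external" are assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class ExternallyGatedLSTMCell(nn.Module):
    """Hypothetical LSTM cell whose input and forget gates also see an
    external semantic vector (e.g., a pooled encoding of the accompanying
    argument), so that external information can influence the memory at
    every encoding step rather than in a single interaction pass."""

    def __init__(self, input_size, hidden_size, external_size):
        super().__init__()
        self.x2h = nn.Linear(input_size, 4 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size)
        # External semantics feed only the input and forget gates.
        self.e2if = nn.Linear(external_size, 2 * hidden_size)

    def forward(self, x_t, state, external):
        h_prev, c_prev = state
        gates = self.x2h(x_t) + self.h2h(h_prev)
        i, f, g, o = gates.chunk(4, dim=-1)
        e_i, e_f = self.e2if(external).chunk(2, dim=-1)
        i = torch.sigmoid(i + e_i)   # input gate perceives external semantics
        f = torch.sigmoid(f + e_f)   # forget gate perceives external semantics
        g = torch.tanh(g)            # candidate memory: internal semantics only
        o = torch.sigmoid(o)         # output gate: internal semantics only
        c_t = f * c_prev + i * g
        h_t = o * torch.tanh(c_t)
        return h_t, c_t

A full encoder would unroll this cell over the tokens of the target argument while holding "external" fixed, which realizes the per-step influence that distinguishes the method from one-off interactive models.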
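The second component, parser-based graph convolution over dependency structure, can likewise be sketched as a single layer. This too is an assumption-laden illustration rather than the published implementation: the adjacency matrix adj is taken to be built from dependency arcs, made symmetric with self-loops added, and the degree normalisation is one common GCN choice the abstract does not specify.

import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    """Hypothetical graph-convolution layer over a dependency parse: each
    word aggregates the representations of its syntactic neighbours, so
    closely related semantic information accumulates along arcs."""

    def __init__(self, hidden_size):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, h, adj):
        # h:   (batch, num_words, hidden_size) word representations,
        #      e.g., the hidden states produced by the LSTM above.
        # adj: (batch, num_words, num_words) 0/1 adjacency from the
        #      parser, assumed symmetric with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # degree normalisation
        return torch.relu(torch.bmm(adj, self.linear(h)) / deg)

Stacking two or three such layers would let each word accumulate semantics from neighbours two or three dependency arcs away, one plausible way to realize the accumulation over dependency structures that the abstract describes.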