Guiding transformer to generate graph structure for AMR parsing

Runliang Niu, Qi Wang
{"title":"引导变压器生成用于AMR解析的图结构","authors":"Runliang Niu, Qi Wang","doi":"10.1117/12.2639102","DOIUrl":null,"url":null,"abstract":"Abstract Meaning Representation (AMR) is a kind of semantic representation of natural language, which aims to represent the semantics of a sentence by a rooted, directed, and acyclic graph (DAG). Most existing AMR parsing works are designed under specific dictionary. However, these works make the content length of each node limited, and they mainly need to go through a very complicated post-processing process. In this paper, we propose a novel encoder-decoder framework for AMR parsing to address these issues, which generates a graph structure and predicts node relationships simultaneously. Specifically, we represent each node as a five-tuple form, containing token sequence of variable length and the connection relationship with other nodes. BERT model is employed as the encoder module. Our decoder module first generates a linearization representation of the graph structure, then predicts multiple elements of each node by four different attention based classifiers. We also found an effective way to improve the generalization performance of Transformer model for graph generation. By assigning different index number to nodes in each training step and remove positional encoding used in most generative models, the model can learn the relationship between nodes better. Experiments against two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.","PeriodicalId":336892,"journal":{"name":"Neural Networks, Information and Communication Engineering","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Guiding transformer to generate graph structure for AMR parsing\",\"authors\":\"Runliang Niu, Qi Wang\",\"doi\":\"10.1117/12.2639102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Meaning Representation (AMR) is a kind of semantic representation of natural language, which aims to represent the semantics of a sentence by a rooted, directed, and acyclic graph (DAG). Most existing AMR parsing works are designed under specific dictionary. However, these works make the content length of each node limited, and they mainly need to go through a very complicated post-processing process. In this paper, we propose a novel encoder-decoder framework for AMR parsing to address these issues, which generates a graph structure and predicts node relationships simultaneously. Specifically, we represent each node as a five-tuple form, containing token sequence of variable length and the connection relationship with other nodes. BERT model is employed as the encoder module. Our decoder module first generates a linearization representation of the graph structure, then predicts multiple elements of each node by four different attention based classifiers. We also found an effective way to improve the generalization performance of Transformer model for graph generation. By assigning different index number to nodes in each training step and remove positional encoding used in most generative models, the model can learn the relationship between nodes better. 
Experiments against two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.\",\"PeriodicalId\":336892,\"journal\":{\"name\":\"Neural Networks, Information and Communication Engineering\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks, Information and Communication Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.2639102\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks, Information and Communication Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2639102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Abstract Meaning Representation (AMR) is a semantic representation of natural language that aims to capture the meaning of a sentence as a rooted, directed, acyclic graph (DAG). Most existing AMR parsing approaches are designed around a specific dictionary; as a result, they limit the content length of each node and typically require a very complicated post-processing step. In this paper, we propose a novel encoder-decoder framework for AMR parsing that addresses these issues by generating the graph structure and predicting node relationships simultaneously. Specifically, we represent each node as a five-tuple containing a token sequence of variable length and its connection relationships with other nodes. A BERT model is employed as the encoder module. Our decoder module first generates a linearized representation of the graph structure, then predicts the elements of each node with four different attention-based classifiers. We also found an effective way to improve the generalization performance of the Transformer model for graph generation: by assigning different index numbers to nodes at each training step and removing the positional encoding used in most generative models, the model can better learn the relationships between nodes. Experiments on two AMR datasets demonstrate the competitive performance of our proposed method compared with baseline methods.
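The abstract's generalization trick, giving graph nodes fresh index numbers at every training step so the model cannot memorize absolute positions, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `randomize_node_indices`, the `(source, relation, target)` edge format, and the `index_vocab_size` parameter are all assumptions made for illustration only.

```python
# Minimal sketch (assumed, not from the paper): remap node ids to random
# indices on each call, as one would do once per training step, so the model
# must learn relations between nodes rather than absolute index positions.
import random

def randomize_node_indices(edges, num_nodes, index_vocab_size):
    """Map node ids 0..num_nodes-1 to distinct random indices drawn from a
    larger index vocabulary (index_vocab_size is assumed >= num_nodes)."""
    new_ids = random.sample(range(index_vocab_size), num_nodes)
    mapping = {old: new for old, new in enumerate(new_ids)}
    remapped_edges = [(mapping[src], rel, mapping[dst]) for src, rel, dst in edges]
    return mapping, remapped_edges

# Toy AMR-like graph: (source node, relation label, target node)
edges = [(0, ":ARG0", 1), (0, ":ARG1", 2)]
mapping, remapped = randomize_node_indices(edges, num_nodes=3, index_vocab_size=64)
print(mapping)   # e.g. {0: 17, 1: 4, 2: 52} -- different on every training step
print(remapped)  # the same edges, rewritten with the randomized indices
```

Combined with dropping the decoder's positional encoding, as the abstract describes, this kind of per-step remapping removes any fixed correspondence between a node and a particular index, which is the stated reason the model learns inter-node relationships better.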