Document-level relation extraction via dual attention fusion and dynamic asymmetric loss
Xiaoyao Ding, Dongyan Ding, Gang Zhou, Jicang Lu, Taojie Zhu
{"title":"通过双重注意力融合和动态非对称损失进行文档级关系提取","authors":"Xiaoyao Ding, Dongyan Ding, Gang Zhou, Jicang Lu, Taojie Zhu","doi":"10.1007/s40747-024-01632-8","DOIUrl":null,"url":null,"abstract":"<p>Document-level relation extraction (RE), which requires integrating and reasoning information to identify multiple possible relations among entities. However, previous research typically performed reasoning on heterogeneous graphs and set a global threshold for multiple relations classification, regardless of interaction reasoning information among multiple relations and positive–negative samples imbalance on databases. This paper proposes a novel framework for Document-level RE with two techniques, dual attention fusion and dynamic asymmetric loss. Concretely, to obtain more interdependency feature learning, we construct entity pairs and contextual matrixes using multi-head axial attention and co-attention mechanism to learn the interaction among entity pairs deeply. To alleviate the hard-thresholds influence from positive–negative imbalance samples, we dynamically adjust weights to optimize the probabilities of different labels. We evaluate our model on two benchmark document-level RE datasets, DocRED and CDR. Experimental results show that our DASL (Dual Attention fusion and dynamic aSymmetric Loss) obtains superior performance on two public datasets, we further provide extensive experiments to analyze how dual attention fusion and dynamic asymmetric loss guide the model for better extracting multi-label relations among entities.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"154 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Document-level relation extraction via dual attention fusion and dynamic asymmetric loss\",\"authors\":\"Xiaoyao Ding, Dongyan Ding, Gang Zhou, Jicang Lu, Taojie Zhu\",\"doi\":\"10.1007/s40747-024-01632-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Document-level relation extraction (RE), which requires integrating and reasoning information to identify multiple possible relations among entities. However, previous research typically performed reasoning on heterogeneous graphs and set a global threshold for multiple relations classification, regardless of interaction reasoning information among multiple relations and positive–negative samples imbalance on databases. This paper proposes a novel framework for Document-level RE with two techniques, dual attention fusion and dynamic asymmetric loss. Concretely, to obtain more interdependency feature learning, we construct entity pairs and contextual matrixes using multi-head axial attention and co-attention mechanism to learn the interaction among entity pairs deeply. To alleviate the hard-thresholds influence from positive–negative imbalance samples, we dynamically adjust weights to optimize the probabilities of different labels. We evaluate our model on two benchmark document-level RE datasets, DocRED and CDR. 
Experimental results show that our DASL (Dual Attention fusion and dynamic aSymmetric Loss) obtains superior performance on two public datasets, we further provide extensive experiments to analyze how dual attention fusion and dynamic asymmetric loss guide the model for better extracting multi-label relations among entities.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"154 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01632-8\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01632-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Document-level relation extraction (RE) requires integrating and reasoning over information to identify multiple possible relations among entities. However, previous research typically performed reasoning on heterogeneous graphs and set a single global threshold for multi-relation classification, ignoring both the interaction and reasoning information shared among multiple relations and the positive–negative sample imbalance in the datasets. This paper proposes a novel framework for document-level RE built on two techniques: dual attention fusion and dynamic asymmetric loss. Concretely, to learn richer interdependency features, we construct entity-pair and contextual matrices using multi-head axial attention and a co-attention mechanism to deeply model the interaction among entity pairs. To alleviate the influence of hard thresholds under positive–negative sample imbalance, we dynamically adjust weights to optimize the probabilities of different labels. We evaluate our model on two benchmark document-level RE datasets, DocRED and CDR. Experimental results show that our DASL (Dual Attention fusion and dynamic aSymmetric Loss) achieves superior performance on both public datasets. We further provide extensive experiments analyzing how dual attention fusion and dynamic asymmetric loss guide the model toward better extraction of multi-label relations among entities.
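The abstract does not give the loss formulation, but the description (softening the effect of a hard global threshold by re-weighting the dominant negative labels) matches the family of asymmetric losses used for imbalanced multi-label classification. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the class name and the fixed gamma_pos/gamma_neg/clip values are illustrative placeholders, and the paper's "dynamic" variant presumably adjusts such weights during training rather than fixing them.

```python
import torch
import torch.nn as nn


class AsymmetricMultiLabelLoss(nn.Module):
    """Hypothetical sketch of an asymmetric loss for multi-label relation
    classification: easy/abundant negatives are down-weighted more aggressively
    than positives, reducing the impact of positive-negative imbalance."""

    def __init__(self, gamma_pos: float = 1.0, gamma_neg: float = 4.0, clip: float = 0.05):
        super().__init__()
        self.gamma_pos = gamma_pos  # focusing exponent for positive labels
        self.gamma_neg = gamma_neg  # stronger focusing for (abundant) negative labels
        self.clip = clip            # probability shift that discards very easy negatives

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits, targets: (batch, num_relations); targets are 0/1 floats.
        p = torch.sigmoid(logits)
        p_neg = (p - self.clip).clamp(min=0)  # shifted probability for negatives

        # Focal-style terms: positives weighted by (1 - p)^gamma_pos,
        # negatives by p_neg^gamma_neg, so confident negatives contribute little.
        loss_pos = targets * (1 - p).pow(self.gamma_pos) * torch.log(p.clamp(min=1e-8))
        loss_neg = (1 - targets) * p_neg.pow(self.gamma_neg) * torch.log((1 - p_neg).clamp(min=1e-8))
        return -(loss_pos + loss_neg).sum(dim=-1).mean()
```

In this sketch, logits would be the per-relation scores for one entity pair and targets the corresponding multi-hot relation vector.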
About the journal
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.