{"title":"Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance","authors":"Wanlong Liu, Shaohuan Cheng, Di Zeng, Hong Qu","doi":"10.18653/v1/2023.findings-acl.817","DOIUrl":null,"url":null,"abstract":"Document-level event argument extraction poses new challenges of long input and cross-sentence inference compared to its sentence-level counterpart. However, most prior works focus on capturing the relations between candidate arguments and the event trigger in each event, ignoring two crucial points: a) non-argument contextual clue information; b) the relevance among argument roles. In this paper, we propose a SCPRG (Span-trigger-based Contextual Pooling and latent Role Guidance) model, which contains two novel and effective modules for the above problem. The Span-Trigger-based Contextual Pooling(STCP) adaptively selects and aggregates the information of non-argument clue words based on the context attention weights of specific argument-trigger pairs from pre-trained model. The Role-based Latent Information Guidance (RLIG) module constructs latent role representations, makes them interact through role-interactive encoding to capture semantic relevance, and merges them into candidate arguments. Both STCP and RLIG introduce no more than 1% new parameters compared with the base model and can be easily applied to other event extraction models, which are compact and transplantable. Experiments on two public datasets show that our SCPRG outperforms previous state-of-the-art methods, with 1.13 F1 and 2.64 F1 improvements on RAMS and WikiEvents respectively. Further analyses illustrate the interpretability of our model.","PeriodicalId":352845,"journal":{"name":"Annual Meeting of the Association for Computational Linguistics","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annual Meeting of the Association for Computational Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2023.findings-acl.817","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Document-level event argument extraction poses new challenges of long inputs and cross-sentence inference compared to its sentence-level counterpart. However, most prior works focus on capturing the relations between candidate arguments and the event trigger in each event, ignoring two crucial points: a) non-argument contextual clue information; b) the relevance among argument roles. In this paper, we propose SCPRG (Span-trigger-based Contextual Pooling and latent Role Guidance), a model that contains two novel and effective modules addressing the above problems. The Span-Trigger-based Contextual Pooling (STCP) module adaptively selects and aggregates the information of non-argument clue words based on the context attention weights of specific argument-trigger pairs from the pre-trained model. The Role-based Latent Information Guidance (RLIG) module constructs latent role representations, makes them interact through role-interactive encoding to capture semantic relevance, and merges them into candidate arguments. Both STCP and RLIG introduce no more than 1% new parameters compared with the base model, are compact and transplantable, and can be easily applied to other event extraction models. Experiments on two public datasets show that SCPRG outperforms previous state-of-the-art methods, with improvements of 1.13 F1 on RAMS and 2.64 F1 on WikiEvents. Further analyses illustrate the interpretability of our model.
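The abstract gives no implementation details, but the STCP idea (pooling non-argument clue words with the attention weights of a specific argument-trigger pair) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the function name stcp_pooling, the tensor shapes, and the choice of multiplying and renormalizing span-side and trigger-side attention are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of span-trigger-based contextual pooling (STCP).
# Shapes and the attention-combination rule are assumptions, not the paper's formulation.
import torch

def stcp_pooling(hidden_states, attn_weights, span_idx, trigger_idx):
    """
    hidden_states: (seq_len, hidden)          token representations from the encoder
    attn_weights:  (heads, seq_len, seq_len)  last-layer self-attention weights
    span_idx:      list of token indices of the candidate argument span
    trigger_idx:   list of token indices of the event trigger
    Returns a context vector aggregating clue words that both the span
    and the trigger attend to strongly.
    """
    # average attention from the span tokens and the trigger tokens over heads
    span_attn = attn_weights[:, span_idx, :].mean(dim=(0, 1))     # (seq_len,)
    trig_attn = attn_weights[:, trigger_idx, :].mean(dim=(0, 1))  # (seq_len,)
    # emphasize tokens attended to by both the span and the trigger
    joint = span_attn * trig_attn
    joint = joint / (joint.sum() + 1e-12)                         # normalize to a distribution
    # weighted sum over all tokens -> pair-specific context vector
    return joint @ hidden_states                                  # (hidden,)
```

In this reading, the pooled vector would be concatenated with the span and trigger representations before argument-role classification; the actual fusion used in the paper may differ.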
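Similarly, a rough sketch of the RLIG idea (latent role embeddings that interact through role-interactive encoding and are then merged into candidate arguments) might look like the following. The class name RoleGuidance, the use of a single nn.TransformerEncoderLayer for role interaction, and the concatenation-based fusion are hypothetical choices, not the published architecture.

```python
# Hypothetical sketch of role-based latent information guidance (RLIG).
import torch
import torch.nn as nn

class RoleGuidance(nn.Module):
    def __init__(self, num_roles, hidden_size, num_heads=4):
        super().__init__()
        # latent role embeddings, one vector per argument role
        self.role_embed = nn.Embedding(num_roles, hidden_size)
        # role-interactive encoding: self-attention over role vectors lets
        # semantically related roles exchange information
        self.role_encoder = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True)

    def forward(self, span_repr, role_ids):
        """
        span_repr: (num_spans, hidden)  candidate argument representations
        role_ids:  (num_roles,) long    role indices for the current event type
        Returns span representations fused with role information.
        """
        roles = self.role_embed(role_ids).unsqueeze(0)   # (1, R, H)
        roles = self.role_encoder(roles).squeeze(0)      # (R, H)
        # soft-select the most relevant roles for each span via dot-product scores
        scores = span_repr @ roles.t()                   # (S, R)
        mixed = scores.softmax(dim=-1) @ roles           # (S, H)
        return torch.cat([span_repr, mixed], dim=-1)     # (S, 2H)
```

Because the role embeddings and one lightweight encoder layer are the only new parameters, a module of this kind stays small relative to the pre-trained base model, which is consistent with the abstract's claim of under 1% additional parameters.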