Linguistically-Inspired Neural Coreference Resolution

Xuanyue Yang, Wenting Ye, Luke Breitfeller, Tianwei Yue, Wenping Wang
{"title":"语言启发的神经共指解析","authors":"Xuanyue Yang, Wenting Ye, Luke Breitfeller, Tianwei Yue, Wenping Wang","doi":"10.54364/aaiml.2023.1166","DOIUrl":null,"url":null,"abstract":"The field of coreference resolution has witnessed significant advancements since the introduction of deep learning-based models. In this paper, we replicate the state-of-the-art coreference resolution model and perform a thorough error analysis. We identify a potential limitation of the current approach in terms of its treatment of grammatical constructions within sentences. Furthermore, the model struggles to leverage contextual information across sentences, resulting in suboptimal accuracy when resolving mentions that span multiple sentences. Motivated by these observations, we propose an approach that integrates linguistic information throughout the entire architecture. Our innovative contributions include multitask learning with part-of-speech (POS) tagging, supervision of intermediate scores, and self-attention mechanisms that operate across sentences. By incorporating these linguisticinspired modules, we not only achieve a modest improvement in the F1 score on CoNLL 2012 dataset, but we also perform qualitative analysis to ascertain whether our model invisibly surpasses the baseline performance. Our findings demonstrate that our model successfully learns linguistic signals that are absent in the original baseline. We posit that these enhance ments may have gone undetected due to annotation errors, but they nonetheless lead to a more accurate understanding of coreference resolution.","PeriodicalId":373878,"journal":{"name":"Adv. Artif. Intell. Mach. Learn.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Linguistically-Inspired Neural Coreference Resolution\",\"authors\":\"Xuanyue Yang, Wenting Ye, Luke Breitfeller, Tianwei Yue, Wenping Wang\",\"doi\":\"10.54364/aaiml.2023.1166\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The field of coreference resolution has witnessed significant advancements since the introduction of deep learning-based models. In this paper, we replicate the state-of-the-art coreference resolution model and perform a thorough error analysis. We identify a potential limitation of the current approach in terms of its treatment of grammatical constructions within sentences. Furthermore, the model struggles to leverage contextual information across sentences, resulting in suboptimal accuracy when resolving mentions that span multiple sentences. Motivated by these observations, we propose an approach that integrates linguistic information throughout the entire architecture. Our innovative contributions include multitask learning with part-of-speech (POS) tagging, supervision of intermediate scores, and self-attention mechanisms that operate across sentences. By incorporating these linguisticinspired modules, we not only achieve a modest improvement in the F1 score on CoNLL 2012 dataset, but we also perform qualitative analysis to ascertain whether our model invisibly surpasses the baseline performance. Our findings demonstrate that our model successfully learns linguistic signals that are absent in the original baseline. 
We posit that these enhance ments may have gone undetected due to annotation errors, but they nonetheless lead to a more accurate understanding of coreference resolution.\",\"PeriodicalId\":373878,\"journal\":{\"name\":\"Adv. Artif. Intell. Mach. Learn.\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Adv. Artif. Intell. Mach. Learn.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54364/aaiml.2023.1166\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adv. Artif. Intell. Mach. Learn.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54364/aaiml.2023.1166","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

The field of coreference resolution has witnessed significant advancements since the introduction of deep learning-based models. In this paper, we replicate the state-of-the-art coreference resolution model and perform a thorough error analysis. We identify a potential limitation of the current approach in its treatment of grammatical constructions within sentences. Furthermore, the model struggles to leverage contextual information across sentences, resulting in suboptimal accuracy when resolving mentions that span multiple sentences. Motivated by these observations, we propose an approach that integrates linguistic information throughout the entire architecture. Our contributions include multitask learning with part-of-speech (POS) tagging, supervision of intermediate scores, and self-attention mechanisms that operate across sentences. By incorporating these linguistically-inspired modules, we not only achieve a modest improvement in F1 score on the CoNLL 2012 dataset, but also perform qualitative analysis to ascertain whether our model surpasses the baseline in ways the aggregate metrics do not capture. Our findings demonstrate that our model successfully learns linguistic signals that are absent in the original baseline. We posit that these enhancements may have gone undetected due to annotation errors, but they nonetheless lead to a more accurate understanding of coreference resolution.
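
To make these contributions concrete, the sketch below illustrates two of the named modules in PyTorch: an auxiliary POS-tagging head trained jointly with the coreference objective, and a self-attention layer applied over the whole document so that mentions can attend to context in other sentences. This is a minimal sketch under our own assumptions; the paper does not publish this code, and the names used here (CrossSentenceEncoder, PosAuxHead, lambda_pos) are hypothetical illustrations rather than the authors' implementation.

```python
# Minimal sketch of two linguistically-inspired modules: hypothetical
# names and wiring, not the paper's published code.
import torch
import torch.nn as nn


class CrossSentenceEncoder(nn.Module):
    """Self-attention over token encodings of the entire document,
    so a mention can attend to context in other sentences."""

    def __init__(self, hidden_dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, tokens: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, doc_len, hidden); pad_mask: True at padding positions.
        attended, _ = self.attn(tokens, tokens, tokens, key_padding_mask=pad_mask)
        return self.norm(tokens + attended)  # residual connection


class PosAuxHead(nn.Module):
    """Auxiliary POS-tagging head for multitask learning: a linear
    classifier over the shared token encodings."""

    def __init__(self, hidden_dim: int, num_pos_tags: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_pos_tags)
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, tokens: torch.Tensor, pos_labels: torch.Tensor) -> torch.Tensor:
        logits = self.proj(tokens)  # (batch, doc_len, num_pos_tags)
        return self.loss_fn(logits.flatten(0, 1), pos_labels.flatten())


def total_loss(coref_loss: torch.Tensor, pos_loss: torch.Tensor,
               lambda_pos: float = 0.1) -> torch.Tensor:
    # Joint objective: coreference loss plus a weighted auxiliary POS loss.
    # lambda_pos is an assumed hyperparameter, not a value from the paper.
    return coref_loss + lambda_pos * pos_loss
```

Supervision of intermediate scores would follow the same multitask pattern: attach an auxiliary loss to the model's intermediate mention scores and fold it into the joint objective alongside the POS term.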