Exploring Pre-Trained Transformers and Bilingual Transfer Learning for Arabic Coreference Resolution

Bonan Min
{"title":"Exploring Pre-Trained Transformers and Bilingual Transfer Learning for Arabic Coreference Resolution","authors":"Bonan Min","doi":"10.18653/v1/2021.crac-1.10","DOIUrl":null,"url":null,"abstract":"In this paper, we develop bilingual transfer learning approaches to improve Arabic coreference resolution by leveraging additional English annotation via bilingual or multilingual pre-trained transformers. We show that bilingual transfer learning improves the strong transformer-based neural coreference models by 2-4 F1. We also systemically investigate the effectiveness of several pre-trained transformer models that differ in training corpora, languages covered, and model capacity. Our best model achieves a new state-of-the-art performance of 64.55 F1 on the Arabic OntoNotes dataset. Our code is publicly available at https://github.com/bnmin/arabic_coref.","PeriodicalId":447425,"journal":{"name":"Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2021.crac-1.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In this paper, we develop bilingual transfer learning approaches to improve Arabic coreference resolution by leveraging additional English annotation via bilingual or multilingual pre-trained transformers. We show that bilingual transfer learning improves strong transformer-based neural coreference models by 2-4 F1. We also systematically investigate the effectiveness of several pre-trained transformer models that differ in training corpora, languages covered, and model capacity. Our best model achieves a new state-of-the-art performance of 64.55 F1 on the Arabic OntoNotes dataset. Our code is publicly available at https://github.com/bnmin/arabic_coref.
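The abstract gives only the high-level recipe: fine-tune a bilingual or multilingual pre-trained transformer on Arabic coreference data while also exposing it to English coreference annotation. Below is a minimal, hypothetical Python sketch of that joint-training idea, not the authors' actual span-based coreference architecture (see https://github.com/bnmin/arabic_coref for the released code). The mention-pair simplification, the choice of `xlm-roberta-base`, the example format, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's model): bilingual transfer for coreference by
# pooling English and Arabic training examples into one stream and fine-tuning a
# single multilingual encoder, so English supervision also shapes Arabic
# representations. Example format and hyperparameters are assumptions.

import random
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # any bilingual/multilingual transformer could be used


class MentionPairScorer(nn.Module):
    """Scores whether two mention spans in a document corefer (simplified)."""

    def __init__(self, model_name: str = MODEL_NAME):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Concatenate the two span representations and score the pair.
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def span_repr(self, hidden_states, span):
        start, end = span  # subword-token indices, inclusive
        return hidden_states[:, start : end + 1].mean(dim=1)

    def forward(self, input_ids, attention_mask, span_a, span_b):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        pair = torch.cat(
            [self.span_repr(hidden, span_a), self.span_repr(hidden, span_b)], dim=-1
        )
        return self.scorer(pair).squeeze(-1)  # coreference logit


def train_bilingual(arabic_examples, english_examples, epochs: int = 1):
    """Joint training on a mix of Arabic and English mention-pair examples.

    Each example is assumed to be a dict:
      {"text": str, "span_a": (start, end), "span_b": (start, end), "label": 0 or 1}
    with spans given as subword-token indices under the same tokenizer.
    """
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = MentionPairScorer()
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    loss_fn = nn.BCEWithLogitsLoss()

    # Bilingual transfer, in its simplest form: pool both languages together.
    data = list(arabic_examples) + list(english_examples)
    for _ in range(epochs):
        random.shuffle(data)
        for ex in data:
            enc = tokenizer(ex["text"], return_tensors="pt", truncation=True)
            logit = model(enc["input_ids"], enc["attention_mask"],
                          ex["span_a"], ex["span_b"])
            loss = loss_fn(logit, torch.tensor([float(ex["label"])]))
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```

In this sketch the "transfer" is simply concatenating the two training sets behind one shared multilingual encoder; the paper's reported 2-4 F1 gains come from applying this kind of bilingual supervision to a much stronger span-based neural coreference model, and from comparing encoders that differ in training corpora, language coverage, and capacity.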