{"title":"Multilingual Coreference Resolution with Harmonized Annotations","authors":"O. Pražák, Miloslav Konopík, Jakub Sido","doi":"10.26615/978-954-452-072-4_125","DOIUrl":null,"url":null,"abstract":"In this paper, we present coreference resolution experiments with a newly created multilingual corpus CorefUD (Nedoluzhko et al.,2021). We focus on the following languages: Czech, Russian, Polish, German, Spanish, and Catalan. In addition to monolingual experiments, we combine the training data in multilingual experiments and train two joined models - for Slavic languages and for all the languages together. We rely on an end-to-end deep learning model that we slightly adapted for the CorefUD corpus. Our results show that we can profit from harmonized annotations, and using joined models helps significantly for the languages with smaller training data.","PeriodicalId":284493,"journal":{"name":"Recent Advances in Natural Language Processing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Recent Advances in Natural Language Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26615/978-954-452-072-4_125","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11
Abstract
In this paper, we present coreference resolution experiments with the newly created multilingual corpus CorefUD (Nedoluzhko et al., 2021). We focus on the following languages: Czech, Russian, Polish, German, Spanish, and Catalan. In addition to monolingual experiments, we combine the training data in multilingual experiments and train two joint models: one for the Slavic languages and one for all the languages together. We rely on an end-to-end deep learning model that we slightly adapted for the CorefUD corpus. Our results show that we can profit from the harmonized annotations, and that using joint models helps significantly for languages with smaller training data.
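The abstract describes pooling per-language training data to train the joint models. The sketch below is not the authors' code; it only illustrates, under assumed (hypothetical) file paths, how CorefUD-style CoNLL-U training files for several languages could be concatenated into one joint training file, with a comment line marking each document's language so examples remain distinguishable downstream.

```python
# Minimal sketch (hypothetical paths, not the authors' pipeline):
# build a joint training corpus from per-language CorefUD CoNLL-U files.
from pathlib import Path
from typing import Dict

# Hypothetical per-language training files for the "Slavic" joint model.
TRAIN_FILES: Dict[str, Path] = {
    "cs": Path("data/cs-train.conllu"),
    "ru": Path("data/ru-train.conllu"),
    "pl": Path("data/pl-train.conllu"),
}


def build_joint_corpus(files: Dict[str, Path], out_path: Path) -> None:
    """Concatenate per-language files into one training file, prefixing each
    language's block with a CoNLL-U comment line recording its language code."""
    with out_path.open("w", encoding="utf-8") as out:
        for lang, path in files.items():
            text = path.read_text(encoding="utf-8")
            # CoNLL-U comment lines start with '#'.
            out.write(f"# joint_corpus_language = {lang}\n")
            out.write(text)
            if not text.endswith("\n"):
                out.write("\n")


if __name__ == "__main__":
    build_joint_corpus(TRAIN_FILES, Path("data/slavic-train.conllu"))
```

The joint file produced this way could then be fed to a single end-to-end coreference model, which is the setting in which the paper reports the largest gains for languages with little training data.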