hmBERT: Historical Multilingual Language Models for Named Entity Recognition

Stefan Schweter, Luisa März, Katharina Schmid, Erion Çano
{"title":"hmBERT: Historical Multilingual Language Models for Named Entity Recognition","authors":"Stefan Schweter, Luisa März, Katharina Schmid, Erion cCano","doi":"10.48550/arXiv.2205.15575","DOIUrl":null,"url":null,"abstract":"Compared to standard Named Entity Recognition (NER), identifying persons, locations, and organizations in historical texts constitutes a big challenge. To obtain machine-readable corpora, the historical text is usually scanned and Optical Character Recognition (OCR) needs to be performed. As a result, the historical corpora contain errors. Also, entities like location or organization can change over time, which poses another challenge. Overall, historical texts come with several peculiarities that differ greatly from modern texts and large labeled corpora for training a neural tagger are hardly available for this domain. In this work, we tackle NER for historical German, English, French, Swedish, and Finnish by training large historical language models. We circumvent the need for large amounts of labeled data by using unlabeled data for pretraining a language model. We propose hmBERT, a historical multilingual BERT-based language model, and release the model in several versions of different sizes. Furthermore, we evaluate the capability of hmBERT by solving downstream NER as part of this year's HIPE-2022 shared task and provide detailed analysis and insights. For the Multilingual Classical Commentary coarse-grained NER challenge, our tagger HISTeria outperforms the other teams' models for two out of three languages.","PeriodicalId":232729,"journal":{"name":"Conference and Labs of the Evaluation Forum","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference and Labs of the Evaluation Forum","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2205.15575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Compared to standard Named Entity Recognition (NER), identifying persons, locations, and organizations in historical texts poses a major challenge. To obtain machine-readable corpora, historical texts are usually scanned and processed with Optical Character Recognition (OCR); as a result, the historical corpora contain recognition errors. Moreover, entities such as locations or organizations can change over time, which presents another challenge. Overall, historical texts exhibit several peculiarities that differ greatly from modern texts, and large labeled corpora for training a neural tagger are hardly available for this domain. In this work, we tackle NER for historical German, English, French, Swedish, and Finnish by training large historical language models. We circumvent the need for large amounts of labeled data by using unlabeled data to pretrain a language model. We propose hmBERT, a historical multilingual BERT-based language model, and release it in several sizes. Furthermore, we evaluate the capability of hmBERT on downstream NER as part of the HIPE-2022 shared task and provide detailed analysis and insights. For the Multilingual Classical Commentary coarse-grained NER challenge, our tagger HISTeria outperforms the other teams' models for two out of three languages.
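
The described workflow, pretraining a BERT model on unlabeled historical text and then fine-tuning it for NER, maps directly onto standard token-classification tooling. Below is a minimal sketch of the fine-tuning setup using the Hugging Face transformers library; the hub model ID and the coarse-grained label set are assumptions for illustration and are not taken from the paper, so check the Hub for the exact checkpoint names and the HIPE-2022 data for the actual tag scheme.

```python
# Minimal sketch: attaching a token-classification (NER) head to an hmBERT
# checkpoint with Hugging Face transformers. The model ID below is an assumed
# Hub name for the released hmBERT base model; the label set is a generic
# coarse-grained PER/LOC/ORG scheme, not necessarily the HIPE-2022 one.
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_ID = "dbmdz/bert-base-historic-multilingual-cased"  # assumed Hub ID
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Tokenize a (possibly OCR-noisy) historical sentence and tag it. Before
# fine-tuning on labeled NER data, the classification head is randomly
# initialized, so these predictions are not yet meaningful.
inputs = tokenizer("Die Sitzung fand in München statt.", return_tensors="pt")
predicted_ids = model(**inputs).logits.argmax(dim=-1)
```

Fine-tuning then proceeds as usual for token classification (e.g., with the transformers Trainer on word-aligned labels); the point of hmBERT is that the pretrained encoder has already seen historical, OCR-noisy text in the five target languages, so far less labeled data is needed downstream.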