Alexander Shan, John Bauer, Riley Carlson, Christopher D. Manning
{"title":"英语 \"命名实体识别器在全球英语中效果好吗?","authors":"Alexander Shan, John Bauer, Riley Carlson, Christopher D. Manning","doi":"10.18653/v1/2023.findings-emnlp.788","DOIUrl":null,"url":null,"abstract":"The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American focused dataset OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops-over 10 F1 in some cases-when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets.","PeriodicalId":505350,"journal":{"name":"Conference on Empirical Methods in Natural Language Processing","volume":"115 36","pages":"11778-11791"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Do \\\"English\\\" Named Entity Recognizers Work Well on Global Englishes?\",\"authors\":\"Alexander Shan, John Bauer, Riley Carlson, Christopher D. 
Manning\",\"doi\":\"10.18653/v1/2023.findings-emnlp.788\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American focused dataset OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops-over 10 F1 in some cases-when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. 
Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets.\",\"PeriodicalId\":505350,\"journal\":{\"name\":\"Conference on Empirical Methods in Natural Language Processing\",\"volume\":\"115 36\",\"pages\":\"11778-11791\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Conference on Empirical Methods in Natural Language Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.18653/v1/2023.findings-emnlp.788\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Empirical Methods in Natural Language Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2023.findings-emnlp.788","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Do "English" Named Entity Recognizers Work Well on Global Englishes?
The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American focused dataset OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops-over 10 F1 in some cases-when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets.
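The comparisons above are reported in entity-level F1, where a predicted mention counts as correct only if both its span boundaries and its label match the gold annotation exactly. As an illustration of that metric (a minimal sketch, not the paper's actual scorer, and with invented toy spans), the computation can be written as:

```python
from collections import namedtuple

# A gold or predicted entity mention: token offsets plus a type label.
# The spans below are hypothetical examples, not data from the paper.
Span = namedtuple("Span", ["start", "end", "label"])

def entity_f1(gold, pred):
    """Entity-level precision, recall, and F1: a prediction is a true
    positive only if boundaries AND label match a gold span exactly."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    if precision + recall == 0.0:
        return 0.0, 0.0, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example: the model finds one entity correctly, mislabels a
# second (LOC predicted as ORG), and misses a third entirely.
gold = [Span(0, 2, "ORG"), Span(5, 6, "LOC"), Span(9, 10, "PER")]
pred = [Span(0, 2, "ORG"), Span(5, 6, "ORG")]
p, r, f = entity_f1(gold, pred)  # precision 0.5, recall 1/3, F1 0.4
```

Under this strict matching, a mislabeled span hurts both precision and recall, which is why regional naming conventions unfamiliar to a CoNLL- or OntoNotes-trained model can produce the multi-point F1 drops the abstract describes.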