OdNER: NER resource creation and system development for low-resource Odia language

Tusarkanta Dalai, Anupam Das, Tapas Kumar Mishra, Pankaj Kumar Sa
{"title":"OdNER: NER resource creation and system development for low-resource Odia language","authors":"Tusarkanta Dalai ,&nbsp;Anupam Das ,&nbsp;Tapas Kumar Mishra ,&nbsp;Pankaj Kumar Sa","doi":"10.1016/j.nlp.2025.100139","DOIUrl":null,"url":null,"abstract":"<div><div>This work aims to enhance the usability of natural language processing (NLP) based systems for the low-resource Odia language by focusing on the development of effective named entity recognition (NER) system. NLP applications rely heavily on NER to extract relevant information from massive amounts of unstructured text. The task of identifying and classifying the named entities included in a given text into a set of predetermined categories is referred to as NER. Already, the NER task has accomplished productive results in English as well as in a number of other European languages. On the other hand, because of a lack of supporting tools and resources, it has not yet been thoroughly investigated in Indian languages, particularly the Odia language. Recently, approaches based on machine learning (ML) and deep learning (DL) have demonstrated exceptional performance when it comes to constructing NLP tasks. Moreover, transformer models, particularly masked-language models (MLM), have demonstrated remarkable efficacy in the NER task; nevertheless, these methods generally call for massive volumes of annotated corpus. Unfortunately, we could not find any open-source NER corpus for the Odia language. The purpose of this research is to compile OdNER, a NER dataset with quality baselines for the low-resource Odia language. The Odia NER corpus OdNER contains 48,000 sentences having 6,71,354 tokens and 98,116 name entities annotated with 12 tags. To establish the quality of our corpus, we use conditional random field (CRF) and BiLSTM model as our baseline models. To demonstrate the efficacy of our dataset, we conduct a comparative evaluation of various transformer-based multilingual language models (IndicBERT, MuRIL, XLM-R) and utilize them to carry out the sequence labeling task for NER. With the pre-trained XLM-R multilingual model, our dataset achieves a maximum F1 score of 90.48%. When it comes to Odia NER, no other work comes close to matching the quality and quantity of ours. We anticipate that, this work will have made substantial progress toward the development of NLP tasks for the Odia language.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100139"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949719125000159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This work aims to enhance the usability of natural language processing (NLP) systems for the low-resource Odia language by focusing on the development of an effective named entity recognition (NER) system. NLP applications rely heavily on NER to extract relevant information from massive amounts of unstructured text. NER is the task of identifying the named entities in a given text and classifying them into a set of predetermined categories. The NER task has already achieved productive results in English and in a number of other European languages. However, because of a lack of supporting tools and resources, it has not yet been thoroughly investigated for Indian languages, particularly Odia. Recently, approaches based on machine learning (ML) and deep learning (DL) have demonstrated exceptional performance on NLP tasks. Moreover, transformer models, particularly masked language models (MLM), have shown remarkable efficacy on the NER task; nevertheless, these methods generally require large volumes of annotated corpora. Unfortunately, we could not find any open-source NER corpus for the Odia language. The purpose of this research is to compile OdNER, a NER dataset with quality baselines for the low-resource Odia language. The OdNER corpus contains 48,000 sentences comprising 671,354 tokens and 98,116 named entities annotated with 12 tags. To establish the quality of our corpus, we use conditional random field (CRF) and BiLSTM models as baselines. To demonstrate the efficacy of our dataset, we conduct a comparative evaluation of several transformer-based multilingual language models (IndicBERT, MuRIL, XLM-R) on the NER sequence labeling task. With the pre-trained XLM-R multilingual model, our dataset achieves a maximum F1 score of 90.48%. To the best of our knowledge, no existing Odia NER resource matches ours in quality and quantity. We anticipate that this work will contribute substantially to the development of NLP tools for the Odia language.
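
The abstract describes fine-tuning pre-trained multilingual transformers (IndicBERT, MuRIL, XLM-R) as token-classification models on OdNER. The sketch below shows one plausible way such an experiment could be set up with the Hugging Face `transformers` library, assuming a CoNLL-style export of the corpus with `tokens` and `ner_tags` columns. The file names, the reduced label set, and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tune XLM-R for NER as token classification.
# Assumptions (not from the paper): file names, a reduced label set
# (the real OdNER corpus uses 12 tags), and hyperparameters.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer,
                          DataCollatorForTokenClassification)
from datasets import load_dataset

MODEL_NAME = "xlm-roberta-base"  # XLM-R gave the paper's best F1 (90.48%)

# Hypothetical BIO label set for illustration only.
LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
label2id = {l: i for i, l in enumerate(LABELS)}
id2label = {i: l for l, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS),
    id2label=id2label, label2id=label2id)

def tokenize_and_align(batch):
    # Subword tokenization splits Odia words; label only the first subword
    # of each word and mask the rest with -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids, prev, labels = enc.word_ids(batch_index=i), None, []
        for wid in word_ids:
            if wid is None or wid == prev:
                labels.append(-100)
            else:
                labels.append(label2id[tags[wid]])
            prev = wid
        all_labels.append(labels)
    enc["labels"] = all_labels
    return enc

# "odner_train.json" / "odner_dev.json" are hypothetical JSON-Lines exports
# of the corpus, one sentence per record with "tokens" and "ner_tags" lists.
raw = load_dataset("json", data_files={"train": "odner_train.json",
                                       "validation": "odner_dev.json"})
data = raw.map(tokenize_and_align, batched=True,
               remove_columns=raw["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("odner-xlmr", learning_rate=2e-5,
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
print(trainer.evaluate())  # token-level eval loss; span-level F1 as reported
                           # in the paper would typically use seqeval
```

In this setup the same script could be pointed at other checkpoints (e.g. `google/muril-base-cased` or an IndicBERT variant) to reproduce a comparative evaluation like the one the abstract reports.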