OdNER: NER resource creation and system development for low-resource Odia language

Tusarkanta Dalai, Anupam Das, Tapas Kumar Mishra, Pankaj Kumar Sa

Natural Language Processing Journal, Volume 11, Article 100139
Published: 2025-03-18
DOI: 10.1016/j.nlp.2025.100139
URL: https://www.sciencedirect.com/science/article/pii/S2949719125000159
Citations: 0
Abstract
This work aims to enhance the usability of natural language processing (NLP) based systems for the low-resource Odia language by focusing on the development of an effective named entity recognition (NER) system. NLP applications rely heavily on NER to extract relevant information from massive amounts of unstructured text. NER refers to the task of identifying the named entities in a given text and classifying them into a set of predetermined categories. The NER task has already achieved productive results in English as well as in a number of other European languages. However, because of a lack of supporting tools and resources, it has not yet been thoroughly investigated in Indian languages, particularly Odia. Recently, approaches based on machine learning (ML) and deep learning (DL) have demonstrated exceptional performance on NLP tasks. Moreover, transformer models, particularly masked language models (MLM), have shown remarkable efficacy on the NER task; nevertheless, these methods generally require massive volumes of annotated text. Unfortunately, we could not find any open-source NER corpus for the Odia language. The purpose of this research is to compile OdNER, a NER dataset with quality baselines for the low-resource Odia language. The OdNER corpus contains 48,000 sentences comprising 671,354 tokens and 98,116 named entities annotated with 12 tags. To establish the quality of our corpus, we use conditional random field (CRF) and BiLSTM models as baselines. To demonstrate the efficacy of our dataset, we conduct a comparative evaluation of several transformer-based multilingual language models (IndicBERT, MuRIL, XLM-R), using them to carry out the sequence-labeling task for NER.
With the pre-trained multilingual XLM-R model, our dataset achieves a maximum F1 score of 90.48%. To the best of our knowledge, no other work on Odia NER matches the quality and quantity of ours. We anticipate that this work will constitute substantial progress toward the development of NLP tools for the Odia language.
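The abstract frames NER as a sequence-labeling task scored by F1. A minimal sketch of how such scoring is commonly done: entities are marked with BIO tags, spans are extracted, and span-level precision, recall, and F1 are computed. The entity types below (PER, LOC) are illustrative assumptions; the paper's actual 12-tag scheme is not reproduced in this excerpt, and the helper names are hypothetical.

```python
def bio_spans(tags):
    """Extract (start, end, type) entity spans from a BIO tag sequence.

    A span covers token indices [start, end). Illustrative helper; not
    the paper's implementation.
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open span
                spans.append((start, i, etype))
            start, etype = i, tag[2:]      # open a new span
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue                       # span continues
        else:
            if start is not None:          # "O" or ill-formed "I-" closes the span
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:                  # span running to the end of the sentence
        spans.append((start, len(tags), etype))
    return spans


def span_f1(gold, pred):
    """Micro-averaged span-level precision, recall, and F1.

    A predicted span counts as correct only if both its boundaries and
    its entity type exactly match a gold span.
    """
    g, p = set(bio_spans(gold)), set(bio_spans(pred))
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For example, if the gold tags are `["B-PER", "I-PER", "O", "B-LOC"]` and the model predicts `["B-PER", "I-PER", "O", "O"]`, the person span is found but the location span is missed, giving precision 1.0 and recall 0.5. Libraries such as seqeval implement this span-level scoring for published results.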