BanglaLem: A Transformer-based Bangla Lemmatizer with an Enhanced Dataset

Impact Factor: 3.6
Md Fuadul Islam, Jakir Hasan, Md Ashikul Islam, Prato Dewan, M. Shahidur Rahman
Journal: Systems and Soft Computing, Volume 7, Article 200244
DOI: 10.1016/j.sasc.2025.200244
Publication date: 2025-04-22
Full text: https://www.sciencedirect.com/science/article/pii/S2772941925000626
Citations: 0

Abstract

Lemmatization plays a crucial role in various natural language processing (NLP) tasks, such as information retrieval, sentiment analysis, text summarization, and text classification. However, Bangla lemmatization remains particularly challenging due to the language’s rich morphology and high inflectional complexity. Existing open-access datasets for Bangla lemmatization are limited in size, with the largest containing only 22353 unique inflected words, which constrains the effectiveness of data-driven neural models. To address this limitation, we introduce a novel dataset, BanglaLem, comprising 96040 frequently used inflected words. This dataset has been carefully curated and annotated through a rigorous selection process to enhance the accuracy and efficiency of Bangla lemmatization. Furthermore, we propose a transformer-based approach to lemmatization and evaluate the performance of various pre-trained and trained from-scratch transformer models on this dataset. Among these, the BanglaT5 model achieved the highest exact match accuracy of 94.42% on the test set. The BanglaLem dataset is publicly accessible via the following link.
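Exact match accuracy, the metric reported above, simply counts the fraction of predicted lemmas that are character-for-character identical to the gold lemma. A minimal sketch of that computation (the function name and the sample word pairs are illustrative, not taken from the paper):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predicted lemmas identical to the gold lemma."""
    if not references or len(predictions) != len(references):
        raise ValueError("prediction and reference lists must be non-empty and equal in length")
    correct = sum(pred == gold for pred, gold in zip(predictions, references))
    return correct / len(references)

# Illustrative (non-Bangla) example: 3 of 4 predictions match exactly.
preds = ["run", "go", "eat", "see"]
golds = ["run", "go", "eat", "saw"]
print(round(exact_match_accuracy(preds, golds), 2))  # 0.75
```

Because the metric gives no credit for near misses (a single wrong character counts as a full error), the 94.42% figure reported for BanglaT5 is a strict measure of lemmatization quality.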
Source journal: Systems and Soft Computing (CiteScore 2.20, self-citation rate 0.00%)