A Machine Learning Approach of Text Classification for High- and Low-Resource Languages

IF 1.7 | CAS Partition 4, Computer Science | JCR Q3, Computer Science, Artificial Intelligence
Muhammad Owais Raza, Naeem Ahmed Mahoto, Asadullah Shaikh, Nazia Pathan, Hani Alshahrani, M. A. Elmagzoub
{"title":"A Machine Learning Approach of Text Classification for High- and Low-Resource Languages","authors":"Muhammad Owais Raza,&nbsp;Naeem Ahmed Mahoto,&nbsp;Asadullah Shaikh,&nbsp;Nazia Pathan,&nbsp;Hani Alshahrani,&nbsp;M. A. Elmagzoub","doi":"10.1111/coin.70114","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>A large amount of data have been published online in textual format for the last decade because of the advancement of information and communication technologies. This is an open challenge to organize and classify large amounts of textual data automatically, especially for a language that has limited resources available online. In this study, two types of approaches are adopted for experiments. First one is a traditional strategy that uses six (06) classical state-of-the-art classification models (1. decision tree (DT), 2. logistic regression (LR), 3. support vector machine (SVM), 4. k-nearest neighbour (k-NN), 5. Naive Bayes (NB), and 6. random forest (RF)) along with two (02) ensemble methods (1. Adaboost and 2. gradient boosting (GB)) and second modeling technique is our proposed voting based ensembling scheme. Models are trained on a 75-25 split where 75% of data is used for training and 25% for testing. The evaluation of the classification models is carried out based on accuracy, precision, recall, and F1-score indexes. The experimental outcomes witnessed that for the traditional approach, gradient boosting outperformed for the limited resource language with 98.08% F1-score, while SVM performed better (97.34% F1-score) for the resource-rich language.</p>\n </div>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"41 4","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2025-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/coin.70114","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Over the last decade, advances in information and communication technologies have led to a large amount of data being published online in textual form. Organizing and classifying such large volumes of textual data automatically remains an open challenge, especially for languages with limited resources available online. In this study, two types of approaches are adopted for the experiments. The first is a traditional strategy that uses six classical state-of-the-art classification models (decision tree (DT), logistic regression (LR), support vector machine (SVM), k-nearest neighbour (k-NN), Naive Bayes (NB), and random forest (RF)) along with two ensemble methods (AdaBoost and gradient boosting (GB)); the second is our proposed voting-based ensembling scheme. Models are trained on a 75-25 split, with 75% of the data used for training and 25% for testing. The classification models are evaluated in terms of accuracy, precision, recall, and F1-score. The experimental outcomes show that, for the traditional approach, gradient boosting performed best for the limited-resource language with a 98.08% F1-score, while SVM performed best (97.34% F1-score) for the resource-rich language.
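
The abstract names the classifiers, the 75/25 train-test split, and the voting-based ensemble, but gives no implementation details. The following is a minimal sketch of how such a setup could be reproduced, assuming scikit-learn, a TF-IDF feature representation (not specified in the abstract), hard majority voting for the proposed ensemble, and a hypothetical load_corpus() helper that returns labelled documents.

```python
# Minimal sketch of the experimental setup described in the abstract, assuming
# scikit-learn and a TF-IDF representation (the feature extraction method and
# the exact composition of the voting ensemble are not stated in the abstract).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import (
    RandomForestClassifier,
    AdaBoostClassifier,
    GradientBoostingClassifier,
    VotingClassifier,
)
from sklearn.metrics import classification_report

# Hypothetical loader: returns a list of documents and their category labels.
texts, labels = load_corpus()

# 75% training / 25% testing, as stated in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

# Six classical classifiers plus the two ensemble methods from the abstract.
base_models = {
    "dt": DecisionTreeClassifier(),
    "lr": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),                  # linear SVM, a common choice for text
    "knn": KNeighborsClassifier(),
    "nb": MultinomialNB(),
    "rf": RandomForestClassifier(),
    "adaboost": AdaBoostClassifier(),
    "gb": GradientBoostingClassifier(),
}

# A hard-voting ensemble over the base classifiers; the paper's voting scheme
# may differ in membership and weighting.
voting = VotingClassifier(estimators=list(base_models.items()), voting="hard")

for name, clf in {**base_models, "voting": voting}.items():
    model = make_pipeline(TfidfVectorizer(), clf)  # vectorize, then classify
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(f"=== {name} ===")
    # Reports accuracy, precision, recall, and F1-score per class.
    print(classification_report(y_test, y_pred, digits=4))
```

The actual study may use different feature extraction, hyperparameters, and a different membership or weighting for its voting scheme; the sketch only mirrors the pipeline structure described in the abstract.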

Source Journal

Computational Intelligence (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 6.90
Self-citation rate: 3.60%
Articles per year: 65
Review time: >12 weeks
Journal description: This leading international journal promotes and stimulates research in the field of artificial intelligence (AI). Covering a wide range of issues - from the tools and languages of AI to its philosophical implications - Computational Intelligence provides a vigorous forum for the publication of both experimental and theoretical research, as well as surveys and impact studies. The journal is designed to meet the needs of a wide range of AI workers in academic and industrial research.