AraFastQA: a transformer model for question-answering for Arabic language using few-shot learning

IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Asmaa Alrayzah, Fawaz Alsolami, Mostafa Saleh
{"title":"AraFastQA: a transformer model for question-answering for Arabic language using few-shot learning","authors":"Asmaa Alrayzah ,&nbsp;Fawaz Alsolami ,&nbsp;Mostafa Saleh","doi":"10.1016/j.csl.2025.101857","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, numerous studies have developed pre-trained language models (PLMs) for Arabic natural language processing (NLP) tasks, including question-answering (QA), but often overlooking the challenge of data scarcity. This study introduces the Arabic Few-Shot QA (AraFastQA) pre-trained language model to confront the challenge of limited resources in Arabic QA tasks. The primary contributions of this study involve developing an PLM based on a few-shot learning (FSL) approach to address the challenge of low-resource datasets in Arabic QA. Moreover, this study contributes to the developing of Arabic benchmark few-shot QA datasets. By using the few-shot datasets, we compare the AraFastQA PLM with the state-of-art Arabic PLMs such that AraBERT, AraELECTRA, and XLM-Roberta. We evaluated AraFastQA and state-of-art models on two Arabic benchmark datasets that are Arabic reading comprehension (ARCD) and the typologically diverse question answering (TyDiQA). The obtained experimental results show that AraFastQA outperforms other models across eight training sample sizes of the Arabic benchmark datasets. For instance, our proposed PLM achieves 73.2 of F1-score on TyDi QA with only 1024 training examples while the highest accuracy of other models (AraELECTRA) achieves 56.1. For the full training dataset of ARCD dataset, AraFastQA improves accuracy by 9 %, 3 %, and 10 % of AraBERT, AraELECTRA, and XLM-Roberta respectively.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"95 ","pages":"Article 101857"},"PeriodicalIF":3.1000,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230825000828","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, numerous studies have developed pre-trained language models (PLMs) for Arabic natural language processing (NLP) tasks, including question answering (QA), but they often overlook the challenge of data scarcity. This study introduces the Arabic Few-Shot QA (AraFastQA) pre-trained language model to confront the challenge of limited resources in Arabic QA tasks. The primary contributions of this study are a PLM based on a few-shot learning (FSL) approach that addresses the challenge of low-resource datasets in Arabic QA, and the development of Arabic benchmark few-shot QA datasets. Using these few-shot datasets, we compare the AraFastQA PLM with state-of-the-art Arabic PLMs such as AraBERT, AraELECTRA, and XLM-RoBERTa. We evaluated AraFastQA and the state-of-the-art models on two Arabic benchmark datasets: the Arabic reading comprehension dataset (ARCD) and the typologically diverse question answering dataset (TyDi QA). The experimental results show that AraFastQA outperforms the other models across eight training sample sizes on the Arabic benchmark datasets. For instance, our proposed PLM achieves an F1-score of 73.2 on TyDi QA with only 1024 training examples, while the best-performing competing model (AraELECTRA) achieves 56.1. On the full ARCD training set, AraFastQA improves accuracy by 9%, 3%, and 10% over AraBERT, AraELECTRA, and XLM-RoBERTa, respectively.
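The paper's own code is not reproduced on this page, but the few-shot protocol the abstract describes (drawing a fixed subset of, say, 1024 training examples and fine-tuning a PLM for extractive QA) can be sketched as follows. This is a minimal illustration using the Hugging Face `datasets` and `transformers` libraries; the AraBERT checkpoint name stands in as an illustrative baseline and is not the authors' AraFastQA model.

```python
# Minimal sketch of a few-shot fine-tuning setup for Arabic extractive QA,
# assuming the Hugging Face `datasets` and `transformers` libraries.
# The checkpoint below is a stand-in baseline, not AraFastQA itself.
import random

from datasets import load_dataset
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # illustrative baseline PLM
FEW_SHOT_SIZE = 1024                           # one of the eight sample sizes

# TyDi QA gold-passage (SQuAD-style) split; keep only the Arabic examples.
dataset = load_dataset("tydiqa", "secondary_task")
arabic_train = dataset["train"].filter(lambda ex: ex["id"].startswith("arabic"))

# Few-shot condition: a fixed random subset of the training data.
random.seed(0)
indices = random.sample(range(len(arabic_train)), FEW_SHOT_SIZE)
few_shot_train = arabic_train.select(indices)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)
# From here, tokenize (question, context) pairs, map gold answers to
# start/end token positions, and fine-tune with transformers.Trainer.
```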
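The F1 figures quoted above (73.2 vs. 56.1) are, by convention for extractive QA benchmarks like TyDi QA and ARCD, token-overlap F1 between the predicted and gold answer spans; assuming the standard SQuAD-style definition, the metric is:

```python
# SQuAD-style token-overlap F1 for a single (prediction, gold) answer pair.
from collections import Counter


def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall between spans."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    if not pred_tokens or not gold_tokens:
        # If either span is empty, F1 is 1.0 only when both are empty.
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

A dataset-level score is then the average of per-question F1, typically taking the maximum over multiple gold answers when available.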
Source journal
Computer Speech and Language (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 4.70%
Articles per year: 80
Review time: 22.9 weeks
Journal introduction: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.