Extractive Question Answering for Kazakh Language

Magzhan Shymbayev, Yermek Alimzhanov
{"title":"Extractive Question Answering for Kazakh Language","authors":"Magzhan Shymbayev, Yermek Alimzhanov","doi":"10.1109/SIST58284.2023.10223508","DOIUrl":null,"url":null,"abstract":"This article provides research and development of an extractive question answering system based on the BERT-like model for the Kazakh language. Developing an extractive question answering system requires large training datasets - tens of thousands of annotated question-answer pairs. Such datasets are not available in the majority of languages, including Kazakh. To address this issue, the Kazakh Question Answering Dataset (KazQA) is introduced, which is based on the Stanford Question Answering Dataset (SQuAD) and generated through machine translation using the Google Cloud Translation API. Different large pretrained contextual language models are used as the baseline models - ALBERT and multilingual BERT and are compared with the newly trained monolingual Kazakh model KazBERT. The results demonstrate that the proposed approach can effectively generate question answering systems in low-resourced Kazakh language.","PeriodicalId":367406,"journal":{"name":"2023 IEEE International Conference on Smart Information Systems and Technologies (SIST)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Smart Information Systems and Technologies (SIST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIST58284.2023.10223508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This article presents the research and development of an extractive question answering system for the Kazakh language based on a BERT-like model. Developing an extractive question answering system requires a large training dataset of tens of thousands of annotated question-answer pairs, and such datasets are not available for the majority of languages, including Kazakh. To address this issue, the Kazakh Question Answering Dataset (KazQA) is introduced; it is derived from the Stanford Question Answering Dataset (SQuAD) through machine translation with the Google Cloud Translation API. Two large pretrained contextual language models, ALBERT and multilingual BERT, serve as baselines and are compared with the newly trained monolingual Kazakh model KazBERT. The results demonstrate that the proposed approach can effectively produce question answering systems for the low-resource Kazakh language.
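The following is a minimal sketch of the approach outlined in the abstract: translating a SQuAD-style example into Kazakh with the Google Cloud Translation API and running extractive question answering with a multilingual BERT checkpoint. It assumes the `google-cloud-translate` and `transformers` packages and configured Cloud credentials; the example fields, the naive answer-span realignment, and the placeholder checkpoint name are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch only: illustrates machine-translating one SQuAD-style entry to Kazakh
# and querying an extractive QA model. Not the authors' code.
from google.cloud import translate_v2 as translate
from transformers import pipeline


def translate_squad_example(example, client, target="kk"):
    """Translate the context, question, and answer text of one SQuAD-style entry."""
    translated = {}
    for field in ("context", "question", "answer_text"):
        result = client.translate(
            example[field], source_language="en", target_language=target
        )
        translated[field] = result["translatedText"]
    # Answer character offsets shift after translation; re-locate the answer in
    # the translated context (a naive search -- a real pipeline needs alignment
    # heuristics and must drop examples where the span can no longer be found).
    translated["answer_start"] = translated["context"].find(translated["answer_text"])
    return translated


if __name__ == "__main__":
    client = translate.Client()
    example = {
        "context": "Almaty is the largest city in Kazakhstan.",
        "question": "What is the largest city in Kazakhstan?",
        "answer_text": "Almaty",
    }
    kaz_example = translate_squad_example(example, client)

    # Extractive QA baseline: in the paper a model fine-tuned on the translated
    # KazQA data would be loaded here; "bert-base-multilingual-cased" is only a
    # placeholder checkpoint for the sketch.
    qa = pipeline("question-answering", model="bert-base-multilingual-cased")
    prediction = qa(question=kaz_example["question"], context=kaz_example["context"])
    print(prediction["answer"], prediction["score"])
```

In the paper's setting, the full SQuAD training and development splits are translated in this manner to form KazQA, and ALBERT, multilingual BERT, and KazBERT are then fine-tuned and evaluated on it; the snippet above only illustrates the translation and span-extraction mechanics.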