Pretrained Neural Models for Turkish Text Classification

Halil Ibrahim Okur, A. Sertbas
{"title":"Pretrained Neural Models for Turkish Text Classification","authors":"Halil Ibrahim Okur, A. Sertbas","doi":"10.1109/UBMK52708.2021.9558878","DOIUrl":null,"url":null,"abstract":"In the text classification process, which is a sub-task of NLP, the preprocessing and indexing of the text has a direct determining effect on the performance for NLP models. When the studies on pre-trained models are examined, it is seen that the changes made on the models developed for world languages or training the same model with a Turkish text dataset. Word-embedding is considered to be the most critical point of the text processing problem. The two most popular word embedding methods today are Word2Vec and Glove, which embed words into a corpus using multidimensional vectors. BERT, Electra and Fastext models, which have a contextual word representation method and a deep neural network architecture, have been frequently used in the creation of pre-trained models recently. In this study, the use and performance results of pre-trained models on TTC-3600 and TRT-Haber text sets prepared for Turkish text classification NLP task are shown. By using pre-trained models obtained with large corpus, a certain time and hardware cost, the text classification process is performed with less effort and high performance.","PeriodicalId":106516,"journal":{"name":"2021 6th International Conference on Computer Science and Engineering (UBMK)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 6th International Conference on Computer Science and Engineering (UBMK)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/UBMK52708.2021.9558878","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In text classification, a sub-task of NLP, the preprocessing and indexing of the text have a direct, determining effect on the performance of NLP models. A review of studies on pre-trained models shows that they either adapt models developed for major world languages or train the same architectures on a Turkish text dataset. Word embedding is considered the most critical point of the text-processing problem. The two most popular word embedding methods today are Word2Vec and GloVe, which represent the words of a corpus as multidimensional vectors. BERT, ELECTRA, and FastText, which pair a contextual word representation method with a deep neural network architecture, have recently been used frequently to create pre-trained models. In this study, the use and performance results of pre-trained models on the TTC-3600 and TRT-Haber text sets, prepared for the Turkish text classification task, are presented. By using pre-trained models obtained from large corpora at a one-time cost in training time and hardware, the text classification task can be performed with less effort and high performance.
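As a rough illustration of the approach the abstract describes, the sketch below fine-tunes a pre-trained Turkish BERT checkpoint for news-topic classification with the Hugging Face transformers library. The checkpoint name (dbmdz/bert-base-turkish-cased, a community BERTurk model), the label set, the toy training rows, and the hyperparameters are all assumptions for illustration; the paper's exact models, preprocessing, and the real TTC-3600 / TRT-Haber data are not reproduced here.

```python
# Minimal sketch: fine-tune a pre-trained Turkish BERT for text
# classification. Checkpoint, labels, and toy data are illustrative
# assumptions, not the paper's actual experimental setup.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dbmdz/bert-base-turkish-cased"          # assumed checkpoint
LABELS = ["ekonomi", "spor", "teknoloji"]             # hypothetical categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

# Toy (text, label index) pairs standing in for dataset rows.
train_data = [
    ("Dolar kuru bugün yükseldi.", 0),
    ("Galatasaray maçı kazandı.", 1),
    ("Yeni telefon modeli tanıtıldı.", 2),
]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for text, label in train_data:
        # Tokenize one example and compute the classification loss.
        batch = tokenizer(text, truncation=True, return_tensors="pt")
        outputs = model(**batch, labels=torch.tensor([label]))
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: pick the highest-scoring class for a new headline.
model.eval()
with torch.no_grad():
    batch = tokenizer("Borsa güne düşüşle başladı.", return_tensors="pt")
    pred = model(**batch).logits.argmax(dim=-1).item()
print(LABELS[pred])
```

In practice one would batch the dataset with a DataLoader and evaluate on a held-out split, but the sketch shows the core idea the abstract leans on: the pre-trained encoder already carries the expensive language knowledge, so only a lightweight classification head needs task-specific training.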