Improving Text Classification with Transformer

Gokhan Soyalp, Artun Alar, Kaan Ozkanli, Beytullah Yildiz
DOI: 10.1109/UBMK52708.2021.9558906
Published in: 2021 6th International Conference on Computer Science and Engineering (UBMK), 2021-09-15
Citations: 5

Abstract

Huge amounts of text data are produced every day. Processing text data that accumulates and grows exponentially every day requires the use of appropriate automation tools. Text classification, a Natural Language Processing task, has the potential to provide automatic text data processing. Many new models have been proposed to achieve much better results in text classification. The transformer model has been introduced recently to provide superior performance in terms of accuracy and processing speed in deep learning. In this article, we propose an improved Transformer model for text classification. The dataset containing information about the books was collected from an online resource and used to train the models. We witnessed superior performance in our proposed Transformer model compared to previous state-of-the-art models such as LSTM and CNN.
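The Transformer the abstract refers to is built on scaled dot-product self-attention, where each token's representation becomes a weighted sum of all token values. As a rough illustration of that core mechanism only (not the authors' architecture, whose details are not given here), a minimal NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity scores
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax: rows sum to 1
    return weights @ V                              # weighted sum of value vectors

# Toy self-attention over 3 tokens with 4-dimensional embeddings (Q = K = V)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one contextualized vector per token
```

In a classifier, blocks like this are stacked and the resulting token representations are pooled and fed to a softmax output layer over the class labels.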