Bilingual IT Service Desk Ticket Classification Using Language Model Pre-training Techniques

Published in: 2021 16th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP), 21 December 2021.
DOI: 10.1109/iSAI-NLP54397.2021.9678179

Abstract

Language model pre-training techniques have been successfully applied to several natural language processing and text-mining tasks. However, existing published studies on automatic IT service desk ticket categorization were mostly conducted using the traditional bag-of-words (BoW) model and focused on tickets written in a single language. Therefore, this paper examines the application of state-of-the-art language model pre-training approaches to automatically determine the service category of bilingual IT service desk tickets, particularly tickets that contain Thai and/or English text. Three well-known algorithms, mBERT, ULMFiT, and XLM-R, are investigated in this study using an in-house real-world dataset. Three ensemble methods with bag-of-words text representation are used as performance evaluation baselines. According to our experimental results, language model pre-training techniques are superior to the BoW-based ensemble methods for bilingual IT ticket categorization tasks. XLM-R gives the highest overall performance at 87.02% accuracy and 86.96% F1-score on the test dataset, followed by ULMFiT, mBERT, and the ensemble methods, respectively.
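The abstract does not name the three ensemble methods used as baselines, so the following is only a minimal sketch of what a BoW-plus-ensemble baseline of this kind typically looks like, using scikit-learn. The specific estimators (Random Forest, Extra Trees, Gradient Boosting), the tiny bilingual ticket examples, and the category labels are all illustrative assumptions, not details from the paper.

```python
# Sketch of a bag-of-words + ensemble baseline for bilingual ticket
# classification. The three estimators below are assumed stand-ins;
# the paper does not specify which ensemble methods it used.
from sklearn.ensemble import (
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Tiny invented Thai/English ticket set for illustration only.
tickets = [
    "Cannot connect to VPN from home office",
    "VPN ขัดข้อง เชื่อมต่อไม่ได้",            # "VPN down, cannot connect"
    "Request new laptop for new employee",
    "ขอโน้ตบุ๊กสำหรับพนักงานใหม่",             # "Request a notebook for a new employee"
]
labels = ["network", "network", "hardware", "hardware"]

def make_baseline(estimator):
    # Bag-of-words features (raw word counts) feeding an ensemble classifier.
    return make_pipeline(CountVectorizer(), estimator)

baselines = {
    "random_forest": make_baseline(RandomForestClassifier(random_state=0)),
    "extra_trees": make_baseline(ExtraTreesClassifier(random_state=0)),
    "gradient_boosting": make_baseline(GradientBoostingClassifier(random_state=0)),
}

for name, model in baselines.items():
    model.fit(tickets, labels)
```

Note that `CountVectorizer`'s default tokenizer is tuned for space-delimited languages; a realistic Thai/English pipeline would likely need a Thai-aware word segmenter plugged in via the `tokenizer` parameter.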