{"title":"Isarn Dialect Word Segmentation using Bi-directional Gated Recurrent Unit with transfer learning approach","authors":"Sawetsit Aim-Nang, Pusadee Seresangtakul, Pongsathon Janyoi","doi":"10.1109/ICSEC56337.2022.10049346","DOIUrl":null,"url":null,"abstract":"This paper presents an Isarn dialect word segmentation based on a recurrent neural network. In this study, the Isarn text written in Thai script is taken as input. We explored the effectiveness of the types of recurrent layers; recurrent neural networks (RNN), gated recurrent units (GRU), and long short-term memory (LSTM). The F1-scores of RNN, GRU, and LSTM are 95.36, 96.05, and 95.70, respectively. The experiment results showed that using GRU as the recurrent layer achieved the best performance. To deal with borrowed words from Thai, transfer learning was applied to improve the performance of the model by fine-tuning the pre-trained model given the limited size of the Isarn corpus. The model trained through the transfer learning approach outperformed the model trained from the Isarn dataset alone.","PeriodicalId":430850,"journal":{"name":"2022 26th International Computer Science and Engineering Conference (ICSEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 26th International Computer Science and Engineering Conference (ICSEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSEC56337.2022.10049346","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper presents Isarn dialect word segmentation based on a recurrent neural network. In this study, Isarn text written in Thai script is taken as input. We explored the effectiveness of three types of recurrent layers: simple recurrent neural networks (RNN), gated recurrent units (GRU), and long short-term memory (LSTM). The F1-scores of the RNN, GRU, and LSTM models are 95.36, 96.05, and 95.70, respectively; the experimental results thus show that using GRU as the recurrent layer achieves the best performance. To deal with words borrowed from Thai, and given the limited size of the Isarn corpus, transfer learning was applied: a pre-trained model was fine-tuned to improve performance. The model trained through this transfer learning approach outperformed the model trained on the Isarn dataset alone.
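The abstract gives no implementation details, so the following is a minimal sketch (not the authors' code) of the kind of model it describes: a bidirectional GRU that tags each Thai-script character as word-initial or word-internal, with transfer-learning-style fine-tuning of a model pre-trained on a larger corpus. The class name, hyperparameters, binary tag scheme, vocabulary size, and checkpoint path are all illustrative assumptions.

```python
# Hedged sketch of a character-level BiGRU word segmenter with fine-tuning.
# Everything here (names, sizes, tag scheme) is an assumption, not the paper's setup.
import torch
import torch.nn as nn

class BiGRUSegmenter(nn.Module):
    """Tags each character with 1 (word-initial) or 0 (word-internal)."""
    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bigru = nn.GRU(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Bidirectional output concatenates both directions: 2 * hidden_dim.
        self.classifier = nn.Linear(2 * hidden_dim, 2)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len) -> logits: (batch, seq_len, 2)
        embedded = self.embedding(char_ids)
        hidden, _ = self.bigru(embedded)
        return self.classifier(hidden)

VOCAB_SIZE = 200  # shared Thai-script character vocabulary (assumption)
model = BiGRUSegmenter(VOCAB_SIZE)

# Transfer learning step: load weights pre-trained on a larger corpus
# ("thai_pretrained.pt" is a hypothetical checkpoint path).
# model.load_state_dict(torch.load("thai_pretrained.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
criterion = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks padded positions

# Toy Isarn batch: 2 sequences of 5 character ids with boundary labels.
chars = torch.randint(1, VOCAB_SIZE, (2, 5))
labels = torch.randint(0, 2, (2, 5))

for _ in range(3):  # a few fine-tuning steps on the (small) Isarn set
    optimizer.zero_grad()
    logits = model(chars)
    loss = criterion(logits.reshape(-1, 2), labels.reshape(-1))
    loss.backward()
    optimizer.step()
```

The small learning rate reflects the usual fine-tuning practice the abstract implies: with a limited Isarn corpus, large updates would quickly erase whatever the pre-trained weights captured about the shared Thai script.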