vnNLI - VLSP2021: An Empirical Study on Vietnamese-English Natural Language Inference Based on Pretrained Language Models with Data Augmentation

Thin Dang Van, D. Hao, N. Nguyen, Luân Đình Ngô, Kiến Lê Hiếu Ngô

VNU Journal of Science: Computer Science and Communication Engineering
Published: 2022-12-16
DOI: 10.25073/2588-1086/vnucsce.330 (https://doi.org/10.25073/2588-1086/vnucsce.330)
Abstract
In this paper, we describe an empirical study of data augmentation techniques with various pre-trained language models on the bilingual dataset presented at the VLSP 2021 shared task on Vietnamese and English-Vietnamese Textual Entailment. We apply a machine translation tool to generate a new training set from the original training data, and then investigate and compare the effectiveness of monolingual and multilingual models on the augmented dataset. Our experimental results show that fine-tuning the pre-trained multilingual XLM-R language model on the augmented training set gives the best performance. Our system ranked third in the VLSP 2021 shared task with an F1-score of approximately 0.88.
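To make the described pipeline concrete, below is a minimal sketch of the two steps the abstract names: translation-based data augmentation and fine-tuning multilingual XLM-R for three-way NLI classification. The paper does not specify its MT tool, checkpoint sizes, label mapping, or training setup; the OPUS-MT checkpoint, the xlm-roberta-base size, the example sentences, and the label scheme below are all assumptions for illustration, not the authors' actual configuration.

```python
# Hedged sketch: MT-based augmentation + XLM-R fine-tuning for Vietnamese-English NLI.
# Assumptions (not from the paper): the OPUS-MT checkpoint stands in for the
# unspecified MT tool; example sentences and the label mapping are illustrative.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    MarianMTModel,
    MarianTokenizer,
)

# --- Step 1: augment the training set by translating English premises to Vietnamese ---
mt_name = "Helsinki-NLP/opus-mt-en-vi"  # assumed stand-in for the paper's MT tool
mt_tok = MarianTokenizer.from_pretrained(mt_name)
mt_model = MarianMTModel.from_pretrained(mt_name)

def translate(sentences):
    """Translate a batch of English sentences into Vietnamese."""
    batch = mt_tok(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = mt_model.generate(**batch)
    return mt_tok.batch_decode(generated, skip_special_tokens=True)

premises_en = ["A man is playing a guitar."]  # toy example, not from the dataset
premises_vi = translate(premises_en)          # augmented Vietnamese copies

# --- Step 2: fine-tune multilingual XLM-R on (premise, hypothesis) pairs ---
nli_name = "xlm-roberta-base"  # the paper fine-tunes XLM-R; exact size unspecified
tok = AutoTokenizer.from_pretrained(nli_name)
model = AutoModelForSequenceClassification.from_pretrained(nli_name, num_labels=3)

# Encode one pair; label mapping 0=entailment, 1=neutral, 2=contradiction is assumed.
enc = tok(premises_vi[0], "Một người đàn ông đang chơi nhạc.", return_tensors="pt")
labels = torch.tensor([0])

model.train()
out = model(**enc, labels=labels)
out.loss.backward()  # one illustrative gradient step; a real run would use a Trainer loop
```

In this setup, augmentation simply concatenates the translated pairs with the original training data before fine-tuning, which matches the abstract's description of generating a new training set from the original one; hyperparameters and data handling beyond that are not reported here.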