{"title":"Quasi Character-Level Transformers to Improve Neural Machine Translation on Small Datasets","authors":"Salvador Carrión, F. Casacuberta","doi":"10.1109/SNAMS53716.2021.9732120","DOIUrl":null,"url":null,"abstract":"In the Neural Machine Translation community, it is a common practice to use some form of subword segmentation to encode words as a sequence of subword units. This allows practitioners to represent their entire dataset using the least amount of tokens, thus avoiding memory and performance-related problems derived from the full wordor purely character-level representations. Even though there is strong evidence that each dataset has an optimal vocabulary size, in practice it is common to use as many “words” as possible. In this work, we show how this standard approach might be counter-productive for small datasets or low-resource environments, where models trained with quasi character-level vocabularies seem to con-sistently outperform models with large subword vocabularies. Nonetheless, these improvements come at the expense of requiring a neural architecture capable of dealing with long sequences and long-term dependencies.","PeriodicalId":387260,"journal":{"name":"2021 Eighth International Conference on Social Network Analysis, Management and Security (SNAMS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Eighth International Conference on Social Network Analysis, Management and Security (SNAMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SNAMS53716.2021.9732120","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the Neural Machine Translation community, it is common practice to use some form of subword segmentation to encode words as sequences of subword units. This allows practitioners to represent an entire dataset with the fewest tokens, avoiding the memory and performance problems that arise from full word-level or purely character-level representations. Even though there is strong evidence that each dataset has an optimal vocabulary size, in practice it is common to use as many “words” as possible. In this work, we show how this standard approach can be counter-productive for small datasets or low-resource environments, where models trained with quasi character-level vocabularies seem to consistently outperform models with large subword vocabularies. Nonetheless, these improvements come at the expense of requiring a neural architecture capable of dealing with long sequences and long-term dependencies.
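For illustration, the sketch below shows how the size of a subword vocabulary (here approximated by the number of BPE merge operations) moves segmentation from quasi character-level pieces toward word-like units. The toy corpus, the merge counts, and the helper functions `learn_bpe` and `segment` are assumptions made for this example only; they are not the paper's toolkit or configuration.

```python
# Minimal BPE sketch (illustrative only). Few merges keep segmentation close to
# character level -- the "quasi character-level" regime -- while many merges
# approach the large subword vocabularies used as the standard baseline.
from collections import Counter


def learn_bpe(corpus, num_merges):
    """Learn up to `num_merges` BPE merge operations from a list of sentences."""
    # Represent each word as a tuple of characters plus an end-of-word marker.
    word_freq = Counter()
    for sentence in corpus:
        for word in sentence.split():
            word_freq[tuple(word) + ("</w>",)] += 1

    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pair_freq = Counter()
        for symbols, freq in word_freq.items():
            for a, b in zip(symbols, symbols[1:]):
                pair_freq[(a, b)] += freq
        if not pair_freq:
            break
        best = max(pair_freq, key=pair_freq.get)
        merges.append(best)
        # Apply the chosen merge to every word.
        new_word_freq = Counter()
        for symbols, freq in word_freq.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_word_freq[tuple(merged)] += freq
        word_freq = new_word_freq
    return merges


def segment(word, merges):
    """Segment a single word by replaying the learned merges in order."""
    symbols = list(word) + ["</w>"]
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]
            else:
                i += 1
    return symbols


# Toy corpus; in practice this would be the training side of the parallel data.
corpus = ["the cat sat on the mat", "the dog chased the cat"] * 50
for num_merges in (2, 50):  # 2: quasi character-level; 50: word-like subword units
    merges = learn_bpe(corpus, num_merges)
    print(num_merges, [segment(w, merges) for w in "the cat".split()])
```

Running the sketch with a small number of merges yields sequences of mostly single characters, which is the regime the abstract calls quasi character-level and which produces the long sequences and long-term dependencies mentioned above; raising the merge count shortens the sequences at the cost of a larger subword vocabulary.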