{"title":"基于域数据平衡器的多域自适应神经网络机器翻译方法","authors":"Jinlei Xu, Yonghua Wen, Shuanghong Huang, Zhengtao Yu","doi":"10.3233/ida-230155","DOIUrl":null,"url":null,"abstract":"Most methods for multi-domain adaptive neural machine translation (NMT) currently rely on mixing data from multiple domains in a single model to achieve multi-domain translation. However, this mixing can lead to imbalanced training data, causing the model to focus on training for the large-scale general domain while ignoring the scarce resources of specific domains, resulting in a decrease in translation performance. In this paper, we propose a multi-domain adaptive NMT method based on Domain Data Balancer (DDB) to address the problems of imbalanced data caused by simple fine-tuning. By adding DDB to the Transformer model, we adaptively learn the sampling distribution of each group of training data, replace the maximum likelihood estimation criterion with empirical risk minimization training, and introduce a reward-based iterative update of the bilevel optimizer based on reinforcement learning. Experimental results show that the proposed method improves the baseline model by an average of 1.55 and 0.14 BLEU (Bilingual Evaluation Understudy) scores respectively in English-German and Chinese-English multi-domain NMT.","PeriodicalId":50355,"journal":{"name":"Intelligent Data Analysis","volume":"12 1","pages":"0"},"PeriodicalIF":0.8000,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A multi-domain adaptive neural machine translation method based on domain data balancer\",\"authors\":\"Jinlei Xu, Yonghua Wen, Shuanghong Huang, Zhengtao Yu\",\"doi\":\"10.3233/ida-230155\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Most methods for multi-domain adaptive neural machine translation (NMT) currently rely on mixing data from multiple domains in a single model to achieve multi-domain translation. However, this mixing can lead to imbalanced training data, causing the model to focus on training for the large-scale general domain while ignoring the scarce resources of specific domains, resulting in a decrease in translation performance. In this paper, we propose a multi-domain adaptive NMT method based on Domain Data Balancer (DDB) to address the problems of imbalanced data caused by simple fine-tuning. By adding DDB to the Transformer model, we adaptively learn the sampling distribution of each group of training data, replace the maximum likelihood estimation criterion with empirical risk minimization training, and introduce a reward-based iterative update of the bilevel optimizer based on reinforcement learning. 
Experimental results show that the proposed method improves the baseline model by an average of 1.55 and 0.14 BLEU (Bilingual Evaluation Understudy) scores respectively in English-German and Chinese-English multi-domain NMT.\",\"PeriodicalId\":50355,\"journal\":{\"name\":\"Intelligent Data Analysis\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2023-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Data Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/ida-230155\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Data Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/ida-230155","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Most current methods for multi-domain adaptive neural machine translation (NMT) rely on mixing data from multiple domains in a single model to achieve multi-domain translation. However, this mixing can lead to imbalanced training data: the model concentrates on the large-scale general domain while neglecting the scarce resources of specific domains, which degrades translation performance. In this paper, we propose a multi-domain adaptive NMT method based on a Domain Data Balancer (DDB) to address the data imbalance caused by simple fine-tuning. By adding the DDB to the Transformer model, we adaptively learn the sampling distribution of each group of training data, replace the maximum likelihood estimation criterion with empirical risk minimization training, and introduce a reward-based iterative update of the bilevel optimizer based on reinforcement learning. Experimental results show that the proposed method improves on the baseline model by an average of 1.55 and 0.14 BLEU (Bilingual Evaluation Understudy) points on English-German and Chinese-English multi-domain NMT, respectively.
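To illustrate the core idea, below is a minimal Python sketch of a domain data balancer. It is not the authors' implementation: the DomainDataBalancer class, the placeholder domain names, and the simulated reward are assumptions introduced for illustration. The sketch keeps a softmax over learnable per-domain scores as the sampling distribution and nudges the scores with a REINFORCE-style, reward-driven update; in the paper's bilevel setting, the reward would instead come from the NMT model's validation performance after training on batches drawn from the sampled domain.

import numpy as np

rng = np.random.default_rng(0)

class DomainDataBalancer:
    """Hypothetical balancer: softmax over learnable domain scores gives the sampling distribution."""

    def __init__(self, domains, lr=0.1):
        self.domains = domains                 # placeholder domain names, e.g. ["news", "medical", "law", "it"]
        self.scores = np.zeros(len(domains))   # learnable logits; uniform sampling at the start
        self.lr = lr

    def distribution(self):
        z = np.exp(self.scores - self.scores.max())
        return z / z.sum()                     # softmax -> per-domain sampling probabilities

    def sample_domain(self):
        p = self.distribution()
        return rng.choice(len(self.domains), p=p), p

    def update(self, domain_idx, reward, probs):
        # REINFORCE-style update: the gradient of log p(domain_idx) w.r.t. the scores is
        # (one-hot - probs), scaled here by the scalar reward observed for that domain.
        grad = -probs
        grad[domain_idx] += 1.0
        self.scores += self.lr * reward * grad

# Toy usage: the reward is simulated so the sketch runs standalone; in practice it would be
# derived from the NMT model's validation gain after an inner training step on the sampled domain.
balancer = DomainDataBalancer(["news", "medical", "law", "it"])
for step in range(200):
    idx, probs = balancer.sample_domain()
    reward = rng.normal(loc=0.5 if idx == 1 else 0.1, scale=0.1)  # pretend the "medical" domain helps most
    balancer.update(idx, reward, probs)
print(dict(zip(balancer.domains, np.round(balancer.distribution(), 3))))

Running the toy loop shifts probability mass toward the domain that yields the highest reward, which is the balancing behaviour the DDB is intended to provide over the mixed multi-domain training data.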
Journal introduction:
Intelligent Data Analysis provides a forum for the examination of issues related to the research and applications of Artificial Intelligence techniques in data analysis across a variety of disciplines. These techniques include (but are not limited to): all areas of data visualization, data pre-processing (fusion, editing, transformation, filtering, sampling), data engineering, database mining techniques, tools and applications, use of domain knowledge in data analysis, big data applications, evolutionary algorithms, machine learning, neural nets, fuzzy logic, statistical pattern recognition, knowledge filtering, and post-processing. Papers that discuss the development of new AI-related data analysis architectures, methodologies, and techniques and their applications to various domains are particularly preferred.