An Ensemble Strategy with Gradient Conflict for Multi-Domain Neural Machine Translation

Authors: Zhibo Man, Yujie Zhang, Yu Li, Yuanmeng Chen, Yufeng Chen, Jinan Xu

ACM Transactions on Asian and Low-Resource Language Information Processing. Published 2023-12-21. DOI: 10.1145/3638248
Multi-domain neural machine translation aims to construct a unified NMT model that translates sentences across various domains. Nevertheless, previous studies share one limitation: the inability to acquire domain-general and domain-specific representations concurrently. To this end, we propose an ensemble strategy with gradient conflict for multi-domain neural machine translation that automatically learns model parameters by identifying both domain-shared and domain-specific features. Specifically, our approach consists of (1) a parameter-sharing framework: the parameters of all layers are initially shared and identical across domains; (2) an ensemble strategy: we design an Extra Ensemble strategy via a piecewise condition function to learn direction- and distance-based gradient conflict. In addition, we give a detailed theoretical analysis of the gradient conflict to further validate the effectiveness of our approach. Experimental results on two multi-domain datasets show the superior performance of our proposed model compared to previous work.
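The abstract does not spell out the paper's piecewise condition function, but the general idea of direction-based gradient conflict can be illustrated with a minimal sketch. The example below is a hypothetical, PCGrad-style illustration, not the authors' method: direction conflict is assumed to mean negative cosine similarity between a domain-shared and a domain-specific gradient, and the piecewise rule projects out the conflicting component before combining.

```python
import numpy as np

def combine_gradients(g_shared, g_domain, cos_threshold=0.0):
    """Illustrative piecewise combination of a domain-shared and a
    domain-specific gradient (hypothetical; not the paper's exact rule).

    If the cosine similarity falls below cos_threshold, the gradients are
    treated as conflicting in direction, and the component of g_domain
    along g_shared is projected away before summing. Otherwise the
    gradients are simply added.
    """
    eps = 1e-12  # guard against division by zero for zero gradients
    cos = np.dot(g_shared, g_domain) / (
        np.linalg.norm(g_shared) * np.linalg.norm(g_domain) + eps
    )
    if cos < cos_threshold:
        # directions conflict: remove g_domain's component along g_shared
        proj = (np.dot(g_domain, g_shared) /
                (np.dot(g_shared, g_shared) + eps)) * g_shared
        g_domain = g_domain - proj
    return g_shared + g_domain
```

For example, with g_shared = [1, 0] and g_domain = [-1, 1] the cosine similarity is negative, so the conflicting component is removed and the combined update is [1, 1]; with a non-conflicting g_domain = [1, 1] the result is the plain sum [2, 1].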
About the Journal:
The ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) publishes high quality original archival papers and technical notes in the areas of computation and processing of information in Asian languages, low-resource languages of Africa, Australasia, Oceania and the Americas, as well as related disciplines. The subject areas covered by TALLIP include, but are not limited to:
-Computational Linguistics: including computational phonology, computational morphology, computational syntax (e.g. parsing), computational semantics, computational pragmatics, etc.
-Linguistic Resources: including computational lexicography, terminology, electronic dictionaries, cross-lingual dictionaries, electronic thesauri, etc.
-Hardware and software algorithms and tools for Asian or low-resource language processing, e.g., handwritten character recognition.
-Information Understanding: including text understanding, speech understanding, character recognition, discourse processing, dialogue systems, etc.
-Machine Translation involving Asian or low-resource languages.
-Information Retrieval: including natural language processing (NLP) for concept-based indexing, natural language query interfaces, semantic relevance judgments, etc.
-Information Extraction and Filtering: including automatic abstraction, user profiling, etc.
-Speech processing: including text-to-speech synthesis and automatic speech recognition.
-Multimedia Asian Information Processing: including speech, image, video, image/text translation, etc.
-Cross-lingual information processing involving Asian or low-resource languages.
Papers dealing with theory, systems design, evaluation, and applications in the aforesaid subjects are appropriate for TALLIP. Emphasis will be placed on the originality and the practical significance of the reported research.