DR-MIM: Zero-shot cross-lingual transfer via disentangled representation and mutual information maximization

IF 6.9 | CAS Tier 1 (Management) | JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Wenwen Zhao, Zhisheng Yang, Li Li
{"title":"DR-MIM: Zero-shot cross-lingual transfer via disentangled representation and mutual information maximization","authors":"Wenwen Zhao,&nbsp;Zhisheng Yang,&nbsp;Li Li","doi":"10.1016/j.ipm.2025.104389","DOIUrl":null,"url":null,"abstract":"<div><div>Multilingual models have made significant progress in cross-lingual transferability through large-scale pretraining. However, the generated global representations are often mixed with language-specific noise, limiting their effectiveness in low-resource language scenarios. This paper explores how to more efficiently utilize the representations learned by multilingual pretraining models by separating language-invariant features from language-specific ones. To this end, we propose a novel cross-lingual transfer framework, DR-MIM, which explicitly decouples universal and language-specific features, reduces noise interference, and improves model stability and accuracy. Additionally, we introduce a mutual information maximization mechanism to strengthen the correlation between universal features and model outputs, further optimizing the quality of semantic representations. We conducted a systematic evaluation of this method on three cross-lingual natural language understanding benchmark datasets. On the TyDiQA dataset, DR-MIM improved the F1 score by 1.7% and the EM score by 4.5% over the best baseline. To further validate the model’s generalization capability, we introduced two new tasks: paraphrase identification and natural language inference, and designed both within-language and cross-language analysis experiments. All experiments collectively covered 22 languages. Further ablation studies, generalization analysis, and visualization results all confirm the effectiveness and adaptability of our approach.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"63 2","pages":"Article 104389"},"PeriodicalIF":6.9000,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325003309","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Multilingual models have made significant progress in cross-lingual transferability through large-scale pretraining. However, the resulting global representations are often mixed with language-specific noise, limiting their effectiveness in low-resource language scenarios. This paper explores how to use the representations learned by multilingual pretrained models more efficiently by separating language-invariant features from language-specific ones. To this end, we propose a novel cross-lingual transfer framework, DR-MIM, which explicitly decouples universal and language-specific features, reduces noise interference, and improves model stability and accuracy. Additionally, we introduce a mutual information maximization mechanism to strengthen the correlation between universal features and model outputs, further improving the quality of semantic representations. We conducted a systematic evaluation of this method on three cross-lingual natural language understanding benchmark datasets. On the TyDiQA dataset, DR-MIM improved the F1 score by 1.7% and the EM score by 4.5% over the best baseline. To further validate the model's generalization capability, we added two further tasks, paraphrase identification and natural language inference, and designed both within-language and cross-language analysis experiments. The experiments collectively covered 22 languages. Further ablation studies, generalization analysis, and visualization results all confirm the effectiveness and adaptability of our approach.
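The abstract describes two mechanisms: splitting the encoder representation into language-invariant and language-specific parts, and maximizing mutual information between the invariant part and the model's output. The snippet below is a minimal illustrative sketch of how such a setup is commonly wired together; the projection-head design, the orthogonality penalty, and the use of an InfoNCE-style lower bound on mutual information are assumptions made for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of (1) disentangling a multilingual encoder state into
# language-invariant / language-specific subspaces and (2) maximizing an
# InfoNCE-style lower bound on the mutual information between the invariant
# features and the task output. Names, dimensions, and loss weights are
# assumptions, not DR-MIM's published formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledHeads(nn.Module):
    """Project a shared encoder state into invariant and specific subspaces."""

    def __init__(self, hidden_dim: int = 768, sub_dim: int = 256):
        super().__init__()
        self.invariant_proj = nn.Linear(hidden_dim, sub_dim)  # language-invariant
        self.specific_proj = nn.Linear(hidden_dim, sub_dim)   # language-specific

    def forward(self, h: torch.Tensor):
        # h: [batch, hidden_dim] pooled output of a multilingual encoder
        return self.invariant_proj(h), self.specific_proj(h)


def infonce_mi_lower_bound(z_inv: torch.Tensor, task_repr: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE bound on I(z_inv; task_repr): matched pairs within the batch
    are positives, every other pairing is a negative."""
    z = F.normalize(z_inv, dim=-1)
    t = F.normalize(task_repr, dim=-1)
    logits = z @ t.t() / temperature                      # [batch, batch]
    labels = torch.arange(z.size(0), device=z.device)
    # Minimizing this cross-entropy maximizes the MI lower bound.
    return F.cross_entropy(logits, labels)


def orthogonality_penalty(z_inv: torch.Tensor, z_spec: torch.Tensor) -> torch.Tensor:
    """Discourage the two subspaces from encoding the same information."""
    cos = (F.normalize(z_inv, dim=-1) * F.normalize(z_spec, dim=-1)).sum(-1)
    return cos.pow(2).mean()


if __name__ == "__main__":
    heads = DisentangledHeads()
    h = torch.randn(8, 768)          # stand-in for pooled encoder outputs
    task_repr = torch.randn(8, 256)  # stand-in for task-output features
    z_inv, z_spec = heads(h)
    loss = infonce_mi_lower_bound(z_inv, task_repr) + 0.1 * orthogonality_penalty(z_inv, z_spec)
    print(float(loss))
```

In a sketch of this kind, minimizing the InfoNCE cross-entropy tightens a lower bound on the mutual information between the invariant features and the task output, while the orthogonality penalty pushes the invariant and language-specific subspaces toward carrying different information.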
Journal

Information Processing & Management (Engineering & Technology – Computer Science: Information Systems)
CiteScore: 17.00
Self-citation rate: 11.60%
Articles per year: 276
Review time: 39 days
About the journal: Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Its scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology marketing, and social computing. The journal aims to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field, with particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research.