{"title":"A knowledge-fused maximum mean discrepancy for cross-lingual named entity recognition","authors":"Hailong Cao, Junlin Shang, Muyun Yang, Tiejun Zhao","doi":"10.1016/j.inffus.2025.103494","DOIUrl":null,"url":null,"abstract":"<div><div>Cross-lingual named entity recognition (NER) aims to train a model that effectively transfers knowledge from a source language to a target language using labeled data from the source. This approach addresses the challenges posed by the limited availability of NER resources in certain languages. In such transfer learning tasks, the Maximum Mean Discrepancy (MMD) loss function is commonly used to minimize the discrepancy between the source and target domains. However, computing the MMD loss is computationally intensive. Traditional methods often use sampling methods for approximate calculations. But from an accuracy perspective, sampling without prior knowledge yields suboptimal results. To address these challenges, we fuse part-of-speech knowledge into the computation of MMD. Specifically, we replace words of various parts of speech in the sentence with [MASK] token at a specific proportion. We then obtain category labels based on the part of speech of the replaced words. Subsequently, we perform stratified sampling based on these category labels to achieve more accurate results in the MMD calculation. Experiments on multiple benchmark datasets show that our model outperforms existing methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"125 ","pages":"Article 103494"},"PeriodicalIF":15.5000,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525005676","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Cross-lingual named entity recognition (NER) aims to train a model that effectively transfers knowledge from a source language to a target language using labeled data from the source, addressing the limited availability of NER resources in many languages. In such transfer learning tasks, the Maximum Mean Discrepancy (MMD) loss is commonly used to minimize the discrepancy between the source and target domains. However, computing the MMD loss is computationally intensive, so it is traditionally approximated by sampling; without prior knowledge, however, sampling yields suboptimal accuracy. To address these challenges, we fuse part-of-speech knowledge into the computation of MMD. Specifically, we replace words of various parts of speech in a sentence with the [MASK] token at a specified proportion and derive category labels from the parts of speech of the replaced words. We then perform stratified sampling based on these category labels to obtain a more accurate MMD estimate. Experiments on multiple benchmark datasets show that our model outperforms existing methods.
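To make the idea concrete, the sketch below (not the authors' code) illustrates how a sampled MMD estimate can be stratified by part-of-speech-derived category labels: token representations are sampled proportionally within each POS stratum before the kernel-based MMD is computed. The RBF kernel, the biased MMD estimator, and all function and variable names are illustrative assumptions made for this example.

```python
# Minimal sketch of POS-stratified sampling for an approximate MMD estimate.
# Assumes each token representation already carries an integer category label
# derived from the part of speech of the (masked) word it corresponds to.
import numpy as np


def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Pairwise RBF kernel k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    sq_dists = (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-gamma * sq_dists)


def mmd2(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased empirical estimate of the squared MMD with an RBF kernel."""
    return (
        rbf_kernel(x, x, gamma).mean()
        + rbf_kernel(y, y, gamma).mean()
        - 2.0 * rbf_kernel(x, y, gamma).mean()
    )


def stratified_sample(feats: np.ndarray, pos_labels: np.ndarray,
                      n_samples: int, rng: np.random.Generator) -> np.ndarray:
    """Sample token representations proportionally from each POS-derived stratum."""
    chosen = []
    for label in np.unique(pos_labels):
        idx = np.where(pos_labels == label)[0]
        # allocate samples proportionally to the stratum size (at least one per stratum)
        k = max(1, round(n_samples * len(idx) / len(pos_labels)))
        chosen.append(rng.choice(idx, size=min(k, len(idx)), replace=False))
    return feats[np.concatenate(chosen)]


# Toy usage: 768-d token representations with 5 hypothetical POS categories.
rng = np.random.default_rng(0)
src_feats, src_pos = rng.normal(size=(500, 768)), rng.integers(0, 5, 500)
tgt_feats, tgt_pos = rng.normal(loc=0.1, size=(400, 768)), rng.integers(0, 5, 400)

src_sample = stratified_sample(src_feats, src_pos, n_samples=64, rng=rng)
tgt_sample = stratified_sample(tgt_feats, tgt_pos, n_samples=64, rng=rng)
print("approx. squared MMD:", mmd2(src_sample, tgt_sample))
```

Compared with uniform sampling, stratifying by POS category keeps each word class represented in the sampled sets, which is the intuition behind using this prior knowledge to reduce the error of the approximate MMD.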
About the journal:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.