{"title":"跨领域命名实体识别的双重对比学习","authors":"Jingyun Xu, Junnan Yu, Yi Cai, Tat-Seng Chua","doi":"10.1145/3678879","DOIUrl":null,"url":null,"abstract":"\n Benefiting many information retrieval applications, named entity recognition (NER) has shown impressive progress. Recently, there has been a growing trend to decompose complex NER tasks into two subtasks (\n \n \\(e.g.,\\)\n \n entity span detection (ESD) and entity type classification (ETC), to achieve better performance. Despite the remarkable success, from the perspective of representation, existing methods do not explicitly distinguish non-entities and entities, which may lead to entity span detection errors. Meanwhile, they do not explicitly distinguish entities with different entity types, which may lead to entity type misclassification. As such, the limited representation abilities may challenge some competitive NER methods, leading to unsatisfactory performance, especially in the low-resource setting (\n \n \\(e.g.,\\)\n \n cross-domain NER). In light of these challenges, we propose to utilize contrastive learning to refine the original chaotic representations and learn the generalized representations for cross-domain NER. In particular, this paper proposes a dual contrastive learning model (Dual-CL), which respectively utilizes a token-level contrastive learning module and a sentence-level contrastive learning module to enhance ESD, ETC for cross-domain NER. Empirical results on 10 domain pairs under two different settings show that Dual-CL achieves better performances than compared baselines in terms of several standard metrics. 
Moreover, we conduct detailed analyses to are presented to better understand each component’s effectiveness.\n","PeriodicalId":50936,"journal":{"name":"ACM Transactions on Information Systems","volume":null,"pages":null},"PeriodicalIF":5.4000,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dual Contrastive Learning for Cross-domain Named Entity Recognition\",\"authors\":\"Jingyun Xu, Junnan Yu, Yi Cai, Tat-Seng Chua\",\"doi\":\"10.1145/3678879\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Benefiting many information retrieval applications, named entity recognition (NER) has shown impressive progress. Recently, there has been a growing trend to decompose complex NER tasks into two subtasks (\\n \\n \\\\(e.g.,\\\\)\\n \\n entity span detection (ESD) and entity type classification (ETC), to achieve better performance. Despite the remarkable success, from the perspective of representation, existing methods do not explicitly distinguish non-entities and entities, which may lead to entity span detection errors. Meanwhile, they do not explicitly distinguish entities with different entity types, which may lead to entity type misclassification. As such, the limited representation abilities may challenge some competitive NER methods, leading to unsatisfactory performance, especially in the low-resource setting (\\n \\n \\\\(e.g.,\\\\)\\n \\n cross-domain NER). In light of these challenges, we propose to utilize contrastive learning to refine the original chaotic representations and learn the generalized representations for cross-domain NER. In particular, this paper proposes a dual contrastive learning model (Dual-CL), which respectively utilizes a token-level contrastive learning module and a sentence-level contrastive learning module to enhance ESD, ETC for cross-domain NER. 
Empirical results on 10 domain pairs under two different settings show that Dual-CL achieves better performances than compared baselines in terms of several standard metrics. Moreover, we conduct detailed analyses to are presented to better understand each component’s effectiveness.\\n\",\"PeriodicalId\":50936,\"journal\":{\"name\":\"ACM Transactions on Information Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2024-07-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Information Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3678879\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Information Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3678879","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Dual Contrastive Learning for Cross-domain Named Entity Recognition
Benefiting many information retrieval applications, named entity recognition (NER) has shown impressive progress. Recently, there has been a growing trend to decompose complex NER tasks into two subtasks (e.g., entity span detection (ESD) and entity type classification (ETC)) to achieve better performance. Despite this remarkable success, from the perspective of representation, existing methods do not explicitly distinguish non-entities from entities, which may lead to entity span detection errors. Meanwhile, they do not explicitly distinguish entities of different entity types, which may lead to entity type misclassification. These limited representation abilities may therefore challenge some competitive NER methods, leading to unsatisfactory performance, especially in low-resource settings (e.g., cross-domain NER). In light of these challenges, we propose to utilize contrastive learning to refine the original chaotic representations and learn generalized representations for cross-domain NER. In particular, this paper proposes a dual contrastive learning model (Dual-CL), which utilizes a token-level contrastive learning module and a sentence-level contrastive learning module to enhance ESD and ETC, respectively, for cross-domain NER. Empirical results on 10 domain pairs under two different settings show that Dual-CL achieves better performance than the compared baselines in terms of several standard metrics. Moreover, detailed analyses are presented to better understand each component's effectiveness.
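The abstract does not give the paper's exact loss formulations, but a token-level contrastive learning module of this kind is commonly built on an InfoNCE-style supervised contrastive loss, which pulls tokens with the same label (e.g., entity vs. non-entity) together and pushes others apart. A minimal pure-Python sketch, with all names, labels, and vectors hypothetical:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length so dot products become cosine similarities."""
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style supervised contrastive loss over L2-normalized token
    embeddings: tokens sharing a label are treated as positives."""
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        # cosine similarity of token i to every token, scaled by temperature
        sims = [sum(a * b for a, b in zip(embeddings[i], embeddings[j])) / temperature
                for j in range(n)]
        denom = sum(math.exp(sims[j]) for j in range(n) if j != i)
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # average negative log-likelihood of picking a same-label token
        loss_i = -sum(math.log(math.exp(sims[j]) / denom) for j in positives) / len(positives)
        total += loss_i
        count += 1
    return total / count if count else 0.0

# Toy batch: two "entity" tokens with similar vectors, one dissimilar "non-entity".
toks = [l2_normalize(v) for v in ([1.0, 0.1], [0.9, 0.2], [-1.0, 0.3])]
labels = ["ENT", "ENT", "O"]
loss = supervised_contrastive_loss(toks, labels)
```

Under this objective, a batch whose same-label tokens already cluster together yields a lower loss than one where they are scattered, which is exactly the representation-refinement effect the abstract describes.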
About the journal:
The ACM Transactions on Information Systems (TOIS) publishes papers on information retrieval (such as search engines, recommender systems) that contain:
new principled information retrieval models or algorithms with sound empirical validation;
observational, experimental and/or theoretical studies yielding new insights into information retrieval or information seeking;
accounts of applications of existing information retrieval techniques that shed light on the strengths and weaknesses of the techniques;
formalization of new information retrieval or information seeking tasks and of methods for evaluating the performance on those tasks;
development of content (text, image, speech, video, etc) analysis methods to support information retrieval and information seeking;
development of computational models of user information preferences and interaction behaviors;
creation and analysis of evaluation methodologies for information retrieval and information seeking; or
surveys of existing work that propose a significant synthesis.
The information retrieval scope of ACM Transactions on Information Systems (TOIS) appeals to industry practitioners for its wealth of creative ideas, and to academic researchers for its descriptions of their colleagues' work.