FE-CFNER: Feature Enhancement-based approach for Chinese Few-shot Named Entity Recognition
Sanhe Yang, Peichao Lai, Ruixiong Fang, Yanggeng Fu, Feiyang Ye, Yilei Wang
Computer Speech and Language, published 2024-10-09. DOI: 10.1016/j.csl.2024.101730
Abstract
Although significant progress has been made in deep learning-based Chinese Named Entity Recognition (NER), performance often falls short in few-shot scenarios. Feature enhancement is considered a promising way to address Chinese few-shot NER; however, traditional feature fusion methods tend to lose important information and absorb irrelevant information. Likewise, although incorporating BERT improves entity recognition, its performance is limited when training data is insufficient. To tackle these challenges, this paper proposes FE-CFNER, a Feature Enhancement-based approach for Chinese Few-shot NER. FE-CFNER designs a double cross neural network that minimizes information loss through two rounds of feature crossing. Additionally, adaptive weights and a top-k mechanism are introduced to sparsify attention distributions, enabling the model to prioritize entity-related information while excluding irrelevant information. To further improve the quality of BERT embeddings, FE-CFNER employs a contrastive template for contrastive-learning pre-training of BERT, strengthening its semantic understanding. We evaluate the proposed method on four sampled Chinese NER datasets: Weibo, Resume, Taobao, and Youku. Experimental results validate the effectiveness and superiority of FE-CFNER on Chinese few-shot NER tasks.
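The abstract does not specify how the top-k attention sparsification is implemented. Below is a minimal PyTorch sketch of one common way such a mechanism works: per-query attention scores outside the top k are masked to negative infinity before the softmax, so they receive zero weight. The function name `topk_sparse_attention` and its parameters are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k):
    """Scaled dot-product attention keeping only the top-k scores per query.

    Sketch of a common top-k sparsification scheme (an assumption, not the
    paper's exact method): scores below the k-th largest per query row are
    masked to -inf so the softmax assigns them zero weight.
    Shapes: q, k, v are (batch, seq_len, dim).
    """
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5  # (batch, q_len, k_len)

    # Find the k-th largest score per query position and mask everything below it.
    top_vals, _ = scores.topk(min(top_k, scores.size(-1)), dim=-1)
    threshold = top_vals[..., -1:]                             # k-th largest score
    masked = scores.masked_fill(scores < threshold, float("-inf"))

    weights = F.softmax(masked, dim=-1)                        # sparse attention weights
    return torch.matmul(weights, v)

# Example usage with random tensors (self-attention over 8 positions)
x = torch.randn(2, 8, 64)
out = topk_sparse_attention(x, x, x, top_k=3)
print(out.shape)  # torch.Size([2, 8, 64])
```

The masking-before-softmax design keeps the operation differentiable with respect to the surviving scores, which is why it is a standard choice for learned sparse attention; the paper's adaptive-weight component would additionally rescale the fused features, which this sketch omits.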
Journal Overview
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.