Incorporating Template-Based Contrastive Learning into Cognitively Inspired, Low-Resource Relation Extraction

IF 4.3 | CAS Zone 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yandan Zheng, Luu Anh Tuan
Citations: 0

Abstract


Incorporating Template-Based Contrastive Learning into Cognitively Inspired, Low-Resource Relation Extraction

From unstructured text, relation extraction (RE) predicts semantic relationships between pairs of entities. Labeling tokens and phrases can be very expensive, requiring a great deal of time and effort. The low-resource relation extraction (LRE) problem therefore arises and is challenging, since only a limited number of annotated sentences are available. Recent research has focused on minimizing the cross-entropy loss between pseudo labels and ground truth, or on using external knowledge to annotate unlabeled data. Existing methods, however, fail to take into account the semantics of relation types and the information hidden within different relation groups. Drawing inspiration from how humans interpret unstructured documents, we introduce Template-based Contrastive Learning (TempCL). Through the use of templates, we limit the model's attention to the semantic information contained in a relation. We then employ a contrastive learning strategy from both group-wise and instance-wise perspectives to leverage shared semantic information within the same relation type and achieve a more coherent semantic representation. In particular, under limited annotation settings, the proposed group-wise contrastive learning minimizes the discrepancy between template and original sentences in the same label group and maximizes the difference between those from separate label groups. Our experimental results on two public datasets show that TempCL achieves state-of-the-art results for low-resource relation extraction in comparison to baselines, with relative error reductions ranging from 0.68% to 1.32%. Our model encourages the feature to be aligned with both the original and template sentences. Using two contrastive losses, we exploit shared semantic information underlying sentences (both original and template) that have the same relation type.
We demonstrate that our method reduces the noise caused by unrelated tokens and constrains the model's attention to related tokens.
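The group-wise objective described in the abstract can be illustrated as an InfoNCE-style loss that pulls template and original sentence embeddings with the same relation label together and pushes different-label pairs apart. This is a minimal sketch reconstructed from the abstract, not the authors' implementation: the function name, the temperature value, and averaging over all same-label positives are assumptions.

```python
import numpy as np

def group_wise_contrastive_loss(orig_emb, temp_emb, labels, tau=0.1):
    """InfoNCE-style group-wise loss: for each original-sentence
    embedding, template embeddings with the same relation label are
    positives and all other templates are negatives.

    orig_emb, temp_emb: (n, d) L2-normalised sentence embeddings.
    labels: (n,) integer relation labels.
    """
    logits = orig_emb @ temp_emb.T / tau              # (n, n) scaled cosine similarities
    pos_mask = labels[:, None] == labels[None, :]     # same-label pairs are positives
    # row-wise log-softmax: every template competes as a candidate
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative mean log-likelihood of the positives, per anchor
    per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return float(per_anchor.mean())
```

In the paper this term works jointly with an instance-wise contrastive loss; only the group-wise term is sketched here. The loss is low when same-label original/template embeddings are close and grows when an anchor is closer to templates from other label groups.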

Source journal
Cognitive Computation (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; NEUROSCIENCES)
CiteScore: 9.30
Self-citation rate: 3.70%
Articles per year: 116
Review time: >12 weeks
Journal introduction: Cognitive Computation is an international, peer-reviewed, interdisciplinary journal that publishes cutting-edge articles describing original basic and applied work involving biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems. It provides a new platform for the dissemination of research, current practices and future trends in the emerging discipline of cognitive computation that bridges the gap between life sciences, social sciences, engineering, physical and mathematical sciences, and humanities.