Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan
{"title":"利用双重对比学习框架和交叉注意模块加强零镜头关系提取","authors":"Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan","doi":"10.1007/s40747-024-01642-6","DOIUrl":null,"url":null,"abstract":"<p>Zero-shot relation extraction (ZSRE) is essential for improving the understanding of natural language relations and enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, the existing ZSRE models ignore the importance of semantic information fusion and possess limitations when used for zero-shot relation extraction tasks. Thus, this paper proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, our model designs a dual contrastive learning framework to compare the input sentences and relation descriptions from different perspectives; this process aims to achieve better separation between different relation categories in the representation space. Moreover, the cross-attention network of our model is introduced from the computer vision field to enhance the attention paid by the input instance to the relevant information of the relation description. 
The experimental results obtained on the Wiki-ZSL and FewRel datasets fully demonstrate the effectiveness of our approach.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"11 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module\",\"authors\":\"Diyou Li, Lijuan Zhang, Jie Huang, Neal Xiong, Lei Zhang, Jian Wan\",\"doi\":\"10.1007/s40747-024-01642-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Zero-shot relation extraction (ZSRE) is essential for improving the understanding of natural language relations and enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, the existing ZSRE models ignore the importance of semantic information fusion and possess limitations when used for zero-shot relation extraction tasks. Thus, this paper proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, our model designs a dual contrastive learning framework to compare the input sentences and relation descriptions from different perspectives; this process aims to achieve better separation between different relation categories in the representation space. Moreover, the cross-attention network of our model is introduced from the computer vision field to enhance the attention paid by the input instance to the relevant information of the relation description. 
The experimental results obtained on the Wiki-ZSL and FewRel datasets fully demonstrate the effectiveness of our approach.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01642-6\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01642-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Enhancing zero-shot relation extraction with a dual contrastive learning framework and a cross-attention module
Zero-shot relation extraction (ZSRE) is essential for improving the understanding of relations expressed in natural language and for enhancing the accuracy and efficiency of natural language processing methods in practical applications. However, existing ZSRE models overlook the importance of semantic information fusion and are therefore limited on zero-shot relation extraction tasks. This paper thus proposes a dual contrastive learning framework and a cross-attention network module for ZSRE. First, the model employs a dual contrastive learning framework that compares input sentences and relation descriptions from different perspectives, aiming to better separate the relation categories in the representation space. Second, a cross-attention network, adapted from the computer vision field, strengthens the attention that the input instance pays to the relevant information in the relation description. Experimental results on the Wiki-ZSL and FewRel datasets demonstrate the effectiveness of the approach.
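The two mechanisms the abstract names can be illustrated generically. The sketch below is not the paper's implementation; it shows (a) scaled dot-product cross-attention, where sentence tokens attend over relation-description tokens, and (b) an InfoNCE-style contrastive loss that pulls each sentence embedding toward its paired relation description and pushes it away from the other pairs in the batch. All shapes, names, and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value):
    """Scaled dot-product cross-attention: each token of `query`
    (e.g. the input sentence, shape (Lq, d)) attends over the tokens
    of `key_value` (e.g. the relation description, shape (Lk, d))."""
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)    # (Lq, Lk) attention logits
    return softmax(scores, axis=-1) @ key_value  # (Lq, d) fused representation

def info_nce(sent_emb, rel_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: the i-th sentence embedding
    should match the i-th relation-description embedding (the
    positive) and repel every other pair in the batch (negatives)."""
    s = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    r = rel_emb / np.linalg.norm(rel_emb, axis=1, keepdims=True)
    logits = s @ r.T / temperature               # (B, B) cosine similarities
    log_probs = np.log(softmax(logits, axis=1))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal
```

A "dual" framework in this spirit would apply such a contrastive objective from two perspectives (e.g. sentence-to-description and description-to-sentence), but the exact formulation is specific to the paper.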
About the journal:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques aimed at cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research on which the journal focuses will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.