{"title":"基于对比句的句子级关系语义学习。","authors":"Bowen Xing,Ivor W Tsang","doi":"10.1109/tpami.2025.3607794","DOIUrl":null,"url":null,"abstract":"Sentence-level semantics plays a key role in language understanding. There exist subtle relations and dependencies among sentence-level samples, which is to be exploited. For example, in relational triple extraction, existing models overemphasize extraction modules, ignoring the sentence-level semantics and relation information, which causes (1) the semantics fed to extraction modules is relation-unaware; (2) each sample is trained individually without considering inter-sample dependency. To address these issues, we first propose the model-agnostic multi-relation detection task, which incorporates relation information into text encoding to generate the relation-aware semantics. Then we propose the model-agnostic multi-relation supervised contrastive learning, which leverages the relation-derived inter-sample dependencies as a supervised signal to learn discriminative semantics via drawing together or pushing away the sentence-level semantics regarding whether they share the same/similar relations. Besides, we design the reverse label frequency weighting and hierarchical label embedding mechanisms to alleviate label imbalance and integrate relation hierarchy. Our method can be applied to any RTE model and we conduct extensive experiments on five backbones by augmenting them with our method. Experimental results on four public benchmarks show that our method can bring significant and consistent improvements to various backbones and model analysis further verify the effectiveness of our method.","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"1 1","pages":""},"PeriodicalIF":18.6000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sentence-level Relation Semantics Learning via Contrastive Sentences.\",\"authors\":\"Bowen Xing,Ivor W Tsang\",\"doi\":\"10.1109/tpami.2025.3607794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sentence-level semantics plays a key role in language understanding. There exist subtle relations and dependencies among sentence-level samples, which is to be exploited. For example, in relational triple extraction, existing models overemphasize extraction modules, ignoring the sentence-level semantics and relation information, which causes (1) the semantics fed to extraction modules is relation-unaware; (2) each sample is trained individually without considering inter-sample dependency. To address these issues, we first propose the model-agnostic multi-relation detection task, which incorporates relation information into text encoding to generate the relation-aware semantics. Then we propose the model-agnostic multi-relation supervised contrastive learning, which leverages the relation-derived inter-sample dependencies as a supervised signal to learn discriminative semantics via drawing together or pushing away the sentence-level semantics regarding whether they share the same/similar relations. Besides, we design the reverse label frequency weighting and hierarchical label embedding mechanisms to alleviate label imbalance and integrate relation hierarchy. Our method can be applied to any RTE model and we conduct extensive experiments on five backbones by augmenting them with our method. 
Experimental results on four public benchmarks show that our method can bring significant and consistent improvements to various backbones and model analysis further verify the effectiveness of our method.\",\"PeriodicalId\":13426,\"journal\":{\"name\":\"IEEE Transactions on Pattern Analysis and Machine Intelligence\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":18.6000,\"publicationDate\":\"2025-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Pattern Analysis and Machine Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tpami.2025.3607794\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Pattern Analysis and Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tpami.2025.3607794","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Sentence-level Relation Semantics Learning via Contrastive Sentences.
Sentence-level semantics plays a key role in language understanding. Subtle relations and dependencies exist among sentence-level samples, yet they remain largely unexploited. For example, in relational triple extraction (RTE), existing models overemphasize the extraction modules while ignoring sentence-level semantics and relation information, which causes two problems: (1) the semantics fed to the extraction modules are relation-unaware; (2) each sample is trained individually, without considering inter-sample dependencies. To address these issues, we first propose the model-agnostic multi-relation detection task, which incorporates relation information into text encoding to generate relation-aware semantics. We then propose model-agnostic multi-relation supervised contrastive learning, which leverages relation-derived inter-sample dependencies as a supervision signal to learn discriminative semantics by drawing together or pushing apart sentence-level representations according to whether they share the same or similar relations. In addition, we design reverse label-frequency weighting and hierarchical label embedding mechanisms to alleviate label imbalance and integrate the relation hierarchy. Our method can be applied to any RTE model, and we conduct extensive experiments by augmenting five backbones with it. Experimental results on four public benchmarks show that our method brings significant and consistent improvements to various backbones, and model analyses further verify its effectiveness.
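To make the contrastive objective concrete, the following is a minimal PyTorch sketch of supervised contrastive learning over sentence-level representations driven by shared relation labels, as described in the abstract. The function name `relation_supcon_loss`, the tensor shapes, and the temperature value are illustrative assumptions rather than the authors' exact formulation.

```python
# Hypothetical sketch: sentences that share at least one relation are treated as
# positive pairs and pulled together; sentences sharing no relation are pushed apart.
import torch
import torch.nn.functional as F

def relation_supcon_loss(sent_emb, relation_labels, temperature=0.1):
    """
    sent_emb:        (B, d) sentence-level representations from any RTE encoder.
    relation_labels: (B, R) multi-hot relation annotations per sentence.
    """
    z = F.normalize(sent_emb, dim=-1)            # compare in cosine space
    sim = (z @ z.t()) / temperature              # (B, B) pairwise similarities

    # Positive pairs: sentences sharing >= 1 relation, excluding self-pairs.
    shared = (relation_labels.float() @ relation_labels.float().t()) > 0
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = shared & ~eye

    # Log-softmax over all other samples, then average over each anchor's positives.
    sim = sim.masked_fill(eye, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_count

    # Only anchors that actually have positives contribute to the loss.
    has_pos = pos_mask.any(dim=1)
    return per_anchor[has_pos].mean() if has_pos.any() else sent_emb.new_zeros(())
```

Under the same assumptions, the reverse label-frequency weighting mentioned in the abstract could, for instance, rescale each anchor's term by the inverse training-set frequency of its relations, so that sentences carrying rare relations contribute more strongly to the loss.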
Journal Introduction:
The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition and relevant specialized hardware and/or software architectures are also covered.