A position-aware attention model based on double-level contrastive learning for hyper-relational knowledge graph representation in emergency management
IF 3.4 · Q1 · Public, Environmental & Occupational Health
Xinzhi Wang, Weijian Zhu, Jiang Kai, Xiangfeng Luo, Jianqiang Huang
DOI: 10.1016/j.jnlssr.2025.100223
Journal of Safety Science and Resilience, Vol. 7, No. 1, Article 100223. Published 2025-07-11.
URL: https://www.sciencedirect.com/science/article/pii/S266644962500057X
Citations: 0
Abstract
Effective emergency management relies on timely risk identification and decision-making, in which natural language processing plays a vital role. Hyper-relational knowledge graph (HKG) representation, which embeds entities and their complex relations into a latent space, provides a strong foundation for supporting emergency response. Existing methods consider either inter-entity or inter-fact dependencies, losing the interaction information at whichever level they ignore (the fact level or the entity level). To address this issue, we propose a position-aware attention model based on dual-level contrastive learning (PDCL) for HKG representation. First, complete and co-occurrence graphs are constructed and encoded with separate graph convolutional networks, yielding distinct embedding views for entities and facts. Second, entity-level and fact-level contrastive objectives are designed to enhance information exchange between the two levels in a self-supervised manner. Finally, a linear transformation tied to the ordinal position of each element integrates positional constraints into the HKG representation. Experimental results on three benchmark datasets show that PDCL outperforms existing state-of-the-art methods; in particular, MRR and Hits@1 improve by up to 1.8% and 3.3%, respectively.
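The dual-level contrastive objective described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the InfoNCE loss form, the temperature `tau`, and the level-weighting `alpha` are assumptions, and in the paper the two views of each entity or fact would come from GCN encoders over the complete and co-occurrence graphs rather than raw arrays.

```python
import numpy as np

def info_nce(view_a: np.ndarray, view_b: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE loss: row i of view_a and row i of view_b form a positive pair;
    all other rows of view_b act as negatives. Assumed loss form, not the paper's exact one."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = (a @ b.T) / tau  # pairwise cosine-similarity logits
    # row-wise log-softmax; the positive pairs sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def dual_level_loss(ent_view1, ent_view2, fact_view1, fact_view2, alpha: float = 0.5) -> float:
    """Combine an entity-level and a fact-level contrastive term.
    alpha is a hypothetical weighting hyperparameter."""
    entity_term = info_nce(ent_view1, ent_view2)    # entity-level objective
    fact_term = info_nce(fact_view1, fact_view2)    # fact-level objective
    return alpha * entity_term + (1.0 - alpha) * fact_term

# Usage with random stand-ins for the two embedding views of 8 entities and 8 facts:
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
f1, f2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
loss = dual_level_loss(e1, e2, f1, f2)
```

Pulling each embedding toward its counterpart in the other view, while pushing it away from all other rows, is what lets information flow between the entity and fact levels in a self-supervised way.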