Title: Quantum entropy structural encoding for graph neural networks
Authors: Feng Ding, Yingbo Wang, Shuo Yu, Yanming Shen
DOI: 10.1016/j.knosys.2025.114580
Journal: Knowledge-Based Systems, Volume 330, Article 114580 (Impact Factor 7.6, JCR Q1, Computer Science, Artificial Intelligence)
Publication date: 2025-10-11
URL: https://www.sciencedirect.com/science/article/pii/S0950705125016193
Citations: 0
Abstract
Structural encoding (SE) can improve the expressive power of Graph Neural Networks (GNNs). However, current SE methods have limited expressive power because they fall short in capturing (1) node subgraphs, (2) the global position of nodes, and (3) the global structure of the graph. To tackle this challenge, we propose a Quantum Entropy Structural Encoding (QESE) for GNNs. For limitations (1) and (3), we apply quantum entropy to node subgraphs and the whole graph to recognize highly similar structures. For limitation (2), we apply quantum entropy to the complements of node subgraphs to locate node positions. We then obtain QESE by integrating the quantum entropies of these three parts through the Holevo χ quantity. Notably, we prove that QESE always captures structural distinctions in node subgraphs and the whole graph, and that the Holevo χ quantity empowers QESE to represent the global position of nodes. We theoretically show that QESE distinguishes strongly regular graphs that 3-WL fails to distinguish, and that it has the potential to be more powerful than k-WL (k>3). We adopt a plug-and-play approach to inject QESE into existing GNNs, and further design an approximate version to reduce computational complexity. Experimental results show that QESE lifts the expressive power of GNNs beyond 3-WL and indeed captures node subgraphs. Furthermore, QESE improves the performance of various GNNs on graph learning tasks and also surpasses other SE methods.
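The abstract does not spell out the paper's exact construction, but quantum entropy on a graph is commonly defined as the von Neumann entropy of a density matrix built from the trace-normalized graph Laplacian, and the Holevo χ quantity measures how much a mixture of such density matrices differs from its components. The sketch below illustrates these two quantities under those assumptions; the Laplacian-based density matrix and the uniform mixing weights in the usage example are illustrative choices, not the paper's definitions.

```python
import numpy as np

def density_matrix(adj):
    # Build a density matrix from the graph Laplacian L = D - A,
    # normalized by its trace so the eigenvalues sum to 1.
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return lap / np.trace(lap)

def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i log2 lambda_i over the eigenvalues of rho.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop zeros: 0 * log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(rhos, probs):
    # chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i); always >= 0
    # by the concavity of the von Neumann entropy.
    mixed = sum(p * r for p, r in zip(probs, rhos))
    return von_neumann_entropy(mixed) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, rhos))

# Usage: a triangle's Laplacian has eigenvalues (0, 3, 3), so its density
# matrix has spectrum (0, 0.5, 0.5) and entropy exactly 1 bit.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(von_neumann_entropy(density_matrix(triangle)))   # 1.0
print(holevo_chi([density_matrix(triangle), density_matrix(path)],
                 [0.5, 0.5]))                          # non-negative
```

A positive χ here reflects that the triangle and path subgraph spectra differ, which is the kind of structural distinction the encoding is built to expose.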
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.