Mingyuan Li, Yanlin Yang, Lei Meng, Lu Peng, Haixing Zhao, Zhonglin Ye
Title: Self-supervised hypergraph structure learning
DOI: 10.1007/s10462-025-11199-6
Journal: Artificial Intelligence Review, 58(6)
Published: 2025-03-29 (Journal Article)
JCR: Q1 (Computer Science, Artificial Intelligence)
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s10462-025-11199-6.pdf
Article page: https://link.springer.com/article/10.1007/s10462-025-11199-6
Citations: 0
Abstract
Traditional Hypergraph Neural Networks (HGNNs) often assume that hypergraph structures are perfectly constructed, yet real-world hypergraphs are typically corrupted by noise, missing data, or irrelevant information, limiting the effectiveness of hypergraph learning. To address this challenge, we propose SHSL, a novel Self-supervised Hypergraph Structure Learning framework that jointly explores and optimizes hypergraph structures without external labels. SHSL consists of two key components: a self-organizing initialization module that constructs latent hypergraph representations, and a differentiable optimization module that refines hypergraphs through gradient-based learning. These modules collaboratively capture high-order dependencies to enhance hypergraph representations. Furthermore, SHSL introduces a dual learning mechanism to simultaneously guide structure exploration and optimization within a unified framework. Experiments on six public datasets demonstrate that SHSL outperforms state-of-the-art baselines, achieving accuracy improvements of 1.36%–32.37% and 2.23%–27.54% on hypergraph exploration and optimization tasks, respectively, and 1.19%–8.4% on non-hypergraph datasets. Robustness evaluations further validate SHSL's effectiveness under noisy and incomplete scenarios, highlighting its practical applicability. The implementation of SHSL and all experimental code is publicly available at: https://github.com/MingyuanLi88888/SHSL.
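The abstract does not specify SHSL's internals, but the HGNNs it builds on propagate node features through a hypergraph incidence matrix. As background, here is a minimal NumPy sketch of the standard normalized hypergraph convolution of Feng et al. (HGNN), X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X, on a toy hypergraph; the incidence matrix, features, and weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy hypergraph: 4 nodes, 2 hyperedges (incidence matrix H, shape n x m).
# Hyperedge 0 = {0, 1, 2}, hyperedge 1 = {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)

X = np.eye(4)  # node features (identity, for illustration)
W = np.eye(2)  # diagonal hyperedge weights

# Degree matrices: node degrees weighted by W, and hyperedge sizes.
Dv_inv_sqrt = np.diag(1.0 / np.sqrt((H @ W).sum(axis=1)))
De_inv = np.diag(1.0 / H.sum(axis=0))

# One step of normalized hypergraph convolution (HGNN-style):
# X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X
X_out = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X

print(X_out.shape)  # (4, 4)
```

Methods that learn the structure itself, as SHSL does, effectively treat H as an optimizable quantity rather than a fixed input; the propagation above is what a refined H would then feed.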
Journal description:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.