{"title":"通过利用局部和全局相似性来减轻图少射学习中的过度压缩","authors":"Yassin Mohamadi, Mostafa Haghir Chehreghani","doi":"10.1016/j.asoc.2025.113863","DOIUrl":null,"url":null,"abstract":"<div><div>Supervised machine learning models, particularly neural networks, often fail to deliver satisfactory results in scenarios with insufficient data. This becomes even more challenging when dealing with inherently complex data, such as graph data. This paper addresses the issue of learning with a limited number of samples, known as <span><math><mi>n</mi></math></span>-way <span><math><mi>k</mi></math></span>-shot learning, within the context of graph data. Our research extends the concept of similarity from neighboring nodes to the entire graph by leveraging transitivity relations. By employing edges and strong transitivity relations, we utilize a bipartite graph neural network that capitalizes on both local neighborhoods and distant, yet similar, nodes to generate node embeddings. This approach has demonstrated effectiveness in tasks such as node classification. Our proposed model’s ability to mitigate the over-squashing problem enhances its generalizability, resulting in a task-invariant model. Experimental results on various graph datasets show that the embeddings produced by our model are not task-specific. Consequently, our model outperforms other models in few-shot learning scenarios, where only a limited number of labeled nodes are available for each distinct downstream task.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"184 ","pages":"Article 113863"},"PeriodicalIF":6.6000,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mitigating over-squashing in graph few-shot learning by leveraging local and global similarities\",\"authors\":\"Yassin Mohamadi, Mostafa Haghir Chehreghani\",\"doi\":\"10.1016/j.asoc.2025.113863\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Supervised machine learning models, particularly neural networks, often fail to deliver satisfactory results in scenarios with insufficient data. This becomes even more challenging when dealing with inherently complex data, such as graph data. This paper addresses the issue of learning with a limited number of samples, known as <span><math><mi>n</mi></math></span>-way <span><math><mi>k</mi></math></span>-shot learning, within the context of graph data. Our research extends the concept of similarity from neighboring nodes to the entire graph by leveraging transitivity relations. By employing edges and strong transitivity relations, we utilize a bipartite graph neural network that capitalizes on both local neighborhoods and distant, yet similar, nodes to generate node embeddings. This approach has demonstrated effectiveness in tasks such as node classification. Our proposed model’s ability to mitigate the over-squashing problem enhances its generalizability, resulting in a task-invariant model. Experimental results on various graph datasets show that the embeddings produced by our model are not task-specific. 
Consequently, our model outperforms other models in few-shot learning scenarios, where only a limited number of labeled nodes are available for each distinct downstream task.</div></div>\",\"PeriodicalId\":50737,\"journal\":{\"name\":\"Applied Soft Computing\",\"volume\":\"184 \",\"pages\":\"Article 113863\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2025-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Soft Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1568494625011767\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1568494625011767","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Mitigating over-squashing in graph few-shot learning by leveraging local and global similarities
Supervised machine learning models, particularly neural networks, often fail to deliver satisfactory results in scenarios with insufficient data. This becomes even more challenging when dealing with inherently complex data, such as graph data. This paper addresses the issue of learning with a limited number of samples, known as n-way k-shot learning, within the context of graph data. Our research extends the concept of similarity from neighboring nodes to the entire graph by leveraging transitivity relations. By employing edges and strong transitivity relations, we utilize a bipartite graph neural network that capitalizes on both local neighborhoods and distant, yet similar, nodes to generate node embeddings. This approach has demonstrated effectiveness in tasks such as node classification. Our proposed model’s ability to mitigate the over-squashing problem enhances its generalizability, resulting in a task-invariant model. Experimental results on various graph datasets show that the embeddings produced by our model are not task-specific. Consequently, our model outperforms other models in few-shot learning scenarios, where only a limited number of labeled nodes are available for each distinct downstream task.
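The abstract describes the architecture only at a high level. As a rough illustration of the core idea of aggregating over both local neighborhoods and distant-but-similar nodes, the following is a minimal PyTorch sketch; everything in it (the LocalGlobalLayer module, the global_similarity_edges helper, and the cosine-similarity threshold tau used as a stand-in for the paper's strong transitivity relations) is a hypothetical assumption, not the authors' implementation.

```python
# Minimal sketch (not the paper's method): message passing that aggregates
# over (i) local edges and (ii) "global" edges linking a node to distant but
# similar nodes. Global edges are approximated here by a cosine-similarity
# threshold as an illustrative stand-in for strong transitivity relations.
import torch
import torch.nn as nn
import torch.nn.functional as F


def global_similarity_edges(x: torch.Tensor, tau: float = 0.9) -> torch.Tensor:
    """Dense 0/1 adjacency linking node pairs whose feature cosine similarity
    exceeds tau (self-loops removed). Purely illustrative."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # (N, N)
    adj = (sim > tau).float()
    adj.fill_diagonal_(0.0)
    return adj


class LocalGlobalLayer(nn.Module):
    """One layer mixing mean-aggregated local and global neighborhoods with a
    transformed self term (hypothetical design, not the published model)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_local = nn.Linear(in_dim, out_dim)
        self.lin_global = nn.Linear(in_dim, out_dim)

    @staticmethod
    def _mean_aggregate(adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Row-normalized neighborhood average; clamp avoids division by zero.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return (adj @ x) / deg

    def forward(self, x, adj_local, adj_global):
        h_local = self.lin_local(self._mean_aggregate(adj_local, x))
        h_global = self.lin_global(self._mean_aggregate(adj_global, x))
        return F.relu(self.lin_self(x) + h_local + h_global)


if __name__ == "__main__":
    # Toy example: 10 nodes with 16-dimensional features and a random local graph.
    x = torch.randn(10, 16)
    adj_local = (torch.rand(10, 10) < 0.2).float()
    adj_local = ((adj_local + adj_local.t()) > 0).float()  # symmetrize
    adj_global = global_similarity_edges(x, tau=0.5)
    embeddings = LocalGlobalLayer(16, 32)(x, adj_local, adj_global)
    print(embeddings.shape)  # torch.Size([10, 32])
```

In an n-way k-shot episode, one would sample n classes and k labeled support nodes per class, fit a simple classifier on their embeddings, and evaluate it on the remaining query nodes; the embeddings themselves can be computed once and reused across tasks, consistent with the abstract's claim that they are not task-specific.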
Journal description:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.