{"title":"Data-efficient graph learning: Problems, progress, and prospects","authors":"Kaize Ding, Yixin Liu, Chuxu Zhang, Jianling Wang","doi":"10.1002/aaai.12200","DOIUrl":null,"url":null,"abstract":"<p>Graph-structured data, ranging from social networks to financial transaction networks, from citation networks to gene regulatory networks, have been widely used for modeling a myriad of real-world systems. As a prevailing model architecture to model graph-structured data, graph neural networks (GNNs) have drawn much attention in both academic and industrial communities in the past decades. Despite their success in different graph learning tasks, existing methods usually rely on learning from “big” data, requiring a large amount of labeled data for model training. However, it is common that real-world graphs are associated with “small” labeled data as data annotation and labeling on graphs is always time and resource-consuming. Therefore, it is imperative to investigate graph machine learning (graph ML) with low-cost human supervision for low-resource settings where limited or even no labeled data is available. This paper investigates a new research field—data-efficient graph learning, which aims to push forward the performance boundary of graph ML models with different kinds of low-cost supervision signals. Specifically, we outline the fundamental research problems, review the current progress, and discuss the future prospects of data-efficient graph learning, aiming to illuminate the path for subsequent research in this field.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"549-560"},"PeriodicalIF":2.5000,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12200","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ai Magazine","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aaai.12200","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Data-efficient graph learning: Problems, progress, and prospects
Graph-structured data, ranging from social networks and financial transaction networks to citation networks and gene regulatory networks, have been widely used to model a myriad of real-world systems. As a prevailing architecture for modeling graph-structured data, graph neural networks (GNNs) have drawn much attention in both academia and industry over the past decade. Despite their success on different graph learning tasks, existing methods usually rely on learning from “big” data, requiring a large amount of labeled data for model training. However, real-world graphs commonly come with only “small” labeled data, as annotating and labeling graphs is time- and resource-consuming. It is therefore imperative to investigate graph machine learning (graph ML) with low-cost human supervision for low-resource settings where limited or even no labeled data is available. This paper investigates a new research field, data-efficient graph learning, which aims to push the performance boundary of graph ML models using different kinds of low-cost supervision signals. Specifically, we outline the fundamental research problems, review the current progress, and discuss the future prospects of data-efficient graph learning, aiming to illuminate the path for subsequent research in this field.
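To make the low-resource setting described above concrete, the following is a minimal, hypothetical sketch (assuming PyTorch) of semi-supervised node classification with a two-layer GCN in which only one node per class carries a label; the toy graph, features, and labels are invented for illustration and are not taken from the paper or its experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy undirected graph with 6 nodes; edges are hypothetical index pairs.
edges = torch.tensor([[0, 1], [1, 2], [0, 2], [3, 4], [4, 5], [3, 5], [2, 3]])
num_nodes = 6

# Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
A = torch.zeros(num_nodes, num_nodes)
A[edges[:, 0], edges[:, 1]] = 1.0
A[edges[:, 1], edges[:, 0]] = 1.0
A += torch.eye(num_nodes)
deg_inv_sqrt = A.sum(dim=1).pow(-0.5)
A_hat = deg_inv_sqrt.unsqueeze(1) * A * deg_inv_sqrt.unsqueeze(0)

# Random node features, two classes, and only ONE labeled node per class,
# mimicking the "small labeled data" regime the abstract describes.
X = torch.randn(num_nodes, 8)
y = torch.tensor([0, 0, 0, 1, 1, 1])
train_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[[0, 3]] = True  # only nodes 0 and 3 carry supervision

class GCN(nn.Module):
    """Two-layer graph convolutional network (Kipf & Welling style)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, A_hat, X):
        h = F.relu(A_hat @ self.lin1(X))  # transform, then propagate over the graph
        return A_hat @ self.lin2(h)

model = GCN(8, 16, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

for epoch in range(200):
    opt.zero_grad()
    logits = model(A_hat, X)
    # The loss is computed only on the two labeled nodes; message passing
    # spreads that scarce supervision to the unlabeled nodes.
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss.backward()
    opt.step()

pred = model(A_hat, X).argmax(dim=1)
print("predicted classes:", pred.tolist())
```

The point of the sketch is only that the training signal comes from a tiny labeled subset while the graph structure carries it to the rest of the nodes; data-efficient graph learning, as surveyed in the paper, studies how far such low-cost supervision can be pushed.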
Journal introduction:
AI Magazine publishes original articles that are reasonably self-contained and aimed at a broad spectrum of the AI community. Technical content should be kept to a minimum. In general, the magazine does not publish articles that have been published elsewhere in whole or in part. The magazine welcomes contributions on the theory and practice of AI, as well as general survey articles, tutorial articles on timely topics, conference, symposium, or workshop reports, and timely columns on topics of interest to AI scientists.