Graph Neural Networks and Applied Linear Algebra

Impact Factor: 10.8 | CAS Tier 1 (Mathematics) | Q1 MATHEMATICS, APPLIED
SIAM Review | Publication date: 2025-02-06 | DOI: 10.1137/23m1609786
Nicholas S. Moore, Eric C. Cyr, Peter Ohm, Christopher M. Siefert, Raymond S. Tuminaro
{"title":"Graph Neural Networks and Applied Linear Algebra","authors":"Nicholas S. Moore, Eric C. Cyr, Peter Ohm, Christopher M. Siefert, Raymond S. Tuminaro","doi":"10.1137/23m1609786","DOIUrl":null,"url":null,"abstract":"SIAM Review, Volume 67, Issue 1, Page 141-175, March 2025. <br/> Abstract.Sparse matrix computations are ubiquitous in scientific computing. Given the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NNs). Unfortunately, multilayer perceptron (MLP) NNs are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs, while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of different nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g., those arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one such approach suitable to sparse matrices. The key idea is to define aggregation functions (e.g., summations) that operate on variable-size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete GNN examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative and multigrid methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined a priori as well as cases where parameters must be learned. The intent of this paper is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will further stimulate data-driven extensions of classical sparse linear algebra tasks.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"40 1","pages":""},"PeriodicalIF":10.8000,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Review","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/23m1609786","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

SIAM Review, Volume 67, Issue 1, Page 141-175, March 2025.
Sparse matrix computations are ubiquitous in scientific computing. Given the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NNs). Unfortunately, multilayer perceptron (MLP) NNs are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs, while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of different nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g., those arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one such approach suitable to sparse matrices. The key idea is to define aggregation functions (e.g., summations) that operate on variable-size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete GNN examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative and multigrid methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined a priori as well as cases where parameters must be learned. The intent of this paper is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will further stimulate data-driven extensions of classical sparse linear algebra tasks.
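
To make the aggregation idea concrete, here is a minimal sketch (our illustration, not code from the paper) that writes the sparse matrix-vector product y = Ax as a GNN-style neighborhood aggregation: each row i of A is a vertex of the matrix graph, its nonzero columns j are the neighboring vertices, the message sent along edge (j, i) is A[i, j] * x[j], and a summation collapses the variable number of incoming messages into one fixed-size value per vertex. The function name matvec_as_aggregation and the weighted Jacobi sweep at the end are illustrative assumptions, not interfaces defined by the authors.

# Minimal sketch: sparse mat-vec expressed as message passing over the matrix graph.
import numpy as np
from scipy.sparse import csr_matrix

def matvec_as_aggregation(A, x):
    """Compute y = A @ x one vertex (row) at a time via neighborhood sums."""
    y = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        start, end = A.indptr[i], A.indptr[i + 1]
        neighbors = A.indices[start:end]       # columns with nonzeros in row i
        weights = A.data[start:end]            # the edge values A[i, j]
        y[i] = np.sum(weights * x[neighbors])  # variable-size input -> fixed-size output
    return y

if __name__ == "__main__":
    # 1D Poisson-like tridiagonal matrix as a small test case.
    n = 5
    A = csr_matrix(np.diag(2.0 * np.ones(n))
                   - np.diag(np.ones(n - 1), 1)
                   - np.diag(np.ones(n - 1), -1))
    x = np.arange(1.0, n + 1.0)
    assert np.allclose(matvec_as_aggregation(A, x), A @ x)

    # One weighted Jacobi relaxation sweep, x <- x + omega * D^{-1} (b - A x),
    # built from the same aggregation kernel (omega = 2/3 is an arbitrary choice here).
    b = np.ones(n)
    omega = 2.0 / 3.0
    x_relaxed = x + omega * (b - matvec_as_aggregation(A, x)) / A.diagonal()

Because the summation accepts any number of neighbors, the same kernel applies to matrices of arbitrary size and sparsity pattern, which is precisely the property that makes fixed-input MLP pipelines awkward and GNN-style aggregation natural for sparse linear algebra.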
Source journal: SIAM Review (Mathematics, Applied Mathematics)
CiteScore: 16.90
Self-citation rate: 0.00%
Articles published: 50
About the journal: Survey and Review feature papers that provide an integrative and current viewpoint on important topics in applied or computational mathematics and scientific computing. These papers aim to offer a comprehensive perspective on the subject matter. Research Spotlights publish concise research papers in applied and computational mathematics that are of interest to a wide range of readers in SIAM Review. The papers in this section present innovative ideas that are clearly explained and motivated. They stand out from regular publications in specific SIAM journals due to their accessibility and potential for widespread and long-lasting influence.