Hybrid variable spiking graph neural networks for energy-efficient scientific machine learning

Isha Jain, Shailesh Garg, Shaurya Shriyam, Souvik Chakraborty

Journal of The Mechanics and Physics of Solids, Volume 200, Article 106152. Published 2025-04-24.
DOI: 10.1016/j.jmps.2025.106152
Citations: 0
Abstract
Graph-based representations of samples in computational mechanics datasets can prove instrumental when dealing with problems such as irregular domains or the molecular structures of materials. To analyze and process such datasets effectively, deep learning offers Graph Neural Networks (GNNs), which employ techniques like message passing within their architecture. The issue, however, is that as individual graphs scale and/or the GNN architecture becomes increasingly complex, the growing energy budget of the overall deep learning model makes it unsustainable and restricts its use in settings like edge computing. To overcome this, we propose Variable Spiking Graph Neural Networks (VS-GNNs) and their hybrid variants, collectively termed VS-GNN architectures, which employ Variable Spiking Neurons (VSNs) to promote sparse communication and hence reduce the overall energy budget. VSNs, while promoting sparse, event-driven computation, also perform well on regression tasks, which are often encountered in computational mechanics applications and are the main target of this paper. Three examples dealing with the prediction of mechanical properties of materials from their microscale/mesoscale structures are used to test the performance of the proposed VS-GNN architectures on regression tasks. We compare the performance of VS-GNN architectures against vanilla GNNs, GNNs using leaky integrate-and-fire neurons, and GNNs using recurrent leaky integrate-and-fire neurons. The results show that VS-GNN architectures perform well on regression tasks, all while promoting sparse communication and, hence, energy efficiency.
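The abstract contrasts binary leaky integrate-and-fire (LIF) neurons with variable spiking neurons whose sparse outputs still carry magnitude information, which is what makes them usable for regression. The paper's exact VSN formulation is not reproduced here, so the following is only an illustrative NumPy sketch: a standard LIF neuron emitting binary spikes, a hypothetical graded variant that emits the membrane potential instead of a 1 when it fires, and a stateless one-step message-passing layer that applies such a thresholded activation. All function names and parameter choices (`beta`, `threshold`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lif_spiking_sequence(inputs, beta=0.9, threshold=1.0):
    """Standard leaky integrate-and-fire (LIF) neuron over a time sequence.

    The membrane potential leaks by factor `beta`, accumulates the input,
    and emits a binary spike (with a soft reset) on crossing `threshold`.
    Most time steps emit nothing, giving sparse, event-driven communication,
    but the binary output discards magnitude information.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = beta * v + x          # leaky integration
        if v >= threshold:
            spikes.append(1.0)    # binary spike
            v -= threshold        # soft reset
        else:
            spikes.append(0.0)
    return spikes

def graded_spiking_sequence(inputs, beta=0.9, threshold=1.0):
    """Hypothetical graded variant: when the neuron fires, emit the membrane
    potential itself rather than a binary 1. The output stays sparse (zero
    below threshold) but retains magnitude, which is the property needed for
    regression targets. This is a sketch, not the paper's VSN definition.
    """
    v = 0.0
    out = []
    for x in inputs:
        v = beta * v + x
        if v >= threshold:
            out.append(v)         # graded (continuous-valued) spike
            v = 0.0               # hard reset
        else:
            out.append(0.0)
    return out

def spiking_message_pass(adj, features, weight, threshold=1.0):
    """One simplified message-passing step: aggregate neighbor features via
    the adjacency matrix, apply a linear transform, then a thresholded
    activation elementwise. Stateless for brevity; a real spiking layer
    would carry membrane state across time steps.
    """
    h = adj @ features @ weight
    return np.where(h >= threshold, h, 0.0)
```

With `beta=1.0` and `threshold=1.0`, a constant input of 0.5 makes the LIF neuron fire every second step, while the graded variant additionally reports how far above threshold the potential was, so downstream layers receive sparse but real-valued messages.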
About the journal
The aim of Journal of The Mechanics and Physics of Solids is to publish research of the highest quality and of lasting significance on the mechanics of solids. The scope is broad, from fundamental concepts in mechanics to the analysis of novel phenomena and applications. Solids are interpreted broadly to include both hard and soft materials as well as natural and synthetic structures. The approach can be theoretical, experimental or computational. This research activity sits within engineering science and the allied areas of applied mathematics, materials science, bio-mechanics, applied physics, and geophysics.
The Journal was founded in 1952 by Rodney Hill, who was its Editor-in-Chief until 1968. The topics of interest to the Journal evolve with developments in the subject but its basic ethos remains the same: to publish research of the highest quality relating to the mechanics of solids. Thus, emphasis is placed on the development of fundamental concepts of mechanics and novel applications of these concepts based on theoretical, experimental or computational approaches, drawing upon the various branches of engineering science and the allied areas within applied mathematics, materials science, structural engineering, applied physics, and geophysics.
The main purpose of the Journal is to foster scientific understanding of the processes of deformation and mechanical failure of all solid materials, both technological and natural, and the connections between these processes and their underlying physical mechanisms. In this sense, the content of the Journal should reflect the current state of the discipline in analysis, experimental observation, and numerical simulation. In the interest of achieving this goal, authors are encouraged to consider the significance of their contributions for the field of mechanics and the implications of their results, in addition to describing the details of their work.