A review of challenges and solutions in the design and implementation of deep graph neural networks

Aafaq Mohi ud din, Shaima Qureshi

International Journal of Computers and Applications, 105(1): 221-230
Published: 2022-10-18 (Journal Article)
DOI: 10.1080/1206212X.2022.2133805
JCR: Q2 (Computer Science)
Citations: 1

Abstract

The study of graph neural networks has revealed that they can unleash new applications across a variety of disciplines through a process so basic that it has no counterpart in other deep learning designs. Several limitations constrain their expressiveness, and researchers are working to overcome them in order to fully exploit the power of graph data. A number of publications explore the restrictions and bottlenecks of graph neural networks (GNNs), but the common thread running through them all is that these issues trace back to message passing, the key technique used to train graph models. In this study we outline the general GNN design pipeline, discuss solutions to the over-smoothing problem, categorize those solutions, and identify open challenges for further research.

Abbreviations: CGNN: Continuous Graph Neural Networks; CNN: Convolutional Neural Network; DeGNN: Decomposition Graph Neural Network; DGN: Directional Graph Networks; DGN: Differentiable Group Normalization; DL: Deep Learning; EGAI: Enhancing GNNs by a High-quality Aggregation of Beneficial Information; GAT: Graph Attention Network; GCN: Graph Convolutional Network; GDC: Graph Drop Connect; GDR: Group Distance Ratio; GNN: Graph Neural Network; GRAND: Graph Random Neural Networks; IIG: Instance Information Gain; MAD: Mean Average Distance; PDE-GCN: Partial Differential Equations-GCN; PTDNet: Parameterized Topological Denoising Network; TDGNN: Tree Decomposition Graph Neural Network
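The abstract turns on two technical ideas that a small sketch can make concrete: message passing, in which each node updates its representation by aggregating its neighbors' features, and over-smoothing, in which repeated aggregation drives all node representations toward the same vector. MAD (Mean Average Distance, listed in the abbreviations) is one metric used to quantify the latter. The following is a minimal NumPy sketch under assumed toy inputs; the path graph, the propagate helper, and the mad helper are illustrative constructions, not code from the paper under review.

    import numpy as np

    def propagate(A, H):
        """One message-passing step: each node takes the mean of its own and
        its neighbors' features (GCN-style mean aggregation)."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # inverse degree matrix
        return D_inv @ A_hat @ H                  # row-normalized aggregation

    def mad(H):
        """Mean Average Distance: mean pairwise cosine distance between node
        representations; values near 0 signal over-smoothed embeddings."""
        H_unit = H / np.linalg.norm(H, axis=1, keepdims=True)
        cos_sim = H_unit @ H_unit.T
        off_diag = ~np.eye(H.shape[0], dtype=bool)
        return (1.0 - cos_sim)[off_diag].mean()

    rng = np.random.default_rng(0)
    n = 6
    A = np.zeros((n, n))
    for i in range(n - 1):                        # toy graph: a 6-node path
        A[i, i + 1] = A[i + 1, i] = 1.0
    H = rng.standard_normal((n, 4))               # random 4-d node features

    for depth in (1, 2, 4, 8, 16, 32):
        Hk = H
        for _ in range(depth):
            Hk = propagate(A, Hk)
        print(f"{depth:2d} propagation steps -> MAD = {mad(Hk):.4f}")

Running the loop shows MAD falling toward zero as the number of propagation steps grows: repeated averaging on a connected graph pushes every node's representation toward the same vector. This is the over-smoothing behavior that the kinds of solutions the survey categorizes (e.g. normalization schemes such as DGN, or edge-dropping schemes such as GDC) try to counteract.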
Source journal

International Journal of Computers and Applications
Category: Computer Science - Computer Graphics and Computer-Aided Design
CiteScore: 4.70
Self-citation rate: 0.00%
Annual publications: 20
Journal introduction: The International Journal of Computers and Applications (IJCA) is a unique platform for publishing novel ideas, research outcomes and fundamental advances in all aspects of Computer Science, Computer Engineering, and Computer Applications. This is a peer-reviewed international journal with a vision to provide the academic and industrial community a platform for presenting original research ideas and applications. IJCA welcomes four special types of papers in addition to the regular research papers within its scope: (a) Papers for which all results could be easily reproducible. For such papers, the authors will be asked to upload "instructions for reproduction", possibly with the source codes or stable URLs (from where the codes could be downloaded). (b) Papers with negative results. For such papers, the experimental setting and negative results must be presented in detail, and why the negative results are important for the research community must be explained clearly. The rationale behind this kind of paper is that it helps researchers choose the correct approaches to solve problems and avoid approaches already shown to fail. (c) Detailed reports, case studies and literature review articles about innovative software/hardware, new technology, high-impact computer applications and future development, with sufficient background and subject coverage. (d) Special issue papers focusing on a particular theme of significant importance, or papers selected from a relevant conference with sufficient improvement and new material to differentiate them from the papers published in the conference proceedings.