Neighborhood Graph Neural Networks under Random Perturbations and Quantization Errors

Leila Ben Saad, Nama Ajay Nagendra, B. Beferull-Lozano
{"title":"Neighborhood Graph Neural Networks under Random Perturbations and Quantization Errors","authors":"Leila Ben Saad, Nama Ajay Nagendra, B. Beferull-Lozano","doi":"10.1109/spawc51304.2022.9834020","DOIUrl":null,"url":null,"abstract":"Graph convolutional neural networks (GCNNs) have emerged as a promising tool in the deep learning community to learn complex hidden relationships of data generated from non-Euclidean domains and represented as graphs. GCNNs are formed by a cascade of layers of graph filters, which replace the classical convolution operation in convolutional neural networks. These graph filters, when operated over real networks, can be subject to random perturbations due to link losses that can be caused by noise, interference and adversarial attacks. In addition, these graph filters are executed by finite-precision processors, which generate numerical quantization errors that may affect their performance. Despite the research works studying the effect of either graph perturbations or quantization in GCNNs, their robustness against both of these problems jointly is still not well investigated and understood. In this paper, we propose a quantized GCNN architecture based on neighborhood graph filters under random graph perturbations. We investigate the stability of such architecture to both random graph perturbations and quantization errors. We prove that the expected error due to quantization and random graph perturbations at the GCNN output is upper-bounded and we show how this bound can be controlled. Numerical experiments are conducted to corroborate our theoretical findings.","PeriodicalId":423807,"journal":{"name":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/spawc51304.2022.9834020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Graph convolutional neural networks (GCNNs) have emerged as a promising tool in the deep learning community to learn complex hidden relationships in data that are generated from non-Euclidean domains and represented as graphs. GCNNs are formed by a cascade of layers of graph filters, which replace the classical convolution operation of convolutional neural networks. When deployed over real networks, these graph filters can be subject to random perturbations due to link losses caused by noise, interference, and adversarial attacks. In addition, the filters are executed on finite-precision processors, which introduce numerical quantization errors that may degrade performance. Although prior works have studied the effect of either graph perturbations or quantization in GCNNs, their robustness against both problems jointly is still not well investigated or understood. In this paper, we propose a quantized GCNN architecture based on neighborhood graph filters operating under random graph perturbations. We investigate the stability of this architecture to both random graph perturbations and quantization errors. We prove that the expected error at the GCNN output due to quantization and random graph perturbations is upper-bounded, and we show how this bound can be controlled. Numerical experiments corroborate our theoretical findings.
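To make the setting described in the abstract concrete, the sketch below shows one GCNN layer under illustrative assumptions that are not taken from the paper itself: the neighborhood graph filter is modeled as a polynomial in a graph shift operator S (each power of S aggregates information from one further hop of the neighborhood), random link losses are modeled as independent Bernoulli edge drops, and finite precision is modeled by a uniform quantizer applied after each shift. All function names and parameters are hypothetical and chosen for illustration only.

```python
import numpy as np

def random_link_loss(S, p_loss, rng):
    """Drop each edge of the symmetric shift operator S independently with
    probability p_loss (an illustrative model of random link losses)."""
    keep = rng.random(S.shape) > p_loss          # Bernoulli keep/drop decisions
    keep = np.triu(keep, 1)                      # decide each edge once (upper triangle)
    keep = keep + keep.T + np.eye(S.shape[0], dtype=bool)  # symmetrize, keep diagonal
    return S * keep

def uniform_quantize(x, step):
    """Uniform quantizer with step size `step`, modeling finite-precision arithmetic."""
    return step * np.round(x / step)

def neighborhood_graph_filter(S, x, h, step=None):
    """Polynomial graph filter y = sum_k h[k] * S^k x.
    The k-th term aggregates the signal from each node's k-hop neighborhood.
    If `step` is given, every intermediate shift is quantized."""
    y = np.zeros_like(x)
    z = x.copy()
    for k, hk in enumerate(h):
        if k > 0:
            z = S @ z                            # one more hop of aggregation
            if step is not None:
                z = uniform_quantize(z, step)    # finite-precision shift output
        y += hk * z
    return y

def gcnn_layer(S, X, H, sigma=np.tanh, p_loss=0.0, step=None, rng=None):
    """One GCNN layer: a bank of neighborhood graph filters followed by a
    pointwise nonlinearity, evaluated on a randomly perturbed graph.
    X has shape (N, F_in); H has shape (F_out, F_in, K+1) of filter taps."""
    rng = rng or np.random.default_rng()
    S_used = random_link_loss(S, p_loss, rng) if p_loss > 0 else S
    F_out, F_in, _ = H.shape
    Y = np.zeros((X.shape[0], F_out))
    for f in range(F_out):
        for g in range(F_in):
            Y[:, f] += neighborhood_graph_filter(S_used, X[:, g], H[f, g], step)
    return sigma(Y)
```

Stacking several such layers gives the cascade described in the abstract. Running the same input through an ideal pipeline (p_loss=0, step=None) and through the perturbed, quantized pipeline, and averaging the squared output difference over many random realizations, yields an empirical estimate of the expected output error whose upper bound is the subject of the paper.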