Exploring Node Classification Uncertainty in Graph Neural Networks

Md. Farhadul Islam, Sarah Zabeen, Fardin Bin Rahman, Md. Azharul Islam, Fahmid Bin Kibria, Meem Arafat Manab, Dewan Ziaul Karim, Annajiat Alim Rasel
DOI: 10.1145/3564746.3587019
Published in: Proceedings of the 2023 ACM Southeast Conference, 2023-04-12

Abstract

Graph Neural Networks (GNNs) offer a robust framework for representing and investigating interconnected data, deftly combining graph theory with machine learning. Most studies focus on predictive performance, while uncertainty measurement receives little attention. In this study, we measure the predictive uncertainty of several GNN models to show that high accuracy does not ensure reliable predictions. We apply dropout during the inference phase to quantify the uncertainty of these models. This method, known as Monte Carlo Dropout (MCD), is an effective low-complexity approximation for estimating uncertainty. Our investigation used a benchmark dataset with five GNN models: Graph Convolutional Network (GCN), Graph Attention Network (GAT), Personalized Propagation of Neural Predictions (PPNP), PPNP's fast approximation (APPNP), and GraphSAGE. GAT proved superior to all the other models in node classification, in terms of both accuracy and uncertainty. Among the remaining models, some that fared better on accuracy fell behind when compared on classification uncertainty.
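The MCD procedure described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the toy graph, random (untrained) weights, dropout rate, and the choice of predictive entropy as the uncertainty score are all assumptions made for the sketch. The key point is that dropout stays active during the repeated inference passes, and the spread of the resulting predictions is summarized per node.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A_norm, X, W1, W2, p_drop):
    # Two-layer GCN; dropout is kept ACTIVE at inference (the MCD trick)
    H = np.maximum(A_norm @ X @ W1, 0.0)            # GCN layer + ReLU
    mask = (rng.random(H.shape) > p_drop) / (1.0 - p_drop)  # inverted dropout
    return softmax(A_norm @ (H * mask) @ W2)

# Toy graph: 4 nodes on a path, 3 features, 2 classes.
# Weights are random stand-ins, not trained parameters.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
W1 = rng.standard_normal((3, 8))
W2 = rng.standard_normal((8, 2))
A_norm = normalize_adj(A)

T = 200  # number of stochastic forward passes
probs = np.stack([gcn_forward(A_norm, X, W1, W2, p_drop=0.5) for _ in range(T)])
mean_p = probs.mean(axis=0)  # predictive mean per node, shape (4, 2)
# Predictive entropy as a per-node uncertainty score
entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
```

With a trained model, nodes whose predictions flip across the T passes end up with high entropy even when their single-pass accuracy looks fine, which is exactly the gap between "high performance" and "reliable performance" that the study highlights.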