Authors: Qingwang Wang; Pengcheng Jin; Hao Xiong; Yuhang Wu; Xu Lin; Tao Shen; Jiangbo Huang; Jun Cheng; Yanfeng Gu
DOI: 10.1109/TETCI.2025.3542127
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 9, no. 3, pp. 2296-2307
Published: 2025-02-26
URL: https://ieeexplore.ieee.org/document/10904257/
Broad Graph Attention Network With Multiple Kernel Mechanism
Graph neural networks (GNNs) are highly effective models for tasks involving non-Euclidean data. To improve their performance, researchers have explored strategies for increasing the depth of GNN structures, following the example of convolutional neural network (CNN)-based deep networks. However, GNNs that rely on information aggregation mechanisms typically struggle to achieve superior representation performance because of deep feature oversmoothing. Inspired by the broad learning system, in this study we instead attempt to avoid the feature-oversmoothing issue by expanding the width of GNNs. We propose a broad graph attention network framework with a multikernel mechanism (BGAT-MK). In particular, we construct a broad GNN using multikernel mapping to generate several reproducing kernel Hilbert spaces (RKHSs), in which nodes can wander through different kernel spaces and generate representations. Furthermore, we broaden the network by aggregating the representations from the different RKHSs and by fusing adaptive weights to combine the original and enhanced mapped representations. The efficacy of BGAT-MK is validated through experiments on conventional node classification and light detection and ranging (LiDAR) point cloud semantic segmentation tasks, on which it demonstrates superior performance.
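To make the abstract's core idea concrete, the following is a minimal NumPy sketch of the general pattern it describes: mapping node features through several kernels (here, RBF kernels with different bandwidths, standing in for the multiple RKHSs) and fusing the resulting representations with adaptive softmax weights. All function names, the choice of RBF kernels, and the fusion details are illustrative assumptions, not the authors' actual BGAT-MK implementation; in particular, the graph attention component is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def multikernel_representations(X, gammas):
    # One representation per kernel space: K @ X propagates each node's
    # features according to that kernel's similarity structure
    # (a stand-in for mapping nodes into different RKHSs).
    return [rbf_kernel(X, g) @ X for g in gammas]

def fuse(reps, logits):
    # Adaptive weights via a softmax over (in practice, learnable) logits,
    # followed by a weighted sum of the per-kernel representations.
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return sum(wi * r for wi, r in zip(w, reps))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                     # 5 nodes, 3 features each
reps = multikernel_representations(X, gammas=[0.1, 1.0, 10.0])
fused = fuse(reps, logits=np.zeros(3))          # uniform weights here
print(fused.shape)  # (5, 3)
```

In the paper's framework the fusion weights would be learned jointly with the network, and the per-kernel representations would pass through graph attention layers rather than a plain kernel-matrix product; this sketch only shows the width-expansion-and-fusion skeleton.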
Journal Introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication that publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few illustrative examples are glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.