{"title":"DCBGCN:一种具有高内存和计算效率的深度图卷积网络训练算法","authors":"Weile Liu, Zhihao Tang, Lei Wang, Min Li","doi":"10.1109/AEMCSE50948.2020.00011","DOIUrl":null,"url":null,"abstract":"Graph convolutional network (GCN) has recently become a popular major focus of network representation learning (NRL). However, training a deep GCN is still quite challenging. Stacking more layers in GCN suffers vanishing gradients and GPU memory limitation and significant computational overhead. Vanishing gradients causes over-smoothing, which leads to node embedding converging to the same value. Node dependence leads to requirement to keep all the embedding in GPU memory. Neighbourhood expansion problem across GCN layers leads to significant computational overhead. In order to solve these issues, we present a model named DCBGCN (Deep and Cluster Boosting Graph Convolutional Network), which firstly uses MEITS to partition the whole graph into sub-graphs, then secondly adapts residual/dense connections between GCN layers. Extensive experiment results on PPI and Reddit tell the truth that our model can go deep with 56-layer GCN and has strong advantages in improving memory and computational efficiency. Meanwhile, we achieve promising test F1 score results on PPI and Reddit.","PeriodicalId":246841,"journal":{"name":"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)","volume":"21 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"DCBGCN: An Algorithm with High Memory and Computational Efficiency for Training Deep Graph Convolutional Network\",\"authors\":\"Weile Liu, Zhihao Tang, Lei Wang, Min Li\",\"doi\":\"10.1109/AEMCSE50948.2020.00011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph convolutional network (GCN) has recently become a popular major focus of network representation learning (NRL). However, training a deep GCN is still quite challenging. Stacking more layers in GCN suffers vanishing gradients and GPU memory limitation and significant computational overhead. Vanishing gradients causes over-smoothing, which leads to node embedding converging to the same value. Node dependence leads to requirement to keep all the embedding in GPU memory. Neighbourhood expansion problem across GCN layers leads to significant computational overhead. In order to solve these issues, we present a model named DCBGCN (Deep and Cluster Boosting Graph Convolutional Network), which firstly uses MEITS to partition the whole graph into sub-graphs, then secondly adapts residual/dense connections between GCN layers. Extensive experiment results on PPI and Reddit tell the truth that our model can go deep with 56-layer GCN and has strong advantages in improving memory and computational efficiency. 
Meanwhile, we achieve promising test F1 score results on PPI and Reddit.\",\"PeriodicalId\":246841,\"journal\":{\"name\":\"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)\",\"volume\":\"21 2\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AEMCSE50948.2020.00011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 3rd International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AEMCSE50948.2020.00011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DCBGCN: An Algorithm with High Memory and Computational Efficiency for Training Deep Graph Convolutional Network
Graph convolutional networks (GCNs) have recently become a major focus of network representation learning (NRL). However, training a deep GCN is still quite challenging: stacking more layers suffers from vanishing gradients, GPU memory limitations, and significant computational overhead. Vanishing gradients cause over-smoothing, in which all node embeddings converge to the same value. Node dependence requires keeping every node's embedding in GPU memory. The neighbourhood expansion problem across GCN layers incurs significant computational overhead. To address these issues, we present DCBGCN (Deep and Cluster Boosting Graph Convolutional Network), which first uses METIS to partition the whole graph into sub-graphs and then adds residual/dense connections between GCN layers. Extensive experiments on PPI and Reddit show that our model can go as deep as 56 GCN layers and has strong advantages in memory and computational efficiency, while achieving promising test F1 scores on both datasets.
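The abstract names two ingredients: Cluster-GCN-style METIS partitioning, which confines each training step to one sub-graph, and ResNet-style residual connections between GCN layers. The paper itself gives no code, so what follows is only a minimal NumPy sketch of how the two fit together; the helper names (normalize_adj, residual_gcn_layer, cluster_block), the toy graph, and the fixed two-way split standing in for a real METIS call are all illustrative assumptions, not the authors' implementation.

import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^(-1/2) (A + I) D^(-1/2).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # self-loops keep degrees >= 1
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def residual_gcn_layer(A_norm, H, W):
    # One propagation step, H' = ReLU(A_norm H W) + H. The identity
    # shortcut is what lets the stack go deep without every embedding
    # collapsing to the same value (over-smoothing).
    return np.maximum(A_norm @ H @ W, 0.0) + H

def cluster_block(A, X, nodes):
    # Restrict adjacency and features to one cluster. Dropping the
    # between-cluster edges bounds neighbourhood expansion: each layer
    # only ever touches nodes inside the cluster.
    return A[np.ix_(nodes, nodes)], X[nodes]

# Toy run: 12 nodes, 8 features, a random undirected graph, 4 stacked layers.
rng = np.random.default_rng(0)
A = (rng.random((12, 12)) < 0.3).astype(float)
A = np.maximum(A, A.T)
X = rng.standard_normal((12, 8))

# Stand-in for a METIS partition: two fixed halves (hypothetical split).
for nodes in (np.arange(6), np.arange(6, 12)):
    A_c, H = cluster_block(A, X, nodes)
    A_norm = normalize_adj(A_c)
    for _ in range(4):                      # deep stack; residuals keep it trainable
        W = rng.standard_normal((8, 8)) * 0.1
        H = residual_gcn_layer(A_norm, H, W)

In the full method one would replace the stand-in split with actual METIS cluster assignments and learn the weight matrices by backpropagation; the sketch only shows why the residual shortcut and the per-cluster adjacency block target over-smoothing and neighbourhood expansion, respectively.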