Qisheng He, Soumyanil Banerjee, L. Schwiebert, Ming Dong
2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), published 2022-08-01. DOI: 10.1109/MIPR54900.2022.00011
AgileGCN: Accelerating Deep GCN with Residual Connections using Structured Pruning
Deep Graph Convolutional Networks (GCNs) with many layers have achieved state-of-the-art results in applications such as point cloud classification and semantic segmentation. However, they are computationally expensive and incur high run-time latency. In this paper, we propose AgileGCN, a novel framework to compress and accelerate deep GCN models with residual connections using structured pruning. Specifically, in each residual structure of a deep GCN, channel sampling and channel padding are applied to the input and output channels of a convolutional layer, respectively, to significantly reduce its floating point operations (FLOPs) and parameter count. Experimental results on two benchmark point cloud datasets demonstrate that AgileGCN achieves significant reductions in FLOPs and parameters while maintaining the performance of the unpruned models on both point cloud classification and segmentation.
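The channel sampling and padding idea from the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: the function names, the flat per-node feature layout, and the fixed channel index sets are all illustrative. The key point it shows is why padding the pruned output back to full width lets the residual addition go through unchanged while the matrix multiply shrinks.

```python
import numpy as np

def sample_channels(x, keep_idx):
    # Channel sampling: keep only the selected input channels
    # before the (pruned) convolution. x has shape (nodes, C).
    return x[:, keep_idx]

def pad_channels(y, full_width, keep_idx):
    # Channel padding: scatter the pruned layer's outputs back
    # into a zero tensor of the original channel width, so the
    # residual addition still type-checks.
    out = np.zeros((y.shape[0], full_width))
    out[:, keep_idx] = y
    return out

def pruned_residual_layer(x, W, in_idx, out_idx):
    # W is the pruned weight matrix, shape (len(in_idx), len(out_idx)):
    # the multiply costs |in_idx| * |out_idx| per node instead of C * C.
    h = sample_channels(x, in_idx) @ W
    h = pad_channels(h, x.shape[1], out_idx)
    return x + h  # residual connection preserved at full width

# Toy usage: 8 channels pruned down to a 3-in / 2-out layer.
x = np.ones((4, 8))
W = np.full((3, 2), 0.5)
y = pruned_residual_layer(x, W, np.array([0, 2, 4]), np.array([1, 5]))
```

With these shapes the pruned multiply uses 3 x 2 = 6 weights per node instead of 8 x 8 = 64, which is the kind of FLOPs/parameter saving the abstract refers to; unselected output channels simply pass the residual through untouched.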