GreeNX: An Energy-Efficient and Sustainable Approach to Sparse Graph Convolution Networks Accelerators Using DVFS

Siqin Liu; Prakash Chand Kuve; Avinash Karanth
IEEE Transactions on Sustainable Computing, vol. 10, no. 5, pp. 1031-1042. Published online 2025-06-09. DOI: 10.1109/TSUSC.2025.3577218
Graph convolutional networks (GCNs) have emerged as an effective approach to extending deep learning algorithms to graph-based data analytics. However, implementing GCNs over large, sparse datasets is challenging due to irregular computation and dataflow patterns. Specialized GCN accelerators have emerged that deliver superior performance over generic processors. However, prior techniques, which include specialized datapaths, optimized sparse computation, and tuned memory access patterns, handle the different phases of GCNs differently, resulting in excess energy consumption and reduced throughput due to sub-optimal dataflows. In this paper, we propose GreeNX, a computation- and communication-aware GCN accelerator that uniformly applies three complementary techniques to all phases of GCN processing. First, we abstract the computation in both the aggregation and combination phases of GCNs as two cascaded sparse-dense matrix multiplications that are processed uniformly to improve throughput. Second, to mitigate the overheads of processing irregular sparse data, we develop a dynamic-voltage-and-frequency-scaling (DVFS) scheme that groups each row of processing elements (PEs) and dynamically adjusts its applied voltage/frequency (V/F) to improve energy efficiency. Third, we conduct a comprehensive carbon footprint evaluation, analyzing both embodied and operational emissions for GCNs. Extensive simulation and experiments validate that GreeNX consistently reduces memory accesses and energy consumption, leading to an average 7.3× speedup and 5.6× energy savings on six real-world graph datasets over several state-of-the-art GCN accelerators, including HyGCN, AWB-GCN, GCoD, GRIP, IGCN, and LW-GCN.
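The "two cascaded sparse-dense matrix multiplications" view of a GCN layer can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows, using SciPy sparse matrices with illustrative names (A_hat, X, W), how the combination phase (X @ W) and aggregation phase (A_hat @ XW) reduce to the same sparse-dense kernel shape, which is what allows one datapath to serve both phases uniformly.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, f_in, f_out = 6, 4, 3  # nodes, input features, output features

# Normalized sparse adjacency (aggregation operand) and a sparse
# node-feature matrix; the layer weight matrix W is dense.
A_hat = sp.random(n, n, density=0.3, random_state=0, format="csr")
X = sp.random(n, f_in, density=0.5, random_state=1, format="csr")
W = rng.standard_normal((f_in, f_out))

# Combination phase: sparse-dense product X @ W -> dense (n, f_out).
XW = X @ W
# Aggregation phase: sparse-dense product A_hat @ XW -> dense (n, f_out).
H = A_hat @ XW
assert H.shape == (n, f_out)

# The opposite ordering, (A_hat @ X) @ W, yields the same result but a
# different cost profile; either way, both phases are instances of one
# sparse-dense multiplication pattern.
H_alt = (A_hat @ X) @ W
assert np.allclose(H, H_alt)
```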
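The intuition behind the per-row DVFS scheme and the operational/embodied carbon split can also be sketched with a hypothetical back-of-envelope model. The constants, the chosen V/F points, and the function names below are assumptions for illustration, not values from the paper; the only physics used is that dynamic switching energy scales as C·V²·(cycle count), so a PE row with slack can run at a lower V/F point and save energy quadratically in V.

```python
def dynamic_energy(c_eff, volts, freq_hz, cycles):
    """Dynamic energy (J): power C*V^2*f times runtime cycles/f,
    which simplifies to C * V^2 * cycles (independent of f)."""
    return c_eff * volts**2 * freq_hz * (cycles / freq_hz)

# Nominal vs. scaled V/F point for a PE row processing a sparse tile
# (illustrative numbers: 1 nF effective capacitance, 1e6 cycles).
e_nominal = dynamic_energy(1e-9, 1.0, 1.0e9, 1_000_000)
e_scaled = dynamic_energy(1e-9, 0.8, 0.7e9, 1_000_000)
assert e_scaled < e_nominal  # savings come from the V^2 term

def total_carbon_kg(energy_kwh, grid_kg_per_kwh, embodied_kg):
    """Footprint split per the abstract: operational emissions
    (energy times grid carbon intensity) plus embodied emissions."""
    return energy_kwh * grid_kg_per_kwh + embodied_kg
```

Lowering frequency alone does not reduce dynamic energy in this model (the f terms cancel); it is the accompanying voltage reduction that the row-level grouping enables which yields the savings.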