A 7.3 M Output Non-Zeros/J Sparse Matrix-Matrix Multiplication Accelerator using Memory Reconfiguration in 40 nm

S. Pal, Dong-hyeon Park, Siying Feng, Paul Gao, Jielun Tan, A. Rovinski, Shaolin Xie, Chun Zhao, Aporva Amarnath, Tim Wesley, Jonathan Beaumont, Kuan-Yu Chen, C. Chakrabarti, M. Taylor, T. Mudge, D. Blaauw, Hun-Seok Kim, R. Dreslinski

2019 Symposium on VLSI Technology, June 2019, pp. C150-C151. DOI: 10.23919/VLSIT.2019.8776507
A sparse matrix-matrix multiplication (SpMM) accelerator with 48 heterogeneous cores and a reconfigurable memory hierarchy is fabricated in 40 nm CMOS. On-chip memories are reconfigured as scratchpad or cache and interconnected with synthesizable coalescing crossbars for efficient memory access in each phase of the algorithm. The $2.0\ \text{mm} \times 2.6\ \text{mm}$ chip exhibits a $12.6\times$ ($8.4\times$) energy-efficiency gain, an $11.7\times$ ($77.6\times$) off-chip bandwidth-efficiency gain, and a $17.1\times$ ($36.9\times$) compute-density gain over a high-end CPU (GPU) across a diverse set of synthetic and real-world power-law-graph-based sparse matrices.
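The abstract refers to distinct phases of the SpMM algorithm, each with its own memory-access pattern, and reports energy efficiency per output non-zero. As a point of reference only, the sketch below illustrates one common two-phase (multiply, then merge) outer-product formulation of SpMM in plain Python; the data layout, the function name spmm_outer_product, and the tiny example matrices are assumptions made for illustration and do not describe the chip's actual dataflow.

```python
# Hypothetical illustration of two-phase outer-product SpMM.
# NOT the accelerator's implementation; it only sketches why SpMM is often
# split into phases with different memory-access patterns, and what
# "output non-zeros" counts.

from collections import defaultdict

def spmm_outer_product(A_csc, B_csr):
    """Multiply sparse A (stored by columns) with sparse B (stored by rows).

    A_csc: dict {k: [(i, a_ik), ...]}  -- column k of A as (row, value) pairs
    B_csr: dict {k: [(j, b_kj), ...]}  -- row k of B as (col, value) pairs
    Returns C as dict-of-dicts {i: {j: c_ij}} and the output non-zero count.
    """
    # Phase 1 (multiply): each column k of A meets the matching row k of B,
    # producing partial products grouped by output row.
    partials = defaultdict(list)                # i -> [(j, a_ik * b_kj), ...]
    for k in A_csc:
        for i, a_ik in A_csc[k]:
            for j, b_kj in B_csr.get(k, ()):
                partials[i].append((j, a_ik * b_kj))

    # Phase 2 (merge): partial products for each output row are accumulated
    # into final non-zeros.
    C = {}
    for i, plist in partials.items():
        row = defaultdict(float)
        for j, v in plist:
            row[j] += v
        C[i] = dict(row)

    nnz_out = sum(len(r) for r in C.values())   # "output non-zeros"
    return C, nnz_out

# Tiny usage example: A = [[1, 2], [0, 3]], B = [[0, 4], [5, 0]].
A = {0: [(0, 1.0)], 1: [(0, 2.0), (1, 3.0)]}    # columns of A
B = {0: [(1, 4.0)], 1: [(0, 5.0)]}              # rows of B
C, nnz = spmm_outer_product(A, B)
print(C, nnz)   # {0: {1: 4.0, 0: 10.0}, 1: {0: 15.0}}, 3 output non-zeros
```

The two phases stress memory very differently: the multiply phase streams through columns of A and rows of B, while the merge phase performs irregular accumulation into output rows. That contrast is the kind of workload a hierarchy able to switch between scratchpad and cache modes is meant to serve, though the mapping above is only an assumed software analogy.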