Intra-Clustering: Accelerating On-chip Communication for Data Parallel Architectures
Wen Yuan, R. Boyapati, Lei Wang, Hyunjun Jang, Yuho Jin, K. H. Yum, Eun Jung Kim
2015 International Symposium on Computer Architecture and High Performance Computing Workshop (SBAC-PADW), October 18, 2015. DOI: 10.1109/SBAC-PADW.2015.15
Modern computation workloads contain abundant Data Level Parallelism (DLP), which calls for specialized data parallel architectures such as Graphics Processing Units (GPUs). With parallel programming models such as CUDA and OpenCL, GPUs can easily be programmed for non-graphics applications, making them a cost-effective platform for data parallel computing. The large amount of available parallelism, however, places heavy stress on the memory system, since the limited pin count restricts the number of memory controllers on the chip. This creates a potential bottleneck for GPU performance scalability. To accelerate communication with the memory system, we propose Intra-Clustering, an on-chip network for data parallel architectures that builds on a traditional two-dimensional electrical mesh, with memory controllers connected through a nanophotonic ring and compute cores grouped into clusters. Our evaluations with CUDA benchmarks show that the Intra-Clustering architecture reduces communication delay by an average of 17% (up to 32%) and improves IPC by an average of 5% (up to 11.5%).
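The latency claim is easiest to see with a toy hop-count model. The sketch below is a back-of-the-envelope illustration, not the paper's evaluation methodology: it assumes an 8x8 mesh, four edge-placed memory controllers, address-interleaved requests, a gateway at the center of each 4x4 cluster, and a roughly distance-independent traversal cost for the nanophotonic ring. All of these parameters and placements are hypothetical.

```python
# Toy latency comparison: all-electrical 2D mesh vs. an Intra-Clustering-style
# layout where a core only crosses its own cluster before a nanophotonic ring
# carries the request the rest of the way. Every constant below is an
# illustrative assumption, not a figure from the paper.
from itertools import product

MESH = 8          # assumed 8x8 grid of compute cores
HOP = 1.0         # cost of one electrical mesh hop (arbitrary units)
RING = 1.0        # assumed roughly distance-independent photonic ring cost

# Assumed placement: one memory controller near each edge of the mesh.
MCS = [(0, 3), (3, 0), (7, 4), (4, 7)]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def mesh_only(core):
    # Addresses are interleaved across controllers, so a request targets a
    # uniformly random MC and pays the full mesh distance to reach it.
    return HOP * sum(manhattan(core, mc) for mc in MCS) / len(MCS)

def intra_clustering(core):
    # The core crosses only its 4x4 cluster to a local gateway (assumed at
    # the cluster center); the photonic ring then reaches any controller at
    # a flat cost, regardless of which one owns the address.
    gateway = (4 * (core[0] // 4) + 2, 4 * (core[1] // 4) + 2)
    return HOP * manhattan(core, gateway) + RING

cores = list(product(range(MESH), repeat=2))
base = sum(mesh_only(c) for c in cores) / len(cores)
clus = sum(intra_clustering(c) for c in cores) / len(cores)
print(f"mesh-only average:        {base:.2f} hop-units")
print(f"intra-clustering average: {clus:.2f} hop-units")
```

Under these assumptions the model gives 5.5 hop-units for the mesh-only layout versus 3.0 for the clustered one. The absolute numbers are meaningless, but the structural effect (cores pay only an intra-cluster distance before the ring flattens the rest of the trip) matches the intuition behind the reported communication-delay reduction.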