Supporting x86-64 address translation for 100s of GPU lanes
Jason Power, M. Hill, D. Wood
2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/HPCA.2014.6835965
Published: 2014-06-19
Citations: 148
Abstract
Efficient memory sharing between CPU and GPU threads can greatly expand the effective set of GPGPU workloads. For increased programmability, this memory should be uniformly virtualized, necessitating compatible address translation support for GPU memory references. However, even a modest GPU might need 100s of translations per cycle (6 CUs × 64 lanes/CU) with memory access patterns designed for throughput more than locality. To drive GPU MMU design, we examine GPU memory reference behavior with the Rodinia benchmarks and a database sort to find: (1) the coalescer and scratchpad memory are effective TLB bandwidth filters (reducing the translation rate by 6.8x on average), (2) TLB misses occur in bursts (60 concurrently on average), and (3) post-coalescer TLBs have high miss rates (29% average). We show how a judicious combination of extant CPU MMU ideas satisfies GPU MMU demands for 4 KB pages with minimal overheads (an average of less than 2% over ideal address translation). This proof-of-concept design uses per-compute-unit TLBs, a shared highly-threaded page table walker, and a shared page walk cache.
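The abstract's first finding, that the coalescer filters TLB bandwidth, can be illustrated with a minimal sketch (not taken from the paper; page size and lane count follow the abstract, all access patterns are hypothetical): lane addresses within one SIMD memory instruction are merged by 4 KB page before reaching the TLB, so a unit-stride access by 64 lanes needs far fewer translations than 64, while a scattered, throughput-oriented pattern can still demand one translation per lane.

```python
PAGE_SIZE = 4096  # 4 KB pages, the size evaluated in the paper
LANES = 64        # lanes per compute unit (CU), per the abstract

def coalesced_translations(addresses, page_size=PAGE_SIZE):
    """Number of distinct page translations one SIMD memory
    instruction needs after coalescing its per-lane addresses."""
    return len({addr // page_size for addr in addresses})

base = 0x10000  # arbitrary page-aligned base for illustration

# Unit-stride 4-byte accesses by 64 lanes fall in a single page...
stride_addrs = [base + 4 * lane for lane in range(LANES)]
print(coalesced_translations(stride_addrs))   # 1 translation

# ...while a scattered pattern (one lane per 8 KB) defeats the filter.
scatter_addrs = [base + 8192 * lane for lane in range(LANES)]
print(coalesced_translations(scatter_addrs))  # 64 translations
```

This is why the post-coalescer translation rate can be far below one-per-lane on average (6.8x lower in the paper's workloads) yet still burst to many concurrent misses on irregular access patterns.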