DD-L1D: Improving the Decoupled L1D Efficiency for GPU Architecture
Weiguang Yang, Yuxin Wang, Yulong Yu, Guang-yuan Kan, He Guo
2017 International Conference on Networking, Architecture, and Storage (NAS), August 2017
DOI: 10.1109/NAS.2017.8026851
GPU L1 data cache contention, caused by the huge number of concurrent threads, leads to poor cache utilization and poor performance, especially for cache-unfriendly applications. Cache bypassing is a widely used method to alleviate this problem, and Decoupled L1D (D-L1D) is a preventive bypassing scheme that improves performance for cache-unfriendly applications by considering the data locality of memory access streams. However, our experiments and analyses show that D-L1D attains only limited performance gains because of its pre-defined locality threshold. To address this issue, we propose a novel bypassing scheme, Dynamic D-L1D (DD-L1D), which steers accesses away from the contended L1 data cache by dynamically updating the locality threshold at runtime. We evaluate four metrics for indicating the L1 cache bypassing state in DD-L1D, and choose the bypassing miss rate in our final configuration. The experimental results demonstrate that DD-L1D improves baseline performance by 1.45X on average for cache-unfriendly benchmarks. It also outperforms D-L1D and state-of-the-art GPU cache bypassing schemes, with lower hardware overhead and memory traffic.
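The abstract describes the mechanism only at a high level: per epoch, the hardware observes the bypassing miss rate and adjusts the locality threshold accordingly. As a minimal illustrative sketch of such a feedback loop (the class, the epoch-based update rule, and the 0.5 target are assumptions for illustration, not details taken from the paper), the controller might look like:

```python
class DynamicThresholdController:
    """Hypothetical sketch of a DD-L1D-style feedback loop.

    Assumed convention: accesses whose locality score falls below the
    threshold are bypassed, so raising the threshold bypasses more
    accesses. At each epoch boundary the controller inspects the
    bypassing miss rate (the fraction of bypassed accesses that would
    have missed in L1 anyway) and nudges the threshold.
    """

    def __init__(self, threshold=4, lo=1, hi=16, target_miss_rate=0.5):
        self.threshold = threshold      # current locality threshold
        self.lo, self.hi = lo, hi       # clamp range for the threshold
        self.target = target_miss_rate  # assumed control target
        self.bypassed = 0               # bypassed accesses this epoch
        self.bypass_misses = 0          # of those, would-have-missed

    def record_bypass(self, would_have_missed):
        """Account for one bypassed access within the current epoch."""
        self.bypassed += 1
        if would_have_missed:
            self.bypass_misses += 1

    def end_epoch(self):
        """Update the threshold from this epoch's bypassing miss rate."""
        if self.bypassed == 0:
            return self.threshold  # nothing observed; keep threshold
        miss_rate = self.bypass_misses / self.bypassed
        if miss_rate >= self.target:
            # Bypassed accesses would mostly have missed anyway:
            # bypassing is paying off, so bypass more aggressively.
            self.threshold = min(self.hi, self.threshold + 1)
        else:
            # Many bypassed accesses would have hit: we are discarding
            # locality, so bypass less aggressively.
            self.threshold = max(self.lo, self.threshold - 1)
        self.bypassed = self.bypass_misses = 0  # reset for next epoch
        return self.threshold
```

The key property this sketch mirrors is that, unlike D-L1D's fixed pre-defined threshold, the decision boundary moves with the observed runtime behavior of the workload.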