CMD: A Cache-Assisted GPU Memory Deduplication Architecture
Authors: Wei Zhao; Dan Feng; Wei Tong; Xueliang Wei; Bing Wu
DOI: 10.1109/TCAD.2025.3552674
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 44, no. 10, pp. 3752-3763
Published: 2025-03-18 (Journal Article)
JCR: Q2 (Computer Science, Hardware & Architecture); Impact Factor: 2.9
URL: https://ieeexplore.ieee.org/document/10930882/
Citations: 0
Abstract
Massive off-chip accesses in graphics processing units (GPUs) are the main performance bottleneck. We find that many writes are duplicates, and the duplication can be inter-dup or intra-dup: inter-dup means different memory blocks are identical, while intra-dup means all the 4B elements in a line are the same. In this work, we propose a cache-assisted GPU memory deduplication architecture named CMD to reduce off-chip accesses by exploiting the data duplication in GPU applications. CMD makes three key design contributions, each aimed at reducing one kind of access: 1) a novel GPU memory deduplication architecture that removes intra-dup and inter-dup lines, with several techniques for managing duplicate blocks that eliminate massive off-chip writes; 2) a cache-assisted read scheme that reduces reads to duplicate data: when an L2 cache miss targets a duplicate block whose reference block has already been fetched into L2 and is clean, the reference block is copied to the missed block without accessing off-chip DRAM, while reads to intra-dup data are served from the on-chip metadata cache; and 3) a fully associative FIFO victim buffer: normally, when a cache line is evicted, its clean sectors are invalidated and its dirty sectors are written back, yet most read-only victims are re-referenced from DRAM more than twice, so we buffer the read-only (and therefore clean) victims in the FIFO to reduce the re-reference counts. Experiments show that CMD decreases off-chip accesses by 31.01%, reduces energy by 32.78%, and improves performance by 42.53%; for memory-intensive workloads, CMD improves performance by 57.56%.
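The two duplication classes the abstract defines can be illustrated with a small sketch. Intra-dup detection (all 4B elements in a line equal) follows directly from the definition; for inter-dup, the abstract does not say how identical blocks are found, so the fingerprint-then-compare approach below (and the 128B line size) is an assumption common to deduplication designs, not the paper's mechanism:

```python
import hashlib

LINE_SIZE = 128   # assumed cache-line size in bytes (not stated in the abstract)
WORD_SIZE = 4     # the abstract defines intra-dup over 4B elements

def is_intra_dup(line: bytes) -> bool:
    """Intra-dup: every 4B element in the line has the same value, so
    the whole line can be represented by a single 4B word in metadata."""
    first = line[:WORD_SIZE]
    return all(line[i:i + WORD_SIZE] == first
               for i in range(0, len(line), WORD_SIZE))

def block_fingerprint(line: bytes) -> str:
    """Hypothetical inter-dup candidate test: fingerprint whole lines;
    lines with equal fingerprints would be byte-compared before sharing
    one physical reference block."""
    return hashlib.sha1(line).hexdigest()

# An all-zero line is intra-dup: only its 4B pattern needs storing.
zeros = bytes(LINE_SIZE)
assert is_intra_dup(zeros)

# Two byte-identical lines are inter-dup candidates.
a = bytes(range(WORD_SIZE)) * (LINE_SIZE // WORD_SIZE)
assert block_fingerprint(a) == block_fingerprint(bytes(a))
```

A write classified as intra-dup avoids the off-chip write entirely (the pattern lives in metadata); an inter-dup write is redirected to reference an existing block.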
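Contribution 3 adds a fully associative FIFO that holds only clean, read-only victims so their re-references hit on chip instead of going back to DRAM. A minimal behavioral sketch, assuming a simple tag-indexed buffer with an arbitrary capacity (the paper's sizing and lookup path are not given in the abstract):

```python
from collections import OrderedDict

class FIFOVictimBuffer:
    """Sketch of a fully associative FIFO victim buffer for clean,
    read-only L2 victims. Capacity and interface are assumptions."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        # tag -> data; insertion order doubles as FIFO replacement order
        self.entries = OrderedDict()

    def insert(self, tag, data, dirty: bool) -> None:
        if dirty:
            return  # dirty victims are written back to DRAM, not buffered
        if tag in self.entries:
            return  # already buffered
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry (FIFO)
        self.entries[tag] = data

    def lookup(self, tag):
        # A hit avoids a DRAM re-reference for the evicted line.
        return self.entries.get(tag)
```

For example, with capacity 2, inserting three clean victims evicts the oldest, and a dirty victim is never buffered.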
Journal Introduction:
The purpose of this Transactions is to publish papers of interest to individuals in the area of computer-aided design of integrated circuits and systems composed of analog, digital, mixed-signal, optical, or microwave components. The aids include methods, models, algorithms, and man-machine interfaces for system-level, physical, and logical design, including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, hardware-software co-design, and documentation of integrated circuit and system designs of all complexities. Design tools and techniques for evaluating and designing integrated circuits and systems for metrics such as performance, power, reliability, testability, and security are a focus.