A Data-Centric Software-Hardware Co-Designed Architecture for Large-Scale Graph Processing
Zerun Li; Xiaoming Chen; Yuxin Yang; Feng Min; Xiaoyu Zhang; Yinhe Han
IEEE Transactions on Computers, vol. 74, no. 4, pp. 1109-1122. Published: 2024-12-09. DOI: 10.1109/TC.2024.3514292
Citations: 0
Abstract
Graph processing plays an important role in many practical applications. However, its inherent characteristics, including random memory access and a low computation-to-communication ratio, make it difficult to execute efficiently on traditional computing architectures such as CPUs and GPUs. Near-memory computing offers low latency and high bandwidth and is widely regarded as a promising direction for designing graph processing accelerators. However, the storage capacity of a single device cannot meet the demands of large-scale graph processing, and using multiple devices introduces substantial inter-device data transmission, which may counteract the benefits of near-memory computing. To fundamentally reduce this data transmission overhead, we propose a data-centric graph processing framework for systems with multiple near-memory computing devices. The framework uses a data-centric programming model as the software-hardware interface. On the software side, we propose an optimized data flow and a heuristic multi-step weighted maximum matching algorithm to achieve efficient inter-device communication and ensure load balancing. On the hardware side, we design a data-reuse-driven task controller and a data-type-aware on-chip memory, which effectively improve on-chip memory utilization. Compared with the two most recent near-memory graph accelerators, our framework significantly reduces energy consumption and inter-device communication.
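To illustrate the general idea behind matching-based task placement (this is a hypothetical sketch, not the paper's multi-step algorithm, whose details are not given in the abstract), the snippet below greedily approximates a weighted maximum matching between graph partitions and near-memory devices. The weight of a pair is assumed to model the affinity (e.g., expected data reuse) between a partition and a device; matching high-weight pairs first tends to reduce inter-device transfers while each device receives at most one partition per round, which helps balance load.

```python
def greedy_weighted_matching(weights):
    """Greedy one-to-one assignment of partitions to devices.

    weights: dict mapping (partition, device) -> affinity score,
             where a higher score means more expected data reuse.
    Returns: dict mapping each matched partition to its device.
    """
    assignment = {}
    used_devices = set()
    # Consider candidate pairs in descending order of affinity.
    for (part, dev), _w in sorted(weights.items(), key=lambda kv: -kv[1]):
        if part not in assignment and dev not in used_devices:
            assignment[part] = dev
            used_devices.add(dev)
    return assignment

# Example: 3 partitions, 3 devices, hypothetical affinity scores.
weights = {
    (0, 0): 5, (0, 1): 1, (0, 2): 2,
    (1, 0): 4, (1, 1): 3, (1, 2): 1,
    (2, 0): 2, (2, 1): 2, (2, 2): 6,
}
print(greedy_weighted_matching(weights))  # {2: 2, 0: 0, 1: 1}
```

A production placer would iterate such rounds until all partitions are placed and could use an exact matching algorithm (e.g., the Hungarian method) instead of this greedy approximation.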
Journal Introduction:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.