Authors: Jaeyoung Kang; Minxuan Zhou; Weihong Xu; Tajana Rosing
DOI: 10.1109/TC.2025.3541141
Journal: IEEE Transactions on Computers, vol. 74, no. 5, pp. 1730-1742 (Q2, Computer Science, Hardware & Architecture; IF 3.6)
Published: 2025-02-11 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10883012/
RelHDx: Hyperdimensional Computing for Learning on Graphs With FeFET Acceleration
Graph neural networks (GNNs) are a powerful machine learning (ML) method for analyzing graph data. GNN training involves compute- and memory-intensive phases along with irregular data movement, which makes in-memory acceleration challenging. We present a hyperdimensional computing (HDC)-based graph ML framework called RelHDx that aggregates node features and graph structure, representing node and edge information in high-dimensional space. RelHDx enables single-pass training and inference with simple arithmetic operations, yielding efficient designs for two graph-based ML tasks: node classification and link prediction. We accelerate RelHDx using a scalable processing in-memory (PIM) architecture based on emerging ferroelectric FET (FeFET) technology. Our accelerator uses data-allocation optimization and an operation scheduler to address the irregularity of graphs and maximize performance. Evaluation results show that RelHDx offers accuracy comparable to popular GNN-based algorithms while achieving up to $63.8\boldsymbol{\times}$ speedup on GPU. Our FeFET-based accelerator, RelHDx-PIM, is $32\boldsymbol{\times}$ faster for node classification and $65.4\boldsymbol{\times}$ faster for link prediction than the GPU baseline. Furthermore, RelHDx-PIM improves energy efficiency by four orders of magnitude over the GPU. Compared to the state-of-the-art in-memory processing-based GNN accelerator, PIM-GCN [1], RelHDx-PIM is $10\boldsymbol{\times}$ faster and $986\boldsymbol{\times}$ more energy-efficient on average.
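The abstract highlights that HDC permits single-pass training with simple arithmetic: node features are encoded into high-dimensional vectors, combined with neighbor information, and bundled into class prototypes that can be queried by similarity. The sketch below illustrates this general HDC workflow on a toy graph; it is not the paper's actual RelHDx algorithm, and the dimensionality, encoding scheme (random projection plus sign), and neighbor-aggregation step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (illustrative choice)

def encode(features, proj):
    """Random-projection encoding: real-valued features -> bipolar hypervector."""
    return np.sign(features @ proj)

# Toy graph: 4 nodes, 3 features each, undirected adjacency list
X = rng.normal(size=(4, 3))
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = np.array([0, 0, 1, 1])

proj = rng.normal(size=(3, D))   # shared projection matrix
H = encode(X, proj)              # per-node hypervectors

# Fold in graph structure: sum each node's hypervector with its neighbors'
agg = np.stack([H[i] + H[list(adj[i])].sum(axis=0) for i in range(4)])

# Single-pass "training": bundle (elementwise sum) hypervectors per class
prototypes = np.stack([agg[labels == c].sum(axis=0) for c in (0, 1)])

def classify(h):
    # Cosine similarity against each class prototype
    sims = prototypes @ h / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(h) + 1e-9)
    return int(np.argmax(sims))

preds = [classify(agg[i]) for i in range(4)]
```

Training is a single pass because each example contributes one addition to its class prototype; inference is a handful of dot products, which is what makes the method amenable to in-memory acceleration.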
Journal description:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.