A Scalable and Energy-Efficient Processing-in-Memory Architecture for Gen-AI

IF 3.8 | CAS Zone 2 (Engineering & Technology) | JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Gian Singh;Sarma Vrudhula
DOI: 10.1109/JETCAS.2025.3566929
Journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 15, no. 2, pp. 285-298
Published: 2025-03-05 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10985893/
Citations: 0

Abstract

Large language models (LLMs) have achieved high accuracy in diverse NLP and computer vision tasks due to self-attention mechanisms relying on GEMM and GEMV operations. However, scaling LLMs poses significant computational and energy challenges, particularly for traditional von Neumann architectures (CPUs/GPUs), which incur high latency and energy consumption from frequent data movement. These issues are even more pronounced in energy-constrained edge environments. While DRAM-based near-memory architectures offer improved energy efficiency and throughput, their processing elements are limited by strict area, power, and timing constraints. This work introduces CIDAN-3D, a novel Processing-in-Memory (PIM) architecture tailored for LLMs. It features an ultra-low-power Neuron Processing Element (NPE) with high compute density (#Operations/Area), enabling efficient in-situ execution of LLM operations by leveraging high parallelism within DRAM. CIDAN-3D reduces data movement, improves locality, and achieves substantial gains in performance and energy efficiency: up to $1.3\times$ higher throughput and $21.9\times$ better energy efficiency for smaller models, and $3\times$ higher throughput and $71\times$ better energy efficiency for large decoder-only models compared to prior near-memory designs. As a result, CIDAN-3D offers a scalable, energy-efficient platform for LLM-driven Gen-AI applications.
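The abstract's premise that self-attention reduces to GEMM and GEMV operations can be illustrated with a minimal NumPy sketch (not taken from the paper): during prefill, all query rows are processed at once, so the score and output computations are matrix-matrix products (GEMM); during autoregressive decode, a single query vector is processed per step, so the same computations degenerate to matrix-vector products (GEMV). Function and variable names here are illustrative only.

```python
import numpy as np

def attention_prefill(Q, K, V):
    """Prefill phase: all query rows at once -> GEMM-dominated.
    Q, K, V: (n, d) arrays (no causal mask, for simplicity)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # GEMM: (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # GEMM: (n, d)

def attention_decode_step(q, K, V):
    """Decode phase: one query vector per token -> GEMV-dominated.
    q: (d,); K, V: (n, d) (the cached keys/values)."""
    scores = K @ q / np.sqrt(q.shape[-1])            # GEMV: (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over cache
    return V.T @ weights                             # GEMV: (d,)
```

Because decode repeats a memory-bound GEMV over the whole KV cache for every generated token, it is exactly the data-movement-heavy pattern that near-memory and PIM architectures such as CIDAN-3D target.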
Source journal
CiteScore: 8.50
Self-citation rate: 2.20%
Articles per year: 86
About the journal: The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.