Gian Singh; Sarma Vrudhula
DOI: 10.1109/JETCAS.2025.3566929
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 15, no. 2, pp. 285–298
Published: 2025-03-05 (Journal Article)
https://ieeexplore.ieee.org/document/10985893/
A Scalable and Energy-Efficient Processing-in-Memory Architecture for Gen-AI
Large language models (LLMs) have achieved high accuracy on diverse NLP and computer vision tasks thanks to self-attention mechanisms that rely on GEMM and GEMV operations. However, scaling LLMs poses significant computational and energy challenges, particularly for traditional von Neumann architectures (CPUs/GPUs), which incur high latency and energy consumption from frequent data movement. These issues are even more pronounced in energy-constrained edge environments. While DRAM-based near-memory architectures offer improved energy efficiency and throughput, their processing elements are limited by strict area, power, and timing constraints. This work introduces CIDAN-3D, a novel Processing-in-Memory (PIM) architecture tailored for LLMs. It features an ultra-low-power Neuron Processing Element (NPE) with high compute density (operations per unit area), enabling efficient in-situ execution of LLM operations by exploiting the high parallelism available within DRAM. CIDAN-3D reduces data movement, improves locality, and achieves substantial gains in performance and energy efficiency: compared to prior near-memory designs, it delivers up to 1.3× higher throughput and 21.9× better energy efficiency for smaller models, and 3× higher throughput with 71× lower energy for large decoder-only models. As a result, CIDAN-3D offers a scalable, energy-efficient platform for LLM-driven Gen-AI applications.
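The abstract notes that self-attention reduces to GEMM (matrix-matrix) and GEMV (matrix-vector) operations. A minimal pure-Python sketch (an illustration, not taken from the paper) of the two kernels in an attention-style score computation: processing a batch of query tokens against cached keys is a GEMM, while autoregressive decoding of a single new token against the same keys is a GEMV.

```python
# Illustrative sketch (not from the paper): the two kernels behind
# attention scores. Prefill over T query tokens -> GEMM; decoding one
# new token against T cached keys -> GEMV.

def gemv(A, x):
    """Matrix-vector product: (m x n) @ (n,) -> (m,)."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def gemm(A, B):
    """Matrix-matrix product: (m x k) @ (k x n) -> (m x n)."""
    cols = list(zip(*B))  # columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

# Toy key cache K (T=3 cached tokens, head dim d=2).
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# Decode step: one new query vector q -> scores q . K_t, a single GEMV.
q = [2.0, 3.0]
scores = gemv(K, q)

# Prefill: all queries Q against all keys (Q @ K^T), a single GEMM.
Q = [[2.0, 3.0], [1.0, 1.0]]
K_T = list(zip(*K))  # transpose of K
scores_all = gemm(Q, K_T)
```

Because decode-time GEMVs touch the whole key/value cache while doing little arithmetic per byte fetched, they are memory-bandwidth-bound on CPUs/GPUs, which is one way to see why executing them in or near DRAM, as the abstract proposes, cuts data movement.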
Journal introduction:
The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits special issues, with particular emphasis on emerging areas, on topics covering the entire scope of the IEEE Circuits and Systems (CAS) Society: the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.