Gaoche Zhang;Dingyang Zou;Kairui Sun;Zhihuan Chen;Meiqi Wang;Zhongfeng Wang
{"title":"GEMMV:用于GEMM Verilog生成的基于llm的自动性能感知框架","authors":"Gaoche Zhang;Dingyang Zou;Kairui Sun;Zhihuan Chen;Meiqi Wang;Zhongfeng Wang","doi":"10.1109/JETCAS.2025.3568712","DOIUrl":null,"url":null,"abstract":"Recent advancements in artificial intelligence (AI) models have intensified the need for specialized AI accelerators. The design of optimized general matrix multiplication (GEMM) module tailored for these accelerators is crucial but time-consuming and expertise-demanding, creating a demand for automating design processes. Large language models (LLMs), capable of generating high-quality designs from human instructions, show great promise in automating GEMM module creation. However, the GEMM module’s vast design space and stringent performance requirements, along with the limitations of datasets and the lack of hardware performance awareness of LLMs, have made previous LLM-based register transfer level (RTL) code generation efforts unsuitable for GEMM design. To tackle these challenges, this paper proposes an automated performance-aware LLM-based framework, GEMMV, for generating high-correctness and high-performance Verilog code for GEMM. This framework utilizes in-context learning based on GPT-4 to automatically generate high-quality and well-annotated Verilog code for different variants of the GEMM. Additionally, it leverages in-context learning to obtain performance awareness by integrating a multi-level performance model (MLPM) with fine-tuned LLMs. The Verilog code generated by this framework reduces latency by 3.1x and improves syntax correctness by 65% and functionality correctness by 70% compared to earlier efforts.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"325-336"},"PeriodicalIF":3.8000,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GEMMV: An LLM-Based Automated Performance-Aware Framework for GEMM Verilog Generation\",\"authors\":\"Gaoche Zhang;Dingyang Zou;Kairui Sun;Zhihuan Chen;Meiqi Wang;Zhongfeng Wang\",\"doi\":\"10.1109/JETCAS.2025.3568712\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advancements in artificial intelligence (AI) models have intensified the need for specialized AI accelerators. The design of optimized general matrix multiplication (GEMM) module tailored for these accelerators is crucial but time-consuming and expertise-demanding, creating a demand for automating design processes. Large language models (LLMs), capable of generating high-quality designs from human instructions, show great promise in automating GEMM module creation. However, the GEMM module’s vast design space and stringent performance requirements, along with the limitations of datasets and the lack of hardware performance awareness of LLMs, have made previous LLM-based register transfer level (RTL) code generation efforts unsuitable for GEMM design. To tackle these challenges, this paper proposes an automated performance-aware LLM-based framework, GEMMV, for generating high-correctness and high-performance Verilog code for GEMM. This framework utilizes in-context learning based on GPT-4 to automatically generate high-quality and well-annotated Verilog code for different variants of the GEMM. Additionally, it leverages in-context learning to obtain performance awareness by integrating a multi-level performance model (MLPM) with fine-tuned LLMs. 
The Verilog code generated by this framework reduces latency by 3.1x and improves syntax correctness by 65% and functionality correctness by 70% compared to earlier efforts.\",\"PeriodicalId\":48827,\"journal\":{\"name\":\"IEEE Journal on Emerging and Selected Topics in Circuits and Systems\",\"volume\":\"15 2\",\"pages\":\"325-336\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-03-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal on Emerging and Selected Topics in Circuits and Systems\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10994474/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10994474/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
GEMMV: An LLM-Based Automated Performance-Aware Framework for GEMM Verilog Generation
Recent advancements in artificial intelligence (AI) models have intensified the need for specialized AI accelerators. Designing optimized general matrix multiplication (GEMM) modules tailored to these accelerators is crucial but time-consuming and expertise-demanding, creating demand for automated design processes. Large language models (LLMs), capable of generating high-quality designs from human instructions, show great promise in automating GEMM module creation. However, the GEMM module's vast design space and stringent performance requirements, together with the limitations of existing datasets and LLMs' lack of hardware performance awareness, have made previous LLM-based register transfer level (RTL) code generation efforts unsuitable for GEMM design. To tackle these challenges, this paper proposes GEMMV, an automated, performance-aware, LLM-based framework for generating high-correctness, high-performance Verilog code for GEMM. The framework uses in-context learning on GPT-4 to automatically generate high-quality, well-annotated Verilog code for different GEMM variants. It further obtains performance awareness through in-context learning by integrating a multi-level performance model (MLPM) with fine-tuned LLMs. Compared to earlier efforts, the Verilog code generated by this framework reduces latency by 3.1x and improves syntax correctness by 65% and functional correctness by 70%.
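To make the generation target concrete, below is a minimal illustrative Verilog sketch of a GEMM datapath of the kind such a framework would emit. It is not taken from the paper: the module name (gemm_4x4), the fixed 4x4 size, the parameter names, and the fully combinational structure are all assumptions for this example.

// Illustrative sketch only: a fixed-size, fully combinational 4x4 GEMM
// datapath. All names and sizing below are assumptions for this example,
// not the paper's generated design.
module gemm_4x4 #(
    parameter DATA_W = 8,            // input element width (assumed)
    parameter ACC_W  = 2*DATA_W + 2  // width for a sum of four DATA_W x DATA_W products
) (
    input  wire [4*4*DATA_W-1:0] a_flat,  // matrix A, row-major, 16 elements
    input  wire [4*4*DATA_W-1:0] b_flat,  // matrix B, row-major, 16 elements
    output wire [4*4*ACC_W-1:0]  c_flat   // matrix C = A * B, row-major
);
    genvar i, j;
    generate
        for (i = 0; i < 4; i = i + 1) begin : row
            for (j = 0; j < 4; j = j + 1) begin : col
                // C[i][j] is the 4-term dot product of row i of A
                // with column j of B, using indexed part-selects into
                // the flattened ports.
                assign c_flat[(i*4+j)*ACC_W +: ACC_W] =
                      a_flat[(i*4+0)*DATA_W +: DATA_W] * b_flat[(0*4+j)*DATA_W +: DATA_W]
                    + a_flat[(i*4+1)*DATA_W +: DATA_W] * b_flat[(1*4+j)*DATA_W +: DATA_W]
                    + a_flat[(i*4+2)*DATA_W +: DATA_W] * b_flat[(2*4+j)*DATA_W +: DATA_W]
                    + a_flat[(i*4+3)*DATA_W +: DATA_W] * b_flat[(3*4+j)*DATA_W +: DATA_W];
            end
        end
    endgenerate
endmodule

A practical GEMM generator explores a far larger design space than this naive form, e.g. pipelining, blocking, and systolic-array structures; performance-aware choices over that space are what the latency improvements reported in the abstract refer to.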
Journal Introduction:
The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits special issues, with particular emphasis on emerging areas, on topics covering the entire scope of the IEEE Circuits and Systems (CAS) Society: the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.