IEEE Journal on Emerging and Selected Topics in Circuits and Systems: Latest Publications

Efficient Hardware Architecture Design for Rotary Position Embedding of Large Language Models
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-31 DOI: 10.1109/JETCAS.2025.3556443
Wenjie Li;Gang Wang;Dongxu Lyu;Ningyi Xu;Guanghui He
{"title":"Efficient Hardware Architecture Design for Rotary Position Embedding of Large Language Models","authors":"Wenjie Li;Gang Wang;Dongxu Lyu;Ningyi Xu;Guanghui He","doi":"10.1109/JETCAS.2025.3556443","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3556443","url":null,"abstract":"Due to the substantial demands of storage and computation imposed by large language models (LLMs), there has been a surge of research interest in their hardware acceleration. As a technique involving non-linear operations, rotary position embedding (RoPE) has been adopted by some recently released LLMs. However, there is currently no reported research on its hardware design. This paper, for the first time, presents an efficient hardware architecture design for RoPE of LLMs. We first explore the similarities between RoPE and the coordinate rotation digital computer (CORDIC) algorithm, while also considering the commonly used quantization scheme for LLMs. Additionally, we propose a hardware-friendly solution to address the issue of excessively large input angle ranges. Then we present a CORDIC-based approximation for RoPE and develop a hardware architecture for it. The experimental results demonstrate that our design can save up to 45.7% area cost and 31.0% power consumption when compared with the fixed-point counterpart, while maintaining almost the same model performance. Compared to the straightforward implementation using floating-point arithmetic, our design can reduce up to 91.4% area cost and 88.9% power consumption, with negligible performance loss.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"244-257"},"PeriodicalIF":3.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
F3: An FPGA-Based Transformer Fine-Tuning Accelerator With Flexible Floating Point Format
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-31 DOI: 10.1109/JETCAS.2025.3555970
Zerong He;Xi Jin;Zhongguang Xu
{"title":"F3: An FPGA-Based Transformer Fine-Tuning Accelerator With Flexible Floating Point Format","authors":"Zerong He;Xi Jin;Zhongguang Xu","doi":"10.1109/JETCAS.2025.3555970","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3555970","url":null,"abstract":"Transformers have demonstrated remarkable success across various deep learning tasks. However, their inference and fine-tuning require substantial computation and memory resources, posing challenges for existing hardware platforms, particularly resource-constrained edge devices. To address these limitations, we propose F<sup>3</sup>, an FPGA-based accelerator for transformer fine-tuning. To reduce computation and memory overhead, this paper proposes a flexible floating point (FFP) format which consumes fewer resources than traditional floating-point formats of the same bitwidth. We adapt low-rank adaptation to FFP format and propose a fine-tuning strategy named LR-FFP which reduces the number of trainable parameters without compromising fine-tuning accuracy. At the hardware level, we design specialized processing elements (PEs) for the FFP format. The PE maximizes the utilization of DSP resources, enabling a single DSP to perform two multiply-accumulate operations per cycle. The PEs are organized into a systolic array (SA) to efficiently handle general matrix multiplication during fine-tuning. Through theoretical analysis and experimental evaluation, we determine the optimal dataflow and SA parameters to balance performance and resource consumption. We implement the architecture on the Xilinx VCU128 FPGA platform and F<sup>3</sup> achieves a performance of 8.2 TFlops at 250 MHz. Compared with CPU and GPU implementations, F<sup>3</sup> achieves speedups of <inline-formula> <tex-math>$15.22 times $ </tex-math></inline-formula> and <inline-formula> <tex-math>$3.44 times $ </tex-math></inline-formula>, respectively, and energy efficiency improvements of <inline-formula> <tex-math>$70.52 times $ </tex-math></inline-formula> and <inline-formula> <tex-math>$9.44 times $ </tex-math></inline-formula>.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"258-271"},"PeriodicalIF":3.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GenPolar: Generative AI-Aided Complexity Reduction for Polar SCL Decoding
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-19 DOI: 10.1109/JETCAS.2025.3561330
Yutai Sun;Jingyi Chen;Yuqing Ren;Houren Ji;Yongming Huang;Xiaohu You;Chuan Zhang
{"title":"GenPolar: Generative AI-Aided Complexity Reduction for Polar SCL Decoding","authors":"Yutai Sun;Jingyi Chen;Yuqing Ren;Houren Ji;Yongming Huang;Xiaohu You;Chuan Zhang","doi":"10.1109/JETCAS.2025.3561330","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3561330","url":null,"abstract":"The CRC-aided successive cancellation list (CA-SCL) decoding algorithm for polar codes has gained widespread adoption thanks to its outstanding performance. However, with the evolution of 6G technologies, the high complexity of CA-SCL decoding poses a challenge in meeting growing performance requirements. Consequently, it is crucial to devise strategies that reduce this complexity without compromising error rates. Current efforts to mitigate the complexity mainly depend on harnessing <monospace>special nodes</monospace> associated with the code construction sequences, such as Fast-SCL decoding. However, these strategies suffer from redundant complexity due to ill-suited construction sequences and unnecessary sorting operations within special nodes. Addressing this issue, this paper proposes a hardware-friendly and GenAI-aided complexity reduction approach for Fast-SCL decoding, named GenPolar. This approach involves two-step optimization techniques: 1) <italic>Transformer encoder models</i> for generating polar construction sequences, and 2) <italic>a sorting entropy based method</i> for sorting reduction. These two-step techniques result in reduced complexity with negligible performance loss. For polar codes of length-1024 with code rates of 0.25, 0.50, and 0.75, GenPolar achieves latency reductions of 20.6%, 29.8%, and 40.6%, respectively. Even benchmarking against the reduced-complexity version of Fast-SCL decoding, the relative gains are 14.0%, 17.8%, and 22.3%, respectively. It should be noted that the immediate application is not limited to Fast-SCL decoding but also extends to other node-based SCL decoding algorithms like SSCL-SPC and SR-SCL.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"312-324"},"PeriodicalIF":3.7,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11007206","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial on Circuits and Systems for Green Video Communications
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-12 DOI: 10.1109/JETCAS.2025.3541767
Christian Herglotz;Daniel Palomino;Olivier Le Meur;C.-C. Jay Kuo
{"title":"Editorial on Circuits and Systems for Green Video Communications","authors":"Christian Herglotz;Daniel Palomino;Olivier Le Meur;C.-C. Jay Kuo","doi":"10.1109/JETCAS.2025.3541767","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3541767","url":null,"abstract":"","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"1-3"},"PeriodicalIF":3.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924431","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143602037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Information for Authors
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-12 DOI: 10.1109/JETCAS.2025.3538141
{"title":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems Information for Authors","authors":"","doi":"10.1109/JETCAS.2025.3538141","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3538141","url":null,"abstract":"","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"143-143"},"PeriodicalIF":3.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924454","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143602041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Publication Information
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-12 DOI: 10.1109/JETCAS.2025.3538139
{"title":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems Publication Information","authors":"","doi":"10.1109/JETCAS.2025.3538139","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3538139","url":null,"abstract":"","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"C2-C2"},"PeriodicalIF":3.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924430","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (cover)
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-12 DOI: 10.1109/JETCAS.2025.3538143
{"title":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","authors":"","doi":"10.1109/JETCAS.2025.3538143","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3538143","url":null,"abstract":"","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"C3-C3"},"PeriodicalIF":3.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10924450","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143602010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LLM4Netlist: LLM-Enabled Step-Based Netlist Generation From Natural Language Description
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-09 DOI: 10.1109/JETCAS.2025.3568548
Kailiang Ye;Qingyu Yang;Zheng Lu;Heng Yu;Tianxiang Cui;Ruibin Bai;Linlin Shen
{"title":"LLM4Netlist: LLM-Enabled Step-Based Netlist Generation From Natural Language Description","authors":"Kailiang Ye;Qingyu Yang;Zheng Lu;Heng Yu;Tianxiang Cui;Ruibin Bai;Linlin Shen","doi":"10.1109/JETCAS.2025.3568548","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3568548","url":null,"abstract":"Empowered by Large Language Models (LLMs), substantial progress has been made in enhancing the EDA design flow in terms of high-level synthesis, such as direct translation from high-level language into RTL description. On the other hand, little research has been done for logic synthesis on the netlist generation. A direct application of LLMs for netlist generation presents additional challenges due to the scarcity of netlist-specific data, the need for tailored fine-tuning, and effective generation methods. This work first presents a novel training set and two evaluation sets catered for direct netlist generation LLMs, and an effective dataset construction pipeline to construct these datasets. Then this work proposes <sc>LLM4Netlist</small>, a novel step-based netlist generation framework via fine-tuned LLM. The framework consists of a step-based prompt construction module, a fine-tuned LLM, a code confidence estimator, and a feedback loop module, and is able to generate netlist codes directly from natural language functional descriptions. We evaluate the efficacy of our approach with our novel evaluation datasets. The experimental results demonstrate that, compared to the average score of the 10 commercial LLMs listed in our experiments, our method shows a functional correctness increase of 183.41% on the NetlistEval dataset and a 91.07% increase on NGen. The training and testing data, along with the processing code, can be found at <uri>https://github.com/klyebit/LLM4Netlist.git</uri>","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"337-348"},"PeriodicalIF":3.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GPTAC: Domain-Specific Generative Pre-Trained Model for Approximate Circuit Design Exploration
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-09 DOI: 10.1109/JETCAS.2025.3568606
Sipei Yi;Weichuan Zuo;Hongyi Wu;Ruicheng Dai;Weikang Qian;Jienan Chen
{"title":"GPTAC: Domain-Specific Generative Pre-Trained Model for Approximate Circuit Design Exploration","authors":"Sipei Yi;Weichuan Zuo;Hongyi Wu;Ruicheng Dai;Weikang Qian;Jienan Chen","doi":"10.1109/JETCAS.2025.3568606","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3568606","url":null,"abstract":"Automatically designing fast and low-cost digital circuits is challenging because of the discrete nature of circuits and the enormous design space, particularly in the exploration of approximate circuits. However, recent advances in generative artificial intelligence (GAI) have shed light to address these challenges. In this work, we present GPTAC, a domain-specific generative pre-trained (GPT) model customized for designing approximate circuits. By specifying the desired circuit accuracy or area, GPTAC can automatically generate an approximate circuit using its generative capabilities. We represent circuits using domain-specific language tokens, refined through a hardware description language keyword filter applied to gate-level code. This representation enables GPTAC to effectively learn approximate circuits from existing datasets by leveraging the GPT language model, as the training data can be directly derived from gate-level code. Additionally, by focusing on a domain-specific language, only a limited set of keywords is maintained, facilitating faster model convergence. To improve the success rate of the generated circuits, we introduce a circuit check rule that masks the GPTAC inference results when necessary. The experiment indicated that GPTAC is capable of producing approximate multipliers in under 15 seconds while utilizing merely 4GB of GPU memory, achieving a 10-40% reduction in area relative to the accuracy multiplier depending on various accuracy needs.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"349-360"},"PeriodicalIF":3.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
End-to-End Acceleration of Generative Models With Runtime Regularized KV Cache Management
IF 3.7 · CAS Q2 · Engineering & Technology
IEEE Journal on Emerging and Selected Topics in Circuits and Systems Pub Date : 2025-03-09 DOI: 10.1109/JETCAS.2025.3568716
Ashkan Moradifirouzabadi;Mingu Kang
{"title":"End-to-End Acceleration of Generative Models With Runtime Regularized KV Cache Management","authors":"Ashkan Moradifirouzabadi;Mingu Kang","doi":"10.1109/JETCAS.2025.3568716","DOIUrl":"https://doi.org/10.1109/JETCAS.2025.3568716","url":null,"abstract":"Despite their remarkable success in achieving high performance, Transformer-based models impose substantial computational and memory bandwidth requirements, posing significant challenges for hardware deployment. A key contributor to these challenges is the large KV cache, which increases data movement costs in addition to the model parameters. While various token pruning techniques have been proposed to reduce the computational complexity and storage requirements of the attention mechanism by eliminating redundant tokens, these methods often introduce irregularities in the sparsity patterns that complicate hardware implementation. To address these challenges, we propose a hardware and algorithm co-design approach. Our solution features a Runtime Cache Eviction (RCE) algorithm that removes the least relevant tokens and replaces them with newly generated ones, maintaining a constant KV cache size across blocks and inputs. To support this algorithm, we design an accelerator equipped with a KV Memory Management Unit (KV-MMU), which efficiently manages active tokens through eviction and replacement, thereby optimizing DRAM storage and access. Additionally, our design integrates batch processing and an optimized processing pipeline to improve end-to-end throughput, effectively meeting the requirements of both pre-filling and generation stages. The proposed system achieves up to <inline-formula> <tex-math>$8times $ </tex-math></inline-formula> KV cache size reduction with minimal accuracy degradation. In a 65 nm process, the proposed accelerator demonstrates <inline-formula> <tex-math>$1.52times $ </tex-math></inline-formula> energy savings and <inline-formula> <tex-math>$3.62times $ </tex-math></inline-formula> delay reductions when processing a batch size of 16, with only a 1.11% energy overhead attributed to the specialized KV-MMU.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 2","pages":"217-230"},"PeriodicalIF":3.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0