A 0.45-mm² 3.49-TOPS/W Cryogenic Deep Reinforcement Learning Module for End-to-End Integrated Circuits Control

IF 3.2
Jiachen Xu;John Kan;Yuyi Shen;Ethan Chen;Vanessa Chen
DOI: 10.1109/OJSSCS.2025.3601153
Journal: IEEE Open Journal of the Solid-State Circuits Society, vol. 5, pp. 240–250
Published: 2025-08-21 (Journal Article)
Full text: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11133470
Citations: 0

Abstract

This work presents a fully unrolled on-chip deep reinforcement learning (DRL) module with a deep Q-network (DQN) and its system integration for integrated-circuit control and functionality-augmentation tasks, including voltage regulation of a cryogenic single-input triple-output dc–dc converter and recovery of RF fingerprints (RFFs) using a reconfigurable power amplifier (PA) under temperature variations. The complete DRL module features 6-bit fixed-point model parameters, 116 kB of memory, and 128 processing elements. It is equipped with on-chip training capabilities, fully unrolled on a 0.45-mm² core area in 28-nm technology. The design achieves an energy cost of 0.12 nJ per action and a control latency of 4.925 µs, with a maximum operational efficiency of 3.49 TOPS/W. Temperature effects on the chip are thoroughly demonstrated across a wide range from 358 K (85 °C) to 4.2 K (−269 °C).
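The abstract describes a DQN with 6-bit fixed-point parameters selecting discrete control actions. A minimal sketch of that idea in NumPy, assuming illustrative layer sizes and a hypothetical 3-integer/3-fraction bit split — none of this reflects the authors' actual hardware implementation:

```python
import numpy as np

def quantize_fixed(x, total_bits=6, frac_bits=3):
    """Round to the nearest representable 6-bit signed fixed-point value."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (total_bits - 1))       # most negative integer code (-32)
    hi = 2 ** (total_bits - 1) - 1      # most positive integer code (+31)
    return np.clip(np.round(np.asarray(x) * scale), lo, hi) / scale

rng = np.random.default_rng(0)

# Tiny two-layer Q-network; sizes are illustrative assumptions.
STATE_DIM, HIDDEN, N_ACTIONS = 4, 8, 3  # e.g. 3 discrete regulation steps
W1 = quantize_fixed(rng.normal(0.0, 0.5, (HIDDEN, STATE_DIM)))
W2 = quantize_fixed(rng.normal(0.0, 0.5, (N_ACTIONS, HIDDEN)))

def q_values(state):
    """Forward pass: state -> ReLU hidden layer -> one Q-value per action."""
    h = np.maximum(W1 @ state, 0.0)
    return W2 @ h

def greedy_action(state):
    """Pick the action with the highest Q-value (the deployed policy)."""
    return int(np.argmax(q_values(state)))

state = np.array([0.9, -0.1, 0.3, 0.0])  # e.g. normalized rail voltages
action = greedy_action(state)
```

With 3 fraction bits, every weight is a multiple of 1/8 in [−4, 3.875], which hints at why a fully unrolled datapath with such narrow parameters can reach nJ-per-action energy budgets.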