A scalable 0.128-to-1Tb/s 0.8-to-2.6pJ/b 64-lane parallel I/O in 32nm CMOS

M. Mansuri, J. Jaussi, J. Kennedy, Tzu-Chien Hsueh, S. Shekhar, G. Balamurugan, F. O’Mahony, Clark Roberts, R. Mooney, B. Casper
{"title":"A scalable 0.128-to-1Tb/s 0.8-to-2.6pJ/b 64-lane parallel I/O in 32nm CMOS","authors":"M. Mansuri, J. Jaussi, J. Kennedy, Tzu-Chien Hsueh, S. Shekhar, G. Balamurugan, F. O’Mahony, Clark Roberts, R. Mooney, B. Casper","doi":"10.1109/ISSCC.2013.6487788","DOIUrl":null,"url":null,"abstract":"High-performance computing (HPC) systems demand aggressive scaling of memory and I/O to achieve multiple terabits/sec of bandwidth. Minimizing I/O cost, area and power are crucial to achieving a practically realizable system with such large bandwidth. To meet these needs, we developed a low-power dense 64-lane I/O system with per-port aggregate bandwidth up to 1Tb/s and 2.6pJ/bit power efficiency. We developed a high-density connector and cable, attached to the top side of the package that enables this high interconnect density. A lane-failover mechanism provides design robustness for fault-tolerance. To further optimize power efficiency, the lane data rate scales from 2 to 16Gb/s with non-linear power efficiency of 0.8 to 2.6pJ/bit, providing scalable aggregate bandwidth of 0.128 to 1Tb/s. Highly power scalable circuits such as CMOS clocking and reconfigurable current-mode (CM) or voltage-mode (VM) TX driver enable the 8× bandwidth and 3× power efficiency scalability with aggressive supply voltage scaling (0.6 to 1.08V).","PeriodicalId":6378,"journal":{"name":"2013 IEEE International Solid-State Circuits Conference Digest of Technical Papers","volume":"26 1","pages":"402-403"},"PeriodicalIF":0.0000,"publicationDate":"2013-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"38","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Solid-State Circuits Conference Digest of Technical Papers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSCC.2013.6487788","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 38

Abstract

High-performance computing (HPC) systems demand aggressive scaling of memory and I/O to achieve multiple terabits/sec of bandwidth. Minimizing I/O cost, area and power is crucial to achieving a practically realizable system with such large bandwidth. To meet these needs, we developed a low-power, dense 64-lane I/O system with per-port aggregate bandwidth up to 1Tb/s and 2.6pJ/bit power efficiency. We developed a high-density connector and cable, attached to the top side of the package, that enables this high interconnect density. A lane-failover mechanism provides design robustness for fault-tolerance. To further optimize power efficiency, the lane data rate scales from 2 to 16Gb/s with non-linear power efficiency of 0.8 to 2.6pJ/bit, providing scalable aggregate bandwidth of 0.128 to 1Tb/s. Highly power-scalable circuits such as CMOS clocking and a reconfigurable current-mode (CM) or voltage-mode (VM) TX driver enable the 8× bandwidth and 3× power-efficiency scalability with aggressive supply voltage scaling (0.6 to 1.08V).
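The headline figures follow directly from the per-lane numbers: 64 lanes at 2 to 16Gb/s give 0.128 to 1.024Tb/s of aggregate bandwidth, and the quoted 0.8 to 2.6pJ/bit efficiency bounds the total link power at those endpoints. The sketch below is not from the paper; it simply works through that arithmetic, assuming the per-port efficiency applies uniformly across all 64 lanes. Only the two endpoints are given in the abstract; the non-linear efficiency curve between them is not modeled.

```python
# Sanity-check the aggregate-bandwidth and power numbers quoted in the abstract
# at the two endpoints of the scaling range (2 Gb/s @ 0.8 pJ/b, 16 Gb/s @ 2.6 pJ/b).
# The interior of the non-linear efficiency curve is not specified, so it is not modeled.

LANES = 64  # number of parallel lanes per port

def aggregate_bandwidth_tbps(lane_rate_gbps: float) -> float:
    """Aggregate bandwidth across all lanes, in Tb/s."""
    return LANES * lane_rate_gbps / 1000.0

def total_power_w(lane_rate_gbps: float, efficiency_pj_per_bit: float) -> float:
    """Total link power across all lanes, in watts."""
    bits_per_second = LANES * lane_rate_gbps * 1e9
    return bits_per_second * efficiency_pj_per_bit * 1e-12

for rate, eff in [(2, 0.8), (16, 2.6)]:
    print(f"{rate:>2} Gb/s/lane: "
          f"{aggregate_bandwidth_tbps(rate):.3f} Tb/s aggregate, "
          f"{total_power_w(rate, eff):.2f} W total ({eff} pJ/b)")
# -> 0.128 Tb/s at ~0.10 W, and 1.024 Tb/s at ~2.66 W
```

At the low end the link consumes roughly 0.1W for 0.128Tb/s, while the full 1Tb/s operating point costs about 2.7W, which is the 8× bandwidth / 3× efficiency trade-off the abstract describes.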