A Novel Time‐Division Multiplexing Architecture Revealed by Reconfigurable Synapse for Deep Neural Networks

IF 27.4 · CAS Tier 1 (Materials Science) · Q1 CHEMISTRY, MULTIDISCIPLINARY
Yu‐Tao Li, Kui Xu, Yu‐Zhe Ma, Jun‐Ze Li, Yang Luo, Xin‐Ru Li, Peng‐Hui Shen, Lu‐Yu Zhao, Hang Liu, Li Ren, De‐Hui Li, Lian‐Mao Peng, Li Ding, Tian‐Ling Ren, Yeliang Wang
Journal: Advanced Materials
DOI: 10.1002/adma.202420218
Publication date: 2025-07-09
Publication type: Journal Article
Citations: 0

Abstract

Deep learning's growing complexity demands advanced AI chips, increasing hardware costs. Time-division multiplexing (TDM) neural networks offer a promising way to simplify integration. However, inherent device limitations make it difficult for current synapse transistors to physically implement TDM networks, hindering their practical deployment in modern systems. Here, a novel graphene/2D perovskite/carbon nanotube (CNT) synapse transistor featuring a sandwich structure is presented. This transistor enables the realization of TDM neural networks at the hardware level. In this structure, the 2D perovskite layer, characterized by a high ion concentration, serves as a neurotransmitter, thereby enhancing synaptic transmission efficiency. Additionally, the CNT field-effect transistors, with their large on/off ratio, exhibit a wider range of synaptic current changes. The device mechanism is analyzed theoretically using molecular dynamics simulation. Furthermore, the impact of TDM on the scale, power, and latency of neural network hardware implementations is investigated, and a qualitative analysis elucidates the advantages of TDM for the hardware implementation of larger deep learning models. This study offers a new approach to reducing the integration complexity of neural network hardware, holding significant promise for the future development of intelligent nanoelectronic devices.
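The core idea of TDM described in the abstract, reusing one physical synapse array across successive time slots so that several logical layers share the same hardware, can be illustrated with a minimal software sketch. This is not code from the paper; the weights and network size are hypothetical, and the loop over time slots merely stands in for physically reprogramming the shared array.

```python
# Illustrative sketch (not from the paper): in a time-division multiplexed
# (TDM) network, one physical synapse array is reconfigured in successive
# time slots, so N logical layers need only one array instead of N.

def matvec(w, x):
    """Multiply a weight matrix (list of rows) by a vector."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(v):
    return [max(0.0, a) for a in v]

# Hypothetical weights for three logical layers; in TDM hardware each set
# would be loaded onto the shared synapse array during its own time slot.
layers = [
    [[0.5, -0.2], [0.1, 0.3]],
    [[1.0, 0.0], [0.0, 1.0]],
    [[-0.4, 0.6], [0.7, -0.1]],
]

def tdm_forward(x, layers):
    for w in layers:  # each iteration = one time slot on the shared array
        x = relu(matvec(w, x))
    return x

print(tdm_forward([1.0, 2.0], layers))
```

The trade-off the abstract alludes to is visible here: area (one array instead of three) is exchanged for latency (three sequential time slots), which is why the paper examines TDM's effect on scale, power, and latency together.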
Source journal: Advanced Materials (Engineering/Materials Science: Multidisciplinary)
CiteScore: 43.00
Self-citation rate: 4.10%
Articles per year: 2182
Review time: 2 months
Journal description: Advanced Materials, one of the world's most prestigious journals and the foundation of the Advanced portfolio, has been the home of choice for best-in-class materials science for more than 30 years. Covering this fast-growing, interdisciplinary field, it considers and publishes the most important discoveries on all materials from materials scientists, chemists, physicists, engineers, and health and life scientists, bringing readers the latest results and trends in modern materials-related research every week.