Yu‐Tao Li, Kui Xu, Yu‐Zhe Ma, Jun‐Ze Li, Yang Luo, Xin‐Ru Li, Peng‐Hui Shen, Lu‐Yu Zhao, Hang Liu, Li Ren, De‐Hui Li, Lian‐Mao Peng, Li Ding, Tian‐Ling Ren, Yeliang Wang
{"title":"A Novel Time‐Division Multiplexing Architecture Revealed by Reconfigurable Synapse for Deep Neural Networks","authors":"Yu‐Tao Li, Kui Xu, Yu‐Zhe Ma, Jun‐Ze Li, Yang Luo, Xin‐Ru Li, Peng‐Hui Shen, Lu‐Yu Zhao, Hang Liu, Li Ren, De‐Hui Li, Lian‐Mao Peng, Li Ding, Tian‐Ling Ren, Yeliang Wang","doi":"10.1002/adma.202420218","DOIUrl":null,"url":null,"abstract":"Deep learning's growing complexity demands advanced AI chips, increasing hardware costs. Time‐division multiplexing (TDM) neural networks offer a promising solution to simplify integration. However, it is difficult for current synapse transistors to physically implement TDM networks due to inherent device limitations, hindering their practical deployment in modern systems. Here, a novel graphene/2D perovskite/carbon nanotubes (CNTs) synapse transistor featuring a sandwich structure is presented. This transistor enables the realization of TDM neural networks at the hardware level. In this structure, the 2D perovskite layer, characterized by high ion concentration, serves as a neurotransmitter, thereby enhancing synaptic transmission efficiency. Additionally, the CNTs' field‐effect transistors, with their large on‐off ratio, demonstrate a wider range of synaptic current changes. The device mechanism is theoretically analyzed using molecular dynamics simulation. Furthermore, the impact of TDM on the scale, power, and latency of neural network hardware implementation is investigated. Qualitative analysis is performed to elucidate the advantages of TDM in the hardware implementation of larger deep learning models. 
This study offers a new approach to reducing the integration complexity of neural networks hardware implementation, holding significant promise for the development of intelligent nanoelectronic devices in the future.","PeriodicalId":114,"journal":{"name":"Advanced Materials","volume":"21 1","pages":""},"PeriodicalIF":27.4000,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Materials","FirstCategoryId":"88","ListUrlMain":"https://doi.org/10.1002/adma.202420218","RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning's growing complexity demands advanced AI chips, driving up hardware costs. Time-division multiplexing (TDM) neural networks offer a promising way to simplify integration. However, inherent device limitations make it difficult for current synaptic transistors to implement TDM networks physically, hindering their practical deployment in modern systems. Here, a novel graphene/2D perovskite/carbon nanotube (CNT) synaptic transistor featuring a sandwich structure is presented. This transistor enables TDM neural networks to be realized at the hardware level. In this structure, the 2D perovskite layer, characterized by its high ion concentration, serves as a neurotransmitter, thereby enhancing synaptic transmission efficiency. In addition, the CNT field-effect transistors, with their large on-off ratio, exhibit a wider range of synaptic current modulation. The device mechanism is analyzed theoretically using molecular dynamics simulations. Furthermore, the impact of TDM on the scale, power, and latency of neural-network hardware implementations is investigated, and a qualitative analysis elucidates the advantages of TDM for implementing larger deep-learning models in hardware. This study offers a new approach to reducing the integration complexity of neural-network hardware, holding significant promise for the development of future intelligent nanoelectronic devices.
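The abstract does not spell out the TDM scheme itself, so the following is only a hedged conceptual sketch: all names, sizes, and numbers here are illustrative assumptions, not taken from the paper. The core idea of time-division multiplexing in this context can be pictured as one physical synapse array being reprogrammed in successive time slots to stand in for the weight matrices of several logical layers, trading latency for device count.

```python
def relu(v):
    # Rectified-linear activation applied elementwise.
    return [max(0.0, a) for a in v]

def matvec(w, v):
    # Plain matrix-vector product for a square weight matrix.
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def mlp_forward_tdm(x, weight_matrices):
    """Run a small MLP by reusing one 'physical' synapse array.

    Each element of weight_matrices is one logical layer; in each time
    slot the shared array is loaded with that layer's weights and used
    once. (Hypothetical illustration, not the paper's circuit.)
    """
    n = len(weight_matrices[0])
    physical = [[0.0] * n for _ in range(n)]  # the single shared array
    for w in weight_matrices:
        for i in range(n):
            physical[i][:] = w[i]             # time slot: load layer weights
        x = relu(matvec(physical, x))         # compute with the shared array
    return x

ws = [[[1.0, 0.0], [0.0, 1.0]],   # layer 0: identity
      [[0.0, 1.0], [1.0, 0.0]]]   # layer 1: swap the two channels
x = [2.0, -3.0]
out = mlp_forward_tdm(x, ws)      # layer 0 -> [2.0, 0.0]; layer 1 -> [0.0, 2.0]

# Device-count trade-off: without TDM, each layer needs its own array;
# with TDM, one array is reused over len(ws) sequential time slots.
devices_without_tdm = len(ws) * 2 * 2  # 8 synaptic devices
devices_with_tdm = 2 * 2               # 4 synaptic devices
```

This toy model shows only the bookkeeping of the trade-off the abstract mentions (smaller scale at the cost of added latency); the paper's contribution is a reconfigurable device that makes such weight reloading feasible in hardware.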
Journal introduction:
Advanced Materials, one of the world's most prestigious journals and the foundation of the Advanced portfolio, has been the home of choice for best-in-class materials science for more than 30 years. Following this fast-growing and interdisciplinary field, it considers and publishes the most important discoveries on any and all materials from materials scientists, chemists, physicists, and engineers, as well as health and life scientists, bringing readers the latest results and trends in modern materials-related research every week.