Ruihua Yu, Ze Wang, Qi Liu, Bin Gao, Zhenqi Hao, Tao Guo, Sanchuan Ding, Junyang Zhang, Qi Qin, Dong Wu, Peng Yao, Qingtian Zhang, Jianshi Tang, He Qian, Huaqiang Wu
{"title":"A full-stack memristor-based computation-in-memory system with software-hardware co-development","authors":"Ruihua Yu, Ze Wang, Qi Liu, Bin Gao, Zhenqi Hao, Tao Guo, Sanchuan Ding, Junyang Zhang, Qi Qin, Dong Wu, Peng Yao, Qingtian Zhang, Jianshi Tang, He Qian, Huaqiang Wu","doi":"10.1038/s41467-025-57183-0","DOIUrl":null,"url":null,"abstract":"<p>The practicality of memristor-based computation-in-memory (CIM) systems is limited by the specific hardware design and the manual parameters tuning process. Here, we introduce a software-hardware co-development approach to improve the flexibility and efficiency of the CIM system. The hardware component supports flexible dataflow, and facilitates various weight and input mappings. The software aspect enables automatic model placement and multiple efficient optimizations. The proposed optimization methods can enhance the robustness of model weights against hardware nonidealities during the training phase and automatically identify the optimal hardware parameters to suppress the impacts of analogue computing noise during the inference phase. Utilizing the full-stack system, we experimentally demonstrate six neural network models across four distinct tasks on the hardware automatically. 
With the help of optimization methods, we observe a 4.76% accuracy improvement for ResNet-32 during the training phase, and a 3.32% to 9.45% improvement across the six models during the on-chip inference phase.</p>","PeriodicalId":19066,"journal":{"name":"Nature Communications","volume":"66 1","pages":""},"PeriodicalIF":14.7000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Communications","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1038/s41467-025-57183-0","RegionNum":1,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
The practicality of memristor-based computation-in-memory (CIM) systems is limited by specific hardware designs and the manual parameter-tuning process. Here, we introduce a software-hardware co-development approach to improve the flexibility and efficiency of the CIM system. The hardware component supports flexible dataflow and facilitates various weight and input mappings. The software component enables automatic model placement and multiple efficient optimizations. The proposed optimization methods can enhance the robustness of model weights against hardware nonidealities during the training phase and automatically identify the optimal hardware parameters to suppress the impact of analogue computing noise during the inference phase. Utilizing the full-stack system, we experimentally demonstrate six neural network models across four distinct tasks, deployed on the hardware automatically. With the help of these optimization methods, we observe a 4.76% accuracy improvement for ResNet-32 during the training phase, and a 3.32% to 9.45% improvement across the six models during the on-chip inference phase.
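The training-phase robustness described above is commonly achieved by injecting device-like perturbations into the weights during training so the learned model tolerates conductance variation at deployment. The sketch below is a minimal, hypothetical illustration of that general idea (noise-aware training on a single linear layer with multiplicative Gaussian weight noise); the noise model, function names, and hyperparameters are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(weights, x, noise_std=0.05):
    """Forward pass with multiplicative Gaussian weight noise,
    emulating memristor conductance variation (assumed noise model)."""
    perturbed = weights * (1.0 + rng.normal(0.0, noise_std, weights.shape))
    return perturbed @ x

def robust_step(weights, x, target, lr=0.05, samples=8, noise_std=0.05):
    """One gradient step on 0.5*||(W*noise)@x - target||^2, averaged over
    several noise draws so the weights stay accurate under perturbation."""
    grad = np.zeros_like(weights)
    for _ in range(samples):
        noise = 1.0 + rng.normal(0.0, noise_std, weights.shape)
        err = (weights * noise) @ x - target
        # Chain rule through the multiplicative noise mask.
        grad += np.outer(err, x) * noise
    return weights - lr * grad / samples
```

The averaging over noise samples is the key design choice: it pushes the optimizer toward weight settings whose outputs are insensitive to per-device variation, rather than toward a sharp minimum that a single noisy readout would miss.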
Journal introduction:
Nature Communications, an open-access journal, publishes high-quality research spanning all areas of the natural sciences. Papers featured in the journal showcase significant advances relevant to specialists in each respective field. With a 2-year impact factor of 16.6 (2022) and a median time of 8 days from submission to the first editorial decision, Nature Communications is committed to rapid dissemination of research findings. As a multidisciplinary journal, it welcomes contributions from biological, health, physical, chemical, Earth, social, mathematical, applied, and engineering sciences, aiming to highlight important breakthroughs within each domain.