{"title":"使用忆阻器交叉棒阵列实现内存学习的困难和方法","authors":"Wei Wang, Yang Li, Minghua Wang","doi":"10.1088/2634-4386/ad6732","DOIUrl":null,"url":null,"abstract":"\n Crossbar arrays of memristors are promising to accelerate the deep learning algorithm as a non-von-Neumann architecture, where the computation happens at the location of the memory. The computations are parallelly conducted employing the basic physical laws. However, current research works mainly focus on the offline training of deep neural networks, i.e., only the information forwarding is accelerated by the crossbar arrays. Two other essential operations, i.e., error backpropagation and weight update, are mostly simulated and coordinated by a conventional computer in von Neumann architecture, respectively. Several different in situ learning schemes incorporating error backpropagation and/or weight updates have been proposed and investigated through simulation. Nevertheless, they met the issues of non-ideal synaptic behaviors of the memristors and the complexities of the neural circuits surrounding crossbar arrays. Here we review the difficulties in implementing the error backpropagation and weight update operations for online training or in-memory learning that are adapted to noisy and non-ideal memristors. We hope this work will bridge the gap between the device engineers who are struggling to develop an ideal synaptic device and neural network algorithmists who are assuming that ideal devices are right at hand. The close of this gap could push forward the information processing system paradigm from computing-in-memory to learning-in-memory, aiming at a standalone non-von-Neumann computing system.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"65 19","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Difficulties and approaches in enabling learning-in-memory using crossbar arrays of memristors\",\"authors\":\"Wei Wang, Yang Li, Minghua Wang\",\"doi\":\"10.1088/2634-4386/ad6732\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Crossbar arrays of memristors are promising to accelerate the deep learning algorithm as a non-von-Neumann architecture, where the computation happens at the location of the memory. The computations are parallelly conducted employing the basic physical laws. However, current research works mainly focus on the offline training of deep neural networks, i.e., only the information forwarding is accelerated by the crossbar arrays. Two other essential operations, i.e., error backpropagation and weight update, are mostly simulated and coordinated by a conventional computer in von Neumann architecture, respectively. Several different in situ learning schemes incorporating error backpropagation and/or weight updates have been proposed and investigated through simulation. Nevertheless, they met the issues of non-ideal synaptic behaviors of the memristors and the complexities of the neural circuits surrounding crossbar arrays. Here we review the difficulties in implementing the error backpropagation and weight update operations for online training or in-memory learning that are adapted to noisy and non-ideal memristors. We hope this work will bridge the gap between the device engineers who are struggling to develop an ideal synaptic device and neural network algorithmists who are assuming that ideal devices are right at hand. 
The close of this gap could push forward the information processing system paradigm from computing-in-memory to learning-in-memory, aiming at a standalone non-von-Neumann computing system.\",\"PeriodicalId\":198030,\"journal\":{\"name\":\"Neuromorphic Computing and Engineering\",\"volume\":\"65 19\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuromorphic Computing and Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/2634-4386/ad6732\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuromorphic Computing and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2634-4386/ad6732","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Difficulties and approaches in enabling learning-in-memory using crossbar arrays of memristors
Crossbar arrays of memristors are a promising non-von Neumann architecture for accelerating deep learning algorithms, because the computation happens at the location of the memory and is carried out in parallel by exploiting basic physical laws. However, current research mainly focuses on the offline training of deep neural networks, i.e., only information forwarding (the forward pass) is accelerated by the crossbar arrays. The two other essential operations, error backpropagation and weight update, are mostly simulated or coordinated by a conventional computer built on the von Neumann architecture. Several in situ learning schemes incorporating error backpropagation and/or weight update have been proposed and investigated through simulation. Nevertheless, they run into the non-ideal synaptic behaviors of memristors and the complexity of the neural circuits surrounding the crossbar arrays. Here we review the difficulties in implementing the error backpropagation and weight update operations for online training, i.e., in-memory learning, in a way that is adapted to noisy and non-ideal memristors. We hope this work will bridge the gap between device engineers, who are struggling to develop an ideal synaptic device, and neural network algorithm designers, who assume that ideal devices are at hand. Closing this gap could push the information processing paradigm forward from computing-in-memory to learning-in-memory, aiming at a standalone non-von Neumann computing system.
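To make the two operations discussed above concrete, the following is a minimal NumPy sketch of a crossbar model: the forward pass is a matrix-vector product obtained in one read step (Ohm's law per cell, Kirchhoff's current law summing along columns), and the in situ weight update is a local outer-product write subject to two of the non-idealities the review discusses, programming noise and a bounded conductance window. The class name, conductance window, and noise figure are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MIN, G_MAX = 1e-6, 1e-4   # conductance window in siemens (illustrative)
WRITE_NOISE = 0.05          # relative std of programming noise (illustrative)

class MemristorCrossbar:
    """Toy model of a memristor crossbar storing a weight matrix as conductances."""

    def __init__(self, rows, cols):
        # Initialise each cell to a random conductance inside the device window.
        self.G = rng.uniform(G_MIN, G_MAX, size=(rows, cols))

    def forward(self, v):
        """Forward pass: Ohm's law gives I = G * V per cell, and Kirchhoff's
        current law sums the cell currents along each column, so one read
        step yields a full matrix-vector product."""
        return self.G.T @ v  # output currents, one per column

    def update(self, v, delta):
        """In situ outer-product update: each cell sees only its own row
        voltage and column error signal, so the update is local and parallel."""
        dG = np.outer(v, delta)
        # Non-ideal programming: multiplicative write noise and hard
        # conductance bounds distort the intended update.
        dG *= 1.0 + WRITE_NOISE * rng.standard_normal(dG.shape)
        self.G = np.clip(self.G + dG, G_MIN, G_MAX)

# Usage: one forward read and one local update of a 4x3 crossbar.
xbar = MemristorCrossbar(4, 3)
v = rng.uniform(0.0, 0.2, size=4)        # input voltages applied to the rows
i_out = xbar.forward(v)                  # column currents ~ G^T v
error = rng.standard_normal(3) * 1e-4    # stand-in for a backpropagated error
xbar.update(v, error)
```

In a fully in-memory learning system, the `error` vector would itself be produced by a backward read through the same (or a companion) crossbar rather than by a host computer; how to realize that backward pass, and how to keep the noisy, bounded update from derailing training, are exactly the difficulties the review surveys.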