Title: Memristor-Based Artificial Neural Networks for Hardware Neuromorphic Computing
Authors: Boyan Jin, Zhenlong Wang, Tianyu Wang, Jialin Meng
Journal: Research, vol. 8, pp. 0758 (2025); Impact Factor 10.7, JCR Q1 (Multidisciplinary)
DOI: 10.34133/research.0758
Published: 2025-07-04 (Journal Article, eCollection)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12231232/pdf/
Citations: 0
Abstract
Artificial neural networks have long been studied to emulate the cognitive capabilities of the human brain for artificial intelligence (AI) computing. However, as computational demands intensify, conventional hardware based on transistor and complementary metal-oxide-semiconductor (CMOS) technology faces substantial limitations due to the separation of memory and processing, a challenge commonly known as the von Neumann bottleneck. In this review, we examine how memristors, novel nonvolatile memory devices whose resistance depends on the history of applied signals, can be harnessed to build more efficient and scalable neural networks. We provide a comprehensive background on the evolution of neural network models and memristors, and introduce the principles of memristive devices, which mimic the dynamic behavior of biological synapses. Various neural network architectures, including convolutional, recurrent, and spiking models, are discussed, highlighting the advantages of integrating memristors for in-memory computing and parallel processing. Our review further examines key mechanisms such as synaptic plasticity, encompassing both long-term potentiation and depression, as well as emerging learning algorithms that leverage memristive behavior. Finally, we identify current challenges, such as achieving ultra-low power consumption, high device uniformity, and seamless system integration, and propose future directions in materials science, device engineering, system integration, and industrialization. These advances suggest that memristor-based neural networks may pave the way for next-generation AI systems that combine low power consumption with high computational performance, ultimately bridging the gap between biological and electronic information processing.
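The in-memory computing and synaptic-plasticity ideas summarized above can be sketched in a few lines of NumPy. This is an illustrative toy model, not code from the paper: a memristor crossbar stores a weight matrix as device conductances, so applying input voltages to the rows yields column currents I = Gᵀ·V in a single analog step (Ohm's and Kirchhoff's laws), while potentiation and depression correspond to nudging conductances up or down. The conductance window and update rule here are assumed values for illustration; real devices are nonlinear, stochastic, and bounded in ways this sketch ignores.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4  # assumed device conductance window (siemens)

def weights_to_conductance(w):
    """Map normalized weights in [0, 1] onto the conductance window."""
    return G_MIN + w * (G_MAX - G_MIN)

def crossbar_mvm(G, v):
    """Analog matrix-vector multiply: column currents I_j = sum_i G[i, j] * v[i]."""
    return G.T @ v

def update_conductance(G, dw):
    """Toy long-term potentiation (dw > 0) / depression (dw < 0):
    shift conductances by a fraction of the window, clipped to physical limits."""
    return np.clip(G + dw * (G_MAX - G_MIN), G_MIN, G_MAX)

rng = np.random.default_rng(0)
W = rng.random((4, 3))                 # normalized synaptic weights (4 inputs, 3 outputs)
G = weights_to_conductance(W)          # program weights as conductances
v = np.array([0.2, 0.1, 0.0, 0.3])     # read voltages applied to the rows (volts)

currents = crossbar_mvm(G, v)          # one-step in-memory matrix-vector product
G = update_conductance(G, 0.05)        # potentiate all synapses slightly
```

The key point of the sketch is that the multiply-accumulate happens "where the weights live": no weight data moves between a separate memory and processor, which is exactly the von Neumann bottleneck the review argues memristors avoid.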
Journal Introduction:
Research serves as a global platform for academic exchange, collaboration, and technological advancement. The journal welcomes high-quality research contributions from any domain and from authors around the globe.
Alongside fundamental research in the life and physical sciences, Research highlights significant findings and issues in engineering and applied science. The journal features original research articles, reviews, perspectives, and editorials, fostering a diverse and dynamic scholarly environment.