{"title":"简介:神经形态材料","authors":"A. Alec Talin, Bilge Yildiz","doi":"10.1021/acs.chemrev.5c00340","DOIUrl":null,"url":null,"abstract":"Published as part of <i>Chemical Reviews</i> special issue “Neuromorphic Materials”. The explosive growth in data collection and the need to process it efficiently, as well as the desire to automate increasingly complex tasks in transportation, medical care, manufacturing, security and many other fields have motivated a growing interest in neuromorphic computing. (1) Unlike the binary, transistor-based ON/OFF logic gates and separate logic and memory functionalities employed in digital computing, neuromorphic computing is inspired by animal brains that use interconnected synapses and neurons to perform processing, storage and transmission of information at the same location, while only consuming ∼20 W or less of power. Motivated by the brain’s efficiency, adaptability, self-learning and resiliency qualities, neuromorphic computing can be broadly defined as an approach to processing and storing information using hardware and algorithms inspired by models of biological neural systems. Present research in neuromorphic computing encompasses approaches that vary significantly in their degree of neuro-inspiration, from systems that only incorporate features such as asynchronous, event-driven operation or use crossbar arrays of nonvolatile memory (NVM) elements to accelerate deep neural networks (DNNs), to designs that embrace the extreme parallelism, sparsity, reconfigurability, adaptability, complexity and stochasticity observed in nervous systems. (2) The term ‘neuromorphic’ computing is often credited to Carver Mead, who in the 1980s investigated Si-based analog electronics to replicate functions of the animal retina. (3) Earlier important advances in this field include the work of Frank Rosenblatt, (4) who proposed the concept of the perceptron, Bernard Widrow, (5) who used this concept to build one of the first analog neural networks, the Adaline, and many other researchers (see ref (6) for an historical perspective on neuromorphic computing). With the recent increase in the use of artificial intelligence and large language models, and rising concerns over the associated energy costs, interest in neuromorphic hardware has expanded rapidly. According to some estimates, driven largely by the drastic growth in the training use of artificial intelligence (AI) models using the current computing architectures, the energy cost of computing is projected to reach the energy supply worldwide by 2045. (7) While this is not a realistic outcome, it means that, if more efficient computing technologies are not developed─soon─the world will soon become one where demand for energy and market constraints limit the continued increase of societal access to AI and cloud services from data centers. Data centers used for training and use of these models consume hundreds of terawatt hours of electricity, already past 4% of the US electricity demand. (8) Numerous established microelectronics manufacturers and startups have announced efforts to commercialize energy-efficient neuromorphic chips, with some systems that contain over one billion neurons, capable of supporting spiking algorithms, event-driven asynchronous communications, and some level of reconfigurability. 
The term 'neuromorphic computing' is often credited to Carver Mead, who in the 1980s investigated Si-based analog electronics to replicate functions of the animal retina. (3) Earlier important advances in this field include the work of Frank Rosenblatt, (4) who proposed the concept of the perceptron, of Bernard Widrow, (5) who used this concept to build one of the first analog neural networks, the Adaline, and of many other researchers (see ref (6) for a historical perspective on neuromorphic computing).
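To make the historical reference concrete: Rosenblatt's perceptron learns a linear decision boundary by nudging its weights whenever a prediction is wrong. The toy sketch below applies the classic update rule to a linearly separable problem; it is our own illustrative code, not Rosenblatt's original formulation, and the data are invented.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Classic perceptron rule: w <- w + lr * y_i * x_i on each mistake."""
    w = np.zeros(X.shape[1] + 1)                 # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append constant 1 for bias
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:          # misclassified (labels are +/-1)
                w += lr * yi * xi                # nudge boundary toward x_i
    return w

# Linearly separable toy data: class +1 roughly above the line x0 + x1 = 1.
X = np.array([[0.2, 0.1], [0.9, 0.8], [0.1, 0.3], [1.0, 0.9]])
y = np.array([-1, 1, -1, 1])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
print(preds)    # matches y once training converges
```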
With the recent increase in the use of artificial intelligence (AI) and large language models, and rising concerns over the associated energy costs, interest in neuromorphic hardware has expanded rapidly. According to some estimates, driven largely by the drastic growth in the training and use of AI models on current computing architectures, the energy cost of computing is projected to reach the total worldwide energy supply by 2045. (7) While this is not a realistic outcome, it does mean that, unless more efficient computing technologies are developed soon, demand for energy and market constraints will limit the continued growth of societal access to AI and cloud services from data centers. Data centers used for training and serving these models already consume hundreds of terawatt hours of electricity, exceeding 4% of US electricity demand. (8)

Numerous established microelectronics manufacturers and startups have announced efforts to commercialize energy-efficient neuromorphic chips, including systems that contain over one billion neurons and support spiking algorithms, event-driven asynchronous communication, and some level of reconfigurability. (1,9) Nevertheless, the computational abilities of these schemes remain restricted to relatively narrow tasks and fall far short in terms of learning efficiency, contextualization, and other aspects of the general intelligence associated with mammalian brains. (10) In fact, despite very impressive progress in neuromorphic device technologies, the gap in general intelligence between artificial and biological systems remains enormous.

To narrow this gap and to increase functionality and efficiency, a growing number of researchers have focused on exploring new neuromorphic device concepts that exploit spin, ionic, ferroelectric, microstructural, Mott, and other physical/chemical mechanisms to develop novel computational primitives for neuromorphic computing. (11) Many of these approaches have shown encouraging results for training and inference acceleration of deep neural networks, edge processing of sensor signals, Bayesian neural networks, graph neural networks, and physical reservoir computing schemes. (12) Also promising are approaches that explore coupling between different effects or state variables (e.g., Joule heating leading to Mott or spin transitions) to emulate complex neuronal dynamics, axon-like signal transmission, and ensemble effects. (13,14)
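At the algorithm level, the neuronal dynamics referenced above are often abstracted as leaky integrate-and-fire (LIF) units: an internal state variable (for instance, a temperature driving a Mott transition) integrates its input, relaxes, and emits a spike at a threshold. Below is a minimal discrete-time LIF simulation as one common abstraction; the time constants and thresholds are arbitrary illustrative values, not parameters of any device discussed in this issue.

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (-v + I)/tau; spike and reset at v_th."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt / tau * (-v + i_t)        # leaky integration of the drive
        if v >= v_th:                     # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset                   # membrane state resets after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold drive produces a regular spike train whose
# rate grows with input amplitude -- the basic rate code of spiking hardware.
for amp in (1.2, 2.0, 4.0):
    n = lif_spikes(np.full(1000, amp)).sum()
    print(f"I = {amp:.1f} -> {n} spikes in 1 s")
```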
However, despite a growing number of compelling demonstrations of performance at the individual device level, realizing practical neuromorphic computing systems based on emerging device concepts that can challenge digital Si CMOS-based computing remains difficult. This is in part because most practical computing applications require scaling to many devices, as well as their integration with other components, including digital CMOS; without such scaling and integration, validating the predicted computing advantages is difficult. Also difficult is the design of novel architectures and neuromorphic algorithms, which requires a substantial level of abstraction at the device and small-circuit scale, as well as a 'user-friendly' interface for programming and software development. Reliable fabrication at scale and device integration in turn demand a detailed mechanistic understanding of the physical and chemical processes that underpin the computational primitives; of the effects of material composition, structure, defects, interfaces, and device geometries and dimensions; and of external variables and drivers such as temperature and potential. This is a daunting task that calls for a multidisciplinary codesign approach, with contributions from chemistry, physics, materials science, electrical engineering, computer science, and neuroscience.

In this special issue of Chemical Reviews, we include contributions from leading researchers engaged in advancing neuromorphic computing, focusing on the materials used to make neuromorphic hardware, the mechanisms that enable computational primitives, their advantages in terms of efficiency and latency, and the challenges to making these new computing paradigms broadly applicable. The authors cover several distinct topics, with some overlap, that can be broadly categorized by the type of materials (e.g., organic versus inorganic) as well as by application (e.g., biointegration versus chip-scale systems).

S. Ramanathan et al. discuss how proton doping of various organic and inorganic functional materials leads to behaviors useful for neuromorphic computing, and how these characteristics relate to biological neurotransmitters; they also discuss extensively the approaches and challenges to characterizing proton transport and its effects in materials. Y. Zhou et al. review the scientific basis, status, and challenges of flexible neuromorphic materials and devices, including nanomaterials of zero, one, and two dimensions (quantum dots, nanocrystals, nanowires, 2D layered semiconductors, and graphene), their heterostructures, and polymers. T.-W. Lee et al. focus on biocompatible neuromorphic materials and devices, emphasizing both the sensing and the processing aspects of realizing functional interfaces between machines and the nervous system, including brain-computer interfaces and artificial muscle systems. V. K. Sangwan and M. C. Hersam et al. review recent advances in 2D materials such as the transition metal dichalcogenides for neuromorphic hardware, with emphasis on establishing robust relations between growth, fabrication, transport, and device characteristics, as well as the challenges of integrating 2D materials and van der Waals heterojunctions into neuromorphic electronic and optoelectronic devices and circuits. J. J. Yang et al. provide a detailed review of memristive devices that exploit ion dynamics to realize characteristics useful for neuromorphic computing, ranging from analog synaptic behavior to complex dynamics that emulate neuronal models and involve the coupling of several mechanisms. S. Kumar et al. review the history, mechanisms, and opportunities of neuromorphic device engineering based on filament formation in devices of various materials and configurations, discussing both thermodynamic and kinetic aspects to provide a more unified understanding of the phenomena and how they can be leveraged to advance neuromorphic device concepts. A. A. Talin, Y. Li, and B. Yildiz et al. review the scientific foundations and device applications of electrochemical random access memory (ECRAM), including extensive discussions of protonic, lithium-ion, and oxygen-vacancy electrochemical memories, their respective advantages and disadvantages, and the opportunities for realizing artificial synaptic and neuronal devices. D. Ielmini and G. Pedretti review the potential of resistive-switching random-access memory (RRAM) for in-memory computing (IMC), outlining its advantages and the paths to meeting the requirements of a range of storage and computing applications from the materials, device, circuit, and application viewpoints. G. S. Syed et al. review the current state of phase-change materials (PCM), PCM device physics, and the design and fabrication of PCM-based chips for in-memory computing, and provide an overview of the landscape of applications and future developments.
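A recurring theme across the ECRAM, RRAM, and PCM reviews above is how faithfully a device's conductance can be incremented and decremented when used as an analog synaptic weight. The sketch below uses one commonly assumed phenomenological model, in which each potentiation or depression pulse produces an update that shrinks nonlinearly as the conductance approaches its bounds; the parameters are illustrative assumptions, not measurements from any device in this issue.

```python
import numpy as np

def pulse_update(g, potentiate, g_min=0.0, g_max=1.0, alpha=0.05, nl=3.0):
    """One programming pulse with a nonlinear, bounded conductance response.
    nl > 0 makes steps shrink as g approaches the bound; such asymmetric
    updates degrade training accuracy, which is why near-linear devices
    (a selling point of ECRAM) are sought."""
    if potentiate:
        step = alpha * np.exp(-nl * (g - g_min) / (g_max - g_min))
        return min(g + step, g_max)
    step = alpha * np.exp(-nl * (g_max - g) / (g_max - g_min))
    return max(g - step, g_min)

# Trace the conductance over 50 potentiating then 50 depressing pulses.
g, trace = 0.0, []
for k in range(100):
    g = pulse_update(g, potentiate=(k < 50))
    trace.append(g)
print(f"after potentiation: {trace[49]:.3f}, after depression: {trace[-1]:.3f}")
```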
We hope that these Reviews will help investigators interested in contributing to this rapidly evolving and fertile field appreciate how its different aspects and challenges are connected, and identify opportunities for innovative solutions guided by fundamental understanding.

A. Alec Talin is a Senior Scientist at the Sandia National Laboratories Chemistry, Combustion and Material Science Center and an Adjunct Associate Professor of Materials Science at the University of Maryland, College Park. Prior to joining Sandia, Alec spent six years at Motorola Laboratories, where he managed the Materials Characterization Lab, and three years at the National Institute of Standards and Technology, where he was a project lead for energy conversion and storage. His research focuses on microelectronics and ionics, with applications to energy-efficient computing, analog electronics, radiation effects, and energy technologies. He is a Fellow of the American Physical Society.

Bilge Yildiz is the Breene M. Kerr (1951) Professor at the Massachusetts Institute of Technology, in the Departments of Nuclear Science and Engineering and Materials Science and Engineering, where she leads the Laboratory for Electrochemical Interfaces. Yildiz's research focuses on laying the scientific groundwork for next-generation electrochemical devices for energy conversion and information processing. The scientific insights derived from her research guide the design of novel materials and interfaces for efficient and durable solid oxide fuel and electrolysis cells, energy-efficient brain-inspired computing, and solid-state batteries. Yildiz's research and teaching have been recognized by the Argonne Pace Setter (2006), ANS Outstanding Teaching (2008), NSF CAREER (2011), IU-MRS Somiya (2012), ECS Charles Tobias Young Investigator (2012), ACerS Ross Coffin Purdy (2018), and LG Chem Global Innovation Contest (2020) awards, the Rahmi M. Koc Medal of Science (2022), and the Faraday Medal of the Royal Society of Chemistry (2024). She is a Fellow of the American Physical Society (2021), the Royal Society of Chemistry (2022), and the Electrochemical Society (2023), and an elected member of the Austrian Academy of Sciences (2023).

AAT was supported by the Sandia Laboratory Directed Research and Development (LDRD) program. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's National Nuclear Security Administration under contract DE-NA-0003525. The views expressed in this editorial do not necessarily represent the views of the U.S. DOE or the United States Government. BY was supported by the U.S. DOE Office of Science, Basic Energy Sciences, under Award No. DE-SC0023450 as part of the Hydrogen in Energy and Information Sciences (HEISs) Research Center, by the MIT-IBM Watson AI Lab, and by the SUPREME Center of the JUMP 2.0 Program of the Semiconductor Research Corporation.
Journal Introduction:
Chemical Reviews is a highly regarded and highest-ranked journal covering the general topic of chemistry. Its mission is to provide comprehensive, authoritative, critical, and readable reviews of important recent research in organic, inorganic, physical, analytical, theoretical, and biological chemistry.
Since 1985, Chemical Reviews has also published periodic thematic issues that focus on a single theme or direction of emerging research.