Integrated Neuromorphic Photonic Computing for AI Acceleration: Emerging Devices, Network Architectures, and Future Paradigms.

IF 26.8 · CAS Tier 1 (Materials Science) · JCR Q1 (Chemistry, Multidisciplinary)
Gaofei Wang, Junyan Che, Chen Gao, Zhou Han, Jiabin Shen, Zengguang Cheng, Peng Zhou
DOI: 10.1002/adma.202508029 (https://doi.org/10.1002/adma.202508029)
Journal: Advanced Materials, e08029
Published: 2025-10-21 (Journal Article)
Citations: 0

Abstract

Deep learning stands as a cornerstone of modern artificial intelligence (AI), revolutionizing fields from computer vision to large language models (LLMs). However, as electronic hardware approaches fundamental physical limits (constrained by transistor scaling challenges, the von Neumann architecture, and thermal dissipation), critical bottlenecks emerge in computational density and energy efficiency. To bridge the gap between algorithmic ambition and hardware limitations, photonic neuromorphic computing emerges as a transformative candidate, exploiting light's inherent parallelism, sub-nanosecond latency, and near-zero thermal losses to natively execute matrix operations, the computational backbone of neural networks. Photonic neural networks (PNNs) have achieved influential milestones in AI acceleration, demonstrating single-chip integration of both inference and in situ training, a leap forward with profound implications for next-generation computing. This review synthesizes a decade of progress in PNN core components, critically analyzing advances in linear synaptic devices, nonlinear neuron devices, and network architectures, and summarizing their respective strengths and persistent challenges. Furthermore, application-specific requirements are systematically analyzed for PNN deployment across computational regimes: cloud-scale and edge/client-side AI. Finally, actionable pathways are outlined for overcoming material- and system-level barriers, emphasizing topology-optimized active/passive devices and advanced packaging strategies. These multidisciplinary advances position PNNs as a paradigm-shifting platform for post-Moore AI hardware.
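The abstract's claim that photonics "natively executes matrix operations" can be made concrete with a common numerical sketch (not taken from this paper): integrated interferometer meshes realize an arbitrary weight matrix W through its singular value decomposition W = U·diag(S)·Vh, where the unitaries U and Vh map onto lossless Mach-Zehnder interferometer meshes and diag(S) onto per-channel attenuators or amplifiers. The matrix names and the 4×4 size below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of SVD-based photonic matrix-vector multiplication:
# a weight matrix is factored as W = U @ diag(S) @ Vh, and the optical
# signal path applies the factors in sequence (Vh mesh, then per-channel
# scaling by S, then U mesh).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # electronic weight matrix to be "mapped" on chip
x = rng.normal(size=4)        # input vector, e.g. optical field amplitudes

U, S, Vh = np.linalg.svd(W)   # factorization into two unitaries and a diagonal

# Emulate the cascaded optical stages: Vh mesh -> attenuators -> U mesh
y_photonic = U @ (S * (Vh @ x))

# The cascaded stages reproduce the ordinary matrix-vector product
assert np.allclose(y_photonic, W @ x)
```

This is only an idealized model: it ignores insertion loss, phase-shifter quantization, and crosstalk, which are exactly the device-level nonidealities the review discusses.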
Source Journal

Advanced Materials (Engineering & Technology – Materials Science: Comprehensive)

CiteScore: 43.00
Self-citation rate: 4.10%
Articles per year: 2182
Review time: 2 months

Journal introduction: Advanced Materials, one of the world's most prestigious journals and the foundation of the Advanced portfolio, has been the home of choice for best-in-class materials science for more than 30 years. Following this fast-growing and interdisciplinary field, the journal considers and publishes the most important discoveries on any and all materials from materials scientists, chemists, physicists, and engineers, as well as health and life scientists, bringing the latest results and trends in modern materials-related research every week.