Title: Integrated Neuromorphic Photonic Computing for AI Acceleration: Emerging Devices, Network Architectures, and Future Paradigms
Authors: Gaofei Wang, Junyan Che, Chen Gao, Zhou Han, Jiabin Shen, Zengguang Cheng, Peng Zhou
Journal: Advanced Materials (Impact Factor 26.8; JCR Q1, Chemistry, Multidisciplinary), e08029
Publication date: 2025-10-21 (Journal Article)
DOI: https://doi.org/10.1002/adma.202508029
Citations: 0
Abstract
Deep learning stands as a cornerstone of modern artificial intelligence (AI), revolutionizing fields from computer vision to large language models (LLMs). However, as electronic hardware approaches fundamental physical limits, constrained by transistor scaling challenges, the von Neumann architecture, and thermal dissipation, critical bottlenecks emerge in computational density and energy efficiency. To bridge the gap between algorithmic ambition and hardware limitations, photonic neuromorphic computing emerges as a transformative candidate, exploiting light's inherent parallelism, sub-nanosecond latency, and near-zero thermal losses to natively execute matrix operations, the computational backbone of neural networks. Photonic neural networks (PNNs) have achieved influential milestones in AI acceleration, demonstrating single-chip integration of both inference and in situ training, a leap forward with profound implications for next-generation computing. This review synthesizes a decade of progress in the core components of PNNs, critically analyzing advances in linear synaptic devices, nonlinear neuron devices, and network architectures, and summarizing their respective strengths and persistent challenges. Furthermore, application-specific requirements are systematically analyzed for PNN deployment across computational regimes: cloud-scale and edge/client-side AI. Finally, actionable pathways are outlined for overcoming material- and system-level barriers, emphasizing topology-optimized active/passive devices and advanced packaging strategies. These multidisciplinary advances position PNNs as a paradigm-shifting platform for post-Moore AI hardware.
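The component taxonomy the abstract names, linear synaptic devices and nonlinear neuron devices, mirrors the two steps of a neural-network layer: a matrix-vector product followed by an elementwise nonlinearity. A minimal numerical sketch of that decomposition (illustrative only, not from the review; the weight and input values are hypothetical):

```python
# One neural-network layer = linear step + nonlinear step.
# In a PNN, the matrix-vector product is what linear synaptic devices
# compute in the optical domain; the activation is the role of
# nonlinear neuron devices.

def matvec(W, x):
    """Dense matrix-vector product: the linear 'synaptic' step."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def relu(v):
    """Elementwise nonlinearity: the 'neuron' step."""
    return [max(0.0, a) for a in v]

W = [[0.5, -1.0],
     [2.0, 0.25]]        # hypothetical synaptic weights
x = [1.0, 2.0]           # hypothetical input activations

y = relu(matvec(W, x))   # one layer: W @ x, then activation
print(y)                 # [0.0, 2.5]
```

The point of executing this linear step optically is that the multiply-accumulate operations, which dominate the cost of inference on electronic hardware, happen in parallel as light propagates.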
About the journal:
Advanced Materials, one of the world's most prestigious journals and the foundation of the Advanced portfolio, has been the home of choice for best-in-class materials science for more than 30 years. Following this fast-growing and interdisciplinary field, the journal considers and publishes the most important discoveries on any and all materials from materials scientists, chemists, physicists, and engineers, as well as health and life scientists, bringing the latest results and trends in modern materials-related research every week.