Ultra-compact multi-task processor based on in-memory optical computing
Wencan Liu, Yuyao Huang, Run Sun, Tingzhao Fu, Sigang Yang, Hongwei Chen
Light: Science & Applications, published 2025-03-24
DOI: 10.1038/s41377-025-01814-0
Citations: 0
Abstract
To enhance the computational density and energy efficiency of on-chip neuromorphic hardware, this study introduces a novel network architecture for multi-task processing with in-memory optical computing. On-chip optical neural networks are celebrated for their capability to transduce a substantial volume of parameters into optical form while performing passive computation, yet they face challenges in scalability and multitasking. Leveraging the principles of transfer learning, this approach embeds the majority of parameters into fixed optical components and a minority into adjustable electrical components. Furthermore, by employing a deep regression algorithm to model the physical propagation process, a compact optical neural network can handle diverse tasks. In this work, two ultra-compact in-memory diffraction-based chips integrating more than 60,000 parameters/mm2 were fabricated, employing a deep neural network model and the hard parameter sharing algorithm to perform multifaceted classification and regression tasks, respectively. The experimental results demonstrate that these chips achieve accuracies comparable to those of electrical networks while reducing the power-intensive digital computation by 90%. Our work heralds strong potential for advancing in-memory optical computing frameworks and next-generation artificial intelligence platforms.
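The architecture described above splits the network into a large frozen part (the fixed optical diffractive layers) and a small trainable part per task (the adjustable electrical components), following the hard parameter sharing pattern from multi-task learning. A minimal sketch of that pattern, with hypothetical layer sizes and plain NumPy standing in for both the optical and electrical stages (the paper's actual model of physical diffraction is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen shared trunk: stands in for the fixed optical diffractive layers,
# whose parameters are written into the chip and never updated.
W_shared = rng.normal(size=(64, 32))  # hypothetical dimensions

# Small task-specific heads: stand in for the adjustable electrical
# components holding the minority of (trainable) parameters.
heads = {
    "classification": rng.normal(size=(32, 10)),  # e.g. 10 classes
    "regression": rng.normal(size=(32, 1)),       # scalar output
}

def forward(x, task):
    """Hard parameter sharing: one frozen trunk, one head per task."""
    h = np.maximum(x @ W_shared, 0.0)  # shared representation (ReLU)
    return h @ heads[task]             # only heads[task] would be trained

x = rng.normal(size=(4, 64))               # a batch of 4 inputs
print(forward(x, "classification").shape)  # (4, 10)
print(forward(x, "regression").shape)      # (4, 1)
```

During training, gradients would be applied only to the entries of `heads`, mirroring how the chips confine digital computation to the small electrical portion while the shared optical trunk computes passively.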