Architectures, variants, and performance of neural operators: A comparative review

IF 5.5 · CAS Zone 2 (Computer Science) · Q1 Computer Science, Artificial Intelligence
Shengjun Liu, Yu Yu, Ting Zhang, Hanchao Liu, Xinru Liu, Deyu Meng
DOI: 10.1016/j.neucom.2025.130518
Journal: Neurocomputing, Volume 648, Article 130518
Published: 2025-06-06 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0925231225011907
Citations: 0

Abstract

In recent years, neural operators have emerged as effective alternatives to traditional numerical solvers, valued for their computational efficiency, strong generalization, and high solution accuracy; their design and application have attracted broad research interest. This paper provides a comprehensive summary and analysis of neural operators. We categorize them into three types based on their architecture: deep operator networks (DeepONets), integral kernel operators, and transformer-based neural operators, and discuss the basic structure and properties of each. We then summarize and discuss the variants and extensions of these three types along three directions: (1) operator basis-based neural operator variants; (2) physics-informed neural operator variants; and (3) applications of neural operator variants to complex systems. We also analyze the characteristics and performance of the different operator methods through numerical experiments. Building on these discussions and analyses, we offer perspectives on the challenges facing each class of neural operator and on potential enhancements, providing practical guidance for the application and development of neural operators.
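To make the first of the three architectures concrete: a DeepONet factorizes the learned operator as G(u)(y) ≈ Σ_k b_k(u) · t_k(y), where a branch network encodes the input function u sampled at fixed sensor locations and a trunk network encodes the query coordinate y. The sketch below is a minimal, untrained NumPy illustration of this branch–trunk forward pass; the layer sizes, sensor grid, and random weights are illustrative assumptions, not the configurations used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    # Random (untrained) weights for a small MLP, for illustration only.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Plain tanh MLP; linear final layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# DeepONet: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y)
m, p = 50, 32                            # number of sensors, latent basis size
branch = mlp_params([m, 64, p], rng)     # encodes u sampled at m fixed sensors
trunk = mlp_params([1, 64, p], rng)      # encodes a scalar query location y

u_sensors = np.sin(np.linspace(0, np.pi, m))[None, :]  # one input function, shape (1, m)
y_query = np.linspace(0, 1, 100)[:, None]              # 100 query points, shape (100, 1)

b_out = mlp_forward(branch, u_sensors)   # branch coefficients, shape (1, p)
t_out = mlp_forward(trunk, y_query)      # trunk basis values, shape (100, p)
G_uy = t_out @ b_out.T                   # predicted output function at y, shape (100, 1)
print(G_uy.shape)
```

Note the key property this structure buys: because the trunk network accepts arbitrary coordinates y, the trained operator can be evaluated at query points not seen during training, which is one source of the discretization flexibility the review discusses.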
Source journal: Neurocomputing (Engineering/Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Annual publications: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.