Theoretical Advances on Stochastic Configuration Networks.

IF 8.9 | CAS Zone 1 (Computer Science) | JCR Q1: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Xiufeng Yan, Dianhui Wang, Ivan Y. Tyukin
{"title":"随机构型网络的理论进展。","authors":"Xiufeng Yan,Dianhui Wang,Ivan Y Tyukin","doi":"10.1109/tnnls.2025.3608555","DOIUrl":null,"url":null,"abstract":"This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement in the sampling distribution. Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants-Newton-Raphson GSCN (NR-GSCN) and particle swarm optimization GSCN (PSO-GSCN)-that incorporate Newton-Raphson refinement and particle swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures compared to existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying out a principled foundation for subsequent developments in the field of randomized neural network (NN) training.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"124 1","pages":""},"PeriodicalIF":8.9000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Theoretical Advances on Stochastic Configuration Networks.\",\"authors\":\"Xiufeng Yan,Dianhui Wang,Ivan Y Tyukin\",\"doi\":\"10.1109/tnnls.2025.3608555\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement in the sampling distribution. 
Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants-Newton-Raphson GSCN (NR-GSCN) and particle swarm optimization GSCN (PSO-GSCN)-that incorporate Newton-Raphson refinement and particle swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures compared to existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying out a principled foundation for subsequent developments in the field of randomized neural network (NN) training.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"124 1\",\"pages\":\"\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tnnls.2025.3608555\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3608555","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement in the sampling distribution. Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants, Newton-Raphson GSCN (NR-GSCN) and particle swarm optimization GSCN (PSO-GSCN), which incorporate Newton-Raphson refinement and particle swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures compared to existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying a principled foundation for subsequent developments in the field of randomized neural network (NN) training.
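
The greedy, projection-based node addition the abstract describes can be illustrated with a short sketch: each step samples random candidate hidden nodes, sets a candidate's output weight by orthogonally projecting the current residual onto its activation vector, and keeps the candidate that removes the most residual energy, so earlier output weights are never recomputed. The NumPy code below is a minimal illustration under stated assumptions: the name greedy_scn_fit, the tanh activation, the uniform [-1, 1] sampling range, and the stopping rule are hypothetical choices, and the paper's SCN residual constraint and its Newton-Raphson/PSO refinements are not reproduced here.

```python
import numpy as np

def greedy_scn_fit(X, y, max_nodes=50, n_candidates=100, tol=1e-4, rng=None):
    """Greedy incremental random-basis regression (illustrative sketch,
    not the paper's implementation).

    Each iteration samples `n_candidates` random hidden nodes, computes each
    candidate's output weight as the orthogonal projection of the current
    residual onto the candidate's activation vector, and keeps the node that
    removes the most residual energy. Earlier output weights stay fixed.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    residual = y.astype(float).copy()
    weights, biases, betas = [], [], []

    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            w = rng.uniform(-1.0, 1.0, size=d)   # assumed sampling range
            b = rng.uniform(-1.0, 1.0)
            h = np.tanh(X @ w + b)               # candidate activation vector
            hh = h @ h
            if hh < 1e-12:
                continue
            beta = (h @ residual) / hh           # orthogonal projection coefficient
            gain = beta * (h @ residual)         # residual energy removed: <e,h>^2 / ||h||^2
            if best is None or gain > best[0]:
                best = (gain, w, b, beta, h)
        if best is None:
            break
        _, w, b, beta, h = best
        weights.append(w); biases.append(b); betas.append(beta)
        residual -= beta * h                     # residual norm is nonincreasing
        if np.linalg.norm(residual) <= tol * np.sqrt(n):
            break
    return np.array(weights), np.array(biases), np.array(betas)

# Illustrative usage on a toy 1-D regression problem.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = greedy_scn_fit(X, y, rng=0)
pred = np.tanh(X @ W.T + b) @ beta
print(len(beta), float(np.sqrt(np.mean((pred - y) ** 2))))
```

Because each new node's weight is the projection coefficient of the residual onto that node's activations, the residual is orthogonal to the accepted node's direction after the update, which is what allows the residual to shrink maximally at each step without revisiting earlier weights.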
Source journal: IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Annual articles: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.