{"title":"Theoretical Advances on Stochastic Configuration Networks.","authors":"Xiufeng Yan,Dianhui Wang,Ivan Y Tyukin","doi":"10.1109/tnnls.2025.3608555","DOIUrl":null,"url":null,"abstract":"This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement in the sampling distribution. Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants-Newton-Raphson GSCN (NR-GSCN) and particle swarm optimization GSCN (PSO-GSCN)-that incorporate Newton-Raphson refinement and particle swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures compared to existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying out a principled foundation for subsequent developments in the field of randomized neural network (NN) training.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"124 1","pages":""},"PeriodicalIF":8.9000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2025.3608555","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement in the sampling distribution. Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants, the Newton-Raphson GSCN (NR-GSCN) and the particle swarm optimization GSCN (PSO-GSCN), which incorporate Newton-Raphson refinement and particle swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures than existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying out a principled foundation for subsequent developments in the field of randomized neural network (NN) training.
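As a rough illustration of the greedy, residual-driven construction summarized above (a sketch under stated assumptions, not the authors' published algorithm), the following Python fragment samples random candidate hidden nodes, keeps the one whose activation most reduces the current residual, and sets that node's output weight by projecting the residual onto its activation so that previously fitted weights are left untouched. The function names (gscn_fit, gscn_predict), the tanh activation, the uniform sampling range, and the stopping rule are all assumptions made for illustration.

```python
import numpy as np

def gscn_fit(X, y, max_nodes=50, n_candidates=100, tol=1e-4, scale=1.0, seed=None):
    """Illustrative greedy incremental construction (not the paper's exact method).

    Each step samples n_candidates random nodes, scores them by how much of the
    current residual they can explain, and adds the best one with an output
    weight given by projecting the residual onto its activation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    residual = np.asarray(y, dtype=float).copy()
    weights, biases, betas = [], [], []
    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            w = rng.uniform(-scale, scale, d)
            b = rng.uniform(-scale, scale)
            h = np.tanh(X @ w + b)                      # candidate activation on the data
            gain = (h @ residual) ** 2 / (h @ h)        # residual energy this node can remove
            if best is None or gain > best[0]:
                best = (gain, w, b, h)
        _, w, b, h = best
        beta = (h @ residual) / (h @ h)                 # output weight = projection coefficient
        # Note: keeping earlier weights fixed is exact only when activations are
        # orthogonal; this simplification is what the projection-based view motivates.
        residual = residual - beta * h
        weights.append(w); biases.append(b); betas.append(beta)
        if np.linalg.norm(residual) / np.sqrt(n) < tol:
            break
    return np.array(weights), np.array(biases), np.array(betas)

def gscn_predict(X, weights, biases, betas):
    return np.tanh(X @ weights.T + biases) @ betas

# Example usage on a toy regression task
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = gscn_fit(X, y, seed=0)
y_hat = gscn_predict(X, W, b, beta)
```

The NR-GSCN and PSO-GSCN variants described in the abstract would replace the pure random candidate search in the inner loop with Newton-Raphson refinement or particle swarm exploration of the node parameters; those steps are omitted here.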
Journal introduction:
IEEE Transactions on Neural Networks and Learning Systems publishes scholarly articles on the theory, design, and applications of neural networks and other learning systems, with an emphasis on technical and scientific research in this domain.