Chinnamuthu Subramani, Ravi Prasad K. Jagannath, Venkatanareshbabu Kuppili
Engineering Applications of Artificial Intelligence, Volume 148, Article 110389. DOI: 10.1016/j.engappai.2025.110389. Published 2025-03-08.
Improving Deep Random Vector Functional Link Networks through computational optimization of regularization parameters
Deep Random Vector Functional Link (dRVFL) networks are a class of randomization-based deep neural networks known for their rapid learning capabilities and universal approximation potential. Despite these advantages, the dRVFL model faces substantial computational and memory challenges as the training data, hidden layers, or input feature dimensions grow. These issues are intensified by regularization, as optimal performance requires repeatedly solving large-scale linear systems to tune the regularization parameter. Traditional matrix inversion methods for this task are computationally intensive, memory-demanding, and prone to numerical instability. This study introduces a computationally efficient two-stage approach to address these limitations. In the first stage, a randomized algorithm-based low-rank approximation computes a compressed representation of the feature matrix in O(mnk) for an m × n matrix (k ≪ min(m, n)), requiring only two passes over the data and significantly reducing costs. In the second stage, the derived solution is used to optimize λ via the Golden Section search and a Menger curvature-based technique, enabling efficient exploration of the continuous domain with minimal function evaluations. Furthermore, the theoretical framework derives an upper bound on the error of the approximated regularized solution, combining deterministic and probabilistic insights. The proposed method, validated on multiple classification datasets, demonstrates up to a 6% improvement in accuracy compared to state-of-the-art feedforward neural network algorithms. Statistical evaluations further confirm its robustness and superior performance over seven baseline algorithms. Overall, the proposed method enhances the scalability, robustness, and efficiency of dRVFL for large-scale applications.
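The two stages described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rank k, the oversampling amount p, and the objective eventually handed to the search (e.g. validation error as a function of λ) are placeholder assumptions, and the Menger curvature step is omitted.

```python
import numpy as np

def two_pass_lowrank(H, k, p=10, seed=0):
    """Randomized low-rank approximation H ≈ Q @ B for an m×n matrix H.

    Only two passes over H are needed: one to sketch its range with a
    Gaussian test matrix, one to project H onto that range. The dominant
    cost is the two matrix products, O(mnk) when k ≪ min(m, n).
    """
    rng = np.random.default_rng(seed)
    m, n = H.shape
    Omega = rng.standard_normal((n, k + p))  # Gaussian test matrix (p = oversampling)
    Y = H @ Omega                            # pass 1: sample the range of H
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for that range
    B = Q.T @ H                              # pass 2: compress H onto the basis
    return Q, B                              # Q: m×(k+p), B: (k+p)×n

def golden_section(f, a, b, tol=1e-3):
    """Golden Section search for the minimizer of a unimodal f on [a, b].

    Each iteration shrinks the bracket by the inverse golden ratio and
    reuses one of the two interior evaluations, so only one new call to
    f is made per step.
    """
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)
```

The point of combining the two pieces is that, given H ≈ QB, a regularized least-squares solution for each candidate λ can be recomputed from the small factor B alone, so the objective passed to `golden_section` never re-factorizes the full feature matrix.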
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.