Title: Learning of Quasi-nonlinear Long-term Cognitive Networks using iterative numerical methods
Authors: Gonzalo Nápoles, Yamisleydi Salgueiro
Journal: Knowledge-Based Systems, Volume 317, Article 113464 (Q1, Computer Science, Artificial Intelligence; IF 7.2)
DOI: 10.1016/j.knosys.2025.113464
Publication date: 2025-04-08
URL: https://www.sciencedirect.com/science/article/pii/S0950705125005118
Citations: 0
Abstract
Quasi-nonlinear Long-term Cognitive Networks (LTCNs) are an extension of Fuzzy Cognitive Maps (FCMs) for simulation and prediction problems ranging from regression and pattern classification to time series forecasting. In this extension, the quasi-nonlinear reasoning allows the model to escape from unique fixed-point attractors, while the unbounded weights equip the network with improved approximation capabilities. However, training these neural systems continues to be challenging due to their recurrent nature. Existing error-driven learning algorithms (metaheuristic-based, regression-based, and gradient-based) are either computationally demanding, fail to fine-tune the recurrent connections, or suffer from vanishing/exploding gradient issues. To bridge this gap, this paper presents a learning procedure that employs iterative numerical optimizers to solve a regularized least squares problem, aiming to enhance the precision and generalization of LTCN models. These optimizers do not require analytical knowledge of the Jacobian or the Hessian and were carefully chosen to address the inherent challenges of training recurrent neural networks. They solve nonlinear optimization problems using trust regions, linear or quadratic approximations, and interpolations between the Gauss–Newton and gradient descent methods. In addition, we explore the model's performance across several activation functions, including piecewise, sigmoid, and hyperbolic variants. The empirical studies indicate that the proposed learning procedure significantly outperforms state-of-the-art algorithms.
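The abstract's central idea — fitting a recurrent cognitive network by posing a regularized least squares problem and solving it with a numerical optimizer that interpolates between Gauss–Newton and gradient descent, without an analytical Jacobian — can be sketched in a few dozen lines. This is a minimal illustration, not the paper's method: the quasi-nonlinear update rule used below (a convex blend of the recurrent activation and the initial state), the tanh activation, the ridge penalty, and the hand-rolled Levenberg–Marquardt loop with a finite-difference Jacobian are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def reason(W, A0, phi=0.8, T=5):
    """Quasi-nonlinear reasoning (assumed form): each step blends the
    recurrent update with the initial activation, so the network is not
    forced into a unique fixed-point attractor."""
    A = A0
    for _ in range(T):
        A = phi * np.tanh(A @ W) + (1.0 - phi) * A0
    return A

# Toy data: read the target off the last concept of the final state.
n, m = 40, 4
X = rng.normal(size=(n, m))
W_true = rng.normal(size=(m, m))
Y = reason(W_true, X)[:, -1]

lam = 1e-3  # ridge penalty weight (assumed)

def residuals(w):
    """Regularized least squares: data residuals plus sqrt(lam)*w,
    so that ||residuals||^2 = SSE + lam * ||w||^2."""
    W = w.reshape(m, m)
    r = reason(W, X)[:, -1] - Y
    return np.concatenate([r, np.sqrt(lam) * w])

def num_jacobian(f, w, eps=1e-6):
    """Forward-difference Jacobian -- no analytical derivatives needed."""
    r0 = f(w)
    J = np.empty((r0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (f(wp) - r0) / eps
    return J

def levenberg_marquardt(f, w, iters=50, mu=1.0):
    """Damped step (J'J + mu*I) dw = -J'r interpolates between
    Gauss-Newton (small mu) and gradient descent (large mu)."""
    for _ in range(iters):
        r = f(w)
        J = num_jacobian(f, w)
        step = np.linalg.solve(J.T @ J + mu * np.eye(w.size), -J.T @ r)
        if np.sum(f(w + step) ** 2) < np.sum(r ** 2):
            w = w + step
            mu *= 0.5   # step accepted: trust the quadratic model more
        else:
            mu *= 2.0   # step rejected: fall back toward gradient descent
    return w

w0 = rng.normal(size=m * m) * 0.1
mse0 = np.mean((reason(w0.reshape(m, m), X)[:, -1] - Y) ** 2)
w_fit = levenberg_marquardt(residuals, w0)
mse = np.mean((reason(w_fit.reshape(m, m), X)[:, -1] - Y) ** 2)
print(f"MSE before fitting: {mse0:.4f}, after: {mse:.4f}")
```

Because the loop only accepts steps that reduce the regularized objective, the data-fit error can only improve (up to the negligible penalty term); the damping schedule is the simplest textbook variant, whereas the paper evaluates several derivative-free trust-region and approximation-based optimizers.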
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial-intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, provide balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.