{"title":"Some Implications of System Dynamics Analysis of Discrete-Time Recurrent Neural Networks for Learning Algorithms Design","authors":"J. Cervantes, Maria Gomez, A. Schaum","doi":"10.1109/MICAI.2013.14","DOIUrl":null,"url":null,"abstract":"It is not clear so far what the implications of bifurcations in Discrete-Time Recurrent Neural Networks dynamics are with respect to learning algorithms. Previous studies discussed different phenomena in a general purpose framework, and here we are going to discuss in more detail. We perform an analysis of the dynamics of a neuron with feedback in order to find the different behaviors that it shows depending on the magnitude of the offset weight, the input weight and the feedback weight. We calculate the bifurcation manifolds that show the regions where the neuron behavior changes. We discuss the implications that these findings can have for the design of DTRNN learning algorithms.","PeriodicalId":340039,"journal":{"name":"2013 12th Mexican International Conference on Artificial Intelligence","volume":"181 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 12th Mexican International Conference on Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MICAI.2013.14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
It is not yet clear what implications bifurcations in the dynamics of Discrete-Time Recurrent Neural Networks (DTRNN) have for learning algorithms. Previous studies discussed the relevant phenomena in a general-purpose framework; here we examine them in more detail. We analyze the dynamics of a single neuron with feedback to identify the different behaviors it exhibits depending on the magnitudes of the offset weight, the input weight, and the feedback weight. We compute the bifurcation manifolds that delimit the regions where the neuron's behavior changes, and we discuss the implications of these findings for the design of DTRNN learning algorithms.
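The paper itself does not include code. As a rough illustration of the kind of single-neuron analysis the abstract describes, the sketch below iterates the standard single-neuron discrete-time map x_{t+1} = tanh(w_fb * x_t + w_in * u + b), where w_fb (feedback weight), w_in (input weight), and b (offset weight) are the three parameters named in the abstract. The tanh activation, the parameter values, and the function names are assumptions made for illustration, not taken from the paper; the sweep merely shows how the long-term behavior (fixed point vs. cycle) changes with the feedback weight, which is the qualitative phenomenon the bifurcation manifolds characterize.

```python
import numpy as np

def neuron_attractor(w_fb, w_in, b, u=0.0, x0=0.1, n_transient=500, n_keep=100):
    """Iterate the assumed single-neuron map x_{t+1} = tanh(w_fb*x_t + w_in*u + b).

    Returns the distinct (rounded) states visited after a transient,
    approximating the attractor for the given parameter values.
    """
    x = x0
    for _ in range(n_transient):              # discard the transient
        x = np.tanh(w_fb * x + w_in * u + b)
    states = set()
    for _ in range(n_keep):                   # sample the long-term behavior
        x = np.tanh(w_fb * x + w_in * u + b)
        states.add(round(float(x), 6))
    return sorted(states)

if __name__ == "__main__":
    # Illustrative sweep of the feedback weight with zero input and zero offset.
    # Around |w_fb| = 1 the single stable fixed point at 0 loses stability:
    # for w_fb > 1 two stable fixed points appear, for w_fb < -1 a 2-cycle.
    for w_fb in (-2.5, -1.5, -0.5, 0.5, 1.5, 2.5):
        attractor = neuron_attractor(w_fb, w_in=1.0, b=0.0)
        kind = "fixed point" if len(attractor) == 1 else f"{len(attractor)}-point cycle"
        print(f"w_fb={w_fb:+.1f}: {kind}, states ~ {attractor}")
```

With zero input and offset this toy sweep reproduces the textbook picture for such a map: a single stable fixed point for small |w_fb| and a qualitative change of behavior once |w_fb| exceeds 1, which is the sort of regime boundary the paper's bifurcation manifolds describe in the full three-parameter space.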