A self-organising fuzzy neural network with locally recurrent self-adaptive synapses

D. Coyle, G. Prasad, T. McGinnity
{"title":"具有局部循环自适应突触的自组织模糊神经网络","authors":"D. Coyle, G. Prasad, T. McGinnity","doi":"10.1109/EAIS.2011.5945927","DOIUrl":null,"url":null,"abstract":"This paper describes a modification to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve learning ability. Previously the SOFNN's computational efficiency was improved using a new method of checking the network structure after it has been modified. Instead of testing the entire structure every time it has been modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes, to ensure performance degradation does not occur, resulting in significantly reduced training times. To exploit the temporal information contained in the record of saved firing strengths, a new architecture of the SOFNN is proposed in this paper where recurrent feedback connections are added to neurons in layer three of the structure. Recurrent connections allow the network to learn the temporal information from the data and, in contrast to pure feed forward architectures, which exhibit static input-output behavior in advance, recurrent models are able to store information from the past (e.g., past measurements of the time-series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed where the recurrent feedback weight is updated online (not iteratively) and proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification, which conforms to the requirements for autonomy and has no additional hyperparameters, can significantly improve the performance of the SOFNN's prediction capacity under certain constraints","PeriodicalId":243348,"journal":{"name":"2011 IEEE Workshop on Evolving and Adaptive Intelligent Systems (EAIS)","volume":"13 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A self-organising fuzzy neural network with locally recurrent self-adaptive synapses\",\"authors\":\"D. Coyle, G. Prasad, T. McGinnity\",\"doi\":\"10.1109/EAIS.2011.5945927\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes a modification to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve learning ability. Previously the SOFNN's computational efficiency was improved using a new method of checking the network structure after it has been modified. Instead of testing the entire structure every time it has been modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes, to ensure performance degradation does not occur, resulting in significantly reduced training times. To exploit the temporal information contained in the record of saved firing strengths, a new architecture of the SOFNN is proposed in this paper where recurrent feedback connections are added to neurons in layer three of the structure. 
Recurrent connections allow the network to learn the temporal information from the data and, in contrast to pure feed forward architectures, which exhibit static input-output behavior in advance, recurrent models are able to store information from the past (e.g., past measurements of the time-series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed where the recurrent feedback weight is updated online (not iteratively) and proportional to the aggregate firing activity of each fuzzy neuron. It is shown that this modification, which conforms to the requirements for autonomy and has no additional hyperparameters, can significantly improve the performance of the SOFNN's prediction capacity under certain constraints\",\"PeriodicalId\":243348,\"journal\":{\"name\":\"2011 IEEE Workshop on Evolving and Adaptive Intelligent Systems (EAIS)\",\"volume\":\"13 6\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-04-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE Workshop on Evolving and Adaptive Intelligent Systems (EAIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EAIS.2011.5945927\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE Workshop on Evolving and Adaptive Intelligent Systems (EAIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EAIS.2011.5945927","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

This paper describes a modification to the learning algorithm and architecture of the self-organizing fuzzy neural network (SOFNN) to improve its learning ability. Previously, the SOFNN's computational efficiency was improved using a new method of checking the network structure after it has been modified. Instead of testing the entire structure every time it is modified, a record is kept of each neuron's firing strength for all data previously clustered by the network. This record is updated as training progresses and is used to reduce the computational load of checking network structure changes while ensuring that performance degradation does not occur, resulting in significantly reduced training times. To exploit the temporal information contained in the record of saved firing strengths, a new SOFNN architecture is proposed in this paper in which recurrent feedback connections are added to the neurons in layer three of the structure. Recurrent connections allow the network to learn temporal information from the data: in contrast to pure feed-forward architectures, which exhibit static input-output behavior, recurrent models are able to store information from the past (e.g., past measurements of the time series) and are therefore better suited to analyzing dynamic systems. Each recurrent feedback connection includes a weight which must be learned. In this work a learning approach is proposed in which the recurrent feedback weight is updated online (not iteratively) in proportion to the aggregate firing activity of each fuzzy neuron. It is shown that this modification, which conforms to the requirements for autonomy and introduces no additional hyperparameters, can significantly improve the SOFNN's prediction capacity under certain constraints.
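
The abstract does not give the exact update equations, so the following is only a minimal sketch of the idea it describes: a layer-three fuzzy neuron keeps a record of its firing strengths, feeds its previous output back through a locally recurrent synapse, and adapts that synapse's weight online (in a single pass, not iteratively) in proportion to its aggregate firing activity. The class name, the Gaussian/product membership model, and the bounded normalisation of the weight are assumptions made for illustration, not the authors' formulation.

```python
# Illustrative sketch only: the neuron model, names, and the exact form of the
# online weight update are assumptions for this example, not the SOFNN
# equations from the paper.
import numpy as np

class RecurrentFuzzyNeuron:
    """A layer-three fuzzy neuron with one locally recurrent, self-adaptive synapse."""

    def __init__(self, centres, widths):
        self.centres = np.asarray(centres, dtype=float)  # membership-function centres
        self.widths = np.asarray(widths, dtype=float)    # membership-function widths
        self.recurrent_weight = 0.0                      # self-adaptive feedback weight
        self.prev_output = 0.0                           # neuron output from the previous step
        self.firing_record = []                          # saved firing strengths

    def fire(self, x):
        """Process one input vector and return the neuron's output."""
        x = np.asarray(x, dtype=float)
        # Gaussian membership degrees combined by product (a common fuzzy AND).
        phi = float(np.exp(-((x - self.centres) ** 2) / (2.0 * self.widths ** 2)).prod())
        self.firing_record.append(phi)
        # Locally recurrent synapse: mix in the neuron's own previous output.
        output = phi + self.recurrent_weight * self.prev_output
        # Online (single-pass) update: the feedback weight grows in proportion to
        # the aggregate firing activity, normalised here so it stays in [0, 1).
        aggregate = sum(self.firing_record)
        self.recurrent_weight = aggregate / (1.0 + aggregate)
        self.prev_output = output
        return output

# Example: feed a short time series through a single neuron.
neuron = RecurrentFuzzyNeuron(centres=[0.5, 0.5], widths=[0.2, 0.2])
for t, x in enumerate([[0.4, 0.6], [0.5, 0.5], [0.7, 0.3]]):
    print(t, round(neuron.fire(x), 4), round(neuron.recurrent_weight, 4))
```

Under these assumptions, a neuron that fires strongly and often comes to rely more on its own past output, which matches the abstract's description of an online, hyperparameter-free update, though the exact proportionality used in the paper may differ.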