6 Conclusions and perspectives

Michael Aupetit, P. Couturier, P. Massotte
DOI: 10.1515/9783110344578-006
{"title":"6结论与观点","authors":"Michael. Aupetit, P. Couturier, P. Massotte","doi":"10.1515/9783110344578-006","DOIUrl":null,"url":null,"abstract":"Figure 3: Comparison between original Growing Neural-Gas (a) and Recruiting Growing Neural-Gas (b) algorithms. We have presented a new method for function approximation with Neural-Gas networks. This method combines closely both supervised and unsupervised learning to gather the neurons according to the error density made in the output space approximating a function, and to the input distribution density of the data. The resulting neural network is called a Recruiting Neural-Gas because the neurons which make a higher error tend to recruit their neighbors to help them in their approximation task. This method gives a better accuracy than the original NG on the presented test. It allows to reduce drastically the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and so to reduce the computation time. It also allows to avoid the scattering of the neurons in Growing Neural-Gas networks fully justifying its use in function approximation tasks. This approach is very promising because the neurons tend to gather in the interesting regions of the input data distribution, which are those associated to a change in the output function. However, the crucial point of the algorithm is certainly to choose a good \" recruiting \" parameter (here it is the error density). We intend to investigate this theoretical part of the algorithm and to apply it in time series predictions. Moreover, the fact the neurons are always moving in the input space to adapt to the input distribution seems to limit the approximation performances of the network. And to decrease the global input and output learning rates implies to make sure the function is correctly approximated everywhere and the distribution will not change anymore. That's why we think that the input learning rate could be attached to each neuron and locally controlled according to its output error. In that way, the neurons would be \" frozen \" in the input space when they make a low error giving a good approximation accuracy, but they would keep adapting to the input distribution in region where the error is high. This idea is also part of our future work. Neural-gas \" network for vector quantization and its application to time-series prediction. The neurons tend to scatter over the input distribution The neurons tend to gather in regions of variations (contour lines) f(ξ) is constant in some regions and changes rapidly between these regions. Our …","PeriodicalId":186985,"journal":{"name":"Risk Management and Education","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"6 Conclusions and perspectives\",\"authors\":\"Michael. Aupetit, P. Couturier, P. Massotte\",\"doi\":\"10.1515/9783110344578-006\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Figure 3: Comparison between original Growing Neural-Gas (a) and Recruiting Growing Neural-Gas (b) algorithms. We have presented a new method for function approximation with Neural-Gas networks. This method combines closely both supervised and unsupervised learning to gather the neurons according to the error density made in the output space approximating a function, and to the input distribution density of the data. 
The resulting neural network is called a Recruiting Neural-Gas because the neurons which make a higher error tend to recruit their neighbors to help them in their approximation task. This method gives a better accuracy than the original NG on the presented test. It allows to reduce drastically the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and so to reduce the computation time. It also allows to avoid the scattering of the neurons in Growing Neural-Gas networks fully justifying its use in function approximation tasks. This approach is very promising because the neurons tend to gather in the interesting regions of the input data distribution, which are those associated to a change in the output function. However, the crucial point of the algorithm is certainly to choose a good \\\" recruiting \\\" parameter (here it is the error density). We intend to investigate this theoretical part of the algorithm and to apply it in time series predictions. Moreover, the fact the neurons are always moving in the input space to adapt to the input distribution seems to limit the approximation performances of the network. And to decrease the global input and output learning rates implies to make sure the function is correctly approximated everywhere and the distribution will not change anymore. That's why we think that the input learning rate could be attached to each neuron and locally controlled according to its output error. In that way, the neurons would be \\\" frozen \\\" in the input space when they make a low error giving a good approximation accuracy, but they would keep adapting to the input distribution in region where the error is high. This idea is also part of our future work. Neural-gas \\\" network for vector quantization and its application to time-series prediction. The neurons tend to scatter over the input distribution The neurons tend to gather in regions of variations (contour lines) f(ξ) is constant in some regions and changes rapidly between these regions. Our …\",\"PeriodicalId\":186985,\"journal\":{\"name\":\"Risk Management and Education\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Risk Management and Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1515/9783110344578-006\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Risk Management and Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/9783110344578-006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Figure 3: Comparison between the original Growing Neural-Gas (a) and the Recruiting Growing Neural-Gas (b) algorithms. In (a) the neurons tend to scatter over the input distribution; in (b) they tend to gather in the regions of variation (contour lines), where f(ξ) changes rapidly between regions in which it is constant.

We have presented a new method for function approximation with Neural-Gas networks. The method tightly couples supervised and unsupervised learning, so that the neurons are placed according to both the error density made in the output space when approximating the function and the density of the input data distribution. We call the resulting network a Recruiting Neural-Gas, because the neurons that make a higher error tend to recruit their neighbors to help them with the approximation task. On the presented test, this method is more accurate than the original Neural-Gas. It drastically reduces the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and therefore reduces the computation time. It also avoids the scattering of the neurons observed in Growing Neural-Gas networks, which fully justifies its use in function approximation tasks. The approach is very promising because the neurons gather in the interesting regions of the input data distribution, namely those associated with a change in the output function. The crucial point of the algorithm, however, is the choice of a good "recruiting" criterion (here, the error density). We intend to investigate this theoretical aspect of the algorithm and to apply it to time-series prediction.

Moreover, the fact that the neurons keep moving in the input space to adapt to the input distribution seems to limit the approximation performance of the network, and decreasing the global input and output learning rates requires being sure that the function is already correctly approximated everywhere and that the distribution will no longer change. We therefore think that the input learning rate could be attached to each neuron and controlled locally according to its output error. In that way, a neuron would be "frozen" in the input space when its error is low, preserving a good approximation accuracy, while neurons in high-error regions would keep adapting to the input distribution. This idea is also part of our future work.
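The abstract describes the recruiting mechanism only at a high level, so the following is a minimal sketch of one plausible way to combine an error-driven recruiting term with a standard Neural-Gas update in a one-dimensional setting. The ranking and neighborhood function follow the usual Neural-Gas scheme; the recruiting rule (scaling the pull toward the input by the winning neuron's smoothed output error), the target function f, and all parameter values are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30                          # number of neurons (assumed)
w = rng.uniform(0.0, 1.0, N)    # neuron positions in the 1-D input space
a = np.zeros(N)                 # per-neuron output values (local constant model)
err = np.zeros(N)               # smoothed per-neuron output-error estimates

eps_in, eps_out, lam, beta = 0.05, 0.1, 2.0, 0.1   # assumed learning parameters

def f(x):
    """Hypothetical target: constant regions separated by a sharp step."""
    return 0.0 if x < 0.5 else 1.0

for t in range(10_000):
    xi = rng.uniform(0.0, 1.0)            # draw an input sample
    y = f(xi)                             # supervised target
    d = np.abs(w - xi)
    rank = np.argsort(np.argsort(d))      # Neural-Gas neighborhood ranks (0 = closest)
    h = np.exp(-rank / lam)               # Neural-Gas neighborhood function

    win = int(np.argmin(d))                            # best-matching neuron
    err[win] += beta * (abs(y - a[win]) - err[win])    # update its error estimate

    # Unsupervised move toward the input, amplified by the winner's error:
    # a high-error winner "recruits" its neighbors by pulling them harder.
    w += eps_in * (1.0 + err[win]) * h * (xi - w)
    # Supervised update of the local output values toward the target.
    a += eps_out * h * (y - a)
```

Under this rule, neurons accumulate near the step of f, where the local output error stays high, which is the qualitative behavior the abstract attributes to the Recruiting Neural-Gas.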
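The per-neuron input learning rate proposed for future work can be sketched in the same spirit. The gating function below, which lets a neuron's mobility in the input space grow with its own local output error so that low-error neurons are effectively "frozen", is an assumed form chosen for illustration; the conclusion only states the principle.

```python
import numpy as np

def local_input_rate(local_err, eps_max=0.05, scale=0.1):
    """Per-neuron input learning rate gated by the neuron's own output error.

    The rate is near 0 (frozen) when the smoothed local error is low and
    approaches eps_max as the error grows; eps_max and scale are assumed.
    """
    return eps_max * (1.0 - np.exp(-np.asarray(local_err) / scale))

# In the training loop of the previous sketch, the global rate eps_in would
# be replaced by a per-neuron rate computed from the error estimates `err`:
#   w += local_input_rate(err) * h * (xi - w)
```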