{"title":"6结论与观点","authors":"Michael. Aupetit, P. Couturier, P. Massotte","doi":"10.1515/9783110344578-006","DOIUrl":null,"url":null,"abstract":"Figure 3: Comparison between original Growing Neural-Gas (a) and Recruiting Growing Neural-Gas (b) algorithms. We have presented a new method for function approximation with Neural-Gas networks. This method combines closely both supervised and unsupervised learning to gather the neurons according to the error density made in the output space approximating a function, and to the input distribution density of the data. The resulting neural network is called a Recruiting Neural-Gas because the neurons which make a higher error tend to recruit their neighbors to help them in their approximation task. This method gives a better accuracy than the original NG on the presented test. It allows to reduce drastically the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and so to reduce the computation time. It also allows to avoid the scattering of the neurons in Growing Neural-Gas networks fully justifying its use in function approximation tasks. This approach is very promising because the neurons tend to gather in the interesting regions of the input data distribution, which are those associated to a change in the output function. However, the crucial point of the algorithm is certainly to choose a good \" recruiting \" parameter (here it is the error density). We intend to investigate this theoretical part of the algorithm and to apply it in time series predictions. Moreover, the fact the neurons are always moving in the input space to adapt to the input distribution seems to limit the approximation performances of the network. And to decrease the global input and output learning rates implies to make sure the function is correctly approximated everywhere and the distribution will not change anymore. That's why we think that the input learning rate could be attached to each neuron and locally controlled according to its output error. In that way, the neurons would be \" frozen \" in the input space when they make a low error giving a good approximation accuracy, but they would keep adapting to the input distribution in region where the error is high. This idea is also part of our future work. Neural-gas \" network for vector quantization and its application to time-series prediction. The neurons tend to scatter over the input distribution The neurons tend to gather in regions of variations (contour lines) f(ξ) is constant in some regions and changes rapidly between these regions. Our …","PeriodicalId":186985,"journal":{"name":"Risk Management and Education","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"6 Conclusions and perspectives\",\"authors\":\"Michael. Aupetit, P. Couturier, P. Massotte\",\"doi\":\"10.1515/9783110344578-006\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Figure 3: Comparison between original Growing Neural-Gas (a) and Recruiting Growing Neural-Gas (b) algorithms. We have presented a new method for function approximation with Neural-Gas networks. This method combines closely both supervised and unsupervised learning to gather the neurons according to the error density made in the output space approximating a function, and to the input distribution density of the data. 
The resulting neural network is called a Recruiting Neural-Gas because the neurons which make a higher error tend to recruit their neighbors to help them in their approximation task. This method gives a better accuracy than the original NG on the presented test. It allows to reduce drastically the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and so to reduce the computation time. It also allows to avoid the scattering of the neurons in Growing Neural-Gas networks fully justifying its use in function approximation tasks. This approach is very promising because the neurons tend to gather in the interesting regions of the input data distribution, which are those associated to a change in the output function. However, the crucial point of the algorithm is certainly to choose a good \\\" recruiting \\\" parameter (here it is the error density). We intend to investigate this theoretical part of the algorithm and to apply it in time series predictions. Moreover, the fact the neurons are always moving in the input space to adapt to the input distribution seems to limit the approximation performances of the network. And to decrease the global input and output learning rates implies to make sure the function is correctly approximated everywhere and the distribution will not change anymore. That's why we think that the input learning rate could be attached to each neuron and locally controlled according to its output error. In that way, the neurons would be \\\" frozen \\\" in the input space when they make a low error giving a good approximation accuracy, but they would keep adapting to the input distribution in region where the error is high. This idea is also part of our future work. Neural-gas \\\" network for vector quantization and its application to time-series prediction. The neurons tend to scatter over the input distribution The neurons tend to gather in regions of variations (contour lines) f(ξ) is constant in some regions and changes rapidly between these regions. Our …\",\"PeriodicalId\":186985,\"journal\":{\"name\":\"Risk Management and Education\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Risk Management and Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1515/9783110344578-006\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Risk Management and Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/9783110344578-006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Figure 3: Comparison between the original Growing Neural-Gas (a) and the Recruiting Growing Neural-Gas (b) algorithms. With the original algorithm the neurons tend to scatter over the input distribution; with the recruiting variant they tend to gather in the regions of variation (contour lines) of the test function, f(ξ), which is constant in some regions and changes rapidly between them.

We have presented a new method for function approximation with Neural-Gas networks. The method closely combines supervised and unsupervised learning, positioning the neurons according to both the error density produced in the output space when approximating the function and the input distribution density of the data. The resulting network is called a Recruiting Neural-Gas because neurons that make a larger error tend to recruit their neighbors to help them in the approximation task. On the presented test, this method achieves better accuracy than the original Neural-Gas. It drastically reduces the number of neurons needed to represent the input-output function at a given accuracy (33% in our experiment), and therefore reduces the computation time. It also avoids the scattering of the neurons observed in Growing Neural-Gas networks, which fully justifies its use in function approximation tasks.

This approach is very promising because the neurons tend to gather in the interesting regions of the input data distribution, namely those associated with a change in the output function. The crucial point of the algorithm, however, is the choice of a good "recruiting" parameter (here, the error density). We intend to investigate this theoretical aspect of the algorithm and to apply it to time-series prediction.

Moreover, the fact that the neurons keep moving in the input space to adapt to the input distribution seems to limit the approximation performance of the network, and decreasing the global input and output learning rates requires being sure that the function is already correctly approximated everywhere and that the distribution will no longer change. This is why we think the input learning rate could be attached to each neuron and controlled locally according to its output error. In that way, a neuron would be "frozen" in the input space when its error is low, giving good approximation accuracy, while it would keep adapting to the input distribution in regions where the error is high. This idea is also part of our future work.
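To make this closing idea concrete, below is a minimal sketch (in Python, not taken from the paper) of a Neural-Gas adaptation step in which each unit carries its own input-space learning rate, modulated by a running estimate of its output error. The target function f, the learning rates eta_in and eta_out, the neighborhood range lambda_ and the smoothing factor beta are illustrative assumptions; only the principle that low-error units are nearly frozen while high-error units keep adapting comes from the text above.

# Sketch (not the authors' code) of the per-neuron learning-rate idea:
# each unit keeps a running estimate of its output error, and its input-space
# learning rate is scaled by that error, so low-error units are nearly "frozen"
# while high-error units keep adapting to the input distribution.
# f, eta_in, eta_out, lambda_ and beta are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_units, dim = 20, 1
w_in = rng.uniform(0.0, 1.0, size=(n_units, dim))   # codebook vectors in input space
w_out = np.zeros(n_units)                            # local output values
err = np.ones(n_units)                               # running output-error estimate per unit

eta_in, eta_out = 0.1, 0.2        # base learning rates (assumed)
lambda_ = 2.0                     # neighborhood range of the Neural-Gas ranking
beta = 0.05                       # smoothing factor for the error estimate

def f(x):
    # toy target: constant plateaus with a sharp transition, similar in spirit
    # to the test function described above
    return np.where(x[:, 0] < 0.5, 0.0, 1.0)

for step in range(5000):
    x = rng.uniform(0.0, 1.0, size=(1, dim))
    y = f(x)[0]

    # Neural-Gas ranking: neighborhood factor decays with the distance rank
    dists = np.linalg.norm(w_in - x, axis=1)
    ranks = np.argsort(np.argsort(dists))
    h = np.exp(-ranks / lambda_)

    # supervised part: adapt local outputs and update each unit's error estimate
    e = y - w_out
    w_out += eta_out * h * e
    err = (1.0 - beta * h) * err + beta * h * np.abs(e)

    # unsupervised part: the input learning rate is modulated per unit by its error,
    # so accurate units stay put and inaccurate units keep following the data
    local_eta = eta_in * err / (err.max() + 1e-12)
    w_in += (local_eta * h)[:, None] * (x - w_in)

In such a scheme the codebook stops drifting wherever the approximation is already good, while regions with a large residual error retain their full mobility, which is the behavior the last paragraph argues for.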