Clint Rogers, I. Valova
2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), December 2015
DOI: 10.1109/ICMLA.2015.56
Decaying Potential Fields Neural Network: An Approach for Parallelizing Topologically Indicative Mapping Exemplars
Mapping methodologies aim to make sense of raw data by uncovering the connections within it. The human mind processes images quickly and efficiently through the visual cortex, in part because of its parallel nature. The Kohonen self-organizing feature map (SOFM) is one example of a neural-network mapping methodology that does this very well. Ideally, the result is a well-organized map representative of the data set; however, SOFMs do not translate well to a parallelized architecture. The problem stems from the neighborhoods established between neurons, which create race conditions when updating winning neurons. We propose a fully parallelized mapping architecture, based loosely on the SOFM, called the decaying potential fields neural network (DPFNN). DPFNN uses neurons that are computationally uncoupled but symbolically linked. Our analysis shows that this allows the neurons to reach convergence with only a passive data dependency on one another, as opposed to a hazard-generating direct dependency. We have designed the network to closely reflect the efficiency and speed of a parallel approach, with results that rival or exceed those of similar topological networks such as the SOFM.
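To see why the standard SOFM resists parallelization, consider a minimal sketch of one Kohonen update step (a generic textbook formulation, not the paper's DPFNN): every neuron's weight update is scaled by its neighborhood distance to a single shared winner, so naive per-neuron parallel updates race on that winner-dependent write. Function and variable names below are illustrative.

```python
import numpy as np

def sofm_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One Kohonen SOFM update: find the best-matching unit (BMU),
    then pull every neuron's weights toward the input, scaled by a
    Gaussian neighborhood around the BMU. Because all updates depend
    on the one shared winner, per-neuron parallelization creates the
    race conditions the abstract describes."""
    # distance from each neuron's weight vector to the input
    d = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(d)  # the shared winning neuron
    # neighborhood strength decays with grid distance to the BMU
    g = np.linalg.norm(grid - grid[bmu], axis=1)
    h = np.exp(-(g ** 2) / (2 * sigma ** 2))
    # convex pull toward the input, weighted by the neighborhood
    return weights + lr * h[:, None] * (x - weights)

# toy run: a 4x4 grid of neurons mapping 2-D inputs in the unit square
rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(4) for j in range(4)], float)
weights = rng.random((16, 2))
for _ in range(200):
    weights = sofm_step(weights, grid, rng.random(2))
```

The DPFNN, by contrast, is described as replacing this direct winner dependency with a passive data dependency, so each neuron can converge without waiting on a shared update.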