Modular network SOM (mnSOM): from vector space to function space
T. Furukawa, K. Tokunaga, K. Morishita, S. Yasui
Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN 2005)
DOI: 10.1109/IJCNN.2005.1556114 (https://doi.org/10.1109/IJCNN.2005.1556114)
Citations: 29
Abstract
Kohonen's self-organizing map (SOM), which performs a topology-preserving transformation from a high-dimensional data vector space to a low-dimensional map space, provides a powerful tool for data analysis, classification, and visualization in many application fields. Despite its power, the SOM can only deal with vectorized data, although many extensions have been proposed for various other data types. This study aims to develop a novel generalization of the SOM called the modular network SOM (mnSOM), which enables users to deal with general data classes in a consistent manner. mnSOM has an array structure consisting of function modules that are trainable neural networks, e.g., multi-layer perceptrons (MLPs), instead of the vector units of the conventional SOM family. In the case of MLP modules, mnSOM learns a group of systems or functions in terms of their input-output relationships, and at the same time generates a feature map that shows the distances between the learned systems. Thus, mnSOM with MLP modules is an SOM in function space rather than in vector space. From this point of view, Kohonen's conventional SOM can be regarded as a special case of mnSOM in which the modules consist of fixed-value bias units. In this paper, mnSOM with MLP modules is described along with some application examples.
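The abstract describes the MLP-module mnSOM only at a high level. The sketch below illustrates how such a training loop might look: a 1-D array of small MLP modules, winner selection by input-output fitting error, and neighborhood-weighted backpropagation. Everything here is assumed for illustration: the module count, the fixed Gaussian neighborhood radius (the actual algorithm typically anneals it), the toy family of sine functions, and all hyperparameter values; this is not the authors' implementation.

```python
# Minimal mnSOM sketch with MLP modules (illustrative assumptions throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

N_MODULES = 9   # modules on a 1-D map (assumed size)
SIGMA = 1.5     # fixed neighborhood radius (simplification; usually annealed)
EPOCHS = 200
LR = 0.05

# Hypothetical family of systems to learn: y = sin(a * x) for several amplitudes a.
xs = torch.linspace(-3.0, 3.0, 50).unsqueeze(1)
amps = torch.linspace(0.5, 2.0, 6)
datasets = [(xs, torch.sin(a * xs)) for a in amps]

# Each map node is a small trainable MLP instead of a reference vector.
modules = [nn.Sequential(nn.Linear(1, 10), nn.Tanh(), nn.Linear(10, 1))
           for _ in range(N_MODULES)]
opts = [torch.optim.SGD(m.parameters(), lr=LR) for m in modules]

def neighborhood(winner: int) -> torch.Tensor:
    # Gaussian neighborhood over module positions on the 1-D map.
    pos = torch.arange(N_MODULES, dtype=torch.float32)
    return torch.exp(-((pos - winner) ** 2) / (2 * SIGMA ** 2))

def fit_errors(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Mean squared fitting error of every module on one dataset.
    with torch.no_grad():
        return torch.stack([((m(x) - y) ** 2).mean() for m in modules])

for epoch in range(EPOCHS):
    for x, y in datasets:
        winner = int(fit_errors(x, y).argmin())      # competition: best-fitting module
        w = neighborhood(winner)                     # cooperation: neighbors also adapt
        for k, (m, opt) in enumerate(zip(modules, opts)):
            opt.zero_grad()
            loss = w[k] * ((m(x) - y) ** 2).mean()   # neighborhood-weighted backprop
            loss.backward()
            opt.step()

# Similar systems should now win at nearby map positions (topology preservation).
for a, (x, y) in zip(amps, datasets):
    print(f"amplitude {a:.2f} -> best module {int(fit_errors(x, y).argmin())}")
```

The key departure from the vector SOM is the matching step: instead of comparing an input vector against stored reference vectors, each module's fitting error on a whole input-output dataset decides the winner, so the map self-organizes over a space of functions rather than a space of vectors.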