Supervised adaptive resonance networks
R. Baxter
Conference on Analysis of Neural Network Applications, May 29, 1991
DOI: 10.1145/106965.126712
Citations: 9
Abstract
Adaptive Resonance Theory (ART) has been used to design a number of massively-parallel, unsupervised, pattern recognition machines. ART networks learn a set of recognition codes by ensuring that input vectors match or resonate with one of a learned set of template vectors. A novelty detector determines whether an input vector is novel or familiar. Novel input vectors lead to the formation of new recognition codes. Most previous applications of ART networks involve unsupervised learning; i.e., no supervisory or teaching signals are used. However, in many applications it is desirable to have the network learn a mapping between input vectors and output vectors. Herein, extensions of ART networks to allow for supervised training are described. These extended networks can operate in a supervised or an unsupervised mode, and the networks autonomously switch between the two modes. In either mode, these networks develop a set of internal recognition codes in a self-organizing fashion. Since these networks are formulated as a dynamical system, they are capable of operating in real time, and it is not necessary to distinguish between learning and performance. When supervisory signals are absent, these networks predict the desired signal based on previous training. In this paper, in addition to reviewing several popular unsupervised ART networks, two types of extensions of ART networks into a supervised learning regime are discussed. The first type is applicable to problems in which only a unidirectional mapping from input vectors to output vectors is necessary. These supervised ART networks can solve nonlinear discrimination problems, and they can learn the exclusive-OR problem in a single trial. The second type of extension is designed to handle bidirectional mappings between pairs of vectors and is applicable to the more general bidirectional associative learning problem.
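The resonance-and-novelty-detection cycle described above can be illustrated with a minimal, ART 1-style sketch for binary inputs. This is not the paper's formulation (the paper casts the networks as a real-time dynamical system); the function name, the choice-function weighting, and the vigilance parameter value below are illustrative assumptions. An input either resonates with an existing template (which is then refined by fast learning) or, if the vigilance test rejects every template, triggers a new recognition code:

```python
import numpy as np

def art1_learn(inputs, vigilance=0.7):
    """Illustrative ART 1-style sketch: binary inputs resonate with templates.

    The vigilance test acts as the novelty detector: when every existing
    template fails it, the input is treated as novel and a new recognition
    code is created. Assumes nonzero binary input vectors.
    """
    templates = []   # learned binary template vectors (recognition codes)
    labels = []      # index of the code chosen for each input
    for x in inputs:
        x = np.asarray(x)
        chosen = None
        # Search templates in order of a simple choice-function score.
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(templates[j] & x)
                                     / (0.5 + templates[j].sum()))
        for j in order:
            match = np.sum(templates[j] & x) / x.sum()
            if match >= vigilance:               # resonance: input is familiar
                templates[j] = templates[j] & x  # fast learning: intersect
                chosen = j
                break
        if chosen is None:                       # novel input: new code
            templates.append(x.copy())
            chosen = len(templates) - 1
        labels.append(chosen)
    return templates, labels

T, labels = art1_learn([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(labels)   # [0, 0, 1]: similar inputs share a code, the novel one gets a new code
```

Because learning happens inside the same matching loop used for recognition, there is no separate training phase, mirroring the abstract's point that learning and performance need not be distinguished.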
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. These extensions open applications of ART networks to a broad range of nonlinear mapping problems for which alternative networks, such as multilayer perceptrons trained via backpropagation, have been used in the past. The fact that these extended ART networks can learn nonlinearly-separable training sets in a single trial demonstrates that these networks are capable of much faster learning than other methods. Potential applications include optical character recognition, automatic target recognition, medical diagnosis, loan and insurance risk analysis, and learning associations between visual objects and their names. The application of supervised ART networks to two quite different classification problems, the categorization of mushrooms and sonar returns, is discussed herein.
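The single-trial claim for the exclusive-OR problem can be made concrete with a sketch of the first (unidirectional) kind of supervised extension. This is an assumed, ARTMAP-flavored construction, not the paper's actual network: the complement coding, the label-mismatch reset, and all names here are illustrative. Each recognition code is associated with an output label, and an input that would resonate with a wrongly-labeled code is forced into a new code instead:

```python
import numpy as np

def supervised_art_fit(X, y, vigilance=0.9):
    """Illustrative supervised-ART sketch: codes carry output labels.

    A label mismatch blocks resonance (a match-tracking-style reset),
    so a wrongly-predicted input recruits a fresh recognition code.
    """
    def code(x):
        x = np.asarray(x)
        return np.concatenate([x, 1 - x])   # complement coding: equal norms
    templates, outputs = [], []
    for x, target in zip(X, y):
        c = code(x)
        placed = False
        for j, t in enumerate(templates):
            match = np.minimum(t, c).sum() / c.sum()
            if match >= vigilance and outputs[j] == target:
                templates[j] = np.minimum(t, c)   # fast learning
                placed = True
                break
        if not placed:                            # novel or mispredicted input
            templates.append(c)
            outputs.append(target)
    return templates, outputs

def supervised_art_predict(templates, outputs, x):
    """Predict the label of the best-matching recognition code."""
    c = np.concatenate([np.asarray(x), 1 - np.asarray(x)])
    scores = [np.minimum(t, c).sum() / (0.5 + t.sum()) for t in templates]
    return outputs[int(np.argmax(scores))]

# Exclusive-OR, a nonlinearly-separable problem, learned in one pass:
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
T, O = supervised_art_fit(X, y)
print([supervised_art_predict(T, O, x) for x in X])   # [0, 1, 1, 0]
```

A single pass suffices because fast learning commits each code immediately, rather than descending a gradient over many epochs as backpropagation does; that is the sense in which the abstract contrasts these networks with multilayer perceptrons.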