{"title":"A network with multi-partitioning units","authors":"Y. Tan, T. Ejima","doi":"10.1109/IJCNN.1989.118279","DOIUrl":null,"url":null,"abstract":"The authors propose a fuzzy partition model (FPM), a multilayer feedforward perceptron-like network. The most important point of FPM is that it has multiple-input/output units which are upper compatible with the threshold units commonly used in the backpropagation (BP) model. The number of outputs is called the degree N of that unit, and an FPM unit can classify input patterns into N categories. Because the sum total of the output values of an FPM unit is always one, Kullback divergence is adopted as a network measure to derive its learning rule. The fact that the learning rule does not include the derivative of a sigmoid function, which causes the convergence of the network to be slow, contributes to its fast learning ability. The authors applied FPM to some basic problems, and the results indicated the high potential of this model.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118279","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
The authors propose the fuzzy partition model (FPM), a multilayer feedforward perceptron-like network. The key feature of FPM is its multiple-input/multiple-output units, which are upward compatible with the threshold units commonly used in the backpropagation (BP) model. The number of outputs is called the degree N of the unit, and an FPM unit can classify input patterns into N categories. Because the output values of an FPM unit always sum to one, the Kullback divergence is adopted as the network measure from which the learning rule is derived. The learning rule does not include the derivative of a sigmoid function, the term that slows convergence in BP, which accounts for the model's fast learning. The authors applied FPM to several basic problems, and the results indicate the high potential of this model.
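The following is a minimal illustrative sketch, not the paper's actual formulation: it assumes a single degree-N unit whose outputs sum to one via a softmax normalization and is trained by gradient descent on the Kullback divergence between a target distribution and the unit's output. Under these assumptions the gradient with respect to the pre-activation reduces to (output minus target), so no sigmoid derivative appears, mirroring the fast-learning property described in the abstract. All names, sizes, and the learning rate are illustrative choices, not values from the paper.

```python
import numpy as np

def softmax(z):
    # Normalize so the N outputs sum to one, as required of an FPM-style unit.
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_divergence(t, y, eps=1e-12):
    # Kullback divergence between target distribution t and output y.
    return np.sum(t * np.log((t + eps) / (y + eps)))

# Hypothetical single unit of degree N: weights W (N x D), input x (D,), target t (N,).
rng = np.random.default_rng(0)
D, N = 4, 3
W = rng.normal(scale=0.1, size=(N, D))
x = rng.normal(size=D)
t = np.array([0.0, 1.0, 0.0])  # one-hot target over N categories

lr = 0.5
for _ in range(100):
    y = softmax(W @ x)
    # For softmax outputs under a Kullback-divergence measure, the gradient
    # w.r.t. the pre-activation is simply (y - t): no sigmoid derivative appears.
    W -= lr * np.outer(y - t, x)

print(kl_divergence(t, softmax(W @ x)))  # divergence should approach zero
```

The absence of the sigmoid-derivative factor (which shrinks toward zero for saturated units in standard BP) is what the abstract credits for the fast convergence; the sketch above only demonstrates that property for the assumed softmax/KL setup, not the authors' exact learning rule.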