{"title":"On the Structural Requirements of Mutlilayer Perceptronsin Binary Field","authors":"Sungkwon Park, A. Marston, Jung H. Kim","doi":"10.1109/SSST.1992.712262","DOIUrl":null,"url":null,"abstract":"This paper introduces several multilayer perceptron (MLP) existence theorems and discuses required numbers of neurons and hidden layers of MLP's for binary functions. Due to the convenience of analysis and its inherent classification ability, only MLP's with neurons using hardlimiter activation junctions are studied. Three different methods of deploying hyperplanes are discussed in this paper. Among them, the first two methods require only a single hidden layer to separate a given pattern. The last one requires two hidden layers. For a Boolean function, it is shown that the number of neurons in the hidden layer should be identical to the number of hyperplanes required to separate if one of the two methods is used. Similarly, the requirements for multiple output cases are discussed.","PeriodicalId":359363,"journal":{"name":"The 24th Southeastern Symposium on and The 3rd Annual Symposium on Communications, Signal Processing Expert Systems, and ASIC VLSI Design System Theory","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 24th Southeastern Symposium on and The 3rd Annual Symposium on Communications, Signal Processing Expert Systems, and ASIC VLSI Design System Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSST.1992.712262","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper introduces several multilayer perceptron (MLP) existence theorems and discusses the numbers of neurons and hidden layers that MLPs require for binary functions. For analytical convenience and because of their inherent classification ability, only MLPs whose neurons use hard-limiter activation functions are studied. Three different methods of deploying hyperplanes are discussed. The first two methods require only a single hidden layer to separate a given pattern; the third requires two hidden layers. For a Boolean function, it is shown that when either of the first two methods is used, the number of neurons in the hidden layer should equal the number of hyperplanes required to separate the function. The requirements for the multiple-output case are discussed similarly.
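As a concrete illustration of the hyperplane-counting claim above, consider the XOR function: no single hyperplane separates its true points from its false points, but two parallel hyperplanes do, so a single hidden layer with exactly two hard-limiter neurons suffices. The sketch below is not from the paper; the weights and thresholds are hand-chosen for illustration under that two-hyperplane construction.

```python
import numpy as np

def hardlimiter(x):
    # Hard-limiter (step) activation: 1 if x >= 0, else 0
    return (x >= 0).astype(int)

# Hidden layer: one neuron per separating hyperplane.
# h1 fires when x1 + x2 >= 0.5 (at least one input is 1)
# h2 fires when x1 + x2 >= 1.5 (both inputs are 1)
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output neuron: fires when h1 is on but h2 is off,
# i.e. exactly one input is 1 (XOR).
w_out = np.array([1.0, -1.0])
b_out = -0.5

def mlp_xor(x):
    h = hardlimiter(W_hidden @ x + b_hidden)
    return int(hardlimiter(np.array([w_out @ h + b_out]))[0])

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mlp_xor(np.array(x)))  # -> 0, 1, 1, 0
```

Here the hidden-layer width (two neurons) matches the number of hyperplanes needed to separate the function, consistent with the count the abstract states for the single-hidden-layer methods.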