{"title":"Feedforward neural networks to learn drawing lines","authors":"Yiwei Chen, F. Bastani","doi":"10.1109/ICNN.1994.374218","DOIUrl":null,"url":null,"abstract":"The paper examines the capability and performance of 1-hidden-layer feedforward neural networks with multi-activation product (MAP) units, through the application of drawing digital line segments. The MAP unit is a recently proposed multi-dendrite neuron model. The centroidal function is chosen as the MAP unit base activation function because it demonstrates a superior performance over the sigmoidal functions. The network with MAP units with more than one dendrite converges statistically faster during the learning phase with randomly selected training patterns. The generalization to the entire sample space is shown to be proportional to the size of the training patterns.<<ETX>>","PeriodicalId":209128,"journal":{"name":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNN.1994.374218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The paper examines the capability and performance of one-hidden-layer feedforward neural networks with multi-activation product (MAP) units, applied to the task of drawing digital line segments. The MAP unit is a recently proposed multi-dendrite neuron model. The centroidal function is chosen as the base activation function of the MAP unit because it demonstrates superior performance over sigmoidal functions. Networks whose MAP units have more than one dendrite converge statistically faster during the learning phase with randomly selected training patterns. Generalization over the entire sample space is shown to be proportional to the number of training patterns.
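To make the MAP-unit idea concrete, the sketch below shows a minimal one-hidden-layer network in which each hidden unit has several dendrites, each dendrite applies a base activation to its own weighted sum of the inputs, and the unit outputs the product of its dendrite activations. This is an illustration only, not the authors' implementation: the exact centroidal activation from the paper is not reproduced, so a generic bell-shaped function is used as a stand-in, and the layer/variable names (MAPLayer, base_activation, the pixel-classification usage) are hypothetical.

```python
# Sketch (assumption, not the paper's code): a 1-hidden-layer network with
# multi-activation product (MAP) hidden units. Each MAP unit multiplies the
# activations produced by its dendrites, where every dendrite has its own
# weight vector and bias.

import numpy as np


def base_activation(z):
    """Placeholder bell-shaped activation; stands in for the paper's centroidal function."""
    return np.exp(-z ** 2)


class MAPLayer:
    def __init__(self, n_inputs, n_units, n_dendrites, rng=None):
        rng = rng or np.random.default_rng(0)
        # One weight vector and bias per dendrite of every MAP unit.
        self.W = rng.normal(scale=0.5, size=(n_units, n_dendrites, n_inputs))
        self.b = rng.normal(scale=0.5, size=(n_units, n_dendrites))

    def forward(self, x):
        # Per-dendrite weighted sums: shape (n_units, n_dendrites).
        z = np.einsum("udi,i->ud", self.W, x) + self.b
        # Product of per-dendrite activations gives one output per MAP unit.
        return np.prod(base_activation(z), axis=1)


# Hypothetical usage: score whether a 2-D pixel coordinate lies on a target
# line segment, using an (untrained) linear readout of the MAP hidden layer.
hidden = MAPLayer(n_inputs=2, n_units=4, n_dendrites=3)
readout = np.random.default_rng(1).normal(size=4)
pixel = np.array([0.3, 0.7])
print(readout @ hidden.forward(pixel))
```

With a single dendrite per unit this reduces to an ordinary hidden layer with a non-sigmoidal activation; the paper's observation is that using more than one dendrite speeds up convergence on the line-drawing task.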