{"title":"Deterministic neuron: a model for faster learning","authors":"F. Ahmed, A. Awwal","doi":"10.1109/NAECON.1993.290833","DOIUrl":null,"url":null,"abstract":"Training in most neural network architectures are currently being done by updating the weights of the network in a way to reduce some error measures. The well-known backpropagation algorithm and some other training algorithms use this approach. Obviously, this has been very successful in mimicking the way the biological neurons do their function. But the problem of slow learning and getting trapped in local minimas of error function domain deserve serious investigation. Various models are proposed with various levels of success to get rid of these two problems. In this work, we propose a deterministic model of the neuron, that guarantees faster learning by modifying the nonlinearity associated with each neuron. Only one such neuron is required to solve the generalized N-bit parity problem.<<ETX>>","PeriodicalId":183796,"journal":{"name":"Proceedings of the IEEE 1993 National Aerospace and Electronics Conference-NAECON 1993","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the IEEE 1993 National Aerospace and Electronics Conference-NAECON 1993","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NAECON.1993.290833","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Training in most neural network architectures is currently done by updating the network's weights so as to reduce some error measure. The well-known backpropagation algorithm and several other training algorithms take this approach, which has been very successful in mimicking the way biological neurons function. However, the problems of slow learning and of getting trapped in local minima of the error function deserve serious investigation. Various models have been proposed, with varying levels of success, to overcome these two problems. In this work, we propose a deterministic model of the neuron that guarantees faster learning by modifying the nonlinearity associated with each neuron. Only one such neuron is required to solve the generalized N-bit parity problem.
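The abstract does not specify the modified nonlinearity, but a standard way a single neuron can solve N-bit parity is to sum the inputs with unit weights and apply a periodic activation. The sketch below assumes a cosine-based nonlinearity purely for illustration; it is not the paper's stated model.

```python
import math
from itertools import product

def parity_neuron(bits):
    """Single 'neuron': unit weights sum the binary inputs, then a
    periodic nonlinearity maps even sums to 0 and odd sums to 1.
    (Illustrative assumption; the paper's actual nonlinearity is
    not given in the abstract.)"""
    s = sum(bits)  # weighted sum with all weights fixed at 1
    # cos(pi*s) is +1 for even s and -1 for odd s, so this yields parity
    return round((1 - math.cos(math.pi * s)) / 2)

# A single such neuron handles generalized N-bit parity for any N
for n in (2, 3, 8):
    for bits in product((0, 1), repeat=n):
        assert parity_neuron(bits) == sum(bits) % 2
```

The key point the example illustrates is that parity, which requires a hidden layer of sigmoidal units, becomes linearly realizable once the neuron's nonlinearity is allowed to be non-monotonic.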