{"title":"Dissipation in neuromorphic computing: Fundamental bounds for feedforward networks","authors":"N. Ganesh, N. Anderson","doi":"10.1109/NANO.2017.8117441","DOIUrl":null,"url":null,"abstract":"We present the fundamental lower bound on dissipation in feedforward neural networks associated with the combined cost of the training and testing phases. Finite state automata descriptions of output generation and the weight updates during training, are used to derive the corresponding lower bounds in a physically grounded manner. The results are illustrated using a simple perceptron learning the AND classification task. The effects of the learning rate parameter and input probability distribution on the cost of dissipation are studied. Derivation of neural network learning algorithms that minimize the total dissipation cost of training are explored.","PeriodicalId":292399,"journal":{"name":"2017 IEEE 17th International Conference on Nanotechnology (IEEE-NANO)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE 17th International Conference on Nanotechnology (IEEE-NANO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NANO.2017.8117441","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
We present the fundamental lower bound on dissipation in feedforward neural networks associated with the combined cost of the training and testing phases. Finite-state automata descriptions of output generation and of the weight updates during training are used to derive the corresponding lower bounds in a physically grounded manner. The results are illustrated using a simple perceptron learning the AND classification task. The effects of the learning-rate parameter and the input probability distribution on the dissipation cost are studied. The derivation of neural network learning algorithms that minimize the total dissipation cost of training is also explored.
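The illustrative setup from the abstract is easy to reproduce. Below is a minimal sketch, assuming a standard single-layer perceptron with the classic perceptron update rule, of learning the two-input AND task with a tunable learning rate eta. The update counter is a hypothetical proxy for training cost added for illustration; it does not implement the paper's finite-state-automata dissipation accounting, which requires a physical bookkeeping of irreversible state changes (at least kT ln 2 of dissipation per erased bit, per Landauer's principle).

```python
# Illustrative sketch only: a perceptron learning the AND classification task,
# as in the paper's example. The paper's FSA-based dissipation bounds are NOT
# reproduced here; `updates` is a hypothetical proxy for training cost.

def train_perceptron_and(eta=0.1, max_epochs=100):
    """Train a single threshold unit on the 2-input AND truth table."""
    w = [0.0, 0.0]  # weights, initialized to zero
    b = 0.0         # bias
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    updates = 0     # proxy cost: number of weight updates performed
    for _ in range(max_epochs):
        errors = 0
        for (x1, x2), t in data:
            y = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = t - y
            if err != 0:
                # Standard perceptron rule: w <- w + eta * err * x
                w[0] += eta * err * x1
                w[1] += eta * err * x2
                b += eta * err
                errors += 1
                updates += 1
        if errors == 0:  # converged: all four patterns classified correctly
            break
    return w, b, updates

if __name__ == "__main__":
    # Sweep the learning rate to see how it changes the number of updates,
    # loosely mirroring the paper's study of eta's effect on training cost.
    for eta in (0.05, 0.1, 0.5):
        w, b, n = train_perceptron_and(eta=eta)
        print(f"eta={eta}: weights={w}, bias={b}, updates={n}")
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates; varying eta (and, in the paper's analysis, the input probability distribution) changes how many irreversible weight updates occur before convergence, which is what the dissipation bound ultimately prices.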