Jinming Yang, M. Ahmadi, G. Jullien, W. Miller
1998 Midwest Symposium on Circuits and Systems (Cat. No. 98CB36268), 9 August 1998
DOI: 10.1109/MWSCAS.1998.759553
An algorithm for in-the-loop training based on activation function derivative approximation
In this paper, we propose an algorithm for the in-the-loop training of a VLSI implementation of a neural network with analog neurons and programmable digital weights. The difficulty of evaluating the derivative of nonideal activation functions has been the main obstacle to effectively training a VLSI neural network chip with the standard backpropagation (BP) algorithm. In this paper, approximated derivatives are used in the BP algorithm, combined with an adaptive learning rate. An analysis from an optimization viewpoint shows that the proposed algorithm is advantageous, and experimental results indicate that it is superior to weight-perturbation-based algorithms.
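The idea of the abstract can be illustrated with a minimal sketch: since the analog neuron's transfer function is only observable through forward measurements, its derivative is approximated (here by a central finite difference), and the learning rate adapts to the error trend. The activation model, network size, and adaptation constants below are hypothetical illustrations, not the paper's actual chip or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def chip_activation(x):
    # Hypothetical stand-in for a nonideal analog neuron: a tanh with
    # gain variation (not the paper's measured chip characteristic).
    return np.tanh(0.8 * x + 0.05 * x**3)

def approx_derivative(x, eps=1e-2):
    # Derivative approximated from forward evaluations only (central
    # difference), since no closed form exists for the analog circuit.
    return (chip_activation(x + eps) - chip_activation(x - eps)) / (2 * eps)

# Tiny two-layer network trained on XOR via BP with the approximated
# derivative and a simple adaptive learning rate.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr, prev_err = 0.5, np.inf

for epoch in range(2000):
    z1 = X @ W1 + b1; h = chip_activation(z1)
    z2 = h @ W2 + b2; y = chip_activation(z2)
    err = np.mean((y - T) ** 2)

    # Adaptive learning rate (one common heuristic): grow slightly
    # while the error falls, halve when it rises.
    lr = min(lr * 1.02, 1.0) if err < prev_err else lr * 0.5
    prev_err = err

    # Backpropagation using the approximated derivatives in place of
    # the (unavailable) exact activation derivative.
    d2 = (y - T) * approx_derivative(z2)
    d1 = (d2 @ W2.T) * approx_derivative(z1)
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(0)
```

In the in-the-loop setting described by the abstract, the forward passes would be measured from the chip and the digital weights quantized to the programmable resolution; this software sketch only shows how an approximated derivative can replace the exact one inside standard BP.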