{"title":"深度信念网络训练中有限精度算法的动态点随机舍入算法","authors":"M. Essam, T. Tang, Eric Tatt Wei Ho, Hsin Chen","doi":"10.1109/NER.2017.8008430","DOIUrl":null,"url":null,"abstract":"This paper reports how to train a Deep Belief Network (DBN) using only 8-bit fixed-point parameters. We propose a dynamic-point stochastic rounding algorithm that provides enhanced results compared to the existing stochastic rounding. We show that by using a variable scaling factor, the fixed-point parameter updates are enhanced. To be more hardware amenable, the use of common scaling factor at each layer of DBN is further proposed. Using publicly available MNIST database, we show that the proposed algorithm can train a 3-layer DBN with an average accuracy of 98.49%, with a drop of 0.08% from the double floating-point average accuracy.","PeriodicalId":142883,"journal":{"name":"2017 8th International IEEE/EMBS Conference on Neural Engineering (NER)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Dynamic point stochastic rounding algorithm for limited precision arithmetic in Deep Belief Network training\",\"authors\":\"M. Essam, T. Tang, Eric Tatt Wei Ho, Hsin Chen\",\"doi\":\"10.1109/NER.2017.8008430\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper reports how to train a Deep Belief Network (DBN) using only 8-bit fixed-point parameters. We propose a dynamic-point stochastic rounding algorithm that provides enhanced results compared to the existing stochastic rounding. We show that by using a variable scaling factor, the fixed-point parameter updates are enhanced. To be more hardware amenable, the use of common scaling factor at each layer of DBN is further proposed. 
Using publicly available MNIST database, we show that the proposed algorithm can train a 3-layer DBN with an average accuracy of 98.49%, with a drop of 0.08% from the double floating-point average accuracy.\",\"PeriodicalId\":142883,\"journal\":{\"name\":\"2017 8th International IEEE/EMBS Conference on Neural Engineering (NER)\",\"volume\":\"136 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 8th International IEEE/EMBS Conference on Neural Engineering (NER)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NER.2017.8008430\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 8th International IEEE/EMBS Conference on Neural Engineering (NER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NER.2017.8008430","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
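To illustrate the core idea behind stochastic rounding to a low-precision fixed-point grid, the sketch below rounds values up or down with probability proportional to the fractional remainder, which keeps the rounding unbiased in expectation. This is a minimal illustration under assumed parameters (word layout, number of fractional bits), not the authors' published dynamic-point algorithm; the per-layer scaling factor the paper proposes would choose `frac_bits` per layer rather than fix it globally.

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits, rng):
    """Stochastically round x to a fixed-point grid with step 2**-frac_bits.

    Each value is rounded up with probability equal to its fractional
    remainder on the scaled grid, so E[rounded] == x (unbiased rounding).
    """
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x) * scale
    floor = np.floor(scaled)
    # Probability of rounding up equals the distance to the lower grid point.
    prob_up = scaled - floor
    rounded = floor + (rng.random(np.shape(scaled)) < prob_up)
    return rounded / scale

rng = np.random.default_rng(0)
# Hypothetical 8-bit word split: 1 sign bit, 2 integer bits, 5 fractional bits.
x = np.full(100_000, 0.30)  # 0.30 is not representable on a 2**-5 grid
y = stochastic_round_fixed_point(x, frac_bits=5, rng=rng)
# Each element lands on the grid, but the mean is preserved in expectation.
print(abs(y.mean() - 0.30) < 1e-3)
```

The unbiasedness is what lets low-precision training keep accumulating small gradient updates that deterministic round-to-nearest would always truncate to zero.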