{"title":"The quantization effects of different probability distribution on multilayer feedforward neural networks","authors":"Minghu Jiang, G. Gielen, Beixong Deng, Xiaofang Tang, Q. Ruan, Baozong Yuan","doi":"10.1109/ICOSP.2002.1179999","DOIUrl":null,"url":null,"abstract":"A statistical model of quantization was used to analyze the effects of quantization in digital implementation, and the performance degradation caused by number of quantized bits in multilayer feedforward neural networks (MLFNN) of different probability distribution. The performance of the training was compared with and without clipping weights for MLFNN. We established and analyzed the relationships between inputs and outputs among bit resolution, network-layer number, and performance degradation of MLFNN which are based on statistical models on-chip and off-chip training.","PeriodicalId":159807,"journal":{"name":"6th International Conference on Signal Processing, 2002.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"6th International Conference on Signal Processing, 2002.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOSP.2002.1179999","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
A statistical model of quantization was used to analyze the effects of quantization in digital implementations of multilayer feedforward neural networks (MLFNN), and the performance degradation caused by the number of quantization bits under different probability distributions. Training performance was compared with and without weight clipping for the MLFNN. We established and analyzed the input-output relationships of the MLFNN among bit resolution, number of network layers, and performance degradation, based on statistical models of on-chip and off-chip training.
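To make the effect of bit resolution and weight clipping concrete, below is a minimal illustrative sketch (not the authors' code) that uniformly quantizes the weights of a feedforward layer to a given number of bits, with and without a clipping range, and reports the resulting mean-squared quantization error. The symmetric uniform quantizer, the Gaussian weight distribution, and the specific bit widths and clipping value are assumptions made for illustration only.

```python
import numpy as np

def quantize_weights(w, bits, clip_range=None):
    """Uniformly quantize weights to `bits` bits.

    If `clip_range` is given, weights are first clipped to
    [-clip_range, clip_range]; otherwise the quantizer spans
    the full observed weight range.
    """
    if clip_range is not None:
        w = np.clip(w, -clip_range, clip_range)
        max_abs = clip_range
    else:
        max_abs = np.max(np.abs(w))
    # Symmetric uniform quantizer: 2**(bits - 1) levels per sign.
    step = max_abs / (2 ** (bits - 1))
    return np.round(w / step) * step

# Hypothetical layer weights drawn from a Gaussian distribution.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(64, 32))

# Compare quantization error with and without clipping at several bit widths.
for bits in (4, 8, 12):
    err_full = np.mean((w - quantize_weights(w, bits)) ** 2)
    err_clip = np.mean((w - quantize_weights(w, bits, clip_range=1.0)) ** 2)
    print(f"{bits}-bit  MSE (no clip): {err_full:.2e}   MSE (clip at 1.0): {err_clip:.2e}")
```

As the bit width increases the quantization error drops rapidly, and clipping trades a small saturation error on rare large weights for a finer step size over the bulk of the distribution, which is the kind of trade-off the paper's statistical analysis quantifies.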