{"title":"Neural network training with constrained integer weights","authors":"V. Plagianakos, M. Vrahatis","doi":"10.1109/CEC.1999.785521","DOIUrl":null,"url":null,"abstract":"Presents neural network training algorithms which are based on the differential evolution (DE) strategies introduced by Storn and Price (J. of Global Optimization, vol. 11, pp. 341-59, 1997). These strategies are applied to train neural networks with small integer weights. Such neural networks are better suited for hardware implementation than the real weight ones. Furthermore, we constrain the weights and biases in the range [-2/sup k/+1, 2/sup k/-1], for k=3,4,5. Thus, they can be represented by just k bits. These algorithms have been designed keeping in mind that the resulting integer weights require less bits to be stored and the digital arithmetic operations between them are more easily implemented in hardware. Obviously, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, the network training procedure can be more effective and efficient when large weights are allowed. Thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. We present the results of evolution algorithms for this difficult task. Based on the application of the proposed class of methods on classical neural network benchmarks, our experience is that these methods are effective and reliable.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"33","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.1999.785521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 33
Abstract
This paper presents neural network training algorithms based on the differential evolution (DE) strategies introduced by Storn and Price (J. of Global Optimization, vol. 11, pp. 341-59, 1997). These strategies are applied to train neural networks with small integer weights. Such networks are better suited for hardware implementation than networks with real-valued weights. Furthermore, we constrain the weights and biases to the range [-2^k+1, 2^k-1], for k=3,4,5. Thus, they can be represented by just k bits. These algorithms have been designed keeping in mind that the resulting integer weights require fewer bits to be stored and that the digital arithmetic operations between them are more easily implemented in hardware. Obviously, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, the network training procedure can be more effective and efficient when large weights are allowed. Thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. We present the results of evolutionary algorithms for this difficult task. Based on the application of the proposed class of methods to classical neural network benchmarks, our experience is that these methods are effective and reliable.
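The following is a minimal sketch of the kind of approach the abstract describes: a DE/rand/1/bin loop that evolves integer weights confined to [-2^k+1, 2^k-1]. It is not the authors' implementation; the 2-2-1 XOR network, the population size, and the DE parameters F and CR are illustrative assumptions, as is the choice to enforce integrality by rounding and clipping each trial vector.

```python
# Sketch of DE training of a small network with integer weights in
# [-2^k + 1, 2^k - 1]. Architecture and DE settings are assumptions,
# not the paper's exact configuration.
import numpy as np

k = 3                                  # weights constrained as below
lo, hi = -2**k + 1, 2**k - 1           # [-7, 7] for k = 3

def forward(w, X):
    """2-2-1 network for XOR; w holds 9 integer weights and biases."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)           # hidden layer
    return np.tanh(h @ W2 + b2)        # one output per input pattern

def error(w, X, y):
    return np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])   # XOR, tanh-scaled targets

NP, D, F, CR = 20, 9, 0.8, 0.9         # population size, dims, DE params
pop = rng.integers(lo, hi + 1, size=(NP, D)).astype(float)
fit = np.array([error(w, X, y) for w in pop])

for gen in range(200):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        cross = rng.random(D) < CR                   # binomial crossover
        cross[rng.integers(D)] = True                # keep >= 1 mutant gene
        trial = np.where(cross, mutant, pop[i])
        trial = np.clip(np.rint(trial), lo, hi)      # round and constrain
        f = error(trial, X, y)
        if f <= fit[i]:                              # greedy selection
            pop[i], fit[i] = trial, f

print("best MSE:", fit.min())
```

Rounding after mutation keeps the DE arithmetic in the reals while guaranteeing every candidate stored in the population is a valid k-bit integer vector; other integer-handling schemes (e.g. truncation) would fit the same loop.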