Are alternatives to backpropagation useful for training Binary Neural Networks? An experimental study in image classification
Ben Crulis, Barthélémy Serres, Cyril de Runz, G. Venturini
Applied Computing Review, 2023-03-27. DOI: 10.1145/3555776.3577674
Abstract
Current artificial neural networks are trained with parameters encoded as floating point numbers, which occupy a large amount of memory at inference time. As deep learning models grow in size, it is becoming increasingly difficult to train and deploy artificial neural networks on edge devices such as smartphones. Binary neural networks promise to reduce the size of deep neural network models, increase inference speed, and decrease energy consumption, allowing more powerful models to be deployed on edge devices. However, binary neural networks remain difficult to train with the backpropagation-based gradient descent scheme. We propose to adapt to binary neural networks two training algorithms that are considered promising alternatives to backpropagation for continuous neural networks. We provide comparative experimental results for image classification, including a backpropagation baseline, on the MNIST, Fashion MNIST and CIFAR-10 datasets in both continuous and binary settings. The results demonstrate that binary neural networks can not only be trained with algorithms other than backpropagation, but that these alternatives can also lead to better performance and a higher tolerance to the presence or absence of batch normalization layers.
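The abstract does not describe the training algorithms themselves. As background for why backpropagation is awkward for binary networks, the sketch below (illustrative only, not the authors' code; it assumes PyTorch) shows the standard workaround: a binarized linear layer whose sign() binarization is bypassed in the backward pass with a straight-through estimator (STE), since the sign function has zero gradient almost everywhere.

```python
# Illustrative sketch of backpropagation through a binary layer via the
# straight-through estimator (STE). Hypothetical example, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Forward pass uses strictly binary weights in {-1, +1}
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: pass the gradient through unchanged, clipped where |x| > 1
        return grad_output * (x.abs() <= 1).float()


class BinaryLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Latent real-valued weights are kept so the optimizer can update them;
        # only their binarized version is used in the forward computation.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight))


# Usage: a tiny MNIST-style classifier (28x28 inputs, 10 classes)
model = nn.Sequential(nn.Flatten(), BinaryLinear(28 * 28, 256), nn.ReLU(), BinaryLinear(256, 10))
x = torch.randn(32, 1, 28, 28)
loss = F.cross_entropy(model(x), torch.randint(0, 10, (32,)))
loss.backward()  # gradients reach the latent weights only via the STE approximation
```

The mismatch between the binary forward pass and the surrogate gradient is one reason alternative training algorithms, such as those compared in this paper, are of interest for binary networks.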