Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation
Marcus Rüb, Axel Sikora, Daniel Mueller-Gritschneder
arXiv - CS - Machine Learning, 2024-09-11 (arXiv:2409.07109)
Abstract
This study introduces TinyPropv2, an innovative algorithm optimized for on-device learning in deep neural networks and specifically designed for low-power microcontroller units. TinyPropv2 refines sparse backpropagation by dynamically adjusting the level of sparsity, including the ability to selectively skip training steps. This significantly lowers computational effort without substantially compromising accuracy. Our comprehensive evaluation across diverse datasets (CIFAR-10, CIFAR-100, Flower, Food, Speech Commands, MNIST, HAR, and DCASE2020) shows that TinyPropv2 achieves near-parity with full training, with an average accuracy drop of only around 1 percent in most cases: for example, 0.82 percent on CIFAR-10 and 1.07 percent on CIFAR-100. In terms of computational effort, TinyPropv2 shows a marked reduction, requiring as little as 10 percent of the effort needed for full training in some scenarios, and it consistently outperforms other sparse training methodologies. These findings underscore TinyPropv2's ability to manage computational resources efficiently while maintaining high accuracy, positioning it as an advantageous solution for advanced embedded applications in the IoT ecosystem.
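
To make the core idea concrete, the sketch below illustrates, in NumPy, how a sparse backpropagation step with a dynamic sparsity level and step skipping might look for a single linear layer. This is a minimal illustrative sketch under stated assumptions, not the TinyPropv2 implementation: the function name `sparse_backprop_step`, the error-proportional `keep_ratio` heuristic, and the `skip_threshold` parameter are all hypothetical.

```python
# Illustrative sketch of dynamic sparse backpropagation for one linear layer
# y = W @ x with a squared-error loss. Not the authors' TinyPropv2 code.
import numpy as np

def sparse_backprop_step(W, x, y_true, lr=0.01, skip_threshold=1e-3):
    """One training step that updates only the rows of W with the largest errors.

    - If the overall error is small, the update is skipped entirely.
    - Otherwise a dynamic fraction of the error components (assumed here to
      grow with the error magnitude) is selected, and only the corresponding
      rows of W are updated.
    Returns the updated W and a rough count of multiply-accumulates spent.
    """
    y_pred = W @ x
    err = y_pred - y_true                    # dL/dy for L = 0.5 * ||y_pred - y_true||^2

    err_norm = np.linalg.norm(err)
    if err_norm < skip_threshold:            # selectively skip the training step
        return W, 0                          # no update cost incurred

    # Dynamic sparsity: keep a fraction of error components that grows with the error
    keep_ratio = min(1.0, err_norm)          # assumed heuristic mapping error -> sparsity level
    k = max(1, int(keep_ratio * err.size))
    top_idx = np.argsort(np.abs(err))[-k:]   # indices of the k largest-magnitude errors

    # Sparse update: only the selected rows of W are touched
    W[top_idx] -= lr * np.outer(err[top_idx], x)
    return W, k * x.size                     # approximate MACs actually performed


# Example usage with random data
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
y_true = rng.standard_normal(4)
W, macs = sparse_backprop_step(W, x, y_true)
```

The intended point of the sketch is that only the selected rows of the weight matrix are updated, and that an entire step can be skipped when the error is already small; in spirit, this is where the reported reduction to as little as 10 percent of the full-training computational effort would come from.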