On the Netlist Gate-level Pruning for Tree-based Machine Learning Accelerators
B. Abreu, Guilherme Paim, Jorge Castro-Godínez, M. Grellert, S. Bampi
2022 IEEE 13th Latin American Symposium on Circuits and Systems (LASCAS), March 2022
DOI: 10.1109/LASCAS53948.2022.9789043
Abstract
Technological advances in recent years have led to the widespread use of Machine Learning (ML) models in embedded systems. Due to the battery limitations of such edge devices, energy consumption has become a major concern. Tree-based models, such as Decision Trees (DTs) and Random Forests (RFs), are well-known ML tools that provide higher-than-standard accuracy for several tasks. These models are convenient for battery-powered devices due to their simplicity, and they can be further optimized with approximate computing techniques. This paper explores gate-level pruning for DTs and RFs. Using a framework that generates VLSI descriptions of the ML models, we apply gate-level pruning to the mapped netlist produced by logic synthesis in three case studies. Analyses of the energy- and area-accuracy trade-offs show that significant energy and area savings can be obtained for small or even negligible accuracy drops, indicating that pruning techniques can be applied to optimize tree-based hardware implementations.
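The abstract does not detail the pruning heuristic itself, but the underlying idea of constant-folding parts of the circuit under an accuracy budget can be illustrated at the model level. The following Python sketch is purely hypothetical (the Node class, internal_nodes helper, dominant-branch rule, and 2% budget are illustrative assumptions, not the paper's framework): each decision node stands in for a threshold comparator in the synthesized netlist, and "pruning" ties its output to a constant branch, keeping the change only if validation accuracy stays within the budget.

```python
# Hypothetical sketch: greedy constant-folding of decision-node "comparators"
# under an accuracy budget. Names and parameters are illustrative assumptions,
# not the paper's actual netlist flow.
import random

random.seed(0)

class Node:
    """A decision node, standing in for a threshold comparator in the netlist."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, leaf=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.leaf = left, right, leaf
        self.pruned_to = None  # when set, the comparator is tied to a constant branch

    def predict(self, x):
        if self.leaf is not None:
            return self.leaf
        branch = self.pruned_to if self.pruned_to is not None else (x[self.feature] <= self.threshold)
        return (self.left if branch else self.right).predict(x)

# Tiny hand-built tree; class 1 iff (x0 <= 0.5 and x1 <= 0.30) or (x0 > 0.5 and x1 > 0.98).
tree = Node(0, 0.5,
            left=Node(1, 0.30, left=Node(leaf=1), right=Node(leaf=0)),
            right=Node(1, 0.98, left=Node(leaf=0), right=Node(leaf=1)))

def accuracy(model, data):
    return sum(model.predict(x) == y for x, y in data) / len(data)

def internal_nodes(node):
    if node is None or node.leaf is not None:
        return []
    return [node] + internal_nodes(node.left) + internal_nodes(node.right)

# Synthetic validation set drawn from the same rule the tree encodes.
data = []
for _ in range(2000):
    x = [random.random(), random.random()]
    y = int((x[0] <= 0.5 and x[1] <= 0.30) or (x[0] > 0.5 and x[1] > 0.98))
    data.append((x, y))

# Greedy loop: tie each comparator to its globally dominant branch (the software
# analogue of fixing a gate output to 0/1 and letting synthesis remove its cone),
# keeping the change only if the accuracy drop stays within the budget.
budget = 0.02
baseline = accuracy(tree, data)
for node in internal_nodes(tree):
    taken_left = sum(x[node.feature] <= node.threshold for x, _ in data)
    node.pruned_to = taken_left >= len(data) / 2
    if baseline - accuracy(tree, data) > budget:
        node.pruned_to = None  # revert: this comparator is too important

pruned = sum(n.pruned_to is not None for n in internal_nodes(tree))
print(f"baseline={baseline:.3f}  pruned={accuracy(tree, data):.3f}  comparators folded={pruned}")
```

In the flow the abstract describes, the candidates would be gates in the technology-mapped netlist and the cost model would be post-synthesis energy and area rather than node counts, but the accept-or-revert loop against an accuracy budget captures the same trade-off.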