On the Netlist Gate-level Pruning for Tree-based Machine Learning Accelerators

B. Abreu, Guilherme Paim, Jorge Castro-Godínez, M. Grellert, S. Bampi
DOI: 10.1109/LASCAS53948.2022.9789043
Published in: 2022 IEEE 13th Latin America Symposium on Circuits and System (LASCAS), March 2022
Citations: 1

Abstract

Technological advances in recent years have led to the widespread use of Machine Learning (ML) models in embedded systems. Due to the battery limitations of such edge devices, energy consumption has become a major concern. Tree-based models, such as Decision Trees (DTs) and Random Forests (RFs), are well-known ML tools that deliver high accuracy for several tasks. They are convenient for battery-powered devices because of their simplicity, and they can be further optimized with approximate computing techniques. This paper explores gate-level pruning for DTs and RFs. Using a framework that generates VLSI descriptions of the ML models, we apply gate-level pruning to the mapped netlist produced by logic synthesis in three case studies. Several analyses of the energy- and area-accuracy trade-offs show that significant energy and area savings can be obtained for small or even negligible accuracy drops, indicating that pruning techniques can be applied to optimize tree-based hardware implementations.
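The core idea of gate-level pruning can be illustrated in miniature: tie an internal net of a synthesized netlist to a constant (removing the gate that drives it), then check how often the pruned circuit still matches the original over the input space. The sketch below is a hypothetical toy model, not the authors' framework: the netlist, gate names, and the 0.875 acceptance threshold are all illustrative assumptions, and real flows operate on technology-mapped netlists with far larger input spaces sampled via test sets.

```python
import itertools

# Hypothetical toy combinational netlist: gate name -> (op, input nets).
# Stands in for a tiny synthesized tree comparator; purely illustrative.
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["c", "d"]),
    "out": ("OR", ["n1", "n2"]),
}

OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def evaluate(netlist, inputs, forced=None):
    """Evaluate the netlist; `forced` maps pruned nets to constants 0/1."""
    forced = forced or {}
    values = dict(inputs)
    for net, (op, srcs) in netlist.items():  # gates listed in topological order
        if net in forced:
            values[net] = forced[net]        # gate pruned: net tied to a constant
        else:
            values[net] = OPS[op](values[srcs[0]], values[srcs[1]])
    return values["out"]

def accuracy_after_pruning(netlist, forced):
    """Fraction of input patterns where the pruned circuit matches the original."""
    match = total = 0
    for bits in itertools.product([0, 1], repeat=4):
        inputs = dict(zip("abcd", bits))
        match += evaluate(netlist, inputs) == evaluate(netlist, inputs, forced)
        total += 1
    return match / total

# Greedy pruning: try tying each internal net to 0 or 1, and keep a prune
# only if functional accuracy stays above an (assumed) 0.875 threshold.
forced = {}
for net in ["n1", "n2"]:
    for const in (0, 1):
        trial = dict(forced, **{net: const})
        if accuracy_after_pruning(NETLIST, trial) >= 0.875:
            forced = trial
            break

print(forced, accuracy_after_pruning(NETLIST, forced))
# → {'n1': 0} 0.9375
```

Here the AND gate driving `n1` is removed at a cost of one mismatch in 16 input patterns, while pruning `n2` would drop accuracy below the threshold and is rejected; this mirrors, in miniature, the energy/area-versus-accuracy trade-off the paper explores.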