A Memory Representation of Random Forests Optimized for Resource-Limited Embedded Devices

Impact Factor: 2.0 · CAS Partition 4 (Computer Science) · JCR Q3 (Computer Science, Hardware & Architecture)
IEEE Embedded Systems Letters · Pub Date: 2026-04-01 · Epub Date: 2025-03-28 · DOI: 10.1109/LES.2025.3574563
Justin Beaurivage;Messaoud Ahmed Ouameur;Frédéric Domingue
IEEE Embedded Systems Letters, vol. 18, no. 2, pp. 115–118.
Citations: 0

Abstract

Random forests (RFs) are a versatile and effective machine learning technique widely applied across various tasks. With the increasing demand for deploying machine learning models on resource-constrained embedded devices, such as microcontrollers, challenges arise from the growing complexity of modern datasets. These challenges often result in models that are too large in memory and storage requirements to be feasibly implemented on small devices. In this letter, we propose a lossless memory representation of RFs that significantly limits the amount of random-access memory (RAM) required for prediction tasks, while also reducing the amount of nonvolatile memory needed to store the model. The approach achieves efficiency by embedding the data of leaf nodes within the decision nodes, thereby streamlining the tree structure. Additionally, it supports in-place prediction without requiring a decompression step. To evaluate our method, we implemented four RFs derived from real-world datasets onto four microcontroller platforms. Our results demonstrate that prediction tasks can be performed using at most 144 bytes of RAM for classification tasks, and at most 48 bytes for regression tasks, while memory accesses account for a maximum of 27.0% of the total CPU cycles. On the fastest platform, prediction times ranged between 59 and 75 µs, highlighting the suitability of this method for a variety of real-time applications.
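The letter itself does not publish its node layout, but the idea of embedding leaf data in the decision nodes and predicting in place can be sketched as follows. This is a minimal illustration under assumed field widths and names (the `Node` struct, the `LEAF` marker, and the adjacency/offset child encoding are all hypothetical, not the authors' actual format): a flat array of compact nodes lives in read-only (nonvolatile) memory, a leaf reuses the threshold field for its prediction value, and traversal needs only a loop index in RAM with no decompression buffer.

```c
#include <stdint.h>

/* Hypothetical compact node: the same record either holds a split
 * (feature index + fixed-point threshold) or, when feature == LEAF,
 * an embedded leaf whose value reuses the threshold slot. */
typedef struct {
    uint8_t  feature;    /* feature index, or LEAF for an embedded leaf */
    int16_t  threshold;  /* split threshold, or the leaf's value        */
    uint8_t  right;      /* offset to right child; left child is next   */
} Node;

#define LEAF 0xFF

/* In-place traversal: walk the flat node array directly from storage.
 * Only the loop index occupies RAM; no tree is reconstructed. */
static int16_t predict(const Node *nodes, const int16_t *sample)
{
    uint16_t i = 0;
    while (nodes[i].feature != LEAF) {
        if (sample[nodes[i].feature] <= nodes[i].threshold)
            i = i + 1;              /* left child stored adjacently   */
        else
            i = i + nodes[i].right; /* right child at relative offset */
    }
    return nodes[i].threshold;      /* leaf value embedded in the node */
}
```

A forest prediction would repeat this per tree and combine the returned values (majority vote or mean); the per-tree working state stays a few bytes, consistent with the RAM figures reported above.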
Source Journal

IEEE Embedded Systems Letters (Engineering: Control and Systems Engineering)
CiteScore: 3.30
Self-citation rate: 0.00%
Articles per year: 65
Journal description: The IEEE Embedded Systems Letters (ESL) provides a forum for rapid dissemination of the latest technical advances in embedded systems and related areas of embedded software. The emphasis is on models, methods, and tools that ensure secure, correct, efficient, and robust design of embedded systems and their applications.