X-TIME: Accelerating Large Tree Ensembles Inference for Tabular Data With Analog CAMs

IF 2.0 | JCR Q3 | Computer Science, Hardware & Architecture
Giacomo Pedretti;John Moon;Pedro Bruel;Sergey Serebryakov;Ron M. Roth;Luca Buonanno;Archit Gajjar;Lei Zhao;Tobias Ziegler;Cong Xu;Martin Foltin;Paolo Faraboschi;Jim Ignowski;Catherine E. Graves
Journal: IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, vol. 10, pp. 116-124
DOI: 10.1109/JXCDC.2024.3495634
Publication date: 2024-11-14 (Journal Article)
Article page: https://ieeexplore.ieee.org/document/10753423/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10753423
Citations: 0

Abstract

Structured, or tabular, data are the most common format in data science. While deep learning models have proven formidable in learning from unstructured data such as images or speech, they are less accurate than simpler approaches when learning from tabular data. In contrast, modern tree-based machine learning (ML) models shine in extracting relevant information from structured data. An essential requirement in data science is to reduce model inference latency in cases where, for example, models are used in a closed loop with simulation to accelerate scientific discovery. However, the hardware acceleration community has mostly focused on deep neural networks and largely ignored other forms of ML. Previous work has described the use of an analog content addressable memory (CAM) component for efficiently mapping random forests (RFs). In this work, we develop an analog-digital architecture that implements a novel increased precision analog CAM and a programmable chip for inference of state-of-the-art tree-based ML models, such as eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and others. Thanks to hardware-aware training, X-TIME reaches state-of-the-art accuracy and 119× higher throughput at 9740× lower latency with >150× improved energy efficiency compared with a state-of-the-art GPU for models with up to 4096 trees and depth of 8, with a 19-W peak power consumption.
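The core idea the abstract points to, mapping tree ensembles onto content addressable memories, can be sketched in software. Each root-to-leaf path of a decision tree defines a per-feature interval, and a CAM row stores those intervals so that all rows are matched against an input in parallel. The sketch below is illustrative only and assumes a toy dictionary tree format; it is not the X-TIME architecture or API, and names such as `tree_to_cam_rows` are hypothetical.

```python
# Illustrative sketch (hypothetical names): compiling a decision tree into
# CAM-style range rows. Each leaf becomes one row of [low, high) intervals,
# one interval per feature, plus the leaf's output value.
import math

def tree_to_cam_rows(tree, n_features):
    """Convert each root-to-leaf path into (intervals, leaf_value)."""
    rows = []
    def walk(node, lo, hi):
        if "leaf" in node:                      # leaf reached: emit a CAM row
            rows.append((list(zip(lo, hi)), node["leaf"]))
            return
        f, t = node["feature"], node["threshold"]
        # left branch means x[f] < t: tighten the upper bound on feature f
        l_lo, l_hi = lo[:], hi[:]
        l_hi[f] = min(l_hi[f], t)
        walk(node["left"], l_lo, l_hi)
        # right branch means x[f] >= t: tighten the lower bound on feature f
        r_lo, r_hi = lo[:], hi[:]
        r_lo[f] = max(r_lo[f], t)
        walk(node["right"], r_lo, r_hi)
    walk(tree, [-math.inf] * n_features, [math.inf] * n_features)
    return rows

def cam_match(rows, x):
    """In hardware all rows are compared in parallel; in software we scan.
    Exactly one row per tree matches a given input."""
    for intervals, leaf in rows:
        if all(lo <= xi < hi for (lo, hi), xi in zip(intervals, x)):
            return leaf
    raise ValueError("no matching row")

# Toy stump: x[0] < 0.5 -> -1.0, else +1.0
tree = {"feature": 0, "threshold": 0.5,
        "left": {"leaf": -1.0}, "right": {"leaf": 1.0}}
rows = tree_to_cam_rows(tree, n_features=2)
print(cam_match(rows, [0.3, 9.9]))   # -1.0
print(cam_match(rows, [0.7, 0.0]))   # 1.0
```

For an ensemble such as XGBoost, each tree contributes its matched leaf value and the per-tree outputs are summed; the parallelism of the CAM lookup is what yields the latency advantage the paper reports over sequential tree traversal.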
Source journal
CiteScore: 5.00
Self-citation rate: 4.20%
Articles per year: 11
Review time: 13 weeks