HLS-based swarm intelligence driven optimized hardware IP core for linear regression-based machine learning

A. Sengupta, Rahul Chaurasia, Mahendra Rathor
{"title":"基于HLS的群体智能驱动的优化硬件IP核,用于基于线性回归的机器学习","authors":"A. Sengupta, Rahul Chaurasia, Mahendra Rathor","doi":"10.1049/tje2.12299","DOIUrl":null,"url":null,"abstract":"Linear Regression (LR), as one of the essential Machine Learning (ML) models, incurs massive data crunching during the training phase based on many data points. Considering the computationally intensive nature in the LR models, an optimized dedicated hardware IP core design can be very effective. This paper proposes the following novelties: (a) an optimized hardware IP core design of linear regression‐based machine learning model using high‐level synthesis (HLS). More specifically, independent application specific datapath architectures of hardware IP for computing optimal bias and intercepts and cost function in LR‐ML are presented here; (b) an optimized hardware IP core design of LR based ML model by deducing dependency graph from its corresponding mathematical foundation; (c) register transfer level (RTL) design, using HLS, of the optimized LR based ML hardware IP core for computing cost function; (d) linear regression IP core design using multi‐layered tree‐height transformation (THT) and swarm intelligence based architectural exploration for optimized HLS design.","PeriodicalId":22858,"journal":{"name":"The Journal of Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HLS‐based swarm intelligence driven optimized hardware IP core for linear regression‐based machine learning\",\"authors\":\"A. Sengupta, Rahul Chaurasia, Mahendra Rathor\",\"doi\":\"10.1049/tje2.12299\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Linear Regression (LR), as one of the essential Machine Learning (ML) models, incurs massive data crunching during the training phase based on many data points. Considering the computationally intensive nature in the LR models, an optimized dedicated hardware IP core design can be very effective. This paper proposes the following novelties: (a) an optimized hardware IP core design of linear regression‐based machine learning model using high‐level synthesis (HLS). 
More specifically, independent application specific datapath architectures of hardware IP for computing optimal bias and intercepts and cost function in LR‐ML are presented here; (b) an optimized hardware IP core design of LR based ML model by deducing dependency graph from its corresponding mathematical foundation; (c) register transfer level (RTL) design, using HLS, of the optimized LR based ML hardware IP core for computing cost function; (d) linear regression IP core design using multi‐layered tree‐height transformation (THT) and swarm intelligence based architectural exploration for optimized HLS design.\",\"PeriodicalId\":22858,\"journal\":{\"name\":\"The Journal of Engineering\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Journal of Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1049/tje2.12299\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1049/tje2.12299","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Linear Regression (LR), as one of the essential Machine Learning (ML) models, incurs massive data crunching during the training phase based on many data points. Considering the computationally intensive nature of LR models, an optimized dedicated hardware IP core design can be very effective. This paper proposes the following novelties: (a) an optimized hardware IP core design of a linear regression-based machine learning model using high-level synthesis (HLS); more specifically, independent application-specific datapath architectures of hardware IP for computing the optimal bias and intercepts and the cost function in LR-ML are presented here; (b) an optimized hardware IP core design of the LR-based ML model by deducing a dependency graph from its corresponding mathematical foundation; (c) a register transfer level (RTL) design, using HLS, of the optimized LR-based ML hardware IP core for computing the cost function; (d) a linear regression IP core design using multi-layered tree-height transformation (THT) and swarm intelligence based architectural exploration for optimized HLS design.
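For reference, the quantities named in items (a) and (b), the cost function and the optimal intercept and slope (bias terms), take the following form in the standard least-squares formulation for simple linear regression; this is an assumed standard form for illustration, not notation taken from the paper.

$$
J(\theta_0,\theta_1) \;=\; \frac{1}{2n}\sum_{i=1}^{n}\bigl(\theta_0 + \theta_1 x_i - y_i\bigr)^2,
\qquad
\theta_1 \;=\; \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2},
\qquad
\theta_0 \;=\; \bar{y}-\theta_1\bar{x}.
$$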
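The dependency-graph and tree-height-transformation ideas in items (b) and (d) can be illustrated with a small C++ sketch (C-like code is the usual input to HLS tools). This is only an illustration of the general THT principle under assumed, hypothetical names; the paper's multi-layered THT and its swarm-intelligence-based exploration of resource configurations are not reproduced here.

```cpp
// Illustrative sketch only: accumulating the squared-error terms of the LR
// cost function. A naive loop forms a serial chain of n dependent additions;
// rebalancing the additions into a binary tree cuts the adder-chain depth
// from O(n) to O(log n), which is the latency reduction THT targets during
// HLS scheduling. All function names are hypothetical.
#include <cstddef>
#include <vector>

// Serial accumulation: the dependency graph is a chain of length n.
double cost_serial(const std::vector<double>& x, const std::vector<double>& y,
                   double theta0, double theta1) {
    if (x.empty()) return 0.0;
    double acc = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double e = theta0 + theta1 * x[i] - y[i];
        acc += e * e;                       // each add depends on the previous one
    }
    return acc / (2.0 * x.size());
}

// Tree-height-transformed accumulation: pairwise (balanced-tree) reduction,
// so additions at the same level are independent and can be scheduled in parallel.
double cost_tht(const std::vector<double>& x, const std::vector<double>& y,
                double theta0, double theta1) {
    if (x.empty()) return 0.0;
    std::vector<double> terms(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        double e = theta0 + theta1 * x[i] - y[i];
        terms[i] = e * e;                   // independent terms, no carried dependency
    }
    // Reduce in log2(n) levels: at each level, add elements one stride apart.
    for (std::size_t stride = 1; stride < terms.size(); stride *= 2) {
        for (std::size_t i = 0; i + stride < terms.size(); i += 2 * stride) {
            terms[i] += terms[i + stride];  // adds within a level are independent
        }
    }
    return terms[0] / (2.0 * terms.size());
}
```

In HLS terms, the balanced reduction exposes independent additions at every tree level, so the scheduler can bind them to parallel adders; a swarm-intelligence-driven exploration, as described in the abstract, would then search over candidate adder and multiplier allocations to trade area against latency.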