Algorithm-Hardware Co-Design for Efficient Brain-Inspired Hyperdimensional Learning on Edge

Yang Ni, Yeseong Kim, T. Simunic, M. Imani
{"title":"Algorithm-Hardware Co-Design for Efficient Brain-Inspired Hyperdimensional Learning on Edge","authors":"Yang Ni, Yeseong Kim, T. Simunic, M. Imani","doi":"10.23919/DATE54114.2022.9774524","DOIUrl":null,"url":null,"abstract":"Machine learning methods have been widely utilized to provide high quality for many cognitive tasks. Running sophisticated learning tasks requires high computational costs to process a large amount of learning data. Brain-inspired Hyperdimensional Computing (HDC) is introduced as an alternative solution for lightweight learning on edge devices. However, HDC models still rely on accelerators to ensure realtime and efficient learning. These hardware designs are not commercially available and need a relatively long period to synthesize and fabricate after deriving the new applications. In this paper, we propose an efficient framework for accelerating the HDC at the edge by fully utilizing the available computing power. We optimize the HDC through algorithm-hardware co-design of the host CPU and existing low-power machine learning accelerators, such as Edge TPU. We interpret the lightweight HDC learning model as a hyper-wide neural network to take advantage of the accelerator and machine learning platform. We further improve the runtime cost of training by employing a bootstrap aggregating algorithm called bagging while maintaining the learning quality. We evaluate the performance of the proposed framework with several applications. Joint experiments on mobile CPU and the Edge TPU show that our framework achieves 4.5 × faster training and 4.2 × faster inference compared to the baseline platform. In addition, our framework achieves 19.4 × faster training and 8.9 × faster inference as compared to embedded ARM CPU, Raspberry Pi, that consumes similar power consumption.","PeriodicalId":232583,"journal":{"name":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/DATE54114.2022.9774524","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Machine learning methods have been widely used to deliver high-quality results for many cognitive tasks. Running sophisticated learning tasks incurs high computational costs to process large amounts of learning data. Brain-inspired Hyperdimensional Computing (HDC) has been introduced as an alternative for lightweight learning on edge devices. However, HDC models still rely on accelerators to ensure real-time and efficient learning. These hardware designs are not commercially available and require a relatively long period to synthesize and fabricate after new applications are identified. In this paper, we propose an efficient framework for accelerating HDC at the edge by fully utilizing the available computing power. We optimize HDC through algorithm-hardware co-design of the host CPU and existing low-power machine learning accelerators, such as the Edge TPU. We interpret the lightweight HDC learning model as a hyper-wide neural network so that it can exploit the accelerator and existing machine learning platforms. We further reduce the runtime cost of training by employing a bootstrap aggregating algorithm, bagging, while maintaining learning quality. We evaluate the performance of the proposed framework on several applications. Joint experiments on a mobile CPU and the Edge TPU show that our framework achieves 4.5× faster training and 4.2× faster inference compared to the baseline platform. In addition, our framework achieves 19.4× faster training and 8.9× faster inference compared to an embedded ARM CPU (Raspberry Pi) with similar power consumption.
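As a rough illustration of the two ideas the abstract names, the sketch below (Python/NumPy, not the authors' implementation) expresses HDC encoding as a single hyper-wide dense layer, i.e., one matrix multiply that an off-the-shelf matrix accelerator such as the Edge TPU can execute, and trains several HDC sub-models on bootstrap resamples (bagging). The hypervector dimension D, the random-projection encoder, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: HDC encoding as one hyper-wide dense layer, plus
# bootstrap-aggregated (bagging) training of several HDC sub-models.
# All sizes and names are illustrative assumptions.
import numpy as np

D = 4096          # hypervector dimensionality (width of the "hyper-wide" layer)
N_FEATURES = 64   # input feature count
N_CLASSES = 10
N_BAGS = 4        # number of bagged sub-models

rng = np.random.default_rng(0)

# Random projection matrix: encoding x -> sign(x @ P) is a single dense
# layer of width D, which is what lets a standard ML accelerator run it.
P = rng.standard_normal((N_FEATURES, D)).astype(np.float32)

def encode(x):
    """Map a batch of feature vectors to bipolar hypervectors."""
    return np.sign(x @ P)

def train_one(x, y):
    """Single-pass HDC training: bundle (sum) encoded samples per class."""
    h = encode(x)
    model = np.zeros((N_CLASSES, D), dtype=np.float32)
    for c in range(N_CLASSES):
        model[c] = h[y == c].sum(axis=0)
    return model

def train_bagged(x, y):
    """Bagging: train each sub-model on a bootstrap resample of the data."""
    models = []
    for _ in range(N_BAGS):
        idx = rng.integers(0, len(x), size=len(x))  # sample with replacement
        models.append(train_one(x[idx], y[idx]))
    return models

def predict(models, x):
    """Cosine-similarity classification, scores averaged over sub-models."""
    h = encode(x)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    scores = np.zeros((len(x), N_CLASSES), dtype=np.float32)
    for m in models:
        mn = m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-9)
        scores += h @ mn.T
    return scores.argmax(axis=1)

# Toy usage on random data, just to show the API shape.
x = rng.standard_normal((256, N_FEATURES)).astype(np.float32)
y = rng.integers(0, N_CLASSES, size=256)
models = train_bagged(x, y)
print(predict(models, x[:5]))
```

Because the encoder reduces to one dense layer, it can be exported through a standard ML toolchain and offloaded to the accelerator; in a bagging setup the sub-models can also be trained on smaller bootstrap subsets, which is presumably the source of the training-time saving the abstract reports.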