Brain-Inspired Hyperdimensional Computing for Ultra-Efficient Edge AI

H. Amrouch, M. Imani, Xun Jiao, Y. Aloimonos, Cornelia Fermuller, Dehao Yuan, Dongning Ma, H. E. Barkam, P. Genssler, Peter Sutor
DOI: 10.1109/CODES-ISSS55005.2022.00017
Published in: 2022 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2022
Citations: 5

Abstract

Hyperdimensional Computing (HDC) is rapidly emerging as an attractive alternative to traditional deep learning algorithms. Despite the profound success of Deep Neural Networks (DNNs) in many domains, the computational power and storage they demand during training make deploying them on edge devices very challenging, if not infeasible. This, in turn, necessitates streaming data from the edge to the cloud, which raises serious concerns about availability, scalability, security, and privacy. Further, the data that edge devices receive from sensors is inherently noisy, and DNN algorithms are highly sensitive to noise, which makes accomplishing the required learning tasks with high accuracy immensely difficult. In this paper, we aim to provide a comprehensive overview of the latest advances in HDC. HDC aims to realize real-time performance and robustness by using strategies that more closely model the human brain. HDC is, in fact, motivated by the observation that the human brain operates on high-dimensional data representations. In HDC, objects are thereby encoded as high-dimensional vectors with thousands of elements. In this paper, we discuss the promising robustness of HDC algorithms against noise, along with their ability to learn from little data. Further, we present the outstanding synergy between HDC and beyond-von-Neumann architectures, and how HDC opens doors for efficient learning at the edge thanks to the ultra-lightweight implementation it needs, in contrast to traditional DNNs.
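To make the encoding idea in the abstract concrete, the following is a minimal, hypothetical sketch of an HDC classifier (not the authors' implementation): each feature position and each quantized feature level gets a random bipolar hypervector with thousands of elements; a sample is encoded by binding (element-wise multiply) position and level vectors and bundling (summing and taking the sign); class prototypes are bundles of training encodings, and inference picks the prototype with the highest cosine similarity. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (typically thousands of elements)
N_FEATURES = 4      # toy feature count, chosen for illustration
N_LEVELS = 16       # quantization levels for feature values in [0, 1]

# Random bipolar basis hypervectors: one per feature position, one per level.
feature_hvs = rng.choice([-1, 1], size=(N_FEATURES, D))
level_hvs = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(sample):
    """Bind each feature's position HV with its quantized-level HV, then bundle."""
    levels = np.clip((sample * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = feature_hvs * level_hvs[levels]       # element-wise binding
    return np.sign(bound.sum(axis=0))             # bundling via majority sign

def similarity(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Training: bundle the encodings of each class's samples into one prototype.
X0 = rng.uniform(0.0, 0.4, size=(20, N_FEATURES))   # synthetic class-0 samples
X1 = rng.uniform(0.6, 1.0, size=(20, N_FEATURES))   # synthetic class-1 samples
prototypes = [np.sign(sum(encode(x) for x in X0)),
              np.sign(sum(encode(x) for x in X1))]

# Inference: nearest prototype by cosine similarity.
query = rng.uniform(0.6, 1.0, size=N_FEATURES)
pred = int(np.argmax([similarity(encode(query), p) for p in prototypes]))
```

Because every dimension carries a tiny, redundant share of the information, flipping a modest fraction of elements barely moves the cosine similarity, which is the intuition behind the noise robustness discussed above; training is also single-pass bundling rather than iterative gradient descent, which is what makes the implementation so lightweight at the edge.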