DistriHD: A Memory Efficient Distributed Binary Hyperdimensional Computing Architecture for Image Classification

Dehua Liang, Jun Shiomi, Noriyuki Miura, H. Awano
{"title":"DistriHD: A Memory Efficient Distributed Binary Hyperdimensional Computing Architecture for Image Classification","authors":"Dehua Liang, Jun Shiomi, Noriyuki Miura, H. Awano","doi":"10.1109/ASP-DAC52403.2022.9712589","DOIUrl":null,"url":null,"abstract":"Hyper-Dimensional (HD) computing is a brain-inspired learning approach for efficient and fast learning on today's embedded devices. HD computing first encodes all data points to high-dimensional vectors called hypervectors and then efficiently performs the classification task using a well-defined set of operations. Although HD computing achieved reasonable performances in several practical tasks, it comes with huge memory requirements since the data point should be stored in a very long vector having thousands of bits. To alleviate this problem, we propose a novel HD computing architecture, called DistriHD which enables HD computing to be trained and tested using binary hypervectors and achieves high accuracy in single-pass training mode with significantly low hardware resources. DistriHD encodes data points to distributed binary hypervectors and eliminates the expensive item memory in the encoder, which significantly reduces the required hardware cost for inference. Our evaluation also shows that our model can achieve a $27.6\\times$ reduction in memory cost without hurting the classification accuracy. 
The hardware implementation also demonstrates that DistriHD achieves over $9.9\\times$ and $28.8\\times$ reduction in area and power, respectively.","PeriodicalId":239260,"journal":{"name":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"229 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC52403.2022.9712589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Hyper-Dimensional (HD) computing is a brain-inspired learning approach for efficient and fast learning on today's embedded devices. HD computing first encodes all data points into high-dimensional vectors called hypervectors and then performs the classification task efficiently using a well-defined set of operations. Although HD computing has achieved reasonable performance in several practical tasks, it comes with a huge memory requirement, since each data point must be stored as a very long vector of thousands of bits. To alleviate this problem, we propose a novel HD computing architecture, called DistriHD, which enables HD computing to be trained and tested using binary hypervectors and achieves high accuracy in single-pass training mode with significantly lower hardware resources. DistriHD encodes data points into distributed binary hypervectors and eliminates the expensive item memory in the encoder, which significantly reduces the hardware cost of inference. Our evaluation shows that our model achieves a $27.6\times$ reduction in memory cost without hurting classification accuracy. The hardware implementation also demonstrates that DistriHD achieves over $9.9\times$ and $28.8\times$ reductions in area and power, respectively.
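The binary HD pipeline the abstract describes — encode each data point into a binary hypervector, bundle the encoded examples of each class in a single training pass, and classify by Hamming distance — can be sketched roughly as follows. This is a minimal, generic binary HD classifier that still uses a conventional position/level item memory, not DistriHD itself (whose contribution is precisely to eliminate that item memory); the dimensionality, the 64-feature/16-level configuration, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000                     # hypervector dimensionality (typically thousands of bits)
n_positions, n_levels = 64, 16  # assumed: 64 input features, quantized to 16 levels

# Item memory: one random binary hypervector per feature position and per level.
position_hvs = rng.integers(0, 2, size=(n_positions, D), dtype=np.uint8)
level_hvs = rng.integers(0, 2, size=(n_levels, D), dtype=np.uint8)

def encode(sample):
    """Bind each feature's position HV with its quantized-level HV via XOR,
    then bundle all bound vectors with a bitwise majority vote."""
    levels = np.clip((sample * n_levels).astype(int), 0, n_levels - 1)
    bound = position_hvs ^ level_hvs[levels]            # XOR binding, shape (n_positions, D)
    return (bound.sum(axis=0) > n_positions // 2).astype(np.uint8)

def train(samples, labels, n_classes):
    """Single-pass training: accumulate each class's encoded samples,
    then binarize the per-class accumulator with a majority threshold."""
    acc = np.zeros((n_classes, D), dtype=np.int32)
    counts = np.zeros(n_classes, dtype=np.int32)
    for x, y in zip(samples, labels):
        acc[y] += encode(x)
        counts[y] += 1
    return (acc > (counts[:, None] // 2)).astype(np.uint8)

def classify(sample, class_hvs):
    """Predict the class whose hypervector has minimum Hamming distance."""
    q = encode(sample)
    return int(np.argmin((class_hvs ^ q).sum(axis=1)))
```

Because training is a single accumulate-and-threshold pass and inference is only XOR and popcount, the arithmetic maps naturally onto the kind of lightweight hardware the paper targets; the item memory above is what dominates the encoder's storage cost in a conventional design.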