ApprOchs: A Memristor-Based In-Memory Adaptive Approximate Adder

IF 3.7 · CAS Region 2 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Dominik Ochs;Lukas Rapp;Leandro Borzyk;Nima Amirafshar;Nima TaheriNejad
{"title":"ApprOchs: A Memristor-Based In-Memory Adaptive Approximate Adder","authors":"Dominik Ochs;Lukas Rapp;Leandro Borzyk;Nima Amirafshar;Nima TaheriNejad","doi":"10.1109/JETCAS.2025.3537328","DOIUrl":null,"url":null,"abstract":"As silicon scaling nears its limits and the <italic>Big Data</i> era unfolds, in-memory computing is increasingly important for overcoming the <italic>Von Neumann</i> bottleneck and thus enhancing modern computing performance. One of the rising in-memory technologies are <italic>Memristors</i>, which are resistors capable of memorizing state based on an applied voltage, making them useful for storage and computation. Another emerging computing paradigm is <italic>Approximate Computing</i>, which allows for errors in calculations to in turn reduce die area, processing time and energy consumption. In an attempt to combine both concepts and leverage their benefits, we propose the memristor-based adaptive approximate adder <italic>ApprOchs</i> - which is able to selectively compute segments of an addition either approximately or exactly. ApprOchs is designed to adapt to the input data given and thus only compute as much as is needed, a quality current State-of-the-Art (SoA) in-memory adders lack. Despite also using OR-based approximation in the lower k bit, ApprOchs has the edge over S-SINC because ApprOchs can skip the computation of the upper n-k bit for a small number of possible input combinations (22k of 22n possible combinations skip the upper bits). Compared to SoA in-memory approximate adders, ApprOchs outperforms them in terms of energy consumption while being highly competitive in terms of error behavior, with moderate speed and area efficiency. In application use cases, ApprOchs demonstrates its energy efficiency, particularly in machine learning applications. In MNIST classification using Deep Convolutional Neural Networks, we achieve 78.4% energy savings compared to SoA approximate adders with the same accuracy as exact adders at 98.9%, while for k-means clustering, we observed a 69% reduction in energy consumption with no quality drop in clustering results compared to the exact computation. For image blurring, we achieve up to 32.7% energy reduction over the exact computation and in its most promising configuration (<inline-formula> <tex-math>$k=3$ </tex-math></inline-formula>), the ApprOchs adder consumes 13.4% less energy than the most energy-efficient competing SoA design (S-SINC+), while achieving a similarly excellent median image quality at 43.74dB PSNR and 0.995 SSIM.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"105-119"},"PeriodicalIF":3.7000,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10859167/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

As silicon scaling nears its limits and the Big Data era unfolds, in-memory computing is increasingly important for overcoming the Von Neumann bottleneck and thus enhancing modern computing performance. One of the rising in-memory technologies is the memristor, a resistor capable of memorizing its state based on an applied voltage, which makes it useful for both storage and computation. Another emerging computing paradigm is Approximate Computing, which tolerates errors in calculations in exchange for reduced die area, processing time, and energy consumption. In an attempt to combine both concepts and leverage their benefits, we propose the memristor-based adaptive approximate adder ApprOchs, which can selectively compute segments of an addition either approximately or exactly. ApprOchs is designed to adapt to the given input data and thus compute only as much as is needed, a quality that current State-of-the-Art (SoA) in-memory adders lack. Although it also uses OR-based approximation in the lower $k$ bits, ApprOchs has the edge over S-SINC because it can skip the computation of the upper $n-k$ bits for a small number of possible input combinations ($2^{2k}$ of the $2^{2n}$ possible combinations skip the upper bits). Compared to SoA in-memory approximate adders, ApprOchs outperforms them in terms of energy consumption while being highly competitive in terms of error behavior, with moderate speed and area efficiency. In application use cases, ApprOchs demonstrates its energy efficiency, particularly in machine learning applications. In MNIST classification using Deep Convolutional Neural Networks, we achieve 78.4% energy savings compared to SoA approximate adders, with the same accuracy as exact adders at 98.9%, while for k-means clustering we observed a 69% reduction in energy consumption with no quality drop in clustering results compared to the exact computation. For image blurring, we achieve up to 32.7% energy reduction over the exact computation, and in its most promising configuration ($k=3$), the ApprOchs adder consumes 13.4% less energy than the most energy-efficient competing SoA design (S-SINC+), while achieving a similarly excellent median image quality at 43.74 dB PSNR and 0.995 SSIM.
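The abstract describes the adder's behavior only at a high level, so the Python sketch below illustrates the idea at the functional level: the lower $k$ bits are approximated with a bitwise OR (as the abstract states), and the exact addition of the upper $n-k$ bits is skipped when it is unnecessary. The skip condition used here (both upper segments being zero) is an assumption chosen to match the stated count of $2^{2k}$ skippable combinations out of $2^{2n}$; the paper's actual memristor-level circuit is not reproduced.

```python
# Behavioral sketch of an ApprOchs-style adaptive approximate adder.
# Illustration only -- this does not model the memristor circuit itself.
# Assumption: the exact upper (n-k)-bit addition is skipped when both
# upper segments are zero, matching the abstract's count of 2^(2k)
# skippable input combinations out of 2^(2n).

def approchs_add(a: int, b: int, n: int = 8, k: int = 3) -> int:
    """Approximately add two n-bit unsigned integers."""
    mask_lo = (1 << k) - 1
    # OR-based approximation of the lower k bits (no carry into the upper part).
    lo = (a & mask_lo) | (b & mask_lo)
    a_hi, b_hi = a >> k, b >> k
    if a_hi == 0 and b_hi == 0:
        # Adaptive skip: the upper n-k bits need not be computed at all.
        return lo
    # Exact addition of the upper segments; keep one extra bit for the carry-out.
    hi = (a_hi + b_hi) & ((1 << (n - k + 1)) - 1)
    return (hi << k) | lo

if __name__ == "__main__":
    a, b = 0b10110110, 0b01001011  # 182 + 75, exact sum = 257
    print(f"approx = {approchs_add(a, b)}, exact = {a + b}")  # approx = 255
```

As the example shows, OR-ing the lower segments drops the low-order carry, which is the source of the adder's bounded error; the adaptive skip saves energy precisely on the inputs where the upper segments contribute nothing.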
Source journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems
CiteScore: 8.50
Self-citation rate: 2.20%
Publications per year: 86
Journal description: The IEEE Journal on Emerging and Selected Topics in Circuits and Systems is published quarterly and solicits, with particular emphasis on emerging areas, special issues on topics that cover the entire scope of the IEEE Circuits and Systems (CAS) Society, namely the theory, analysis, design, tools, and implementation of circuits and systems, spanning their theoretical foundations, applications, and architectures for signal and information processing.