Approximated 2-Bit Adders for Parallel In-Memristor Computing With a Novel Sum-of-Product Architecture

Impact Factor: 2.0 | Q3 | Computer Science, Hardware & Architecture
Christian Simonides, Dominik Gausepohl, Peter M. Hinkel, Fabian Seiler, Nima Taherinejad
{"title":"Approximated 2-Bit Adders for Parallel In-Memristor Computing With a Novel Sum-of-Product Architecture","authors":"Christian Simonides;Dominik Gausepohl;Peter M. Hinkel;Fabian Seiler;Nima Taherinejad","doi":"10.1109/JXCDC.2024.3497720","DOIUrl":null,"url":null,"abstract":"Conventional computing methods struggle with the exponentially increasing demand for computational power, caused by applications including image processing and machine learning (ML). Novel computing paradigms such as in-memory computing (IMC) and approximate computing (AxC) provide promising solutions to this problem. Due to their low energy consumption and inherent ability to store data in a nonvolatile fashion, memristors are an increasingly popular choice in these fields. There is a wide range of logic forms compatible with memristive IMC, each offering different advantages. We present a novel mixed-logic solution that utilizes properties of the sum-of-product (SOP) representation and propose a full-adder circuit that works efficiently in 2-bit units. To further improve the speed, area usage, and energy consumption, we propose two additional approximate (Ax) 2-bit adders that exhibit inherent parallelization capabilities. We apply the proposed adders in selected image processing applications, where our Ax approach reduces the energy consumption by \n<inline-formula> <tex-math>$\\mathrm {31~\\!\\%}$ </tex-math></inline-formula>\n–\n<inline-formula> <tex-math>$\\mathrm {40~\\!\\%}$ </tex-math></inline-formula>\n and improves the speed by \n<inline-formula> <tex-math>$\\mathrm {50~\\!\\%}$ </tex-math></inline-formula>\n. To demonstrate the potential gains of our approximations in more complex applications, we applied them in ML. Our experiments indicate that with up to \n<inline-formula> <tex-math>$6/16$ </tex-math></inline-formula>\n Ax adders, there is no accuracy degradation when applied in a convolutional neural network (CNN) that is evaluated on MNIST. Our approach can save up to 125.6 mJ of energy and 505 million steps compared to our exact approach.","PeriodicalId":54149,"journal":{"name":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","volume":"10 ","pages":"135-143"},"PeriodicalIF":2.0000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10752571","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal on Exploratory Solid-State Computational Devices and Circuits","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10752571/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
引用次数: 0

Abstract

Conventional computing methods struggle with the exponentially increasing demand for computational power, caused by applications including image processing and machine learning (ML). Novel computing paradigms such as in-memory computing (IMC) and approximate computing (AxC) provide promising solutions to this problem. Due to their low energy consumption and inherent ability to store data in a nonvolatile fashion, memristors are an increasingly popular choice in these fields. There is a wide range of logic forms compatible with memristive IMC, each offering different advantages. We present a novel mixed-logic solution that utilizes properties of the sum-of-product (SOP) representation and propose a full-adder circuit that works efficiently in 2-bit units. To further improve the speed, area usage, and energy consumption, we propose two additional approximate (Ax) 2-bit adders that exhibit inherent parallelization capabilities. We apply the proposed adders in selected image processing applications, where our Ax approach reduces the energy consumption by 31%–40% and improves the speed by 50%. To demonstrate the potential gains of our approximations in more complex applications, we applied them in ML. Our experiments indicate that with up to 6/16 Ax adders, there is no accuracy degradation when applied in a convolutional neural network (CNN) that is evaluated on MNIST. Our approach can save up to 125.6 mJ of energy and 505 million steps compared to our exact approach.
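The abstract does not spell out the circuit-level SOP realization or the exact truth tables of the paper's two approximate adders, so the following Python sketch is only a behavioral illustration of the general idea: an exact 2-bit adder alongside a hypothetical approximation (here, simply dropping the carry generated in the lower bit position) and a quick error measurement. The names exact_add2 and approx_add2 and the chosen approximation are assumptions for illustration, not the paper's design.

```python
# Behavioral sketch of exact vs. approximate 2-bit addition.
# NOTE: this is NOT the paper's SOP-based memristive circuit; the
# approximation below (ignoring the carry out of the lower bit) is a
# hypothetical example of the accuracy/cost trade-off Ax adders make.

def exact_add2(a: int, b: int, cin: int = 0) -> tuple[int, int]:
    """Exact 2-bit addition: returns (2-bit sum, carry-out)."""
    total = (a & 0b11) + (b & 0b11) + (cin & 0b1)
    return total & 0b11, total >> 2

def approx_add2(a: int, b: int, cin: int = 0) -> tuple[int, int]:
    """Hypothetical approximate 2-bit addition: the carry generated in
    the lower bit position is dropped instead of propagated."""
    s0 = (a ^ b ^ cin) & 0b1              # exact sum of the lower bits
    s1 = ((a >> 1) ^ (b >> 1)) & 0b1      # upper bit ignores the lower carry
    cout = (a >> 1) & (b >> 1) & 0b1      # carry-out from the upper bits only
    return (s1 << 1) | s0, cout

if __name__ == "__main__":
    # Mean absolute error over all 2-bit operand pairs (carry-in = 0).
    errors = []
    for a in range(4):
        for b in range(4):
            es, ec = exact_add2(a, b)
            xs, xc = approx_add2(a, b)
            errors.append(abs(((ec << 2) | es) - ((xc << 2) | xs)))
    print(f"mean absolute error: {sum(errors) / len(errors):.2f}")
```

In practice, such approximate units are typically confined to the less significant positions of a wider datapath; the abstract's 6/16 figure refers to how many of the adders in the evaluated CNN could be approximated without accuracy loss on MNIST.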
Source journal metrics:
CiteScore: 5.00
Self-citation rate: 4.20%
Articles published: 11
Review time: 13 weeks