Error Diluted Approximate Multipliers Using Positive And Negative Compressors

Bindu G. Gowda, C. PrashanthH., M. Rao
{"title":"Error Diluted Approximate Multipliers Using Positive And Negative Compressors","authors":"Bindu G. Gowda, C. PrashanthH., M. Rao","doi":"10.1109/ISQED57927.2023.10129376","DOIUrl":null,"url":null,"abstract":"Introducing approximation has shown significant benefits in the performance and throughput, besides lowering on-chip power consumption and silicon footprint requirement. Approximation in digital computing was designed and targeted towards error-resilient applications primarily involving image or signal processing modules. Previous works focus on approximating various arithmetic operator designs, including dividers, multipliers, adders, subtractors and multiply-and-accumulate units. Approximating compressor designs for multipliers was found to improve performance, power and area effectively. In addition, they offer regularity in cascading the partial product bits. Conventional multiplier designs employ compressors of the same kind throughout the partial product reduction stages, leading to the accumulation of errors. This paper proposes to utilize two different types of compressors: positive and negative compressors, subsequently in partial product reduction stages, with the intention to reduce the accumulated error. The proposed multiplier designs with appropriately placed positive and negative compressors along the stages and columns of the Partial Product Matrix (PPM) are investigated and characterized for hardware and error metrics. These designs were further evaluated for Image smoothing and Convolutional Neural Network (CNN) applications. 
The CNN built for four datasets using proposed approximate multipliers demonstrated comparable accuracy to that of exact multiplier-based CNN in the Lenet-5 architecture.","PeriodicalId":315053,"journal":{"name":"2023 24th International Symposium on Quality Electronic Design (ISQED)","volume":"107 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 24th International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED57927.2023.10129376","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Introducing approximation has shown significant benefits in performance and throughput, while lowering on-chip power consumption and silicon footprint requirements. Approximation in digital computing has been designed for and targeted at error-resilient applications, primarily those involving image or signal processing modules. Previous works focus on approximating various arithmetic operator designs, including dividers, multipliers, adders, subtractors, and multiply-and-accumulate units. Approximate compressor designs for multipliers were found to improve performance, power, and area effectively; in addition, they offer regularity in cascading the partial product bits. Conventional multiplier designs employ compressors of the same kind throughout the partial product reduction stages, leading to an accumulation of errors. This paper proposes to utilize two different types of compressors, positive and negative, in successive partial product reduction stages, with the intention of reducing the accumulated error. The proposed multiplier designs, with positive and negative compressors appropriately placed along the stages and columns of the Partial Product Matrix (PPM), are investigated and characterized for hardware and error metrics. These designs were further evaluated in image smoothing and Convolutional Neural Network (CNN) applications. CNNs built for four datasets using the proposed approximate multipliers demonstrated accuracy comparable to that of an exact-multiplier-based CNN in the LeNet-5 architecture.
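The core idea, interleaving approximation stages whose errors have opposite signs so they cancel during accumulation, can be illustrated with a toy model. The sketch below is an assumption-laden simplification, not the paper's compressor circuits: a negatively biased (truncating) and a positively biased (rounding-up) approximate multiplier stand in for the negative and positive compressors, and interleaving them in a multiply-and-accumulate loop dilutes the accumulated error relative to using either bias alone.

```python
# Toy illustration of error dilution; NOT the paper's compressor designs.
# A negatively biased multiplier truncates the product's 4 LSBs, a
# positively biased one rounds them up; cycling between the two in a
# MAC loop lets the opposite-signed errors largely cancel.

def mul_neg(a, b):
    """Negative-biased approximate multiply: error is always <= 0."""
    return (a * b) & ~0xF  # truncate product down to a multiple of 16

def mul_pos(a, b):
    """Positive-biased approximate multiply: error is always >= 0."""
    p = a * b
    return p if p & 0xF == 0 else (p | 0xF) + 1  # round up to a multiple of 16

def mac(pairs, muls):
    """Multiply-and-accumulate, cycling through the supplied multipliers."""
    return sum(muls[i % len(muls)](a, b) for i, (a, b) in enumerate(pairs))

pairs = [(a, b) for a in range(1, 16) for b in range(1, 16)]
exact = mac(pairs, [lambda a, b: a * b])

err_neg = mac(pairs, [mul_neg]) - exact           # all-negative stages
err_pos = mac(pairs, [mul_pos]) - exact           # all-positive stages
err_mix = mac(pairs, [mul_pos, mul_neg]) - exact  # interleaved stages
# err_mix is far smaller in magnitude than either err_neg or err_pos.
```

In the paper the same cancellation is engineered at the compressor level inside the partial product matrix; the multiplier-level rounding here is only a stand-in chosen so the effect is easy to measure.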