PosAx-O: Exploring Operator-level Approximations for Posit Arithmetic in Embedded AI/ML

Amritha Immaneni, Salim Ullah, S. Nambi, Siva Satyendra Sahoo, Akash Kumar
{"title":"PosAx-O: Exploring Operator-level Approximations for Posit Arithmetic in Embedded AI/ML","authors":"Amritha Immaneni, Salim Ullah, S. Nambi, Siva Satyendra Sahoo, Akash Kumar","doi":"10.1109/DSD57027.2022.00037","DOIUrl":null,"url":null,"abstract":"The quest for low-cost embedded AI/ML applications has motivated innovations across multiple abstractions of the computation stack. Novel approaches for arithmetic operations have primarily involved quantization, precision-scaling, approximations, and modified data representation. In this context, Posit has emerged as an alternative to the IEEE-754 standard as it offers multiple benefits, primarily due to its dynamic range and tapered precision. However, the implementation of Posit arithmetic operations tends to result in high resource utilization and power dissipation. Consequently, recent works have delved into the idea of exploiting the error resilience of machine learning algorithms by using low-precision Posit arithmetic. However, limiting the exploration to precision-scaling limits the scope for application-specific optimizations for embedded AI/ML applications. To this end, we explore operator-level optimizations and approximations for low-precision Posit numbers. Specifically, we identify and eliminate redundant operations in state-of-the-art Posit arithmetic operator designs and provide a modular framework for exploring approximations in various stages of the computation. We also present a novel framework for behaviorally testing the corresponding Posit approximate designs in Artificial Neural Networks. The proposed optimizations and approximations exhibit considerable resource improvements with a small error in many cases. For instance, a Posit-based multiplier with 1-bit reduced precision shows a 33% improvement in power and utilization, with only a 0.2% degradation in overall accuracy.","PeriodicalId":211723,"journal":{"name":"2022 25th Euromicro Conference on Digital System Design (DSD)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 25th Euromicro Conference on Digital System Design (DSD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSD57027.2022.00037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The quest for low-cost embedded AI/ML applications has motivated innovations across multiple abstractions of the computation stack. Novel approaches to arithmetic operations have primarily involved quantization, precision-scaling, approximations, and modified data representations. In this context, Posit has emerged as an alternative to the IEEE-754 standard, offering multiple benefits that stem primarily from its dynamic range and tapered precision. However, implementations of Posit arithmetic operations tend to incur high resource utilization and power dissipation. Consequently, recent works have explored exploiting the error resilience of machine learning algorithms by using low-precision Posit arithmetic. Restricting the exploration to precision-scaling, however, limits the scope for application-specific optimizations in embedded AI/ML applications. To this end, we explore operator-level optimizations and approximations for low-precision Posit numbers. Specifically, we identify and eliminate redundant operations in state-of-the-art Posit arithmetic operator designs and provide a modular framework for exploring approximations at various stages of the computation. We also present a novel framework for behaviorally testing the corresponding approximate Posit designs in Artificial Neural Networks. The proposed optimizations and approximations yield considerable resource improvements with small errors in many cases. For instance, a Posit-based multiplier with 1-bit reduced precision shows a 33% improvement in power and resource utilization, with only a 0.2% degradation in overall accuracy.
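To make the sign/regime/exponent/fraction structure concrete, the sketch below decodes a standard posit(n, es) word into its value. This is a minimal illustrative reference, not the operator designs evaluated in the paper; the function name and the default field widths (n=8, es=1) are assumptions chosen for readability.

```python
# Minimal reference decoder for posit(n, es) numbers (illustrative sketch).

def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                 # NaR ("Not a Real")
    sign = bits >> (n - 1)
    if sign:
        bits = (-bits) & mask               # negative posits: two's complement
    rest, width = bits & ((1 << (n - 1)) - 1), n - 1

    # Regime: run of identical bits after the sign, terminated by its inverse.
    first, run, i = (rest >> (width - 1)) & 1, 0, width - 1
    while i >= 0 and ((rest >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first else -run          # regime value
    remaining = max(i, 0)                   # bits left after the terminator

    # Exponent: next es bits (zero-padded if truncated by the regime).
    exp_bits = min(es, remaining)
    exponent = ((rest >> (remaining - exp_bits)) & ((1 << exp_bits) - 1)) if exp_bits else 0
    exponent <<= es - exp_bits

    # Fraction: whatever is left, with an implicit leading 1.
    frac_bits = remaining - exp_bits
    frac = rest & ((1 << frac_bits) - 1) if frac_bits else 0
    fraction = 1.0 + (frac / (1 << frac_bits) if frac_bits else 0.0)

    useed = 2.0 ** (2 ** es)                # scale factor per regime step
    value = useed ** k * 2.0 ** exponent * fraction
    return -value if sign else value


if __name__ == "__main__":
    # For posit(8,1): 0b01000000 -> 1.0, 0b01100000 -> 4.0,
    # 0b00100000 -> 0.25, 0b11000000 -> -1.0
    for word in (0b01000000, 0b01100000, 0b00100000, 0b11000000):
        print(f"{word:08b} -> {decode_posit(word)}")
```

Because the regime length varies per operand, the exponent and fraction fields have data-dependent widths; this is what gives posits their tapered precision, and it is also why decode/encode stages dominate operator cost. In a decoder like the one above, dropping the least-significant fraction bit before the multiply is one simple way to model the kind of 1-bit precision reduction whose power/accuracy trade-off the abstract reports.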