TS-EFA: Resource-efficient High-precision Approximation of Exponential Functions Based on Template-scaling Method

Jeeson Kim, V. Kornijcuk, D. Jeong
{"title":"TS-EFA: Resource-efficient High-precision Approximation of Exponential Functions Based on Template-scaling Method","authors":"Jeeson Kim, V. Kornijcuk, D. Jeong","doi":"10.1109/ISQED48828.2020.9137012","DOIUrl":null,"url":null,"abstract":"Spiking neural network (SNN) utilizes a number of temporal kernels that follow exponential functions with characteristic time-constants $\\tau$. The digital-hardware implementation of SNN-referred to as digital neuromorphic processor-suffers from the heavy workload caused by the exponential function approximation. The challenge is to reconcile the approximation accuracy with hardware resource cost optimally. To this end, we propose an exponential function approximation (EFA) method that reconciles its approximation precision with circuit overhead and calculation speed. This EFA is based on a template-scaling (TS) method; a segment of a full exponential function is taken as a template, and the template is repeatedly scaled to approximate the entire function. Therefore, we refer to our EFA as TS-EFA. The TS-EFA needs two lookup tables (LUT): template and scaling LUTs. The former is allocated to the template, whereas the latter is allocated to the scaling factors for the total bins. For experimental verification, we implemented the TS-EFA in a Xilinx Virtex-7 field-programmable gate array at 500 MHz clock speed. Two types of TS-EFA modules were considered: (i) module with a single time-constant and (ii) multiple time-constants. The module (i) successfully approximates the exponential function with a maximum absolute error of $1.3\\times 10^{-5}$ and a latency of four clock cycles. The module (ii) can be shared among different temporal kernels with different time-constants unlike the module (i). 
This module performs the approximation with the identical precision but an additional latency of four clock cycles, i.e., total eight clock cycles.","PeriodicalId":225828,"journal":{"name":"2020 21st International Symposium on Quality Electronic Design (ISQED)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 21st International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED48828.2020.9137012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

A spiking neural network (SNN) utilizes a number of temporal kernels that follow exponential functions with characteristic time-constants $\tau$. The digital-hardware implementation of an SNN, referred to as a digital neuromorphic processor, suffers from the heavy workload caused by exponential function approximation. The challenge is to reconcile approximation accuracy with hardware resource cost. To this end, we propose an exponential function approximation (EFA) method that balances approximation precision against circuit overhead and calculation speed. The EFA is based on a template-scaling (TS) method: a segment of the full exponential function is taken as a template, and this template is repeatedly scaled to approximate the entire function. We therefore refer to our EFA as TS-EFA. The TS-EFA needs two lookup tables (LUTs): a template LUT and a scaling LUT. The former stores the template, whereas the latter stores the scaling factors for all bins. For experimental verification, we implemented the TS-EFA in a Xilinx Virtex-7 field-programmable gate array at a 500 MHz clock speed. Two types of TS-EFA modules were considered: (i) a module with a single time-constant and (ii) a module with multiple time-constants. Module (i) successfully approximates the exponential function with a maximum absolute error of $1.3\times 10^{-5}$ and a latency of four clock cycles. Unlike module (i), module (ii) can be shared among temporal kernels with different time-constants. It performs the approximation with identical precision but an additional latency of four clock cycles, i.e., eight clock cycles in total.
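The template-scaling idea described above can be sketched in a few lines: one segment of $e^{-x}$ is stored as a template LUT, and a second LUT holds the per-bin scaling factors $e^{-k w}$, so that $e^{-x} \approx s_k \cdot T(x - kw)$ with $k = \lfloor x/w \rfloor$. The bin width, LUT depths, and number of bins below are illustrative assumptions for a software model; the abstract does not state the paper's actual fixed-point formats or FPGA parameters.

```python
import math

# Illustrative model of the template-scaling (TS) approximation of
# exp(-t/tau). BIN_W, N_BINS, and N_TMPL are assumed values, not the
# paper's hardware parameters.
TAU = 1.0      # characteristic time-constant (assumed)
BIN_W = 1.0    # width of one template segment, in units of tau
N_BINS = 16    # number of scaling-LUT entries (assumed)
N_TMPL = 256   # template-LUT resolution per segment (assumed)

# Template LUT: one segment of the exponential, exp(-u) for u in [0, BIN_W)
template_lut = [math.exp(-i * BIN_W / N_TMPL) for i in range(N_TMPL)]
# Scaling LUT: one factor per bin, exp(-k * BIN_W)
scaling_lut = [math.exp(-k * BIN_W) for k in range(N_BINS)]

def ts_exp(t, tau=TAU):
    """Approximate exp(-t/tau) by scaling the stored template segment."""
    x = t / tau
    k = min(int(x // BIN_W), N_BINS - 1)           # scaling-LUT index (bin)
    u = x - k * BIN_W                              # offset inside the template
    idx = min(int(u / BIN_W * N_TMPL), N_TMPL - 1) # template-LUT index
    return scaling_lut[k] * template_lut[idx]

# Worst-case absolute error over the covered range; limited here by the
# coarse template resolution, not by the paper's reported 1.3e-5.
err = max(abs(ts_exp(i * 0.001) - math.exp(-i * 0.001))
          for i in range(int(N_BINS * BIN_W * 1000)))
```

With these toy LUT sizes the error is bounded by the template step ($\approx 1/256$); finer template resolution and interpolation, as in a hardware implementation, tighten it further.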