Authors: Schuyler Eldridge, F. Raudies, D. Zou, A. Joshi
DOI: 10.1145/2591513.2591534
Venue: ACM Great Lakes Symposium on VLSI
Published: 2014-05-20
Citations: 25
Neural network-based accelerators for transcendental function approximation
The general-purpose approximate nature of neural network (NN) based accelerators has the potential to sustain the historic energy and performance improvements of computing systems. We propose using NN-based accelerators to approximate mathematical functions in the GNU C Library (glibc) that commonly occur in application benchmarks. Using our NN-based approach to approximate cos, exp, log, pow, and sin, we achieve an average energy-delay product (EDP) 68x lower than that of traditional glibc execution. In full applications, our NN-based approach achieves an EDP that is 78% of traditional execution's, at the cost of an average mean squared error (MSE) of 1.56.
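The core idea of the abstract — training a small neural network to stand in for a transcendental libm function, trading exactness for a cheaper evaluation — can be illustrated with a minimal software sketch. The snippet below is a hypothetical illustration, not the paper's fixed-function hardware design: it trains a tiny one-hidden-layer MLP (1 → 16 tanh units → 1 linear output) by full-batch gradient descent to approximate sin(x) on [-π, π], then measures the MSE of the approximation. All layer sizes, learning rates, and iteration counts are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: sample the target function sin(x) on [-pi, pi].
x = rng.uniform(-np.pi, np.pi, size=(512, 1))
y = np.sin(x)

# Parameters of a 1 -> 16 (tanh) -> 1 (linear) MLP.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(inp):
    """Return hidden activations and network output."""
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(x)
mse_before = mse(pred0, y)  # error of the untrained network

lr = 0.1
for _ in range(5000):
    h, pred = forward(x)
    err = pred - y                       # gradient of MSE w.r.t. output (up to a constant)
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(x)
mse_after = mse(pred1, y)
print(f"MSE before training: {mse_before:.4f}, after: {mse_after:.4f}")
```

In the paper's setting the same idea is realized in hardware, where a small fixed network evaluated in a few multiply-accumulate steps can undercut the energy-delay product of a full-precision glibc routine; the acceptable MSE (1.56 on average across their benchmarks) is the price of that approximation.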