Adaptive-Precision Framework for SGD Using Deep Q-Learning

Wentai Zhang, Hanxian Huang, Jiaxi Zhang, M. Jiang, Guojie Luo
{"title":"Adaptive-Precision Framework for SGD Using Deep Q-Learning","authors":"Wentai Zhang, Hanxian Huang, Jiaxi Zhang, M. Jiang, Guojie Luo","doi":"10.1145/3240765.3240774","DOIUrl":null,"url":null,"abstract":"Stochastic gradient descent (SGD) is a widely-used algorithm in many applications, especially in the training process of deep learning models. Low-precision implementation for SGD has been studied as a major acceleration approach. However, if not appropriately used, low-precision implementation can deteriorate its convergence because of the rounding error when gradients become small near a local optimum. In this work, to balance throughput and algorithmic accuracy, we apply the Q-learning technique to adjust the precision of SGD automatically by designing an appropriate decision function. The proposed decision function for Q-learning takes the error rate of the objective function, its gradients, and the current precision configuration as the inputs. Q-learning then chooses proper precision adaptively for hardware efficiency and algorithmic accuracy. We use reconfigurable devices such as FPGAs to evaluate the adaptive precision configurations generated by the proposed Q-learning method. We prototype the framework using LeNet-5 model with MNIST and CIFAR10 datasets and implement it on a Xilinx KCU1500 FPGA board. In the experiments, we analyze the throughput of different precision representations and the precision-selection of our framework. The results show that the proposed framework with adapative precision increases the throughput by up to 4.3× compared to the conventional 32-bit floating point setting, and it achieves both the best hardware efficiency and algorithmic accuracy.","PeriodicalId":413037,"journal":{"name":"2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3240765.3240774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Stochastic gradient descent (SGD) is a widely used algorithm in many applications, especially in the training of deep learning models. Low-precision implementation of SGD has been studied as a major acceleration approach. However, if not used appropriately, a low-precision implementation can deteriorate convergence because of rounding error when gradients become small near a local optimum. In this work, to balance throughput and algorithmic accuracy, we apply Q-learning to adjust the precision of SGD automatically by designing an appropriate decision function. The proposed decision function for Q-learning takes the error rate of the objective function, its gradients, and the current precision configuration as inputs. Q-learning then chooses a proper precision adaptively for hardware efficiency and algorithmic accuracy. We use reconfigurable devices such as FPGAs to evaluate the adaptive precision configurations generated by the proposed Q-learning method. We prototype the framework with the LeNet-5 model on the MNIST and CIFAR-10 datasets and implement it on a Xilinx KCU1500 FPGA board. In the experiments, we analyze the throughput of different precision representations and the precision selection of our framework. The results show that the proposed framework with adaptive precision increases throughput by up to 4.3× compared with the conventional 32-bit floating-point setting, while achieving both the best hardware efficiency and algorithmic accuracy.
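The abstract describes, at a high level, a Q-learning agent whose state combines the objective's error rate, its gradients, and the current precision configuration, and whose action selects the precision for the next training interval. The sketch below illustrates that loop with a tabular Q-learning stand-in for the paper's deep Q-network; the candidate bit widths, state-bucketing, and reward weighting are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Candidate precision configurations (bit widths). This set is a
# hypothetical example, not necessarily the one evaluated in the paper.
PRECISIONS = [8, 16, 32]


class PrecisionAgent:
    """Tabular Q-learning stand-in for the paper's deep Q-network.

    State:  discretized (error rate, gradient magnitude, current precision).
    Action: index into PRECISIONS, i.e., the precision for the next interval.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def discretize(self, error_rate, grad_norm, precision):
        # Coarse buckets keep the table small; bucket edges are assumptions.
        e = min(int(error_rate * 10), 9)
        g = min(int(grad_norm * 10), 9)
        return (e, g, precision)

    def choose(self, state):
        # Epsilon-greedy selection over the candidate precisions.
        if random.random() < self.epsilon:
            return random.randrange(len(PRECISIONS))
        return max(range(len(PRECISIONS)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[(next_state, a)] for a in range(len(PRECISIONS)))
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


def reward(loss_drop, bits):
    # Hypothetical reward balancing accuracy and throughput: reward loss
    # improvement, plus a small bonus for lower bit widths (faster hardware).
    # The 0.01 weight is illustrative only.
    return loss_drop + 0.01 * (32 - bits)


# Example of one decision step inside a training loop:
agent = PrecisionAgent()
state = agent.discretize(error_rate=0.4, grad_norm=0.05, precision=32)
action = agent.choose(state)
bits = PRECISIONS[action]  # run the next SGD interval at this precision
```

In the paper, the decision function is learned by a deep Q-network and the chosen configurations are evaluated on an FPGA; a lookup table is used here only to keep the sketch self-contained.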