Real-time fast learning hardware implementation

JCR: Q3 (Mathematics)
M. Zhang, Samuel Garcia, M. Terré
{"title":"Real-time fast learning hardware implementation","authors":"M. Zhang, Samuel Garcia, M. Terré","doi":"10.1051/smdo/2023001","DOIUrl":null,"url":null,"abstract":"Machine learning algorithms are widely used in many intelligent applications and cloud services. Currently, the hottest topic in this field is Deep Learning represented often by neural network structures. Deep learning is fully known as deep neural network, and artificial neural network is a typical machine learning method and an important way of deep learning. With the massive growth of data, deep learning research has made significant achievements and is widely used in natural language processing (NLP), image recognition, and autonomous driving. However, there are still many breakthroughs needed in the training time and energy consumption of deep learning. Based on our previous research on fast learning architecture for neural network, in this paper, a solution to minimize the learning time of a fully connected neural network is analysed theoretically. Therefore, we propose a new parallel algorithm structure and a training method with over-tuned parameters. This strategy finally leads to an adaptation delay and the impact of this delay on the learning performance is analyzed using a simple benchmark case study. It is shown that a reduction of the adaptation step size could be proposed to compensate errors due to the delayed adaptation, then the gain in processing time for the learning phase is analysed as a function of the network parameters chosen in this study. Finally, to realize the real-time learning, this solution is implemented with a FPGA due to the parallelism architecture and flexibility, this integration shows a good performance and low power consumption.","PeriodicalId":37601,"journal":{"name":"International Journal for Simulation and Multidisciplinary Design Optimization","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal for Simulation and Multidisciplinary Design Optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1051/smdo/2023001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Mathematics","Score":null,"Total":0}
引用次数: 0

Abstract

Machine learning algorithms are widely used in intelligent applications and cloud services. Currently, the most active topic in this field is deep learning, most often represented by neural network structures. Deep learning is short for deep neural network learning, and the artificial neural network is a typical machine learning method and an important route to deep learning. With the massive growth of data, deep learning research has made significant achievements and is widely applied in natural language processing (NLP), image recognition, and autonomous driving. However, breakthroughs are still needed in the training time and energy consumption of deep learning. Building on our previous research on fast learning architectures for neural networks, this paper theoretically analyses a solution that minimizes the learning time of a fully connected neural network. To this end, we propose a new parallel algorithm structure and a training method with over-tuned parameters. This strategy introduces an adaptation delay, and the impact of this delay on learning performance is analysed using a simple benchmark case study. It is shown that a reduction of the adaptation step size can compensate for the errors due to the delayed adaptation; the gain in processing time for the learning phase is then analysed as a function of the network parameters chosen in this study. Finally, to achieve real-time learning, the solution is implemented on an FPGA, whose parallel architecture and flexibility fit the proposed structure; this integration shows good performance and low power consumption.
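The step-size compensation idea described in the abstract can be illustrated with a minimal simulation. The Python sketch below is not the authors' code: it swaps in a simple LMS adaptive-filter benchmark, and the tap count, delay, and step sizes are illustrative assumptions. It adapts weights using an error computed `delay` steps earlier, mimicking a pipeline-induced adaptation delay, and shows that reducing the step size restores convergence at the cost of slower adaptation.

```python
# Minimal sketch of delayed adaptation (illustrative, not the paper's code):
# an LMS-style adaptive filter whose weight update uses the input/error pair
# from `delay` steps in the past, mimicking a pipelined hardware datapath.
import numpy as np

rng = np.random.default_rng(0)

def delayed_lms(delay, mu, n_taps=4, n_samples=3000):
    """Identify an unknown FIR system with (delayed) LMS; return final MSE."""
    h_true = rng.standard_normal(n_taps)          # unknown system to learn
    w = np.zeros(n_taps)                          # adaptive weights
    x = rng.standard_normal(n_samples + n_taps)   # white input signal
    pending = []                                  # buffered (input, error) pairs
    mse = []
    for n in range(n_samples):
        u = x[n:n + n_taps]                       # current input vector
        e = h_true @ u - w @ u                    # a-priori error
        if not np.isfinite(e) or abs(e) > 1e9:    # step size too large: diverged
            return float("inf")
        mse.append(e * e)
        pending.append((u, e))
        if len(pending) > delay:                  # update with a stale gradient
            u_old, e_old = pending.pop(0)
            w += mu * e_old * u_old
    return float(np.mean(mse[-300:]))

# A delay narrows the stable step-size range; reducing mu compensates
# for the delayed adaptation, at the cost of slower convergence.
print("delay=0, mu=0.25 :", delayed_lms(0, 0.25))   # converges
print("delay=8, mu=0.25 :", delayed_lms(8, 0.25))   # diverges
print("delay=8, mu=0.02 :", delayed_lms(8, 0.02))   # converges again
```

This mirrors the trade-off analysed in the paper: a parallel (pipelined) update path buys throughput but introduces a stale-gradient delay, which a smaller adaptation step can absorb.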
Source journal
CiteScore: 2.00
Self-citation rate: 0.00%
Articles per year: 19
Review time: 16 weeks
About the journal
The International Journal for Simulation and Multidisciplinary Design Optimization is a peer-reviewed journal covering all aspects of simulation and multidisciplinary design optimization. It is devoted to publishing original work on advanced design methodologies, theoretical approaches, contemporary computers, and their applications to fields such as engineering software/hardware development, science, computing techniques, aerospace, automotive, aeronautics, business, management, and manufacturing. Cutting-edge research topics related to topology optimization, composite material design, numerical simulation of manufacturing processes, advanced optimization algorithms, and industrial applications of optimization methods are highly encouraged. The scope includes, but is not limited to, original research contributions and reviews on the following topics:
Parameter identification & surface response: all aspects of characterization and modeling of materials and structural behaviors, artificial neural networks, parametric programming, approximation methods, etc.
Optimization strategies: optimization methods involving heuristic or mathematical approaches, control theory, linear & nonlinear programming, stochastic programming, discrete & dynamic programming, operational research, nature-inspired optimization algorithms, etc.
Structural optimization: sizing, shape, and topology optimization, with or without external constraints, for materials and structures.
Dynamics and vibration: modeling and simulation for dynamic and vibration analysis; shape and topology optimization, with or without external constraints, for materials and structures.
Industrial applications: applications of optimization and modeling to engineering are very welcome; authors should underline the technological, numerical, or integration aspects within the scopes mentioned above.