Deep Q-learning for Control: Technique and Implementation Considerations on a Physical System: Active Automotive Rear Spoiler Case

Velazquez Espitia Victor Miguel, Gonzalez Gonzalez Jose Angel, Mat Omar, Ponce Pedro, Molina G. Arturo
Published in: 2021 IEEE International Conference on Mechatronics and Automation (ICMA), 2021-08-08.
DOI: 10.1109/ICMA52036.2021.9512669

Abstract

Deep Q-learning combines the advantages of artificial neural networks (ANNs) with Q-learning. ANNs have expanded the possibilities of a variety of algorithms by enhancing their capabilities and overcoming their limitations; reinforcement learning is one such case. Today, Deep Q-learning is used in applications across many fields, including the development of intelligent algorithms to control physical systems. Deep Q-learning has demonstrated that effective results can be achieved on specific tasks that are highly complex to model through classical approaches. An important drawback is that these models require an elaborate implementation process, and several design decisions must be made to achieve reliable results; developers often find the design process mostly experimental rather than rule-based. Addressing this problem, the present work describes in detail the implementation process of Deep Q-learning to control a physical system and proposes considerations and analysis parameters for each of the main steps. Demonstrated through the development of an active automotive rear spoiler, the results present a methodology that successfully guides a proper implementation of Deep Q-learning. The knowledge in this paper should not be taken as a recipe, but rather as an evaluation reference that equips reinforcement learning developers with tools for the development of projects.
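The core of the technique the abstract describes can be sketched as a temporal-difference update toward a bootstrapped target, Q(s, a) ← Q(s, a) + α [r + γ max<sub>a'</sub> Q(s', a') − Q(s, a)]. The minimal sketch below substitutes a linear Q-function for the ANN to keep it short; all names, dimensions, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical setup: a tiny discrete problem standing in for the spoiler
# controller. States might be angle-of-attack bins, actions actuator commands.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
W = rng.normal(scale=0.1, size=(n_states, n_actions))  # linear Q in place of the ANN

gamma, lr = 0.99, 0.1  # discount factor and learning rate (illustrative values)

def q_values(s):
    """Q(s, ·) for a one-hot state encoding: Q(s, a) = (x @ W)[a]."""
    x = np.eye(n_states)[s]
    return x @ W

def dqn_update(s, a, r, s_next, done):
    """One gradient step on the squared TD error for a single transition."""
    global W
    target = r if done else r + gamma * np.max(q_values(s_next))
    td_error = target - q_values(s)[a]
    # For one-hot features, the gradient of 0.5 * td_error**2 touches only W[s, a].
    W[s, a] += lr * td_error
    return td_error

# One transition: reward 1.0 for taking action 0 in state 0, landing in state 1.
err = dqn_update(s=0, a=0, r=1.0, s_next=1, done=False)
```

In full Deep Q-learning the linear table `W` is replaced by a neural network trained by stochastic gradient descent on the same TD target, typically with an experience-replay buffer and a periodically frozen target network; those are exactly the kinds of design decisions the paper's methodology addresses.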