Reinforcement learning applied to dilute combustion control for increased fuel efficiency

Impact Factor: 2.2 | CAS Tier 4 (Engineering & Technology) | JCR Q2 (Engineering, Mechanical)
Bryan P Maldonado, Brian C Kaul, Catherine D Schuman, Steven R Young
{"title":"Reinforcement learning applied to dilute combustion control for increased fuel efficiency","authors":"Bryan P Maldonado, Brian C Kaul, Catherine D Schuman, Steven R Young","doi":"10.1177/14680874241226580","DOIUrl":null,"url":null,"abstract":"To reduce the modeling burden for control of spark-ignition engines, reinforcement learning (RL) has been applied to solve the dilute combustion limit problem. Q-learning was used to identify an optimal control policy to adjust the fuel injection quantity in each combustion cycle. A physics-based model was used to determine the relevant states of the system used for training the control policy in a data-efficient manner. The cost function was chosen such that high cycle-to-cycle variability (CCV) at the dilute limit was minimized while maintaining stoichiometric combustion as much as possible. Experimental results demonstrated a reduction of CCV after the training period with slightly lean combustion, contributing to a net increase in fuel conversion efficiency of 1.33%. To ensure stoichiometric combustion for three-way catalyst compatibility, a second feedback loop based on an exhaust oxygen sensor was incorporated into the fuel quantity controller using a slow proportional-integral (PI) controller. The closed-loop experiments showed that both feedback loops can cooperate effectively, maintaining stoichiometric combustion while reducing combustion CCV and increasing fuel conversion efficiency by 1.09%. Finally, a modified cost function was proposed to ensure stoichiometric combustion with a single controller. In addition, the learning period was shortened by half to evaluate the RL algorithm performance on limited training time. Experimental results showed that the modified cost function could achieve the desired CCV targets, however, the learning time was reduced by half and the fuel conversion efficiency increased only by 0.30%.","PeriodicalId":14034,"journal":{"name":"International Journal of Engine Research","volume":"9 1","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Engine Research","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1177/14680874241226580","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, MECHANICAL","Score":null,"Total":0}
引用次数: 0

Abstract

To reduce the modeling burden for control of spark-ignition engines, reinforcement learning (RL) has been applied to solve the dilute combustion limit problem. Q-learning was used to identify an optimal control policy that adjusts the fuel injection quantity in each combustion cycle. A physics-based model was used to determine the relevant states of the system, allowing the control policy to be trained in a data-efficient manner. The cost function was chosen such that high cycle-to-cycle variability (CCV) at the dilute limit was minimized while maintaining stoichiometric combustion as much as possible. Experimental results demonstrated a reduction in CCV after the training period, with slightly lean combustion, contributing to a net increase in fuel conversion efficiency of 1.33%. To ensure stoichiometric combustion for three-way catalyst compatibility, a second feedback loop based on an exhaust oxygen sensor was incorporated into the fuel quantity controller using a slow proportional-integral (PI) controller. Closed-loop experiments showed that the two feedback loops cooperate effectively, maintaining stoichiometric combustion while reducing combustion CCV and increasing fuel conversion efficiency by 1.09%. Finally, a modified cost function was proposed to ensure stoichiometric combustion with a single controller. In addition, the learning period was shortened by half to evaluate the RL algorithm's performance with limited training time. Experimental results showed that the modified cost function could achieve the desired CCV targets; however, with the halved learning time, the fuel conversion efficiency increased by only 0.30%.
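As a rough illustration of the dual-loop control structure the abstract describes, the sketch below (not from the paper) implements tabular Q-learning over a hypothetical discretized combustion state, with a cost that penalizes cycle-to-cycle IMEP variability and deviation from stoichiometry, plus a slow PI trim driven by the exhaust oxygen sensor. The state definition, discretization, gains, weights, and fuel-step size are all placeholder assumptions; the paper derives its actual states from a physics-based model and does not publish these values in the abstract.

```python
import numpy as np

# Hypothetical setup: the paper's actual state definition, hyperparameters,
# and cost weights are assumptions here, not published values.
N_STATES = 20          # assumed discretized combustion states
N_ACTIONS = 3          # decrease / hold / increase fuel quantity
FUEL_STEP = 0.1        # assumed fuel increment per action (mg/cycle)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.05   # assumed Q-learning hyperparameters
W_CCV, W_PHI = 1.0, 0.5              # assumed cost-function weights

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def cost(imep, imep_target, phi):
    """Penalize cycle-to-cycle IMEP deviation (a CCV proxy) and
    departure from stoichiometry (equivalence ratio phi = 1)."""
    return W_CCV * (imep - imep_target) ** 2 + W_PHI * (phi - 1.0) ** 2

def choose_action(state):
    """Epsilon-greedy selection of the cost-minimizing action."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmin(Q[state]))

def q_update(s, a, c, s_next):
    """Tabular Q-learning step; min() because Q stores costs, not rewards."""
    Q[s, a] += ALPHA * (c + GAMMA * np.min(Q[s_next]) - Q[s, a])

class SlowPI:
    """Slow PI trim on the measured equivalence ratio from the exhaust
    oxygen sensor; the gains here are placeholders, not the paper's."""
    def __init__(self, kp=0.01, ki=0.001):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def update(self, phi_measured):
        err = 1.0 - phi_measured       # stoichiometric target: phi = 1
        self.integral += err
        return self.kp * err + self.ki * self.integral

# Per-cycle command combining both loops (engine I/O stubbed out):
#   fuel_cmd = base_fuel + FUEL_STEP * (action - 1) + pi.update(phi_measured)
```

The key design point the abstract highlights is the separation of time scales: the RL loop acts cycle-by-cycle on CCV, while the PI loop is deliberately slow so the two corrections do not fight each other; the paper's final variant instead folds the stoichiometry objective into a modified cost function so a single controller suffices.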
Source journal: International Journal of Engine Research (Engineering, Mechanical)
CiteScore: 6.50
Self-citation rate: 16.00%
Annual articles: 130
Review time: >12 weeks
Journal description: The International Journal of Engine Research publishes high-quality papers on experimental and analytical studies of engine technology.