Optimizing Fuel Injection Timing for Multiple Injection Using Reinforcement Learning and Functional Mock-up Unit for a Small-bore Diesel Engine

Abhijeet Vaze, Pramod S. Mehta, Anand Krishnasamy
{"title":"利用强化学习和小口径柴油发动机的功能模拟装置优化多次喷射的燃油喷射时机","authors":"Abhijeet Vaze, Pramod S. Mehta, Anand Krishnasamy","doi":"10.4271/03-17-06-0041","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) is a computational approach to understanding and\n automating goal-directed learning and decision-making. The difference from other\n computational approaches is the emphasis on learning by an agent from direct\n interaction with its environment to achieve long-term goals [1]. In this work, the RL algorithm was\n implemented using Python. This then enables the RL algorithm to make decisions\n to optimize the output from the system and provide real-time adaptation to\n changes and their retention for future usage. A diesel engine is a complex\n system where a RL algorithm can address the NOx–soot emissions\n trade-off by controlling fuel injection quantity and timing. This study used RL\n to optimize the fuel injection timing to get a better NO–soot trade-off for a\n common rail diesel engine. The diesel engine utilizes a pilot–main and a\n pilot–main–post-fuel injection strategy. Change of fuel injection quantity was\n not attempted in this study as the main objective was to demonstrate the use of\n RL algorithms while maintaining a constant indicated mean effective pressure. A\n change in fuel quantity has a larger influence on the indicated mean effective\n pressure than a change in fuel injection timing. The focus of this work was to\n present a novel methodology of using the 3D combustion data from analysis\n software in the form of a functional mock-up unit (FMU) and showcasing the\n implementation of a RL algorithm in Python language to interact with the FMU to\n reduce the NO and soot emissions by suggesting changes to the main injection\n timing in a pilot–main and pilot–main–post-injection strategy. RL algorithms\n identified the operating injection strategy, i.e., main injection timing for a\n pilot–main and pilot–main–post-injection strategy, reducing NO emissions from\n 38% to 56% and soot emissions from 10% to 90% for a range of fuel injection\n strategies.","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Optimizing Fuel Injection Timing for Multiple Injection Using\\n Reinforcement Learning and Functional Mock-up Unit for a Small-bore Diesel\\n Engine\",\"authors\":\"Abhijeet Vaze, Pramod S. Mehta, Anand Krishnasamy\",\"doi\":\"10.4271/03-17-06-0041\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning (RL) is a computational approach to understanding and\\n automating goal-directed learning and decision-making. The difference from other\\n computational approaches is the emphasis on learning by an agent from direct\\n interaction with its environment to achieve long-term goals [1]. In this work, the RL algorithm was\\n implemented using Python. This then enables the RL algorithm to make decisions\\n to optimize the output from the system and provide real-time adaptation to\\n changes and their retention for future usage. A diesel engine is a complex\\n system where a RL algorithm can address the NOx–soot emissions\\n trade-off by controlling fuel injection quantity and timing. This study used RL\\n to optimize the fuel injection timing to get a better NO–soot trade-off for a\\n common rail diesel engine. 
The diesel engine utilizes a pilot–main and a\\n pilot–main–post-fuel injection strategy. Change of fuel injection quantity was\\n not attempted in this study as the main objective was to demonstrate the use of\\n RL algorithms while maintaining a constant indicated mean effective pressure. A\\n change in fuel quantity has a larger influence on the indicated mean effective\\n pressure than a change in fuel injection timing. The focus of this work was to\\n present a novel methodology of using the 3D combustion data from analysis\\n software in the form of a functional mock-up unit (FMU) and showcasing the\\n implementation of a RL algorithm in Python language to interact with the FMU to\\n reduce the NO and soot emissions by suggesting changes to the main injection\\n timing in a pilot–main and pilot–main–post-injection strategy. RL algorithms\\n identified the operating injection strategy, i.e., main injection timing for a\\n pilot–main and pilot–main–post-injection strategy, reducing NO emissions from\\n 38% to 56% and soot emissions from 10% to 90% for a range of fuel injection\\n strategies.\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2024-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.4271/03-17-06-0041\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4271/03-17-06-0041","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Cited by: 0

Abstract

Reinforcement learning (RL) is a computational approach to understanding and automating goal-directed learning and decision-making. It differs from other computational approaches in its emphasis on an agent learning from direct interaction with its environment to achieve long-term goals [1]. In this work, the RL algorithm was implemented in Python, enabling it to make decisions that optimize the system output, adapt in real time to changes, and retain what it has learned for future use. A diesel engine is a complex system in which an RL algorithm can address the NOx–soot emissions trade-off by controlling fuel injection quantity and timing. This study used RL to optimize fuel injection timing to obtain a better NO–soot trade-off for a common rail diesel engine operating with pilot–main and pilot–main–post fuel injection strategies. Changing the fuel injection quantity was not attempted, because the main objective was to demonstrate the use of RL algorithms while maintaining a constant indicated mean effective pressure (IMEP), and a change in fuel quantity influences the IMEP more strongly than a change in injection timing does. The focus of this work was to present a novel methodology that packages 3D combustion data from analysis software as a functional mock-up unit (FMU) and to showcase a Python implementation of an RL algorithm that interacts with the FMU to reduce NO and soot emissions by suggesting changes to the main injection timing in the pilot–main and pilot–main–post injection strategies. The RL algorithm identified the best operating injection strategy, i.e., the main injection timing for the pilot–main and pilot–main–post strategies, reducing NO emissions by 38% to 56% and soot emissions by 10% to 90% across the range of fuel injection strategies studied.
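The abstract does not include source code, but the agent–environment loop it describes is straightforward to sketch. The minimal example below uses the open-source FMPy library and a tabular ε-greedy learner to show how a Python RL loop could interact with a combustion FMU that exposes main injection timing as an input and NO and soot as outputs. The FMU file name, the variable names (main_soi_deg, NO_ppm, soot_mg), the timing grid, and the reward weights are all hypothetical placeholders, not values from the study.

```python
# Minimal sketch of an RL loop tuning main injection timing through an FMU.
# Assumptions (not from the paper): FMPy is installed, and "engine.fmu" is a
# co-simulation FMU exposing an input "main_soi_deg" (main start of injection,
# deg CA) and outputs "NO_ppm" and "soot_mg".
import random
import numpy as np
from fmpy import simulate_fmu

FMU_PATH = "engine.fmu"               # hypothetical FMU exported from analysis software
ACTIONS = np.arange(-10.0, 2.1, 1.0)  # candidate main SOI values, deg CA (placeholder grid)

def run_cycle(main_soi):
    """Simulate one engine cycle in the FMU and return (NO, soot)."""
    result = simulate_fmu(
        FMU_PATH,
        start_values={"main_soi_deg": float(main_soi)},
        output=["NO_ppm", "soot_mg"],
    )
    # Use the end-of-cycle values from the structured result array.
    return result["NO_ppm"][-1], result["soot_mg"][-1]

def reward(no, soot, w_no=1.0, w_soot=1.0):
    """Penalize both pollutants; the weights set the NO-soot trade-off."""
    return -(w_no * no + w_soot * soot)

# Tabular Q-learning over the discrete timing grid (single-state bandit form).
q = np.zeros(len(ACTIONS))
alpha, epsilon = 0.1, 0.2
for episode in range(200):
    # Epsilon-greedy action selection: explore sometimes, otherwise exploit.
    if random.random() < epsilon:
        i = random.randrange(len(ACTIONS))
    else:
        i = int(np.argmax(q))
    no, soot = run_cycle(ACTIONS[i])
    q[i] += alpha * (reward(no, soot) - q[i])  # incremental value update

print(f"Suggested main SOI: {ACTIONS[int(np.argmax(q))]:.1f} deg CA")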