Vasu Sharma, Alexander Winkler, Armin Norouzi, Hongsheng Guo, Jakob Andert, David Gordon
IFAC-PapersOnLine, Volume 59, Issue 5, Pages 19-24. Published 2025-01-01. DOI: 10.1016/j.ifacol.2025.07.075
Safe Reinforcement Learning-Based Control for Hydrogen Diesel Dual-Fuel Engines
The urgent requirements of an energy transition towards a sustainable future span multiple industries and pose a significant challenge for humanity. Hydrogen promises a clean, carbon-free future, with the potential to integrate into existing transportation technologies. However, adding hydrogen to existing technologies such as diesel engines requires additional modeling effort. Reinforcement Learning (RL) enables interactive, data-driven learning that eliminates the need for mathematical modeling in controller synthesis. The algorithms, however, may not be real-time capable and need large amounts of data to work in practice. This paper presents a novel approach that uses offline model learning with RL to demonstrate safe control of a 4.5 L Hydrogen Diesel Dual-Fuel (H2DF) engine. An offline H2DF model learning step facilitates the policy search in a simulated environment. The controllers are demonstrated to be constraint-compliant and can leverage a novel state-augmentation approach for sample-efficient learning. The offline policy is subsequently validated experimentally on the real engine, where the control algorithm is executed on a Raspberry Pi controller and requires six times less computation time than online model predictive control optimization.
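The two-stage approach the abstract outlines, fitting a plant model offline from logged data and then searching for a constraint-compliant policy inside that learned model rather than on the real engine, can be sketched in a minimal, purely illustrative form. Everything below is an assumption: toy linear dynamics, a linear feedback policy, and random search stand in for the paper's actual H2DF engine model and RL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: offline model learning --------------------------------
# Pretend logged transitions (state x, action u, next state x'). The
# "true" plant is linear here purely to keep the sketch self-contained;
# the real H2DF dynamics are of course far more complex.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])
X = rng.normal(size=(500, 2))
U = rng.normal(size=(500, 1))
X_next = X @ A_true.T + U @ B_true.T + 0.01 * rng.normal(size=(500, 2))

# Least-squares fit of [A B] from the logged transitions.
Z = np.hstack([X, U])
theta, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T

# --- Stage 2: policy search in the learned model --------------------
# Score linear feedback gains u = -K x by rollout cost inside the
# *learned* model, with a hard clip on |u| standing in for the safety
# constraints mentioned in the abstract.
def rollout_cost(K, steps=50, u_max=2.0):
    x = np.array([1.0, -1.0])
    cost = 0.0
    for _ in range(steps):
        u = np.clip(-K @ x, -u_max, u_max)  # constraint-compliant action
        cost += x @ x + 0.1 * u @ u
        x = A_hat @ x + B_hat @ u
    return cost

# Random search over gains; the zero gain is included as a baseline.
best_K = np.zeros((1, 2))
best_c = rollout_cost(best_K)
for _ in range(200):
    K = rng.normal(size=(1, 2))
    c = rollout_cost(K)
    if c < best_c:
        best_K, best_c = K, c
```

Only the learned model, never the real plant, is queried during the policy search; the resulting policy is then cheap to evaluate at run time, which is consistent with the abstract's point about low computation cost on an embedded controller.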
Journal description:
All papers from IFAC meetings are published, in partnership with Elsevier, the IFAC Publisher, in the IFAC-PapersOnLine proceedings series hosted on the ScienceDirect web service. This series also includes papers previously published on the IFAC website. The main features of the IFAC-PapersOnLine series are:
- An online archive including papers from IFAC Symposia, Congresses, Conferences, and most Workshops.
- All papers accepted at a meeting are published in PDF format, searchable and citable.
- All papers published on the website can be cited using the IFAC-PapersOnLine ISSN and the individual paper DOI (Digital Object Identifier).
The site is Open Access in nature: no charge is made to individuals for reading or downloading. Copyright of all papers belongs to IFAC and must be referenced if derivative journal papers are produced from the conference papers. All papers published in IFAC-PapersOnLine have undergone a peer-review selection process according to the IFAC rules.