Reinforcement learning with soft temporal logic constraints using limit-deterministic generalized Büchi automaton

Mingyu Cai, Zhangli Zhou, Lin Li, Shaoping Xiao, Zhen Kan

Journal of Automation and Intelligence, Vol. 4, Issue 1, pp. 39–51, March 2025. DOI: 10.1016/j.jai.2024.12.005
This paper investigates control synthesis for motion planning under uncertainty in both robot motion and environmental properties, modeled using a probabilistic labeled Markov decision process (PL-MDP). A model-free reinforcement learning (RL) approach is designed to produce a finite-memory control policy that satisfies complex tasks specified by linear temporal logic (LTL) formulas. Recognizing that uncertainties and potentially conflicting objectives can render a specification unsatisfiable, the study focuses on handling infeasible LTL specifications: a relaxed LTL constraint lets the agent revise its motion plan and achieve partial satisfaction by accounting for only the necessary task violations. Additionally, a new automaton structure is introduced that increases the density of accepting rewards, facilitating deterministic policy outcomes. The proposed RL framework is rigorously analyzed and prioritizes two key objectives: (1) satisfying the acceptance condition of the relaxed product MDP, and (2) minimizing long-term violation costs. Simulation and experimental results demonstrate the framework’s effectiveness and robustness.
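To make the reward structure described above concrete, the following Python sketch illustrates the general pattern: an MDP is composed with an automaton, visits to an accepting set earn reward, and soft-constraint violations incur a cost, with tabular Q-learning run over the product state space. The toy MDP, labeling function, automaton transition function `delta`, and all parameter values are hypothetical placeholders invented for illustration; this is not the paper's actual construction or its LDGBA.

```python
# Minimal illustrative sketch (assumed names and values, not the authors'
# implementation): compose a toy labeled MDP with a 2-state automaton,
# reward visits to the accepting set, penalize soft violations, and run
# tabular Q-learning on the product state space.
import random
from collections import defaultdict

MDP_STATES = ["s0", "s1", "s2"]
ACTIONS = ["a", "b"]
# P[(state, action)] = list of (next_state, probability) -- toy dynamics
P = {
    ("s0", "a"): [("s1", 0.9), ("s0", 0.1)],
    ("s0", "b"): [("s2", 0.9), ("s0", 0.1)],
    ("s1", "a"): [("s0", 1.0)],
    ("s1", "b"): [("s2", 1.0)],
    ("s2", "a"): [("s0", 1.0)],
    ("s2", "b"): [("s1", 1.0)],
}
LABEL = {"s0": set(), "s1": {"goal"}, "s2": {"hazard"}}  # hypothetical APs

def delta(q, label):
    """Toy automaton for 'visit goal infinitely often, avoid hazard'.
    Returns (next_automaton_state, violation_cost); a nonzero cost stands
    in for a relaxed (soft) constraint violation rather than a dead end."""
    if "hazard" in label:
        return q, 1.0          # soft violation: stay in place, pay a cost
    if "goal" in label:
        return "q_acc", 0.0    # progress into the accepting set
    return "q0", 0.0

ACCEPTING = {"q_acc"}
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2
Q = defaultdict(float)

def step(s, a):
    nexts, probs = zip(*P[(s, a)])
    return random.choices(nexts, weights=probs)[0]

for episode in range(2000):
    s, q = "s0", "q0"
    for _ in range(50):
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda u: Q[((s, q), u)]))
        s2 = step(s, a)
        q2, cost = delta(q, LABEL[s2])
        # Accepting-set reward minus long-term violation cost, mirroring
        # the two prioritized objectives in the abstract.
        r = (1.0 if q2 in ACCEPTING else 0.0) - cost
        best_next = max(Q[((s2, q2), u)] for u in ACTIONS)
        Q[((s, q), a)] += ALPHA * (r + GAMMA * best_next - Q[((s, q), a)])
        s, q = s2, q2

# Greedy policy over product states: conditioning on the automaton state
# is what gives the learned policy its finite memory.
policy = {ps: max(ACTIONS, key=lambda u: Q[(ps, u)])
          for ps in {(s, q) for s in MDP_STATES for q in ("q0", "q_acc")}}
print(policy)
```

Note the design point the sketch is meant to surface: because reward is paid each time the automaton re-enters the accepting set, denser accepting sets (as in the paper's generalized Büchi construction) yield more frequent learning signal than a single rarely-reached accepting state would.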