{"title":"A data-ensemble-based approach for sample-efficient LQ control of linear time-varying systems","authors":"Sahel Vahedi Noori, Maryam Babazadeh","doi":"10.1016/j.jfranklin.2025.108118","DOIUrl":null,"url":null,"abstract":"<div><div>This paper presents a sample-efficient, data-driven control framework for finite-horizon linear quadratic (LQ) control of linear time-varying (LTV) systems. In contrast to the time-invariant case, the time-varying LQ problem involves a differential Riccati equation (DRE) with time-dependent parameters and terminal boundary constraints, complicating data-driven control. Additionally, the time-varying dynamics invalidate the use of the Fundamental Lemma. To overcome these challenges, we formulate the LQ problem as a nonconvex optimization problem and conduct a rigorous analysis of its dual structure. By exploiting the inherent convexity of the dual problem and analyzing the KKT conditions, we derive an explicit relationship between the optimal dual solution and the parameters of the associated Q-function in the time-varying case. This theoretical insight supports the development of a novel, sample-efficient, non-iterative semidefinite programming (SDP) algorithm that directly computes the optimal sequence of feedback gains from an ensemble of input-state data sequences without requiring model identification or a stabilizing controller. The resulting convex, data-dependent framework provides global optimality guarantees for completely unknown LTV systems. As a special case, the method also applies to finite-horizon LQ control of linear time-invariant (LTI) systems. In this setting, a single input-state trajectory suffices to identify the optimal LQ feedback policy, improving significantly over existing Q-learning approaches for finite-horizon LTI systems that typically require data from multiple episodes. The approach provides a new optimization-based perspective on Q-learning in time-varying settings and contributes to the broader understanding of data-driven control in non-stationary environments. Simulation results show that, compared to recent methods, the proposed approach achieves superior optimality and sample efficiency on LTV systems, and shows potential for stabilizing and optimal control of nonlinear systems.</div></div>","PeriodicalId":17283,"journal":{"name":"Journal of The Franklin Institute-engineering and Applied Mathematics","volume":"362 16","pages":"Article 108118"},"PeriodicalIF":4.2000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of The Franklin Institute-engineering and Applied Mathematics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0016003225006106","RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
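The abstract contrasts the proposed data-driven SDP with the classical model-based solution of the finite-horizon LTV LQ problem, which requires the system matrices and a backward Riccati recursion. The paper's algorithm is not reproduced here; as context, a minimal sketch of that model-based baseline in discrete time is shown below. The system matrices, weights, and horizon are hypothetical placeholders.

```python
import numpy as np

def lqr_ltv_finite_horizon(A_seq, B_seq, Q, R, Qf):
    """Backward Riccati recursion for finite-horizon LTV LQ control.

    A_seq, B_seq: lists of A_k (n x n) and B_k (n x m) for k = 0..N-1.
    Returns the optimal time-varying gains K_k, so that u_k = -K_k x_k.
    """
    N = len(A_seq)
    P = Qf                      # terminal boundary condition P_N = Q_f
    gains = [None] * N
    for k in range(N - 1, -1, -1):
        A, B = A_seq[k], B_seq[k]
        # K_k = (R + B_k' P_{k+1} B_k)^{-1} B_k' P_{k+1} A_k
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_k = Q + A_k' P_{k+1} (A_k - B_k K_k)
        P = Q + A.T @ P @ (A - B @ K)
        gains[k] = K
    return gains

# Hypothetical 2-state, 1-input LTV system over a horizon of N = 20 steps.
N, n, m = 20, 2, 1
A_seq = [np.array([[1.0, 0.1 * np.sin(0.3 * k)],
                   [0.0, 0.95]]) for k in range(N)]
B_seq = [np.array([[0.0], [0.1]]) for _ in range(N)]
Q, R, Qf = np.eye(n), 0.1 * np.eye(m), 5.0 * np.eye(n)

gains = lqr_ltv_finite_horizon(A_seq, B_seq, Q, R, Qf)

def cost(policy, x0):
    """Accumulate the finite-horizon quadratic cost; policy=None means u = 0."""
    x, J = x0, 0.0
    for k in range(N):
        u = np.zeros(m) if policy is None else -policy[k] @ x
        J += x @ Q @ x + u @ R @ u
        x = A_seq[k] @ x + B_seq[k] @ u
    return J + x @ Qf @ x
```

By optimality of the LQ gains, the closed-loop cost from any initial state is no larger than the cost of the uncontrolled trajectory; the data-driven method in the paper recovers the same gain sequence without access to `A_seq` and `B_seq`.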
Journal description:
The Journal of The Franklin Institute has an established reputation for publishing high-quality papers in the field of engineering and applied mathematics. Its current focus is on control systems, complex networks and dynamic systems, signal processing and communications, and their applications. All submitted papers are peer-reviewed. The Journal publishes original research papers and research review papers of substance. Papers and special focus issues are judged on their potential lasting value, which has been and continues to be the strength of the Journal of The Franklin Institute.