{"title":"2 × 2双曲偏微分方程的反演神经算子","authors":"Shanshan Wang , Mamadou Diagne , Miroslav Krstic","doi":"10.1016/j.automatica.2025.112351","DOIUrl":null,"url":null,"abstract":"<div><div>Deep neural network approximation of nonlinear operators, commonly referred to as DeepONet, has proven capable of approximating PDE backstepping designs in which a single Goursat-form PDE governs a single feedback gain function. In boundary control of coupled hyperbolic PDEs, coupled Goursat-form PDEs govern two or more gain kernels — a structure unaddressed thus far with DeepONet. In this contribution, we open the subject of approximating systems of gain kernel PDEs by considering a counter-convecting 2 × 2 hyperbolic system whose backstepping boundary controller and observer gains are the solutions to 2 × 2 kernel PDE systems in Goursat form. We establish the continuity of the mapping from (a total of five) functional coefficients of the plant to the kernel PDEs solutions, prove the existence of an arbitrarily close DeepONet approximation to the kernel PDEs, and ensure that the DeepONet-based approximated gains guarantee stabilization when replacing the exact backstepping gain kernel functions. Taking into account anti-collocated boundary actuation and sensing, our <span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span><em>-globally-exponentially stabilizing (GES)</em> control law requires the deep learning of both the controller and the observer gains. Moreover, the encoding of the feedback law into DeepONet ensures <em>semi-global practical exponential stability (SG-PES),</em> as established in our result. The neural operators (NOs) speed up the computation of both controller and observer gains by multiple orders of magnitude. Its theoretically proved stabilizing capability is demonstrated through simulations.</div></div>","PeriodicalId":55413,"journal":{"name":"Automatica","volume":"178 ","pages":"Article 112351"},"PeriodicalIF":4.8000,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Backstepping neural operators for 2 × 2 hyperbolic PDEs\",\"authors\":\"Shanshan Wang , Mamadou Diagne , Miroslav Krstic\",\"doi\":\"10.1016/j.automatica.2025.112351\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deep neural network approximation of nonlinear operators, commonly referred to as DeepONet, has proven capable of approximating PDE backstepping designs in which a single Goursat-form PDE governs a single feedback gain function. In boundary control of coupled hyperbolic PDEs, coupled Goursat-form PDEs govern two or more gain kernels — a structure unaddressed thus far with DeepONet. In this contribution, we open the subject of approximating systems of gain kernel PDEs by considering a counter-convecting 2 × 2 hyperbolic system whose backstepping boundary controller and observer gains are the solutions to 2 × 2 kernel PDE systems in Goursat form. We establish the continuity of the mapping from (a total of five) functional coefficients of the plant to the kernel PDEs solutions, prove the existence of an arbitrarily close DeepONet approximation to the kernel PDEs, and ensure that the DeepONet-based approximated gains guarantee stabilization when replacing the exact backstepping gain kernel functions. 
Taking into account anti-collocated boundary actuation and sensing, our <span><math><msup><mrow><mi>L</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span><em>-globally-exponentially stabilizing (GES)</em> control law requires the deep learning of both the controller and the observer gains. Moreover, the encoding of the feedback law into DeepONet ensures <em>semi-global practical exponential stability (SG-PES),</em> as established in our result. The neural operators (NOs) speed up the computation of both controller and observer gains by multiple orders of magnitude. Its theoretically proved stabilizing capability is demonstrated through simulations.</div></div>\",\"PeriodicalId\":55413,\"journal\":{\"name\":\"Automatica\",\"volume\":\"178 \",\"pages\":\"Article 112351\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automatica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0005109825002444\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automatica","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0005109825002444","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Backstepping neural operators for 2 × 2 hyperbolic PDEs
Deep neural network approximation of nonlinear operators, commonly referred to as DeepONet, has proven capable of approximating PDE backstepping designs in which a single Goursat-form PDE governs a single feedback gain function. In boundary control of coupled hyperbolic PDEs, coupled Goursat-form PDEs govern two or more gain kernels, a structure unaddressed thus far with DeepONet. In this contribution, we open the subject of approximating systems of gain kernel PDEs by considering a counter-convecting 2 × 2 hyperbolic system whose backstepping boundary controller and observer gains are the solutions to 2 × 2 kernel PDE systems in Goursat form. We establish the continuity of the mapping from (a total of five) functional coefficients of the plant to the kernel PDE solutions, prove the existence of an arbitrarily close DeepONet approximation to the kernel PDE solutions, and ensure that the DeepONet-based approximated gains guarantee stabilization when replacing the exact backstepping gain kernel functions. Taking into account anti-collocated boundary actuation and sensing, our L²-globally-exponentially stabilizing (GES) control law requires the deep learning of both the controller and the observer gains. Moreover, the encoding of the feedback law into DeepONet ensures semi-global practical exponential stability (SG-PES), as established in our result. The neural operators (NOs) speed up the computation of both controller and observer gains by multiple orders of magnitude. Their theoretically proven stabilizing capability is demonstrated through simulations.
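To make the operator-learning step described in the abstract concrete, the following is a minimal, hypothetical DeepONet sketch in PyTorch, not the authors' implementation: a branch network encodes the plant's coefficient functions sampled on a grid, a trunk network encodes a query point (x, ξ) in the kernel's domain (for Goursat-form kernel equations, typically the triangle 0 ≤ ξ ≤ x ≤ 1), and their inner product returns an estimate of one gain-kernel value. All layer sizes, names, and the number of grid samples are illustrative assumptions; training pairs would come from a numerical solver of the kernel equations.

    # Minimal DeepONet sketch (illustrative only; not the paper's architecture).
    import torch
    import torch.nn as nn

    m, p = 50, 64  # assumed: samples per coefficient function, latent width

    # Branch net: encodes five plant coefficient functions, each sampled at m points.
    branch = nn.Sequential(nn.Linear(5 * m, 128), nn.Tanh(),
                           nn.Linear(128, 128), nn.Tanh(),
                           nn.Linear(128, p))
    # Trunk net: encodes a query point (x, xi) in the kernel domain.
    trunk = nn.Sequential(nn.Linear(2, 128), nn.Tanh(),
                          nn.Linear(128, 128), nn.Tanh(),
                          nn.Linear(128, p))

    def kernel_estimate(coeffs, query):
        # coeffs: (batch, 5*m) stacked coefficient samples; query: (batch, 2) points (x, xi).
        return (branch(coeffs) * trunk(query)).sum(dim=-1, keepdim=True)

    # Shape check with random stand-in data (real targets would be kernel values
    # computed offline by a numerical kernel-PDE solver).
    coeffs = torch.rand(8, 5 * m)
    query = torch.rand(8, 2)
    query[:, 1] = query[:, 1] * query[:, 0]      # enforce 0 <= xi <= x
    print(kernel_estimate(coeffs, query).shape)  # torch.Size([8, 1])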
Journal introduction:
Automatica is a leading archival publication in the field of systems and control. Today the field encompasses a broad set of areas and topics, and is thriving not only in its own right but also through its impact on other fields such as communications, computing, biology, energy, and economics. Since its inception in 1963, Automatica has kept abreast of the field's evolution and has emerged as a leading publication driving its trends.
Founded in 1963, Automatica became a journal of the International Federation of Automatic Control (IFAC) in 1969. It features a characteristic blend of theoretical and applied papers of archival, lasting value, reporting cutting-edge research results by authors across the globe. Articles appear in distinct categories, including regular, brief, and survey papers, technical communiqués, correspondence items, and reviews of published books of interest to the readership. It occasionally publishes special issues on emerging new topics or on established mature topics of interest to a broad audience.
Automatica solicits original, high-quality contributions in all of the categories listed above and in all areas of systems and control, interpreted in a broad and constantly evolving sense. Contributions may be submitted directly to a subject editor, or to the Editor-in-Chief if the author is unsure of the appropriate subject area. The editorial procedures in place ensure careful, fair, and prompt handling of all submitted articles. Accepted papers appear in the journal in the shortest time feasible given production constraints.