{"title":"“学习提高稳定非线性系统的性能”的勘误","authors":"Luca Furieri;Clara Lucía Galimberti;Giancarlo Ferrari-Trecate","doi":"10.1109/OJCSYS.2025.3529361","DOIUrl":null,"url":null,"abstract":"This addresses errors in [1]. Due to a production error, Figs. 4, 5, 6, 8, and 9 are not rendering correctly in the article PDF. The correct figures are as follows. Figure 4. Mountains—Closed-loop trajectories before training (left) and after training (middle and right) over 100 randomly sampled initial conditions marked with $\\circ$. Snapshots taken at time-instants τ. Colored (gray) lines show the trajectories in [0, τi] ([τi, ∞)). Colored balls (and their radius) represent the agents (and their size for collision avoidance). Figure 5. Mountains—Closed-loop trajectories after 25%, 50% and 75% of the total training whose closed-loop trajectory is shown in Fig. 4. Even if the performance can be further optimized, stability is always guaranteed. Figure 6. Mountains—Closed-loop trajectories after training. (Left and middle) Controller tested over a system with mass uncertainty (-10% and +10%, respectively). (Right) Trained controller with safety promotion through (45). Training initial conditions marked with $\\circ$. Snapshots taken at time-instants τ. Colored (gray) lines show the trajectories in [0, τi] ([τi, ∞)). Colored balls (and their radius) represent the agents (and their size for collision avoidance). Figure 8. Mountains—Closed-loop trajectories when using the online policy given by (48). Snapshots of three trajectories starting at different test initial conditions. Figure 9. Mountains—Three different closed-loop trajectories after training a REN controller without ${\\mathcal{L}}_{2}$ stability guarantees over 100 randomly sampled initial conditions marked with $\\circ$. Colored (gray) lines show the trajectories in (after) the training time interval.","PeriodicalId":73299,"journal":{"name":"IEEE open journal of control systems","volume":"4 ","pages":"53-53"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10870044","citationCount":"0","resultStr":"{\"title\":\"Erratum to “Learning to Boost the Performance of Stable Nonlinear Systems”\",\"authors\":\"Luca Furieri;Clara Lucía Galimberti;Giancarlo Ferrari-Trecate\",\"doi\":\"10.1109/OJCSYS.2025.3529361\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This addresses errors in [1]. Due to a production error, Figs. 4, 5, 6, 8, and 9 are not rendering correctly in the article PDF. The correct figures are as follows. Figure 4. Mountains—Closed-loop trajectories before training (left) and after training (middle and right) over 100 randomly sampled initial conditions marked with $\\\\circ$. Snapshots taken at time-instants τ. Colored (gray) lines show the trajectories in [0, τi] ([τi, ∞)). Colored balls (and their radius) represent the agents (and their size for collision avoidance). Figure 5. Mountains—Closed-loop trajectories after 25%, 50% and 75% of the total training whose closed-loop trajectory is shown in Fig. 4. Even if the performance can be further optimized, stability is always guaranteed. Figure 6. Mountains—Closed-loop trajectories after training. (Left and middle) Controller tested over a system with mass uncertainty (-10% and +10%, respectively). (Right) Trained controller with safety promotion through (45). Training initial conditions marked with $\\\\circ$. Snapshots taken at time-instants τ. 
Colored (gray) lines show the trajectories in [0, τi] ([τi, ∞)). Colored balls (and their radius) represent the agents (and their size for collision avoidance). Figure 8. Mountains—Closed-loop trajectories when using the online policy given by (48). Snapshots of three trajectories starting at different test initial conditions. Figure 9. Mountains—Three different closed-loop trajectories after training a REN controller without ${\\\\mathcal{L}}_{2}$ stability guarantees over 100 randomly sampled initial conditions marked with $\\\\circ$. Colored (gray) lines show the trajectories in (after) the training time interval.\",\"PeriodicalId\":73299,\"journal\":{\"name\":\"IEEE open journal of control systems\",\"volume\":\"4 \",\"pages\":\"53-53\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10870044\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE open journal of control systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10870044/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of control systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10870044/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Erratum to “Learning to Boost the Performance of Stable Nonlinear Systems”
This erratum addresses errors in [1]. Due to a production error, Figs. 4, 5, 6, 8, and 9 do not render correctly in the article PDF. The correct figures are as follows.

Figure 4. Mountains: closed-loop trajectories before training (left) and after training (middle and right) over 100 randomly sampled initial conditions, marked with $\circ$. Snapshots are taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their size for collision avoidance).

Figure 5. Mountains: closed-loop trajectories after 25%, 50%, and 75% of the total training whose final closed-loop trajectories are shown in Fig. 4. Even though the performance can be further optimized, stability is guaranteed at every stage of training.

Figure 6. Mountains: closed-loop trajectories after training. (Left and middle) The controller tested on a system with mass uncertainty (−10% and +10%, respectively). (Right) The controller trained with safety promotion through (45). Training initial conditions are marked with $\circ$. Snapshots are taken at time instants $\tau_i$. Colored (gray) lines show the trajectories in $[0, \tau_i]$ ($[\tau_i, \infty)$). Colored balls (and their radii) represent the agents (and their size for collision avoidance).

Figure 8. Mountains: closed-loop trajectories when using the online policy given by (48). Snapshots of three trajectories starting from different test initial conditions.

Figure 9. Mountains: three different closed-loop trajectories after training a REN controller without $\mathcal{L}_2$ stability guarantees over 100 randomly sampled initial conditions, marked with $\circ$. Colored (gray) lines show the trajectories in (after) the training time interval.
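The corrected captions all describe the same visualization procedure: roll out the closed-loop system from many randomly sampled initial conditions and snapshot the agent states at a few time instants $\tau_i$. Below is a minimal, purely illustrative sketch of that procedure; it is not taken from [1], and the placeholder dynamics f, controller pi, horizon, and snapshot instants are assumptions made up for illustration only.

import numpy as np

def f(x, u, dt=0.05):
    # Placeholder single-agent dynamics (a discretized double integrator);
    # the actual system in [1] is different.
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u])

def pi(x):
    # Placeholder stabilizing state feedback standing in for the trained controller.
    return -1.0 * x[0] - 1.5 * x[1]

rng = np.random.default_rng(0)
n_rollouts, horizon = 100, 400      # 100 randomly sampled initial conditions
snapshot_steps = [50, 150, 300]     # stand-ins for the snapshot instants tau_i

trajectories = np.zeros((n_rollouts, horizon + 1, 2))
trajectories[:, 0, :] = rng.uniform(-1.0, 1.0, size=(n_rollouts, 2))  # the "o" markers

for k in range(horizon):
    for r in range(n_rollouts):
        x = trajectories[r, k]
        trajectories[r, k + 1] = f(x, pi(x))

# One snapshot per tau_i; a plotting routine would draw each trajectory colored
# on [0, tau_i] and gray afterwards, as the captions of Figs. 4 and 6 describe.
snapshots = {k: trajectories[:, k, :] for k in snapshot_steps}
print({k: v.shape for k, v in snapshots.items()})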