{"title":"约束回归的迭代监督学习","authors":"Tejaswi K. C., Taeyoung Lee","doi":"10.1109/ur55393.2022.9826263","DOIUrl":null,"url":null,"abstract":"Regression in supervised learning often requires the enforcement of constraints to ensure that the trained models are consistent with the underlying structures of the input and output data. This paper presents an iterative procedure to perform regression under arbitrary constraints. It is achieved by alternating between a learning step and a constraint enforcement step, to which an affine extension function is incorporated. We show this leads to a contraction mapping under mild assumptions, from which the convergence is guaranteed analytically. The presented proof of convergence in regression with constraints is the unique contribution of this paper. Furthermore, numerical experiments illustrate improvements in the trained model in terms of the quality of regression, the satisfaction of constraints, and also the stability in training, when compared to other existing algorithms.","PeriodicalId":398742,"journal":{"name":"2022 19th International Conference on Ubiquitous Robots (UR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Iterative Supervised Learning for Regression with Constraints\",\"authors\":\"Tejaswi K. C., Taeyoung Lee\",\"doi\":\"10.1109/ur55393.2022.9826263\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Regression in supervised learning often requires the enforcement of constraints to ensure that the trained models are consistent with the underlying structures of the input and output data. This paper presents an iterative procedure to perform regression under arbitrary constraints. It is achieved by alternating between a learning step and a constraint enforcement step, to which an affine extension function is incorporated. We show this leads to a contraction mapping under mild assumptions, from which the convergence is guaranteed analytically. The presented proof of convergence in regression with constraints is the unique contribution of this paper. 
Furthermore, numerical experiments illustrate improvements in the trained model in terms of the quality of regression, the satisfaction of constraints, and also the stability in training, when compared to other existing algorithms.\",\"PeriodicalId\":398742,\"journal\":{\"name\":\"2022 19th International Conference on Ubiquitous Robots (UR)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 19th International Conference on Ubiquitous Robots (UR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ur55393.2022.9826263\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 19th International Conference on Ubiquitous Robots (UR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ur55393.2022.9826263","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Iterative Supervised Learning for Regression with Constraints
Regression in supervised learning often requires the enforcement of constraints to ensure that the trained models are consistent with the underlying structures of the input and output data. This paper presents an iterative procedure for performing regression under arbitrary constraints. It alternates between a learning step and a constraint enforcement step, into which an affine extension function is incorporated. We show that, under mild assumptions, this yields a contraction mapping, from which convergence is guaranteed analytically. This proof of convergence for constrained regression is the key contribution of the paper. Furthermore, numerical experiments show that, compared with existing algorithms, the trained model improves in regression quality, constraint satisfaction, and training stability.
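The abstract only outlines the alternating scheme, so the following is a minimal sketch of one plausible reading of it. The least-squares learner, the box-constraint projection, and the damped affine blending of predictions with projected targets are all illustrative assumptions, not the authors' exact construction (in particular, the paper's affine extension function may differ from the simple convex combination used here).

```python
# Hypothetical sketch of an alternating learn/project loop for constrained
# regression. Learner, projection, and blending step are stand-ins.
import numpy as np

def fit_least_squares(X, y):
    """Learning step: ordinary least squares on a fixed feature map."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def project(y, lo=0.0, hi=1.0):
    """Constraint step: Euclidean projection onto the box [lo, hi]."""
    return np.clip(y, lo, hi)

def iterative_constrained_regression(X, y, n_iters=50, alpha=0.5):
    """Alternate learning and constraint enforcement.

    alpha in (0, 1) damps the update by taking an affine combination of
    the current targets and the projected model predictions; a damping
    factor of this kind is where a contraction argument would enter.
    """
    targets = y.copy()
    for _ in range(n_iters):
        w = fit_least_squares(X, targets)   # learning step
        pred = X @ w                        # current model output
        targets = (1 - alpha) * targets + alpha * project(pred)  # blend
    return w

# Toy usage: 1-D inputs with a bias feature, noisy targets kept in [0, 1].
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
X = np.column_stack([np.ones_like(x), x])
y = np.clip(0.2 + 0.7 * x + 0.1 * rng.standard_normal(100), 0, 1)
w = iterative_constrained_regression(X, y)
print("fitted weights:", w)
```

Structurally this resembles projection-based fixed-point iterations: if the composed learn-project-blend map is a contraction, the iterates converge to a unique fixed point, which is the kind of guarantee the abstract claims under mild assumptions.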