{"title":"基于迁移学习的可扩展GPs超参数自适应共享","authors":"Caie Hu, Sanyou Zeng, Changhe Li","doi":"10.1109/CEC55065.2022.9870288","DOIUrl":null,"url":null,"abstract":"Gaussian processes (GPs) are a kind of non-parametric Bayesian approach. They are widely used as surrogate models in data-driven optimization to approximate the exact functions. However, the cubic computation complexity is involved in building GPs. This paper proposes hyperparameters adaptive sharing based on transfer learning for scalable GPs to address the limitation. In this method, the hyperparameters across source tasks are adaptively shared to the target task by the linear predictor. This method can reduce the computation cost of building GPs without losing capability based on experimental analyses. The method's effectiveness is demonstrated on a set of benchmark problems.","PeriodicalId":153241,"journal":{"name":"2022 IEEE Congress on Evolutionary Computation (CEC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Hyperparameters Adaptive Sharing Based on Transfer Learning for Scalable GPs\",\"authors\":\"Caie Hu, Sanyou Zeng, Changhe Li\",\"doi\":\"10.1109/CEC55065.2022.9870288\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Gaussian processes (GPs) are a kind of non-parametric Bayesian approach. They are widely used as surrogate models in data-driven optimization to approximate the exact functions. However, the cubic computation complexity is involved in building GPs. This paper proposes hyperparameters adaptive sharing based on transfer learning for scalable GPs to address the limitation. In this method, the hyperparameters across source tasks are adaptively shared to the target task by the linear predictor. This method can reduce the computation cost of building GPs without losing capability based on experimental analyses. 
The method's effectiveness is demonstrated on a set of benchmark problems.\",\"PeriodicalId\":153241,\"journal\":{\"name\":\"2022 IEEE Congress on Evolutionary Computation (CEC)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Congress on Evolutionary Computation (CEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CEC55065.2022.9870288\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Congress on Evolutionary Computation (CEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC55065.2022.9870288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
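The abstract does not give the exact form of the linear predictor, so the following is only a minimal sketch of the two ideas it mentions: the O(n³) cost of building a GP comes from factorizing the n×n kernel matrix, and that cost can be sidestepped on a target task by predicting its kernel hyperparameters as a linear combination of hyperparameters already fitted on source tasks, rather than re-optimizing them. All function names and the choice of weights here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale, variance):
    # Squared-exponential kernel between two point sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_log_marginal_likelihood(X, y, lengthscale, variance, noise=1e-6):
    # Building a GP requires a Cholesky factorization of the n x n
    # kernel matrix -- this is the O(n^3) step the paper targets.
    n = len(y)
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

def share_hyperparameters(source_thetas, weights):
    # Hypothetical "sharing" step: the target task's hyperparameters are
    # a normalized linear combination of source-task hyperparameters,
    # avoiding a fresh O(n^3) hyperparameter optimization on the target.
    source_thetas = np.asarray(source_thetas)  # shape (n_sources, n_hypers)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ source_thetas

# (lengthscale, variance) fitted on three hypothetical source tasks:
source_thetas = [(0.8, 1.2), (1.1, 0.9), (1.0, 1.0)]
theta_target = share_hyperparameters(source_thetas, weights=[0.2, 0.5, 0.3])
```

The shared `theta_target` can then be plugged directly into `gp_log_marginal_likelihood` (or a GP predictor) on the target task's data, trading one full hyperparameter optimization for a single linear combination.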