Maximilian P. Niroomand, L. Dicks, Edward Pyzer-Knapp, David J. Wales
Machine Learning: Science and Technology, vol. 10, no. 3, published 2024-07-12
DOI: https://doi.org/10.1088/2632-2153/ad62ad
Citations: 0
Explainable Gaussian Processes: a loss landscape perspective

Abstract
Prior beliefs about the latent function, which shape its inductive biases, can be incorporated into a Gaussian process (GP) via the kernel. However, beyond the choice of kernel, the decision-making process of GP models remains poorly understood. In this work, we contribute an analysis of the loss landscape of GP models using methods from chemical physics. We demonstrate ν-continuity for Matérn kernels and outline aspects of catastrophe theory at critical points in the loss landscape. By including ν directly in the hyperparameter optimisation for Matérn kernels, we find that typical values of ν can be far from optimal in terms of performance. We also provide an a priori method for evaluating the effect of GP ensembles and discuss various voting approaches based on physical properties of the loss landscape. The utility of these approaches is demonstrated on a range of synthetic and real datasets. Our findings provide insight into hyperparameter optimisation for GPs and offer practical guidance for improving their performance and interpretability in a range of applications.
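A minimal sketch of the idea of treating ν as a tunable hyperparameter, using scikit-learn's Matérn kernel. This is only an illustration under assumed settings (a toy 1-D regression task, a small grid of candidate ν values compared by optimised log marginal likelihood); the paper itself optimises ν directly and continuously, which scikit-learn does not support out of the box.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# scikit-learn keeps nu fixed during fitting (only the length-scale is
# optimised by gradient ascent on the marginal likelihood), so here we
# compare a grid of the "typical" nu values; nu=inf recovers the RBF kernel.
results = {}
for nu in (0.5, 1.5, 2.5, np.inf):
    gp = GaussianProcessRegressor(
        kernel=Matern(length_scale=1.0, nu=nu),
        alpha=0.01,          # assumed observation-noise level
        normalize_y=True,
    )
    gp.fit(X, y)
    results[nu] = gp.log_marginal_likelihood_value_

# The nu with the highest optimised log marginal likelihood; as the paper
# notes, this need not be one of the conventional choices 1/2, 3/2, 5/2.
best_nu = max(results, key=results.get)
```

Each candidate ν defines a different loss landscape over the remaining hyperparameters, which is the object the paper analyses with tools from chemical physics.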