{"title":"过参数化回归方法及其在半监督学习中的应用","authors":"Katsuyuki Hagiwara","doi":"arxiv-2409.04001","DOIUrl":null,"url":null,"abstract":"The minimum norm least squares is an estimation strategy under an\nover-parameterized case and, in machine learning, is known as a helpful tool\nfor understanding a nature of deep learning. In this paper, to apply it in a\ncontext of non-parametric regression problems, we established several methods\nwhich are based on thresholding of SVD (singular value decomposition)\ncomponents, wihch are referred to as SVD regression methods. We considered\nseveral methods that are singular value based thresholding, hard-thresholding\nwith cross validation, universal thresholding and bridge thresholding.\nInformation on output samples is not utilized in the first method while it is\nutilized in the other methods. We then applied them to semi-supervised\nlearning, in which unlabeled input samples are incorporated into kernel\nfunctions in a regressor. The experimental results for real data showed that,\ndepending on the datasets, the SVD regression methods is superior to a naive\nridge regression method. Unfortunately, there were no clear advantage of the\nmethods utilizing information on output samples. Furthermore, for depending on\ndatasets, incorporation of unlabeled input samples into kernels is found to\nhave certain advantages.","PeriodicalId":501425,"journal":{"name":"arXiv - STAT - Methodology","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Over-parameterized regression methods and their application to semi-supervised learning\",\"authors\":\"Katsuyuki Hagiwara\",\"doi\":\"arxiv-2409.04001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The minimum norm least squares is an estimation strategy under an\\nover-parameterized case and, in machine learning, is known as a helpful tool\\nfor understanding a nature of deep learning. In this paper, to apply it in a\\ncontext of non-parametric regression problems, we established several methods\\nwhich are based on thresholding of SVD (singular value decomposition)\\ncomponents, wihch are referred to as SVD regression methods. We considered\\nseveral methods that are singular value based thresholding, hard-thresholding\\nwith cross validation, universal thresholding and bridge thresholding.\\nInformation on output samples is not utilized in the first method while it is\\nutilized in the other methods. We then applied them to semi-supervised\\nlearning, in which unlabeled input samples are incorporated into kernel\\nfunctions in a regressor. The experimental results for real data showed that,\\ndepending on the datasets, the SVD regression methods is superior to a naive\\nridge regression method. Unfortunately, there were no clear advantage of the\\nmethods utilizing information on output samples. 
Furthermore, for depending on\\ndatasets, incorporation of unlabeled input samples into kernels is found to\\nhave certain advantages.\",\"PeriodicalId\":501425,\"journal\":{\"name\":\"arXiv - STAT - Methodology\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - STAT - Methodology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.04001\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Methodology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.04001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Over-parameterized regression methods and their application to semi-supervised learning
Minimum-norm least squares is an estimation strategy for the over-parameterized case and, in machine learning, is known as a helpful tool for understanding the nature of deep learning. In this paper, to apply it in the context of non-parametric regression problems, we established several methods based on thresholding of SVD (singular value decomposition) components, which we refer to as SVD regression methods. We considered four such methods: singular-value-based thresholding, hard thresholding with cross-validation, universal thresholding, and bridge thresholding. Information from the output samples is not used in the first method, while it is used in the other three. We then applied these methods to semi-supervised learning, in which unlabeled input samples are incorporated into the kernel functions of a regressor. Experimental results on real data showed that, depending on the dataset, the SVD regression methods are superior to a naive ridge regression method. Unfortunately, the methods that use output-sample information showed no clear advantage. Furthermore, depending on the dataset, incorporating unlabeled input samples into the kernels was found to have certain advantages.
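The abstract does not spell out the estimator in closed form, so the following NumPy sketch shows one plausible instantiation of the pipeline it describes: a kernel regressor whose centers include both labeled and unlabeled inputs (making the design matrix over-parameterized), fitted by minimum-norm least squares with singular-value-based hard thresholding. The Gaussian kernel, the bandwidth, and the threshold value are illustrative assumptions on my part, not the paper's exact choices.

```python
import numpy as np

def gaussian_kernel(X, centers, width=0.3):
    # Gaussian kernel features: one column per center (assumed kernel choice).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def svd_threshold_regression(K, y, tau):
    """Minimum-norm least squares on the design K, keeping only SVD
    components whose singular values exceed tau (hard thresholding)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > tau
    # Invert only the retained singular values; zero out the rest.
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    # Thresholded pseudo-inverse applied to y gives the minimum-norm solution
    # restricted to the kept components.
    return Vt.T @ (s_inv * (U.T @ y))

rng = np.random.default_rng(0)
X_lab = rng.uniform(-1, 1, size=(30, 1))                      # labeled inputs
y_lab = np.sin(3 * X_lab[:, 0]) + 0.1 * rng.standard_normal(30)
X_unl = rng.uniform(-1, 1, size=(100, 1))                     # unlabeled inputs

# Semi-supervised design: kernel centers at labeled AND unlabeled inputs,
# so the model is over-parameterized (130 columns, only 30 labeled rows).
centers = np.vstack([X_lab, X_unl])
K = gaussian_kernel(X_lab, centers)

coef = svd_threshold_regression(K, y_lab, tau=1e-3)
X_test = np.linspace(-1, 1, 200)[:, None]
y_hat = gaussian_kernel(X_test, centers) @ coef
```

With `tau = 0` this reduces to plain minimum-norm least squares; the paper's other variants differ mainly in how the threshold is chosen (cross-validation, a universal rule, or a bridge penalty), with only the first, singular-value-based rule ignoring the outputs `y_lab` when selecting components.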