RandALO: Out-of-sample risk estimation in no time flat

Parth T. Nobel, Daniel LeJeune, Emmanuel J. Candès
{"title":"RandALO:快速进行样本外风险评估","authors":"Parth T. Nobel, Daniel LeJeune, Emmanuel J. Candès","doi":"arxiv-2409.09781","DOIUrl":null,"url":null,"abstract":"Estimating out-of-sample risk for models trained on large high-dimensional\ndatasets is an expensive but essential part of the machine learning process,\nenabling practitioners to optimally tune hyperparameters. Cross-validation (CV)\nserves as the de facto standard for risk estimation but poorly trades off high\nbias ($K$-fold CV) for computational cost (leave-one-out CV). We propose a\nrandomized approximate leave-one-out (RandALO) risk estimator that is not only\na consistent estimator of risk in high dimensions but also less computationally\nexpensive than $K$-fold CV. We support our claims with extensive simulations on\nsynthetic and real data and provide a user-friendly Python package implementing\nRandALO available on PyPI as randalo and at https://github.com/cvxgrp/randalo.","PeriodicalId":501379,"journal":{"name":"arXiv - STAT - Statistics Theory","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"RandALO: Out-of-sample risk estimation in no time flat\",\"authors\":\"Parth T. Nobel, Daniel LeJeune, Emmanuel J. Candès\",\"doi\":\"arxiv-2409.09781\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Estimating out-of-sample risk for models trained on large high-dimensional\\ndatasets is an expensive but essential part of the machine learning process,\\nenabling practitioners to optimally tune hyperparameters. Cross-validation (CV)\\nserves as the de facto standard for risk estimation but poorly trades off high\\nbias ($K$-fold CV) for computational cost (leave-one-out CV). We propose a\\nrandomized approximate leave-one-out (RandALO) risk estimator that is not only\\na consistent estimator of risk in high dimensions but also less computationally\\nexpensive than $K$-fold CV. We support our claims with extensive simulations on\\nsynthetic and real data and provide a user-friendly Python package implementing\\nRandALO available on PyPI as randalo and at https://github.com/cvxgrp/randalo.\",\"PeriodicalId\":501379,\"journal\":{\"name\":\"arXiv - STAT - Statistics Theory\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - STAT - Statistics Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09781\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Statistics Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09781","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Estimating out-of-sample risk for models trained on large high-dimensional datasets is an expensive but essential part of the machine learning process, enabling practitioners to optimally tune hyperparameters. Cross-validation (CV) serves as the de facto standard for risk estimation but poorly trades off high bias ($K$-fold CV) for computational cost (leave-one-out CV). We propose a randomized approximate leave-one-out (RandALO) risk estimator that is not only a consistent estimator of risk in high dimensions but also less computationally expensive than $K$-fold CV. We support our claims with extensive simulations on synthetic and real data and provide a user-friendly Python package implementing RandALO available on PyPI as randalo and at https://github.com/cvxgrp/randalo.
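The abstract describes the high-level idea: replace exact leave-one-out with an approximate, randomized computation. The sketch below is not the randalo package API but a minimal, self-contained illustration of the underlying approximate leave-one-out (ALO) principle for the special case of ridge regression, where the leave-one-out residual satisfies the exact identity y_i - yhat_i^{(-i)} = (y_i - yhat_i) / (1 - H_ii) with H the hat matrix. The randomized ingredient shown here is a Hutchinson-style diagonal estimate of H from Rademacher probes; the function names, probe count, and correction details are illustrative assumptions, and the full RandALO method in the paper extends this idea to general regularized M-estimators.

```python
import numpy as np


def ridge_fit(X, y, lam):
    """Solve the ridge regression problem (X^T X + lam I) beta = X^T y."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)


def randomized_alo_risk(X, y, lam, n_probes=30, rng=None):
    """Approximate leave-one-out squared-error risk for ridge regression.

    Uses the identity  y_i - yhat_i^{(-i)} = (y_i - yhat_i) / (1 - H_ii),
    where H = X (X^T X + lam I)^{-1} X^T is the hat matrix, and estimates
    diag(H) with random Rademacher probes instead of forming H explicitly.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    beta = ridge_fit(X, y, lam)
    resid = y - X @ beta

    # Cholesky factor of the regularized Gram matrix, reused for every probe.
    A = X.T @ X + lam * np.eye(p)
    L = np.linalg.cholesky(A)

    # Hutchinson-style diagonal estimate: E[z * (H z)] = diag(H) for
    # independent Rademacher probe vectors z.
    diag_est = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        w = np.linalg.solve(L.T, np.linalg.solve(L, X.T @ z))  # A^{-1} X^T z
        diag_est += z * (X @ w)
    diag_est /= n_probes
    # For lam > 0 the true H_ii lies in [0, 1); clip the Monte Carlo estimate.
    diag_est = np.clip(diag_est, 0.0, 1.0 - 1e-8)

    # Inflate the training residuals to approximate leave-one-out residuals.
    loo_resid = resid / (1.0 - diag_est)
    return np.mean(loo_resid ** 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 200))
    y = X @ (0.1 * rng.standard_normal(200)) + rng.standard_normal(500)
    for lam in [0.1, 1.0, 10.0]:
        print(lam, randomized_alo_risk(X, y, lam, rng=0))
```

The point of the randomization is cost: forming H explicitly requires on the order of n p^2 + n^2 p work, whereas each probe above costs only one matrix-vector product with X and one triangular solve against a factorization computed once, so the risk estimate is obtained from a single model fit plus a handful of cheap linear solves rather than K refits.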