ALR-HT: A fast and efficient Lasso regression without hyperparameter tuning
Yuhang Wang, Bin Zou, Jie Xu, Chen Xu, Yuan Yan Tang
DOI: 10.1016/j.neunet.2024.106885
Neural Networks, Volume 181, Article 106885. Published 2024-11-12.
Citations: 0
Abstract
Lasso regression, known for its efficacy in high-dimensional data analysis and feature selection, is a cornerstone of supervised learning for regression estimation. However, hyperparameter tuning for Lasso regression is often time-consuming and susceptible to noisy data in big-data scenarios. In this paper we introduce a new additive Lasso Regression without Hyperparameter Tuning (ALR-HT) that integrates Markov resampling with additive models. We estimate the generalization bounds of the proposed ALR-HT and establish its fast learning rate. Experimental results on benchmark datasets confirm that ALR-HT outperforms other algorithms in terms of total sampling and training time and mean squared error (MSE). We discuss the ALR-HT algorithm and apply it to Ridge regression to demonstrate its versatility and effectiveness in regularized regression settings.
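For context, the Lasso estimator minimizes the squared error plus an l1 penalty, (1/2n)·||y − Xβ||² + α·||β||₁, and the regularization weight α is the hyperparameter conventionally tuned by cross-validation. As a point of reference only, the sketch below contrasts that conventional cross-validated baseline with a toy Markov-resampling loop in the spirit the abstract describes; the acceptance rule, the fixed α, and the subset size are illustrative assumptions, not the authors' ALR-HT algorithm.

```python
# Illustrative sketch only -- NOT the authors' ALR-HT implementation.
# It contrasts the cross-validated Lasso baseline (the hyperparameter
# tuning the paper seeks to avoid) with a toy Markov-resampling loop.
# The acceptance rule, the fixed alpha, and the subset size below are
# assumptions made for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LassoCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=50, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: choose alpha by 5-fold cross-validation (costly on big data,
# since the model is refit once per candidate alpha per fold).
cv_model = LassoCV(cv=5).fit(X_tr, y_tr)
print("CV-tuned alpha:", cv_model.alpha_,
      "| test MSE:", mean_squared_error(y_te, cv_model.predict(X_te)))

# Toy Markov resampling: grow a training subset by accepting each candidate
# sample with a probability driven by its loss under a rough pilot model,
# so the retained samples form a Markov chain rather than an i.i.d. draw.
rng = np.random.default_rng(0)
pilot = Lasso(alpha=1.0).fit(X_tr[:100], y_tr[:100])  # rough pilot fit

def sq_loss(model, k):
    """Squared prediction error of training sample k under the model."""
    return float((y_tr[k] - model.predict(X_tr[k:k + 1])[0]) ** 2)

idx = [int(rng.integers(len(X_tr)))]
while len(idx) < 500:  # 500: assumed subset size, purely illustrative
    cur, cand = idx[-1], int(rng.integers(len(X_tr)))
    # Metropolis-style rule (an assumption): higher-loss, i.e. more
    # informative, candidates are accepted with higher probability.
    ratio = sq_loss(pilot, cand) / (sq_loss(pilot, cur) + 1e-12)
    if rng.random() < min(1.0, ratio):
        idx.append(cand)

final = Lasso(alpha=1.0).fit(X_tr[idx], y_tr[idx])  # fixed alpha, no tuning
print("Markov-resampled test MSE:",
      mean_squared_error(y_te, final.predict(X_te)))
```

The contrast this sketch is meant to convey: the cross-validated route pays for hyperparameter search with many refits, while the resampling route fits a fixed-α model once on a selectively drawn subset, which is where the reported savings in total sampling and training time would come from.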
Journal overview:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering all aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussion between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.