Learning ℓ1-penalized logistic regressions with smooth approximation

J. Klimaszewski, M. Sklyar, M. Korzeń
2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA)
DOI: 10.1109/INISTA.2017.8001144 · Published: 2017-07-01 · Citations: 2

Abstract

The paper presents a comparison of logistic regression models learned with different penalty terms. The main part of the paper concerns sparse regression, whose penalty involves the absolute value function. This function is convex but not differentiable at zero, so common gradient-based optimizers cannot be used directly. In the paper we show that in these cases a smooth approximation of the absolute value function can be used effectively, both for lasso regression and for fused-lasso-like cases. One of the examples focuses on a two-dimensional analogue of the fused-lasso model. The experimental results compare our implementations (in C++ and Python) on three benchmark datasets.
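The abstract does not state which smooth surrogate the authors use. A minimal sketch of the general idea, assuming the common approximation |w| ≈ √(w² + ε), which makes the ℓ1 penalty differentiable everywhere so plain gradient descent can minimize the penalized logistic loss:

```python
import numpy as np

def smooth_abs_grad(w, eps=1e-6):
    # Gradient of the smooth surrogate sqrt(w^2 + eps) for |w|;
    # it approaches sign(w) as eps -> 0 but is defined at w = 0.
    return w / np.sqrt(w**2 + eps)

def fit_l1_logistic(X, y, lam=0.1, lr=0.1, n_iter=2000, eps=1e-6):
    """Gradient descent on logistic loss + lam * sum(sqrt(w^2 + eps)).

    A sketch only: the paper's actual optimizer and surrogate may differ.
    X: (n, d) feature matrix, y: (n,) labels in {0, 1}.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        grad = X.T @ (p - y) / n                   # logistic-loss gradient
        grad += lam * smooth_abs_grad(w, eps)      # smoothed l1 term
        w -= lr * grad
    return w
```

With this surrogate the objective is smooth and convex, so any first-order or quasi-Newton method applies; the trade-off is that coefficients are driven close to zero rather than made exactly zero, and a small threshold is typically applied afterwards to recover exact sparsity.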