Journal of Machine Learning Research: Latest Articles

Empirical evaluation of resampling procedures for optimising SVM hyperparameters
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01 DOI: 10.5555/3122009.3122024
Jacques Wainer, Gavin Cawley
Abstract: Tuning the regularisation and kernel hyperparameters is a vital step in optimising the generalisation performance of kernel methods, such as the support vector machine (SVM). This is most often per...
Citations: 0
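The abstract above is about choosing SVM hyperparameters with resampling-based estimates of generalisation performance. As a hedged illustration of the general technique (not the paper's experimental protocol), the sketch below tunes the regularisation parameter C and the RBF kernel parameter gamma by plain k-fold cross-validation in scikit-learn; the synthetic dataset, grid values, and fold count are assumptions made for the example.

```python
# Hedged sketch: k-fold cross-validation over a grid of SVM hyperparameters.
# The dataset, grid values, and fold count are illustrative assumptions,
# not the resampling protocols evaluated in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],          # regularisation strength
    "gamma": [1e-3, 1e-2, 1e-1, 1],  # RBF kernel width
}

# 5-fold cross-validation is one of several resampling procedures one could use here.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("cross-validated accuracy:", search.best_score_)
```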
Community Extraction in Multilayer Networks with Heterogeneous Community Structure.
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01
James D Wilson, John Palowitch, Shankar Bhamidi, Andrew B Nobel
Abstract: Multilayer networks are a useful way to capture and model multiple, binary or weighted relationships among a fixed group of objects. While community detection has proven to be a useful exploratory technique for the analysis of single-layer networks, the development of community detection methods for multilayer networks is still in its infancy. We propose and investigate a procedure, called Multilayer Extraction, that identifies densely connected vertex-layer sets in multilayer networks. Multilayer Extraction makes use of a significance based score that quantifies the connectivity of an observed vertex-layer set through comparison with a fixed degree random graph model. Multilayer Extraction directly handles networks with heterogeneous layers where community structure may be different from layer to layer. The procedure can capture overlapping communities, as well as background vertex-layer pairs that do not belong to any community. We establish consistency of the vertex-layer set optimizer of our proposed multilayer score under the multilayer stochastic block model. We investigate the performance of Multilayer Extraction on three applications and a test bed of simulations. Our theoretical and numerical evaluations suggest that Multilayer Extraction is an effective exploratory tool for analyzing complex multilayer networks. Publicly available code is available at https://github.com/jdwilson4/MultilayerExtraction.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6927681/pdf/nihms-1022819.pdf
Citations: 0
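The procedure described above scores candidate vertex-layer sets against a fixed-degree random graph null. The sketch below is a loose, hedged illustration of that idea only: it computes a simple excess-edge-density score for a vertex set across chosen layers using a configuration-model-style expectation. It is not the paper's significance score, search procedure, or the released MultilayerExtraction code, and the toy layers and candidate set are assumptions.

```python
# Hedged sketch: a simple connectivity score for a vertex-layer set, comparing
# observed within-set edges to their expectation under a fixed-degree
# (configuration-model-style) null in each layer. Illustration only; not the
# significance score defined in the paper.
import networkx as nx


def vertex_layer_score(layers, vertices, layer_ids):
    """Average excess edge density of `vertices` across the chosen layers."""
    vertices = set(vertices)
    score = 0.0
    for lid in layer_ids:
        G = layers[lid]
        m = G.number_of_edges()
        if m == 0:
            continue
        observed = G.subgraph(vertices).number_of_edges()
        # Expected within-set edges when endpoints are matched proportionally
        # to degree (configuration-model heuristic).
        deg_sum = sum(G.degree(v) for v in vertices if v in G)
        expected = (deg_sum ** 2) / (4.0 * m)
        score += (observed - expected) / m
    return score / len(layer_ids)


# Toy multilayer network: two layers over the same vertex set.
layer0 = nx.planted_partition_graph(2, 10, p_in=0.8, p_out=0.05, seed=1)
layer1 = nx.gnp_random_graph(20, 0.2, seed=2)  # no community structure
candidate = range(10)  # the first planted block

print("layer 0 only:", vertex_layer_score([layer0, layer1], candidate, [0]))
print("both layers: ", vertex_layer_score([layer0, layer1], candidate, [0, 1]))
```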
Automatic differentiation in machine learning
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01 DOI: 10.5555/3122009.3242010
Atılım Güneş Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, Jeffrey Mark Siskind
Abstract: Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "auto-diff", is a fa...
Citations: 23
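Since the abstract introduces automatic differentiation only in general terms, here is a minimal, hedged sketch of forward-mode AD with dual numbers, one of the standard techniques such surveys cover. The Dual class and the example function f are illustrative constructions, not code from the article.

```python
# Hedged sketch: forward-mode automatic differentiation with dual numbers.
# Minimal and illustrative only.
import math
from dataclasses import dataclass


@dataclass
class Dual:
    val: float   # primal value
    dot: float   # derivative (tangent) carried alongside the value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__


def sin(x: Dual) -> Dual:
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)


def f(x):
    return x * x + 3.0 * sin(x)


# Seed dot = 1.0 to get df/dx at x = 2.0; expect 2*x + 3*cos(x).
result = f(Dual(2.0, 1.0))
print("f(2)  =", result.val)
print("f'(2) =", result.dot, "vs", 2 * 2.0 + 3 * math.cos(2.0))
```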
SnapVX: A Network-Based Convex Optimization Solver.
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01
David Hallac, Christopher Wong, Steven Diamond, Abhijit Sharang, Rok Sosič, Stephen Boyd, Jure Leskovec
Abstract: SnapVX is a high-performance solver for convex optimization problems defined on networks. For problems of this form, SnapVX provides a fast and scalable solution with guaranteed global convergence. It combines the capabilities of two open source software packages: Snap.py and CVXPY. Snap.py is a large scale graph processing library, and CVXPY provides a general modeling framework for small-scale subproblems. SnapVX offers a customizable yet easy-to-use Python interface with "out-of-the-box" functionality. Based on the Alternating Direction Method of Multipliers (ADMM), it is able to efficiently store, analyze, parallelize, and solve large optimization problems from a variety of different applications. Documentation, examples, and more can be found on the SnapVX website at http://snap.stanford.edu/snapvx.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5870756/pdf/
Citations: 0
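SnapVX models a local convex objective per node and a coupling term per edge, then solves the whole problem with ADMM. The hedged sketch below only illustrates the kind of network-structured problem involved, written directly in CVXPY (one of the two packages SnapVX builds on) rather than through SnapVX's own API; the tiny ring graph, observations, and coupling weight lam are assumptions.

```python
# Hedged sketch of the kind of problem SnapVX targets: each node has a local convex
# objective and each edge couples its endpoints. For illustration, this tiny instance
# is modeled and solved directly with CVXPY; SnapVX's own API and its ADMM-based
# splitting are not shown here.
import cvxpy as cp
import numpy as np

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
rng = np.random.default_rng(0)
obs = {i: rng.normal(size=2) for i in nodes}   # a noisy observation per node

x = {i: cp.Variable(2) for i in nodes}
lam = 1.0  # edge coupling strength (assumed value)

node_terms = [cp.sum_squares(x[i] - obs[i]) for i in nodes]       # local fit
edge_terms = [lam * cp.norm(x[i] - x[j], 2) for (i, j) in edges]  # smoothness across edges

prob = cp.Problem(cp.Minimize(sum(node_terms) + sum(edge_terms)))
prob.solve()

for i in nodes:
    print(f"node {i}: obs={obs[i].round(2)}, estimate={x[i].value.round(2)}")
```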
Bridging supervised learning and test-based co-optimization
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01 DOI: 10.5555/3122009.3122047
Elena Popovici
Abstract: This paper takes a close look at the important commonalities and subtle differences between the well-established field of supervised learning and the much younger one of co-optimization. It explains...
Citations: 0
The DFS fused lasso
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2017-01-01 DOI: 10.5555/3122009.3242033
Oscar Hernan Madrid Padilla, James Sharpnack, James G. Scott
Abstract: The fused lasso, also known as (anisotropic) total variation denoising, is widely used for piecewise constant signal estimation with respect to a given undirected graph. The fused lasso estimate is...
Citations: 0
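From the title and abstract, the method concerns fused-lasso estimation over a graph. The hedged sketch below illustrates one way a depth-first-search ordering can reduce such a problem to a chain fused lasso, solved here with CVXPY for convenience; the toy grid graph, signal, noise level, and regularisation value lam are assumptions, and the sketch is not the paper's estimator or its theoretical analysis.

```python
# Hedged sketch: a chain (1D) fused lasso applied along a depth-first-search ordering
# of graph vertices, illustrating the general "reduce the graph to a chain" idea
# suggested by the title. Not the paper's estimator or analysis.
import cvxpy as cp
import networkx as nx
import numpy as np

# Toy graph with a piecewise-constant signal: a 2D grid, high on the left half.
G = nx.grid_2d_graph(6, 6)
rng = np.random.default_rng(0)
signal = {v: (2.0 if v[0] < 3 else -2.0) for v in G.nodes}
y = {v: signal[v] + rng.normal(scale=0.5) for v in G.nodes}

# DFS ordering of the vertices, then a fused lasso along that chain.
order = list(nx.dfs_preorder_nodes(G, source=(0, 0)))
y_vec = np.array([y[v] for v in order])
n = len(order)

theta = cp.Variable(n)
lam = 1.0  # assumed regularisation level
objective = 0.5 * cp.sum_squares(theta - y_vec) + lam * cp.norm(theta[1:] - theta[:-1], 1)
cp.Problem(cp.Minimize(objective)).solve()

est = theta.value
left = [i for i, v in enumerate(order) if v[0] < 3]
right = [i for i, v in enumerate(order) if v[0] >= 3]
print("mean estimate, left half :", est[left].mean())
print("mean estimate, right half:", est[right].mean())
```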
Structure-Leveraged Methods in Breast Cancer Risk Prediction.
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2016-12-01
Jun Fan, Yirong Wu, Ming Yuan, David Page, Jie Liu, Irene M Ong, Peggy Peissig, Elizabeth Burnside
Abstract: Predicting breast cancer risk has long been a goal of medical research in the pursuit of precision medicine. The goal of this study is to develop novel penalized methods to improve breast cancer risk prediction by leveraging structure information in electronic health records. We conducted a retrospective case-control study, garnering 49 mammography descriptors and 77 high-frequency/low-penetrance single-nucleotide polymorphisms (SNPs) from an existing personalized medicine data repository. Structured mammography reports and breast imaging features have long been part of a standard electronic health record (EHR), and genetic markers likely will be in the near future. Lasso and its variants are widely used approaches to integrated learning and feature selection, and our methodological contribution is to incorporate the dependence structure among the features into these approaches. More specifically, we propose a new methodology by combining group penalty and Lp (1 ≤ p ≤ 2) fusion penalty to improve breast cancer risk prediction, taking into account structure information in mammography descriptors and SNPs. We demonstrate that our method provides benefits that are both statistically significant and potentially significant to people's lives.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5446896/pdf/nihms-826646.pdf
Citations: 0
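The abstract combines a group penalty with an Lp fusion penalty inside a penalized risk-prediction model. The hedged sketch below writes down one such objective, logistic loss plus a group penalty plus a squared-difference (p = 2) fusion penalty, in CVXPY on synthetic data; the feature groups, fusion pairs, penalty weights, and data are all assumptions made for illustration, not the study's EHR features or tuning.

```python
# Hedged sketch: logistic regression with a group penalty plus a squared-difference
# (p = 2) fusion penalty, one point in the Lp fusion family mentioned in the abstract.
# Groups, fusion pairs, synthetic data, and penalty weights are illustrative assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 12
X = rng.normal(size=(n, d))
true_beta = np.array([1.0, 1.0, 1.0, 0, 0, 0, -1.0, -1.0, 0, 0, 0, 0])
y = np.where(X @ true_beta + 0.5 * rng.normal(size=n) > 0, 1.0, -1.0)

groups = [slice(0, 3), slice(3, 6), slice(6, 9), slice(9, 12)]  # hypothetical feature groups
fusion_pairs = [(0, 1), (1, 2), (6, 7)]                         # hypothetical "similar" features

beta = cp.Variable(d)
logistic_loss = cp.sum(cp.logistic(-cp.multiply(y, X @ beta))) / n
group_pen = sum(cp.norm(beta[g], 2) for g in groups)            # group (L2) penalty
fusion_pen = sum(cp.square(beta[i] - beta[j]) for i, j in fusion_pairs)  # p = 2 fusion

lam_group, lam_fuse = 0.05, 0.05
cp.Problem(cp.Minimize(logistic_loss + lam_group * group_pen + lam_fuse * fusion_pen)).solve()
print("estimated coefficients:", np.round(beta.value, 2))
```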
Double or Nothing
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2016-08-24 DOI: 10.5555/2946645.3053447
Carol Sutton
Abstract: Crowdsourcing has gained immense popularity in machine learning applications for obtaining large amounts of labeled data. Crowdsourcing is cheap and fast, but suffers from the problem of low-qualit...
Citations: 0
Convex Regression with Interpretable Sharp Partitions.
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2016-06-01
Ashley Petersen, Noah Simon, Daniela Witten
Abstract: We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021451/pdf/
Citations: 0
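CRISP is described above as a convex program that fits a mean per block of the covariate space while encouraging adjacent blocks to merge. The hedged sketch below gives one rough CVXPY rendering of that idea for two covariates binned on a small grid, with a group-fusion penalty on adjacent rows and columns of the mean matrix; the binning, penalty form, and weight lam are assumptions, and this is not the paper's exact formulation or algorithm.

```python
# Hedged sketch of a CRISP-style fit: bin two covariates onto a q x q grid, estimate
# a mean per cell, and pull adjacent rows/columns of the mean matrix together with a
# group-fusion penalty. The binning, penalty form, and weights are assumptions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, q = 400, 8
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = np.where((x1 > 0.5) & (x2 > 0.5), 3.0, 0.0) + rng.normal(scale=0.5, size=n)

# Assign each observation to a grid cell.
i_idx = np.minimum((x1 * q).astype(int), q - 1)
j_idx = np.minimum((x2 * q).astype(int), q - 1)

M = cp.Variable((q, q))  # one fitted mean per grid cell

# Squared-error fit of every observation to the mean of its cell.
fit_terms = []
for i in range(q):
    for j in range(q):
        mask = (i_idx == i) & (j_idx == j)
        if mask.any():
            fit_terms.append(cp.sum_squares(y[mask] - M[i, j] * np.ones(int(mask.sum()))))
fit = sum(fit_terms)

# Group-fusion penalty: whole adjacent rows (and columns) are encouraged to merge,
# which is what produces sharp, axis-aligned partitions when the penalty binds.
lam = 2.0
row_fuse = cp.sum(cp.norm(M[1:, :] - M[:-1, :], 2, axis=1))
col_fuse = cp.sum(cp.norm(M[:, 1:] - M[:, :-1], 2, axis=0))

cp.Problem(cp.Minimize(fit + lam * (row_fuse + col_fuse))).solve()
print(np.round(M.value, 1))
```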
L1-Regularized Least Squares for Support Recovery of High Dimensional Single Index Models with Gaussian Designs.
IF 6 | CAS Tier 3 | Computer Science
Journal of Machine Learning Research Pub Date: 2016-05-01
Matey Neykov, Jun S Liu, Tianxi Cai
Abstract: It is known that for a certain class of single index models (SIMs) [Formula: see text], support recovery is impossible when X ~ 𝒩(0, 𝕀_{p×p}) and a model complexity adjusted sample size is below a critical threshold. Recently, optimal algorithms based on Sliced Inverse Regression (SIR) were suggested. These algorithms work provably under the assumption that the design X comes from an i.i.d. Gaussian distribution. In the present paper we analyze algorithms based on covariance screening and least squares with L1 penalization (i.e. LASSO) and demonstrate that they can also enjoy optimal (up to a scalar) rescaled sample size in terms of support recovery, albeit under slightly different assumptions on f and ε compared to the SIR based algorithms. Furthermore, we show more generally, that LASSO succeeds in recovering the signed support of β0 if X ~ 𝒩(0, Σ), and the covariance Σ satisfies the irrepresentable condition. Our work extends existing results on the support recovery of LASSO for the linear model, to a more general class of SIMs.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5426818/pdf/nihms851690.pdf
Citations: 0
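The abstract's claim is that L1-penalised least squares can recover the (signed) support of β0 even when the response depends on Xβ0 only through an unknown link. The hedged sketch below runs a toy simulation of that phenomenon with scikit-learn's Lasso; the link function f, problem sizes, noise level, and regularisation strength alpha are arbitrary assumptions, not the paper's regimes or theoretical conditions.

```python
# Hedged sketch: a toy simulation of LASSO support recovery under a single index model
# y = f(X @ beta0) + eps with a Gaussian design. The link f, dimensions, noise level,
# and regularisation strength are arbitrary assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 500, 200, 5

beta0 = np.zeros(p)
beta0[:s] = np.array([1.0, -1.0, 1.0, 1.0, -1.0])  # sparse signal with known signs

X = rng.standard_normal((n, p))                     # X ~ N(0, I_{p x p})


def f(t):
    return t + 0.5 * np.sin(t)                      # an assumed nonlinear link


y = f(X @ beta0) + 0.2 * rng.standard_normal(n)

lasso = Lasso(alpha=0.05).fit(X, y)                 # L1-penalised least squares
support_hat = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

print("true support:      ", np.flatnonzero(beta0))
print("estimated support: ", support_hat)
print("signs agree on true support:", np.array_equal(np.sign(lasso.coef_[:s]), np.sign(beta0[:s])))
```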