Latest Articles in J. Mach. Learn. Res.

Scalable Computation of Causal Bounds
J. Mach. Learn. Res. Pub Date: 2023-08-04 | DOI: 10.48550/arXiv.2308.02709
Madhumitha Shridharan, G. Iyengar
Abstract: We consider the problem of computing bounds for causal queries on causal graphs with unobserved confounders and discrete-valued observed variables, where identifiability does not hold. Existing non-parametric approaches for computing such bounds use linear programming (LP) formulations that quickly become intractable for existing solvers, because the size of the LP grows exponentially in the number of edges in the causal graph. We show that this LP can be significantly pruned, allowing us to compute bounds for significantly larger causal inference problems than existing techniques allow. The pruning procedure also lets us compute bounds in closed form for a special class of problems, including a well-studied family in which multiple confounded treatments influence an outcome. We extend our pruning methodology to fractional LPs, which compute bounds for causal queries that incorporate additional observations about the unit. In experiments, our methods provide significant runtime improvements over benchmarks, and we extend our results to the finite-data setting. For causal inference without additional observations, we propose an efficient greedy heuristic that produces high-quality bounds and scales to problems several orders of magnitude larger than those for which the pruned LP can be solved.
Pages: 237:1-237:35
Citations: 2
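
For a concrete sense of the LP being pruned, here is a minimal sketch (assuming Python with numpy and scipy) of the textbook response-function LP on the smallest instance of this problem class: bounding P(Y(1)=1) for a binary treatment X and binary outcome Y with an unobserved confounder. The observed distribution is invented, and this is the standard unpruned formulation, not the paper's method.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# Hypothetical observed joint distribution P(X = x, Y = y) (made up for illustration).
p_obs = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# Response types r = (y0, y1): the potential outcomes (Y(0), Y(1)).
types = list(itertools.product([0, 1], repeat=2))
variables = list(itertools.product(types, [0, 1]))   # q[r, x] = P(type r, X = x)
idx = {v: i for i, v in enumerate(variables)}

# Consistency with the data: P(x, y) = sum of q[r, x] over types with r[x] == y.
A_eq, b_eq = [], []
for (x, y), p in p_obs.items():
    row = np.zeros(len(variables))
    for r in types:
        if r[x] == y:
            row[idx[(r, x)]] = 1.0
    A_eq.append(row)
    b_eq.append(p)

# Causal query P(Y(1) = 1): total mass of types with r[1] == 1, regardless of X.
c = np.zeros(len(variables))
for (r, x), i in idx.items():
    if r[1] == 1:
        c[i] = 1.0

lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(variables))
upper = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(variables))
print(f"{lower.fun:.2f} <= P(Y(1)=1) <= {-upper.fun:.2f}")   # 0.40 <= ... <= 0.90
```

The variable count here is tiny, but it grows exponentially with the graph, which is exactly the blow-up the paper's pruning targets.
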
A Unified Framework for Factorizing Distributional Value Functions for Multi-Agent Reinforcement Learning
J. Mach. Learn. Res. Pub Date: 2023-06-04 | DOI: 10.48550/arXiv.2306.02430
Wei-Fang Sun, Cheng-Kuang Lee, S. See, Chun-Yi Lee
Abstract: In fully cooperative multi-agent reinforcement learning (MARL) settings, environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of the other agents. To address these issues, we propose a unified framework, called DFAC, for integrating distributional RL with value function factorization methods. The framework generalizes expected-value factorization methods to enable the factorization of return distributions. To validate DFAC, we first demonstrate its ability to factorize the value functions of a simple matrix game with stochastic rewards. We then run experiments on all Super Hard maps of the StarCraft Multi-Agent Challenge and six self-designed Ultra Hard maps, showing that DFAC outperforms a number of baselines.
Pages: 220:1-220:32
Citations: 0
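
A toy sketch of the core idea, factorizing a return distribution rather than an expected value, in the spirit of the framework's simplest (VDN-like, additive) instantiation; all shapes and values below are illustrative assumptions, not DFAC's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_quantiles = 3, 8

# Per-agent quantile estimates Z_i(tau_j) of each agent's utility distribution
# (random placeholders standing in for learned quantile networks).
agent_q = rng.normal(loc=[[1.0], [0.5], [2.0]], scale=0.3,
                     size=(n_agents, n_quantiles))
agent_q.sort(axis=1)                 # quantile values are non-decreasing in tau

# Additive factorization of the joint return distribution: quantile-wise sum.
joint_q = agent_q.sum(axis=0)

# Expectation is linear, so the mean factorizes exactly -- recovering an
# expected-value factorization (VDN-style) as a special case.
print(joint_q.mean(), agent_q.mean(axis=1).sum())   # identical values
```
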
Adaptive False Discovery Rate Control with Privacy Guarantee
J. Mach. Learn. Res. Pub Date: 2023-05-31 | DOI: 10.48550/arXiv.2305.19482
Xintao Xia, Zhanrui Cai
Abstract: Differentially private multiple-testing procedures can protect the information of individuals used in hypothesis tests while guaranteeing a small fraction of false discoveries. In this paper, we propose a differentially private adaptive FDR control method that controls the classic FDR metric exactly at a user-specified level $\alpha$ with a privacy guarantee, a non-trivial improvement over the differentially private Benjamini-Hochberg method of Dwork et al. (2021). Our analysis is based on two key insights: 1) a novel p-value transformation that preserves both privacy and the mirror-conservative property, and 2) a mirror peeling algorithm that allows the construction of the filtration and the application of the optimal stopping technique. Numerical studies demonstrate that the proposed DP-AdaPT outperforms existing differentially private FDR control methods. Compared to the non-private AdaPT, it incurs a small accuracy loss but significantly reduces the computation cost.
Pages: 252:1-252:35
Citations: 0
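
As a schematic of private multiple testing, one can picture the Benjamini-Hochberg step-up procedure run on noisily privatized p-values. This is not DP-AdaPT (the paper uses a specific p-value transformation and a mirror-peeling construction); the noise placement and scale below are illustrative assumptions, not a calibrated DP mechanism.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Indices rejected by the BH step-up procedure at level alpha."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= alpha * np.arange(1, m + 1) / m)[0]
    k = passed.max() + 1 if passed.size else 0   # largest k with p_(k) <= alpha*k/m
    return order[:k]

rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(size=90),           # true nulls
                        rng.beta(0.1, 10.0, size=10)])  # signals: small p-values
# Illustrative "privatization": Laplace noise with a hypothetical, uncalibrated scale.
noisy = np.clip(pvals + rng.laplace(scale=0.01, size=pvals.size), 0.0, 1.0)
print(sorted(benjamini_hochberg(noisy, alpha=0.1)))
```
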
Fairlearn: Assessing and Improving Fairness of AI Systems
J. Mach. Learn. Res. Pub Date: 2023-03-29 | DOI: 10.48550/arXiv.2303.16626
Roman Lutz
Abstract: Fairlearn is an open-source project to help practitioners assess and improve the fairness of artificial intelligence (AI) systems. The associated Python library, also named fairlearn, supports evaluation of a model's output across affected populations and includes several algorithms for mitigating fairness issues. Grounded in the understanding that fairness is a sociotechnical challenge, the project integrates learning resources that aid practitioners in considering a system's broader societal context.
Pages: 257:1-257:8
Citations: 6
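
The library's disaggregated-evaluation entry point is MetricFrame; a usage sketch on synthetic data, assuming fairlearn's documented API (the labels, predictions, and groups below are random placeholders):

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)      # stand-in for a model's predictions
group = rng.choice(["a", "b"], size=200)   # sensitive feature

# Evaluate a metric overall and per group of the sensitive feature.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.overall)
print(mf.by_group)

# One of fairlearn's fairness metrics: the gap in selection rates across groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```
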
Generalization Bounds for Adversarial Contrastive Learning
J. Mach. Learn. Res. Pub Date: 2023-02-21 | DOI: 10.48550/arXiv.2302.10633
Xin Zou, Weiwei Liu
Abstract: Deep networks are well known to be fragile to adversarial attacks, and adversarial training is one of the most popular methods used to train a robust model. To take advantage of unlabeled data, recent works have applied adversarial training to contrastive learning (Adversarial Contrastive Learning, or ACL) and obtained promising robust performance. However, the theory of ACL is not well understood. To fill this gap, we leverage the Rademacher complexity to analyze the generalization performance of ACL, with a particular focus on linear models and multi-layer neural networks under $\ell_p$ attacks ($p \ge 1$). Our theory shows that the average adversarial risk of the downstream tasks can be upper bounded by the adversarial unsupervised risk of the upstream task. The experimental results validate our theory.
Pages: 114:1-114:54
Citations: 2
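
In schematic form, the headline result says the downstream adversarial risk is controlled by the upstream adversarial unsupervised risk plus a complexity term; the display below is only the shape of such a bound (constants and the exact complexity term are specified in the paper, not here):

```latex
% Schematic only -- not the paper's precise statement.
\[
  \underbrace{\mathcal{R}_{\mathrm{adv}}^{\mathrm{down}}(f)}_{\text{avg.\ adversarial risk, downstream}}
  \;\le\;
  c_1\,\underbrace{\mathcal{R}_{\mathrm{adv}}^{\mathrm{un}}(f)}_{\text{adversarial unsupervised risk, upstream}}
  \;+\;
  c_2\,\underbrace{\mathfrak{R}_n(\mathcal{F})}_{\text{Rademacher complexity term}}
\]
```
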
Intrinsic Gaussian Process on Unknown Manifolds with Probabilistic Metrics
J. Mach. Learn. Res. Pub Date: 2023-01-16 | DOI: 10.48550/arXiv.2301.06533
Mu Niu, Zhenwen Dai, P. Cheung, Yizhu Wang
Abstract: This article presents a novel approach to constructing intrinsic Gaussian processes for regression on unknown manifolds with probabilistic metrics (GPUM) in point clouds. In many real-world applications, one often encounters high-dimensional data (e.g. point cloud data) centred around some lower-dimensional unknown manifold. The geometry of a manifold is in general different from the usual Euclidean geometry. Naively applying traditional smoothing methods such as Euclidean Gaussian processes (GPs) to manifold-valued data, thereby ignoring the geometry of the space, can lead to highly misleading predictions and inferences. A manifold embedded in a high-dimensional Euclidean space can be well described by a probabilistic mapping function and the corresponding latent space. We investigate the geometrical structure of the unknown manifold using the Bayesian Gaussian Process latent variable model (BGPLVM) and Riemannian geometry. The distribution of the metric tensor is learned using BGPLVM. The boundary of the resulting manifold is defined based on the uncertainty quantification of the mapping. We use the probabilistic metric tensor to simulate Brownian motion paths on the unknown manifold. The heat kernel is estimated as the transition density of Brownian motion and used as the covariance function of GPUM. The applications of GPUM are illustrated in simulation studies on the Swiss roll, high-dimensional real datasets of WiFi signals, and image data examples. Its performance is compared with the Graph Laplacian GP, Graph Matérn GP and Euclidean GP.
Pages: 104:1-104:42
Citations: 0
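
A simplified numpy sketch of the Brownian-motion idea: diffuse paths whose local step covariance is given by an inverse metric tensor, then read off a kernel-density estimate of the transition density, which plays the role of the heat-kernel covariance. The toy metric below is invented; the BGPLVM-learned metric, the Riemannian drift correction, and the boundary handling are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def inv_metric(x):
    """Toy position-dependent inverse metric tensor g^{-1}(x) (an assumption)."""
    return np.eye(2) / (1.0 + x @ x)

def simulate_bm(x0, n_paths=500, n_steps=100, dt=1e-2):
    """Euler steps X <- X + sqrt(dt) * L(X) z with L L^T = g^{-1}(X)."""
    X = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    for _ in range(n_steps):
        for i in range(n_paths):               # metric varies per position
            L = np.linalg.cholesky(inv_metric(X[i]))
            X[i] += np.sqrt(dt) * L @ rng.standard_normal(2)
    return X

# Transition-density estimate p_t(x0 -> y), standing in for the heat-kernel
# covariance k(x0, y) used by GPUM.
ends = simulate_bm(np.zeros(2))
y, bw = np.array([0.2, 0.0]), 0.1
weights = np.exp(-np.sum((ends - y) ** 2, axis=1) / (2 * bw**2))
print(weights.mean() / (2 * np.pi * bw**2))
```
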
Minimal Width for Universal Property of Deep RNN
J. Mach. Learn. Res. Pub Date: 2022-11-25 | DOI: 10.48550/arXiv.2211.13866
Changhoon Song, Geonho Hwang, Jun ho Lee, Myung-joo Kang
Abstract: A recurrent neural network (RNN) is a widely used deep-learning network for dealing with sequential data. Imitating a dynamical system, an infinite-width RNN can approximate any open dynamical system on a compact domain. In general, deep networks with bounded width are more effective than wide networks in practice; however, the universal approximation theorem for deep narrow structures has yet to be extensively studied. In this study, we prove the universality of deep narrow RNNs and show that the upper bound on the minimum width for universality can be independent of the length of the data. Specifically, we show that a deep RNN with ReLU activation can approximate any continuous function or $L^p$ function with widths $d_x+d_y+2$ and $\max\{d_x+1, d_y\}$, respectively, where the target function maps a finite sequence of vectors in $\mathbb{R}^{d_x}$ to a finite sequence of vectors in $\mathbb{R}^{d_y}$. We also compute the additional width required if the activation function is $\tanh$ or a more general activation. In addition, we prove the universality of other recurrent networks, such as bidirectional RNNs. Bridging multi-layer perceptrons and RNNs, our theory and proof technique can be an initial step toward further research on deep RNNs.
Pages: 121:1-121:41
Citations: 2
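
To make the widths concrete: for $d_x = 3$ and $d_y = 2$, the ReLU result gives universality for continuous targets at width $d_x + d_y + 2 = 7$. The sketch below (assuming PyTorch) merely instantiates a deep narrow RNN of exactly that hidden width with a linear readout; it shows the shape, not the paper's explicit construction.

```python
import torch

d_x, d_y = 3, 2
width = d_x + d_y + 2            # = 7: the universality width for ReLU targets

# A deep *narrow* RNN: many layers, each with the minimal hidden width.
rnn = torch.nn.RNN(input_size=d_x, hidden_size=width, num_layers=6,
                   nonlinearity="relu", batch_first=True)
readout = torch.nn.Linear(width, d_y)

x = torch.randn(4, 10, d_x)      # 4 sequences of length 10 in R^{d_x}
h, _ = rnn(x)
y = readout(h)
print(y.shape)                   # torch.Size([4, 10, 2]): sequences in R^{d_y}
```
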
Adaptive Data Depth via Multi-Armed Bandits
J. Mach. Learn. Res. Pub Date: 2022-11-08 | DOI: 10.48550/arXiv.2211.03985
Tavor Z. Baharav, T. Lai
Abstract: Data depth, introduced by Tukey (1975), is an important tool in data science, robust statistics, and computational geometry. One chief barrier to its broader practical utility is that many common measures of depth are computationally intensive, requiring on the order of $n^d$ operations to exactly compute the depth of a single point within a data set of $n$ points in $d$-dimensional space. Often, however, we are not directly interested in the absolute depths of the points, but rather in their relative ordering. For example, we may want to find the most central point in a data set (a generalized median), or to identify and remove all outliers (points on the fringe of the data set with low depth). With this observation, we develop a novel instance-adaptive algorithm for data depth computation by reducing the problem of exactly computing $n$ depths to an $n$-armed stochastic multi-armed bandit problem which we can efficiently solve. We focus our exposition on simplicial depth, developed by Liu (1990), which has emerged as a promising notion of depth due to its interpretability and asymptotic properties. We provide general instance-dependent theoretical guarantees for our proposed algorithms, which readily extend to many other common measures of data depth, including majority depth, Oja depth, and likelihood depth. When specialized to the case where the gaps in the data follow a power-law distribution with parameter $\alpha < 2$, we show that we can reduce the complexity of identifying the deepest point in the data set (the simplicial median) from $O(n^d)$ to $\tilde{O}(n^{d-(d-1)\alpha/2})$, where $\tilde{O}$ suppresses logarithmic factors. We corroborate our theoretical results with numerical experiments on synthetic data, showing the practical utility of our proposed methods.
Pages: 155:1-155:29
Citations: 0
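
A small sketch of the bandit view in 2D, using generic successive elimination with Hoeffding-style confidence radii rather than the paper's algorithm: each data point is an arm whose unknown mean is its simplicial depth, one "pull" samples a random triangle and tests containment, and clearly shallow arms are eliminated early.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((40, 2))

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def contains(a, b, c, x):
    """Sign test: does the triangle (a, b, c) contain the point x?"""
    s = (cross2(b - a, x - a), cross2(c - b, x - b), cross2(a - c, x - c))
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

def pull(i):
    """One Bernoulli sample of point i's simplicial depth (a sketch: the
    sampled triangle may include point i itself)."""
    a, b, c = pts[rng.choice(len(pts), size=3, replace=False)]
    return float(contains(a, b, c, pts[i]))

active = list(range(len(pts)))
counts = np.zeros(len(pts))
means = np.zeros(len(pts))
for t in range(1, 500):
    for i in active:
        counts[i] += 1
        means[i] += (pull(i) - means[i]) / counts[i]   # running average
    radius = np.sqrt(np.log(4 * len(pts) * t * t) / (2 * t))
    best = max(means[i] for i in active)
    active = [i for i in active if means[i] + radius >= best - radius]
    if len(active) == 1:
        break

deepest = max(active, key=lambda i: means[i])
print("estimated simplicial median:", pts[deepest])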
Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model
J. Mach. Learn. Res. Pub Date: 2022-11-03 | DOI: 10.48550/arXiv.2211.02001
A. Luccioni, S. Viguier, Anne-Laure Ligozat
Abstract: Progress in machine learning (ML) comes at a cost to the environment, given that training ML models requires significant computational resources, energy and materials. In the present article, we aim to quantify the carbon footprint of BLOOM, a 176-billion-parameter language model, across its life cycle. We estimate that BLOOM's final training emitted approximately 24.7 tonnes of CO2eq if we consider only the dynamic power consumption, and 50.5 tonnes if we account for all processes ranging from equipment manufacturing to energy-based operational consumption. We also study the energy requirements and carbon emissions of its deployment for inference via an API endpoint receiving user queries in real time. We conclude with a discussion of the difficulty of precisely estimating the carbon footprint of ML models, and of future research directions that can contribute to improving carbon emissions reporting.
Pages: 253:1-253:15
Citations: 49
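
The accounting template is simple multiplication of energy by grid carbon intensity. In the sketch below, the energy and intensity figures are hypothetical placeholders chosen only to be consistent with the abstract's 24.7 t dynamic-power total, and the overhead term is derived from the difference between the abstract's two totals.

```python
# Hypothetical inputs (placeholders, not the paper's measured values).
training_energy_kwh = 433_000       # total dynamic energy drawn during training
grid_intensity_kg_per_kwh = 0.057   # carbon intensity of the supplying grid

dynamic_tonnes = training_energy_kwh * grid_intensity_kg_per_kwh / 1000
overhead_tonnes = 50.5 - 24.7       # manufacturing + idle, from the abstract's totals
print(f"dynamic: {dynamic_tonnes:.1f} t, life cycle: "
      f"{dynamic_tonnes + overhead_tonnes:.1f} t")
```
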
Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities
J. Mach. Learn. Res. Pub Date: 2022-10-13 | DOI: 10.48550/arXiv.2210.06640
Brian Bartoldson, B. Kailkhura, Davis W. Blalock
Abstract: Although deep learning has made great progress in recent years, the exploding economic and environmental costs of training neural networks are becoming unsustainable. To address this problem, there has been a great deal of research on algorithmically efficient deep learning, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program. In this paper, we present a structured and comprehensive overview of the research in this field. First, we formalize the algorithmic speedup problem; then we use fundamental building blocks of algorithmically efficient training to develop a taxonomy. Our taxonomy highlights commonalities of seemingly disparate methods and reveals current research gaps. Next, we present evaluation best practices to enable comprehensive, fair, and reliable comparisons of speedup techniques. To further aid research and applications, we discuss common bottlenecks in the training pipeline (illustrated via experiments) and offer taxonomic mitigation strategies for them. Finally, we highlight some unsolved research challenges and present promising future directions.
Pages: 122:1-122:77
Citations: 18
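
An algorithmic speedup of the kind the survey formalizes is naturally reported as a ratio of time-to-result. A minimal harness sketch (the dummy trainer here is a self-contained stand-in for a real training loop, not anything from the paper):

```python
import random
import time

def time_to_accuracy(train_step, eval_acc, target=0.9, max_steps=100_000):
    """Wall-clock seconds of train_step() calls until eval_acc() reaches target."""
    start = time.perf_counter()
    for _ in range(max_steps):
        train_step()
        if eval_acc() >= target:
            break
    return time.perf_counter() - start

def make_dummy_trainer(rate):
    """Stand-in 'model' whose accuracy climbs by roughly `rate` per step."""
    state = {"acc": 0.0}
    def train_step():
        state["acc"] += rate * random.uniform(0.5, 1.5)
    def eval_acc():
        return state["acc"]
    return train_step, eval_acc

baseline = time_to_accuracy(*make_dummy_trainer(1e-4))
improved = time_to_accuracy(*make_dummy_trainer(3e-4))
print(f"algorithmic speedup: {baseline / improved:.1f}x")
```
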