Variational Representations and Neural Network Estimation of Rényi Divergences

IF 1.9 · Q1 (Mathematics, Applied)
Jeremiah Birrell, P. Dupuis, M. Katsoulakis, L. Rey-Bellet, Jie Wang
SIAM Journal on Mathematics of Data Science, pp. 1093–1116
DOI: 10.1137/20m1368926
Published: 2020-07-07
Citations: 22

Abstract

We derive a new variational formula for the Rényi family of divergences, $R_\alpha(Q\|P)$, between probability measures $Q$ and $P$. Our result generalizes the classical Donsker-Varadhan variational formula for the Kullback-Leibler divergence. We further show that this Rényi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for Rényi divergence estimators. By applying this theory to neural network estimators, we show that if a neural network family satisfies one of several strengthened versions of the universal approximation property, then the corresponding Rényi divergence estimator is consistent. In contrast to likelihood-ratio-based methods, our estimators involve only expectations under $Q$ and $P$ and hence are more effective in high-dimensional systems. We illustrate this via several numerical examples of neural network estimation in systems of up to 5000 dimensions.
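The abstract does not reproduce the variational formula itself. As a hedged illustration of the kind of estimator described (expectations under $Q$ and $P$ only, no likelihood ratios), the sketch below uses a Donsker-Varadhan-type Rényi objective that can be derived from Hölder's inequality for $\alpha > 1$; the paper's exact normalization may differ, and the simple closed-form Gaussian pair here is purely for verification, not from the paper.

```python
# Sketch of a Renyi variational objective of Donsker-Varadhan type
# (a reconstruction via Holder's inequality, not necessarily the
# paper's exact normalization). For alpha > 1,
#
#   R_alpha(Q||P) = sup_g [ alpha/(alpha-1) * log E_Q[exp((alpha-1)*g)]
#                           - log E_P[exp(alpha*g)] ],
#
# with the supremum attained at g* = log(dQ/dP). Only samples from Q
# and P are needed, which is the property the abstract highlights.
import numpy as np

def renyi_objective(g, x_q, x_p, alpha):
    """Monte Carlo value of the variational objective at test function g."""
    term_q = np.log(np.mean(np.exp((alpha - 1.0) * g(x_q))))
    term_p = np.log(np.mean(np.exp(alpha * g(x_p))))
    return alpha / (alpha - 1.0) * term_q - term_p

# Closed-form check: for Q = N(1, 1) and P = N(0, 1) one has
# R_alpha(Q||P) = alpha/2 and g*(x) = log(dQ/dP)(x) = x - 1/2.
rng = np.random.default_rng(0)
alpha = 2.0
x_q = rng.normal(1.0, 1.0, size=200_000)
x_p = rng.normal(0.0, 1.0, size=200_000)

est = renyi_objective(lambda x: x - 0.5, x_q, x_p, alpha)  # at the optimizer
sub = renyi_objective(lambda x: 0.5 * x, x_q, x_p, alpha)  # a suboptimal g
print(f"estimate at g*: {est:.3f} (exact: 1.0), suboptimal g: {sub:.3f}")
```

In the paper, $g$ ranges over a neural network family and the objective is maximized by gradient ascent; the consistency theory gives conditions under which that maximization recovers $R_\alpha(Q\|P)$. Here the known optimizer is plugged in directly so the Monte Carlo value can be compared with the exact divergence.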