An Empirical Study of Uncertainty Gap for Disentangling Factors

Jiantao Wu, Shentong Mo, Lin Wang
{"title":"An Empirical Study of Uncertainty Gap for Disentangling Factors","authors":"Jiantao Wu, Shentong Mo, Lin Wang","doi":"10.1145/3475731.3484954","DOIUrl":null,"url":null,"abstract":"Disentangling factors has proven to be crucial for building interpretable AI systems. Disentangled generative models would have explanatory input variables to increase the trustworthiness and robustness. Previous works apply a progressive disentanglement learning regime where the ground-truth factors are disentangled in an order. However, they didn't answer why such an order for disentanglement is important. In this work, we propose a novel metric, namely Uncertainty Gap, to evaluate how the uncertainty of generative models changes given input variables. We generalize the Uncertainty Gap to image reconstruction tasks using BCE and MSE. Extensive experiments on three commonly-used benchmarks also demonstrate the effectiveness of our Uncertainty Gap in evaluating both informativeness and redundancy of given variables. We empirically find that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, which indicates that a suitable order of disentangling factors facilities the performance.","PeriodicalId":355632,"journal":{"name":"Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Trustworthy AI for Multimedia Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3475731.3484954","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Disentangling factors has proven crucial for building interpretable AI systems: a disentangled generative model exposes explanatory input variables, which increases its trustworthiness and robustness. Previous works apply a progressive disentanglement learning regime in which the ground-truth factors are disentangled in a fixed order, but they do not explain why this order matters. In this work, we propose a novel metric, the Uncertainty Gap, to evaluate how the uncertainty of a generative model changes given its input variables. We generalize the Uncertainty Gap to image reconstruction tasks using BCE and MSE losses. Extensive experiments on three commonly used benchmarks demonstrate the effectiveness of the Uncertainty Gap in evaluating both the informativeness and the redundancy of given variables. We empirically find that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, which indicates that a suitable order of disentangling factors improves performance.
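The abstract does not include an implementation, but the core idea, measuring how a model's reconstruction uncertainty changes when one input variable is withheld, can be sketched. The following is a minimal illustration under stated assumptions, not the authors' exact formulation: it assumes a VAE-style decoder producing reconstruction logits, and it ablates a latent dimension by replacing it with its prior mean (zero). The names `uncertainty_gap` and `decode`, and the ablation strategy, are hypothetical.

```python
import torch
import torch.nn.functional as F


def uncertainty_gap(decode, z, x, dim, loss="bce"):
    """Illustrative sketch (assumption, not the paper's exact method):
    the change in reconstruction loss when one latent dimension is
    ablated, i.e. replaced by its prior mean of zero.

    decode: hypothetical decoder mapping latents z -> reconstruction logits
    z:      latent codes, shape (batch, latent_dim)
    x:      target images in [0, 1], flattened to (batch, pixels)
    dim:    index of the latent variable to evaluate
    loss:   "bce" or "mse", mirroring the two losses named in the abstract
    """
    def recon_loss(latents):
        logits = decode(latents)
        if loss == "bce":
            # Per-sample BCE summed over pixels.
            return F.binary_cross_entropy_with_logits(
                logits, x, reduction="none").sum(-1)
        # Per-sample MSE summed over pixels.
        return F.mse_loss(torch.sigmoid(logits), x,
                          reduction="none").sum(-1)

    # Loss with the full latent code.
    full = recon_loss(z)

    # Loss with the chosen variable overwritten by the prior mean (0).
    z_ablated = z.clone()
    z_ablated[:, dim] = 0.0
    ablated = recon_loss(z_ablated)

    # A large positive gap suggests the variable is informative;
    # a gap near zero suggests it is redundant.
    return (ablated - full).mean()
```

Under this reading, ranking latent dimensions by their gap would suggest which factor to disentangle first, consistent with the paper's finding that the factor with the largest Uncertainty Gap should be disentangled before insignificant ones.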