Exponential Dissimilarity-Dispersion Family for Domain-Specific Representation Learning

Authors: Ren Togo, Nao Nakagawa, Takahiro Ogawa, Miki Haseyama
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 6110-6125 (Impact Factor: 13.7)
DOI: 10.1109/TIP.2025.3608661
Publication date: 2025-09-22
Publisher page: https://ieeexplore.ieee.org/document/11175279/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11175279
Code: https://github.com/ganmodokix/eddf-vae
Citations: 0

Abstract

This paper presents a new domain-specific representation learning method, the exponential dissimilarity-dispersion family (EDDF), a novel distribution family that includes a dissimilarity function and a global dispersion parameter. Among generative models, variational autoencoders (VAEs) have a solid theoretical foundation based on variational inference in visual representation learning and also serve as a core component of other generative models. This paper addresses the issue that conventional VAEs, with the commonly adopted Gaussian settings, tend to suffer performance degradation in generative modeling of high-dimensional data; this degradation is often caused by their excessively limited model family. To tackle this problem, we propose the EDDF, a new domain-specific method introducing a novel distribution family with a dissimilarity function and a global dispersion parameter. A decoder using this family employs dissimilarity functions in the evidence lower bound (ELBO) reconstruction loss, leveraging domain-specific knowledge to enhance high-dimensional data modeling. We also propose an ELBO optimization method for VAEs with EDDF decoders that implicitly approximates the stochastic gradient of the normalizing constant using a log-expected dissimilarity. Empirical evaluations of generative performance show the effectiveness of our model family and the proposed method. Our framework can be integrated into any VAE-based generative model for representation learning. The code and model are available at https://github.com/ganmodokix/eddf-vae
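To make the decoder idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: we assume a decoder density of the form p(x | z) ∝ exp(−d(x, μ(z)) / s), where d is a domain-specific dissimilarity and s is a single global dispersion parameter; the names `eddf_recon_loss` and `squared_error` and the exact parameterization are our assumptions for illustration.

```python
import numpy as np

def squared_error(x, x_hat):
    """Squared-error dissimilarity; with this choice the decoder reduces
    to an isotropic Gaussian, i.e. the conventional VAE setting."""
    return float(np.sum((x - x_hat) ** 2))

def eddf_recon_loss(x, x_hat, dissimilarity, dispersion, log_norm_const=0.0):
    """Per-sample reconstruction term of the ELBO, -log p(x | z), under
    the assumed EDDF-style density p(x | z) ∝ exp(-d(x, x_hat) / s).

    `log_norm_const` stands in for the log normalizing constant, which is
    generally intractable for an arbitrary dissimilarity; the paper's
    contribution is an optimization scheme that implicitly approximates
    its stochastic gradient via a log-expected-dissimilarity term, which
    this sketch does not reproduce.
    """
    return dissimilarity(x, x_hat) / dispersion + log_norm_const

# Toy usage: a reconstruction x_hat close to x yields a small loss.
x = np.array([0.2, 0.8, 0.5])
x_hat = np.array([0.1, 0.9, 0.5])
loss = eddf_recon_loss(x, x_hat, squared_error, dispersion=2.0)
```

Swapping `squared_error` for a domain-specific dissimilarity (e.g. a perceptual distance for images) is what lets the decoder encode domain knowledge, while the shared `dispersion` scalar plays the role of the global dispersion parameter.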