Variational expectation-maximization training for Gaussian networks

N. Nasios, A. Bors
DOI: 10.1109/NNSP.2003.1318033
Published in: 2003 IEEE XIII Workshop on Neural Networks for Signal Processing (IEEE Cat. No.03TH8718)
Publication date: 2003-09-17
Citations: 9

Abstract

This paper introduces a variational expectation-maximization (VEM) algorithm for training Gaussian networks. Hyperparameters model the distributions of the parameters that characterize Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparameters and the number of Gaussian mixture components. A dual EM algorithm is employed as the initialization stage in the VEM-based learning. In the first stage, the EM algorithm is applied to the given data set, while in the second stage, EM is applied to the distributions of parameters resulting from several runs of the first-stage EM. Appropriate maximum log-likelihood estimators are considered for all the parameter distributions involved.
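The two-stage initialization described above can be illustrated with a minimal sketch: run EM on the data several times, then fit parameter distributions (here, a Gaussian over each component mean, whose maximum-likelihood estimators are the sample mean and variance) to the collected results. This is a rough 1-D illustration under my own assumptions, not the authors' implementation; the function names `em_gmm_1d` and `hyperparameters_from_runs` are hypothetical.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, rng=None):
    """First stage: fit a k-component 1-D Gaussian mixture to data x via EM."""
    rng = np.random.default_rng(rng)
    mu = rng.choice(x, size=k, replace=False)   # random data points as initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    order = np.argsort(mu)                      # sort components for comparability across runs
    return mu[order], var[order], pi[order]

def hyperparameters_from_runs(x, runs=10):
    """Second stage: collect component means from several EM runs and fit a
    Gaussian to each (ML estimators: sample mean and sample variance)."""
    means = np.array([em_gmm_1d(x, rng=s)[0] for s in range(runs)])
    return means.mean(axis=0), means.var(axis=0)

# Toy data: two well-separated 1-D clusters
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
hyper_mu, hyper_var = hyperparameters_from_runs(data)
```

The hyperparameters `hyper_mu` / `hyper_var` would then seed the variational stage; the paper additionally estimates the number of mixture components, which this fixed-`k` sketch omits.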