Learning large softmax mixtures with warm start EM
Xin Bing, Florentina Bunea, Jonathan Niles-Weed, Marten Wegkamp
arXiv:2409.09903, arXiv - STAT - Statistics Theory, 2024-09-16
Mixed multinomial logits are discrete mixtures introduced several decades ago to model the probability of choosing an attribute from $p$ possible candidates in heterogeneous populations. The model has recently attracted attention in the AI literature under the name softmax mixtures, where it is routinely used in the final layer of a neural network to map a large number $p$ of vectors in $\mathbb{R}^L$ to a probability vector.
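For concreteness, a standard way to write a $K$-component softmax mixture (mixed multinomial logit) over $p$ candidates with feature vectors $x_1,\dots,x_p \in \mathbb{R}^L$ is sketched below; the notation ($K$, mixing weights $\alpha_k$, component parameters $\beta_k$) is illustrative and not taken from the paper.

$$
\mathbb{P}(Y = j \mid x_1,\dots,x_p) \;=\; \sum_{k=1}^{K} \alpha_k \, \frac{\exp(\beta_k^\top x_j)}{\sum_{j'=1}^{p} \exp(\beta_k^\top x_{j'})}, \qquad j = 1,\dots,p,
$$

with $\alpha_k \ge 0$, $\sum_{k=1}^{K} \alpha_k = 1$, and $\beta_k \in \mathbb{R}^L$ for each component $k$.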
Despite its wide applicability and empirical success, statistically optimal estimators of the mixture parameters, obtained via algorithms whose running time scales polynomially in $L$, are not known. This paper provides a solution to this problem for contemporary applications, such as large language models, in which the mixture has a large number $p$ of support points and the size $N$ of the sample observed from the mixture is also large.

Our proposed estimator combines two classical estimators, obtained respectively via a method of moments (MoM) and the expectation-maximization (EM) algorithm. Although both types of estimators have been studied theoretically for Gaussian mixtures, no analogous results exist for softmax mixtures for either procedure. We develop a new MoM parameter estimator based on latent moment estimation that is tailored to our model, and provide the first theoretical analysis of a MoM-based procedure for softmax mixtures. Although consistent, the MoM estimator for softmax mixtures can exhibit poor numerical performance, as has been observed for other mixture models. Nevertheless, since the MoM estimate is provably in a neighborhood of the target, it can be used as a warm start for any iterative algorithm. We study the EM algorithm in detail and provide its first theoretical analysis for softmax mixtures. Our final proposal for parameter estimation is the EM algorithm with a MoM warm start.
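As an illustration only, and not the estimator analyzed in the paper, the following minimal Python sketch shows EM for a $K$-component softmax mixture written so that it accepts any warm-start initializer, for instance a method-of-moments estimate; all function and variable names here are hypothetical.

# Minimal EM sketch for a K-component softmax mixture (illustration only;
# not the paper's procedure). Names and the step size are hypothetical.
import numpy as np

def softmax_probs(beta, X):
    # X: (p, L) candidate features, beta: (L,); returns choice probabilities over the p items.
    s = X @ beta
    s -= s.max()                      # numerical stability
    e = np.exp(s)
    return e / e.sum()

def em_softmax_mixture(X, y, alpha0, betas0, n_iter=100):
    """X: (p, L) features, y: (N,) observed choices in {0, ..., p-1},
    (alpha0, betas0): warm start, e.g. a method-of-moments estimate."""
    alpha, betas = alpha0.copy(), betas0.copy()      # alpha: (K,), betas: (K, L)
    K = len(alpha)
    for _ in range(n_iter):
        # E-step: responsibilities gamma[k, i] proportional to alpha[k] * P_k(y_i).
        P = np.stack([softmax_probs(betas[k], X) for k in range(K)])   # (K, p)
        gamma = alpha[:, None] * P[:, y]                               # (K, N)
        gamma /= gamma.sum(axis=0, keepdims=True)
        # M-step: update mixing weights; take one gradient step per component on the
        # weighted log-likelihood (a full M-step would iterate this to convergence).
        alpha = gamma.mean(axis=1)
        for k in range(K):
            pk = softmax_probs(betas[k], X)                            # (p,)
            grad = (gamma[k][:, None] * (X[y] - pk @ X)).mean(axis=0)  # (L,)
            betas[k] += 0.5 * grad
    return alpha, betas

In this sketch the warm start is simply passed in as (alpha0, betas0); the paper's point is that a MoM estimate is a provably good choice for that initializer, whereas the single gradient step shown in the M-step is only a placeholder for a full maximization.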