Learning along a Channel: the Expectation part of Expectation-Maximisation
Bart Jacobs
Electronic Notes in Theoretical Computer Science, Volume 347 (2019-11-30), pp. 143–160
DOI: 10.1016/j.entcs.2019.09.008
Citations: 6
Abstract
This paper first investigates a form of frequentist learning that is often called Maximal Likelihood Estimation (MLE). It is redescribed as a natural transformation from multisets to distributions that commutes with marginalisation and disintegration. It forms the basis for the next, main topic: learning of hidden states, which is reformulated as learning along a channel. This topic requires a fundamental look at what data is and what its validity is in a particular state. The paper distinguishes two forms, denoted as ‘M’ for ‘multiple states’ and ‘C’ for ‘copied states’. It is shown that M and C forms exist for validity of data, for learning from data, and for learning along a channel. This M/C distinction allows us to capture two completely different examples from the literature which both claim to be instances of Expectation-Maximisation.
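Concretely, the frequentist MLE described here amounts to normalised counting: a multiset of observations is sent to the empirical distribution. The following minimal sketch (the function name `mle` and the example data are ours, not from the paper) also illustrates the commutation with marginalisation: learning from the marginalised data coincides with learning the joint distribution and then marginalising it.

```python
from collections import Counter

def mle(multiset):
    """Frequentist learning: turn a multiset of observations
    into the empirical (maximum-likelihood) distribution."""
    counts = Counter(multiset)
    total = sum(counts.values())
    return {x: n / total for x, n in counts.items()}

# Joint observations over a product space X x Y.
data = [("a", 0), ("a", 0), ("a", 1), ("b", 1)]
joint = mle(data)

# MLE commutes with marginalisation: marginalise the data and
# then learn, versus learn the joint and then marginalise.
marg_then_learn = mle(x for x, _ in data)
learn_then_marg = {}
for (x, _), p in joint.items():
    learn_then_marg[x] = learn_then_marg.get(x, 0) + p
assert marg_then_learn == learn_then_marg
```

This is only the first, fully observed case; the paper's main contribution concerns the hidden-state setting, where learning happens along a channel and the M/C distinction becomes relevant.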
Journal introduction:
ENTCS is a venue for the rapid electronic publication of conference proceedings, lecture notes, monographs, and other similar material for which quick publication and availability in electronic form are appropriate. Organizers of conferences whose proceedings appear in ENTCS, and authors of other material appearing as a volume in the series, are allowed to make hard copies of the relevant volume for limited distribution. For example, conference proceedings may be distributed to participants at the meeting, and lecture notes can be distributed to those taking a course based on the material in the volume.