Covariance Matrix Decomposition Using Cascade of Linear Tree Transformations
N. T. Khajavi, A. Kuh
2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), November 2019
DOI: 10.1109/GlobalSIP45357.2019.8969543
Citations: 1
Abstract
The tree model can be computed efficiently using the Chow-Liu algorithm to minimize the Kullback-Leibler (KL) divergence. This paper goes beyond tree approximations by systematically forming a cascade of linear transformations where each linear transformation represents a tree structure. The linear transformation is found via a Cholesky factorization to provide sparsity to the inverse covariance matrix. We show that each successive additional cascade linear transformation improves the approximation with respect to the KL divergence. We conclude by showing some simulation results on synthetic data examining the quality of tree and non-tree approximations.
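The approximation quality criterion used throughout the abstract is the KL divergence between the true covariance and its (tree-structured) approximation. As a minimal sketch of that objective — not the authors' code, and using the standard closed form for zero-mean Gaussians rather than anything specific to this paper — the function below computes D(p || q) from the two covariance matrices:

```python
import numpy as np

def gauss_kl(cov_p, cov_q):
    """KL divergence D(p || q) between zero-mean Gaussians N(0, cov_p) and N(0, cov_q).

    Closed form: 0.5 * (tr(cov_q^{-1} cov_p) - n + ln(det cov_q / det cov_p)).
    """
    n = cov_p.shape[0]
    q_inv = np.linalg.inv(cov_q)
    trace_term = np.trace(q_inv @ cov_p)
    # slogdet avoids overflow/underflow in the determinant ratio
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (trace_term - n + logdet_q - logdet_p)

# Hypothetical usage: score a crude structured approximation (here, the
# diagonal of the true covariance, standing in for a sparse-inverse model).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
cov_true = A @ A.T + 4.0 * np.eye(4)          # random SPD covariance
cov_diag = np.diag(np.diag(cov_true))          # illustrative approximation
```

A tree (or cascade-of-trees) approximation would replace `cov_diag` above; a better structured fit drives `gauss_kl(cov_true, cov_approx)` toward zero, which is the sense in which each additional cascade stage "improves the approximation" in the abstract.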