{"title":"The Local Geometry of Orthogonal Dictionary Learning using L1 Minimization","authors":"Qiuwei Li, Zhihui Zhu, M. Wakin, Gongguo Tang","doi":"10.1109/IEEECONF44664.2019.9049030","DOIUrl":null,"url":null,"abstract":"Feature learning that extracts concise and general- izable representations for data is one of the central problems in machine learning and signal processing. Sparse dictionary learning, also known as sparse coding, distinguishes from other feature learning techniques in sparsity exploitation, allowing the formulation of nonconvex optimizations that simultaneously uncover a structured dictionary and sparse representations. Despite the popularity of dictionary learning in applications, the landscapes of these optimizations that enable effective learning largely remain a mystery. This work characterizes the local optimization geometry for a simplified version of sparse coding where the L1 norm of the sparse coefficient matrix is minimized subject to orthogonal dictionary constraints. In particular, we show that the ground-truth dictionary and coefficient matrix are locally identifiable under the assumption that the coefficient matrix is sufficiently sparse and the number of training data columns is sufficiently large.","PeriodicalId":6684,"journal":{"name":"2019 53rd Asilomar Conference on Signals, Systems, and Computers","volume":"110 1","pages":"1217-1221"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 53rd Asilomar Conference on Signals, Systems, and Computers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEEECONF44664.2019.9049030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Feature learning that extracts concise and generalizable representations of data is one of the central problems in machine learning and signal processing. Sparse dictionary learning, also known as sparse coding, is distinguished from other feature learning techniques by its exploitation of sparsity, which allows the formulation of nonconvex optimizations that simultaneously uncover a structured dictionary and sparse representations. Despite the popularity of dictionary learning in applications, the optimization landscapes that enable effective learning remain largely a mystery. This work characterizes the local optimization geometry of a simplified version of sparse coding in which the L1 norm of the sparse coefficient matrix is minimized subject to orthogonal dictionary constraints. In particular, we show that the ground-truth dictionary and coefficient matrix are locally identifiable, provided the coefficient matrix is sufficiently sparse and the number of training data columns is sufficiently large.
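For concreteness, the nonconvex program described in the abstract can be sketched as the following constrained L1 minimization; the notation (Y for the data matrix, D for the orthogonal dictionary, X for the sparse coefficient matrix) is assumed here for illustration and may differ from the paper's own symbols:

% Minimal sketch of the orthogonal dictionary learning formulation described in the abstract.
% Notation (Y, D, X) is an assumption for illustration, not taken from the paper itself.
\begin{equation*}
  \min_{D,\,X} \ \|X\|_{1}
  \quad \text{subject to} \quad
  Y = D X, \qquad D^{\top} D = I .
\end{equation*}

Under this reading, the abstract's local-identifiability result states that a ground-truth pair (D*, X*) satisfying these constraints is a local solution of the program whenever X* is sufficiently sparse and Y contains sufficiently many columns.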