{"title":"Quantized nonnegative matrix factorization","authors":"R. Fréin","doi":"10.1109/ICDSP.2014.6900690","DOIUrl":null,"url":null,"abstract":"Even though Nonnegative Matrix Factorization (NMF) in its original form performs rank reduction and signal compaction implicitly, it does not explicitly consider storage or transmission constraints. We propose a Frobenius-norm Quantized Nonnegative Matrix Factorization algorithm that is 1) almost as precise as traditional NMF for decomposition ranks of interest (with in 1-4dB), 2) admits to practical encoding techniques by learning a factorization which is simpler than NMF's (by a factor of 20-70) and 3) exhibits a complexity which is comparable with state-of-the-art NMF methods. These properties are achieved by considering the quantization residual via an outer quantization optimization step, in an extended NMF iteration, namely QNMF. This approach comes in two forms: QNMF with 1) quasi-fixed and 2) adaptive quantization levels. Quantized NMF considers element-wise quantization constraints in the learning algorithm to eliminate defects due to post factorization quantization. We demonstrate significant reduction in the cardinality of the factor signal values set for comparable Signal-to-Noise-Ratios in a matrix decomposition task.","PeriodicalId":301856,"journal":{"name":"2014 19th International Conference on Digital Signal Processing","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 19th International Conference on Digital Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2014.6900690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Even though Nonnegative Matrix Factorization (NMF) in its original form performs rank reduction and signal compaction implicitly, it does not explicitly consider storage or transmission constraints. We propose a Frobenius-norm Quantized Nonnegative Matrix Factorization algorithm that 1) is almost as precise as traditional NMF for decomposition ranks of interest (within 1-4 dB), 2) admits practical encoding techniques by learning a factorization that is simpler than NMF's (by a factor of 20-70), and 3) exhibits a complexity comparable with state-of-the-art NMF methods. These properties are achieved by accounting for the quantization residual via an outer quantization optimization step in an extended NMF iteration, namely QNMF. The approach comes in two forms: QNMF with 1) quasi-fixed and 2) adaptive quantization levels. Quantized NMF imposes element-wise quantization constraints in the learning algorithm, eliminating the defects introduced by post-factorization quantization. We demonstrate a significant reduction in the cardinality of the set of factor signal values at comparable Signal-to-Noise Ratios in a matrix decomposition task.
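The abstract does not give the exact update rules or level-selection scheme, so the following is only a minimal illustrative sketch of the idea it describes: a standard Frobenius-norm NMF inner update alternated with an outer, element-wise quantization of the factors. The function names (qnmf, quantize), the use of Lee-Seung multiplicative updates, and the uniform choice of quantization levels are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def quantize(X, n_levels):
    """Snap each entry of X to the nearest of n_levels uniform levels in [0, max(X)].
    A stand-in for the paper's quasi-fixed / adaptive level rules (assumption)."""
    levels = np.linspace(0.0, X.max(), n_levels)
    idx = np.abs(X[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def qnmf(V, rank, n_levels=16, n_iters=200, eps=1e-9, seed=0):
    """Hypothetical sketch of a quantized NMF loop for ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps

    for _ in range(n_iters):
        # Inner step: Lee-Seung multiplicative updates for the Frobenius cost.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)

        # Outer step: element-wise quantization of the factors, so the learning
        # algorithm sees the quantization residual rather than quantizing after
        # the factorization. The small eps keeps multiplicative updates from
        # locking entries at exactly zero.
        H = quantize(H, n_levels) + eps
        W = quantize(W, n_levels) + eps

    return W, H
```

In this sketch the storage gain comes from the factors taking values in a set of only n_levels entries, while the inner multiplicative updates repeatedly refit the factorization around those quantized values.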