Quantized nonnegative matrix factorization

R. Fréin
{"title":"Quantized nonnegative matrix factorization","authors":"R. Fréin","doi":"10.1109/ICDSP.2014.6900690","DOIUrl":null,"url":null,"abstract":"Even though Nonnegative Matrix Factorization (NMF) in its original form performs rank reduction and signal compaction implicitly, it does not explicitly consider storage or transmission constraints. We propose a Frobenius-norm Quantized Nonnegative Matrix Factorization algorithm that is 1) almost as precise as traditional NMF for decomposition ranks of interest (with in 1-4dB), 2) admits to practical encoding techniques by learning a factorization which is simpler than NMF's (by a factor of 20-70) and 3) exhibits a complexity which is comparable with state-of-the-art NMF methods. These properties are achieved by considering the quantization residual via an outer quantization optimization step, in an extended NMF iteration, namely QNMF. This approach comes in two forms: QNMF with 1) quasi-fixed and 2) adaptive quantization levels. Quantized NMF considers element-wise quantization constraints in the learning algorithm to eliminate defects due to post factorization quantization. We demonstrate significant reduction in the cardinality of the factor signal values set for comparable Signal-to-Noise-Ratios in a matrix decomposition task.","PeriodicalId":301856,"journal":{"name":"2014 19th International Conference on Digital Signal Processing","volume":"120 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 19th International Conference on Digital Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2014.6900690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Even though Nonnegative Matrix Factorization (NMF) in its original form performs rank reduction and signal compaction implicitly, it does not explicitly consider storage or transmission constraints. We propose a Frobenius-norm Quantized Nonnegative Matrix Factorization algorithm that 1) is almost as precise as traditional NMF for decomposition ranks of interest (within 1-4 dB), 2) admits practical encoding techniques by learning a factorization that is simpler than NMF's (by a factor of 20-70), and 3) exhibits complexity comparable with state-of-the-art NMF methods. These properties are achieved by considering the quantization residual via an outer quantization optimization step within an extended NMF iteration, namely QNMF. This approach comes in two forms: QNMF with 1) quasi-fixed and 2) adaptive quantization levels. Quantized NMF considers element-wise quantization constraints in the learning algorithm to eliminate defects due to post-factorization quantization. We demonstrate a significant reduction in the cardinality of the set of factor signal values at comparable Signal-to-Noise Ratios in a matrix decomposition task.
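The abstract describes the mechanism only at a high level. The following is a minimal NumPy sketch of what a Frobenius-norm NMF loop with an outer quantization step might look like, assuming standard Lee-Seung multiplicative updates and either a uniform ("quasi-fixed") or quantile-based ("adaptive") level set. The function name `qnmf`, the level-selection rules, and all parameter choices are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def qnmf(V, rank, n_levels=16, n_iter=200, adaptive=True, eps=1e-9, seed=0):
    """Illustrative sketch (not the paper's reference code) of Frobenius-norm
    NMF with an outer, element-wise quantization step: multiplicative updates
    refine W and H, then each factor is snapped to a small set of levels so
    the stored factorization uses few distinct values."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps

    def quantize(X, levels):
        # Snap every entry of X to its nearest quantization level.
        idx = np.abs(X[..., None] - levels[None, None, :]).argmin(axis=-1)
        return levels[idx]

    for _ in range(n_iter):
        # Inner step: standard Frobenius-norm multiplicative updates.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)

        # Outer step: choose quantization levels, then snap the factors.
        if adaptive:
            # Adaptive levels (assumption): quantiles of the current factors.
            qs = np.linspace(0.0, 1.0, n_levels)
            levels_W = np.quantile(W, qs)
            levels_H = np.quantile(H, qs)
        else:
            # Quasi-fixed levels (assumption): uniform grid over the range.
            levels_W = np.linspace(0.0, W.max(), n_levels)
            levels_H = np.linspace(0.0, H.max(), n_levels)
        # Clamp away from zero so multiplicative updates do not lock entries.
        W = np.maximum(quantize(W, levels_W), eps)
        H = np.maximum(quantize(H, levels_H), eps)

    return W, H
```

Under these assumptions, each factor draws its entries from at most `n_levels` distinct values, which is the kind of reduced-cardinality factorization the abstract argues is easier to encode for storage or transmission than an unconstrained NMF.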