STOL: Spatio-Temporal Online Dictionary Learning for Low Bit-Rate Video Coding

Xin Tang, H. Xiong
DOI: 10.1109/DCC.2013.101
Published in: 2013 Data Compression Conference, 2013-03-20
Citations: 1

Abstract

To speed up the convergence of dictionary learning in low bit-rate video coding, this paper proposes a spatio-temporal online dictionary learning (STOL) algorithm that improves on the original adaptive regularized dictionary learning with K-SVD, which incurs high computational complexity and harms coding efficiency. Because the intrinsic dimensionality of the primitives used to train each series of 2-D sub-dictionaries is low, a 3-D low-frequency and high-frequency dictionary pair is formed by online dictionary learning, updating the atoms for optimal sparse representation and convergence. Instead of classical first-order stochastic gradient descent on the constraint set, as in K-SVD, the online algorithm exploits the structure of sparse coding to design an optimization procedure based on stochastic approximations. It has low memory consumption and low computational cost, and requires no explicit learning-rate tuning. By drawing a cube from i.i.d. samples of a distribution in each inner loop and alternating classical sparse coding steps that compute the decomposition coefficients of the cube over the previous dictionary, the dictionary update problem is converted into minimizing the expected cost rather than the empirical cost. For training data that arrive dynamically over time, online dictionary learning converges faster than second-order batch alternatives such as K-SVD. Experiments show that STOL-based super-resolution reconstruction reduces computational complexity to 40%–50% of that of K-SVD learning-based schemes while maintaining accuracy.
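The alternation the abstract describes — a sparse coding step on each freshly drawn sample, followed by a learning-rate-free dictionary update that minimizes a surrogate of the expected cost — is the core loop of classical online dictionary learning. The sketch below illustrates that loop in NumPy; it is not the authors' STOL implementation, and the greedy OMP coder, atom count, and sparsity level are illustrative assumptions:

```python
import numpy as np

def sparse_code(D, x, n_nonzero=3):
    """Greedy OMP: decomposition coefficients of sample x over dictionary D."""
    alpha = np.zeros(D.shape[1])
    residual = x.copy()
    support = []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[support] = coef
        residual = x - D[:, support] @ coef
    return alpha

def online_dictionary_learning(samples, n_atoms=8, n_nonzero=3, seed=0):
    """One pass of online dictionary learning over i.i.d. training samples."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms (constraint set)
    A = np.zeros((n_atoms, n_atoms))        # running sum of alpha @ alpha.T
    B = np.zeros((dim, n_atoms))            # running sum of x @ alpha.T
    for x in samples:                       # one drawn sample per inner loop
        alpha = sparse_code(D, x, n_nonzero)
        A += np.outer(alpha, alpha)
        B += np.outer(x, alpha)
        # Block-coordinate descent on the atoms: minimizes a surrogate of
        # the expected cost, with no explicit learning rate to tune.
        for j in range(n_atoms):
            if A[j, j] < 1e-12:             # atom never used yet; skip
                continue
            u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)  # project to unit ball
    return D
```

The accumulated statistics `A` and `B` summarize all past sparse codes, so each atom update is a closed-form coordinate step rather than a gradient step — which is why, as the abstract notes, no explicit learning-rate tuning is needed.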