Learning anomalous features via sparse coding using matrix norms

Bradley M. Whitaker, David V. Anderson
{"title":"Learning anomalous features via sparse coding using matrix norms","authors":"Bradley M. Whitaker, David V. Anderson","doi":"10.1109/DSP-SPE.2015.7369552","DOIUrl":null,"url":null,"abstract":"Our goal is to find anomalous features in a dataset using the sparse coding concept of dictionary learning. Rather than using the averaged column ℓ2-norm for the dictionary update as is typically done in sparse coding, we explore using three matrix norms: ∥·∥1, ∥·∥2, and ∥·∥∞. Minimizing the matrix norms represents minimizing a maximum deviation in the reconstruction error rather than an average deviation, hopefully allowing us to find features that contribute significantly but infrequently to sample training points. We find that while solving for the dictionaries using matrix norm minimization takes longer to compute, all three methods are able to recover a known basis from a simple set of training data. In addition, the ∥·∥1 matrix norm is able to recover a known anomalous feature in the training data that the other norms (including the standard averaged ℓ2-norm) are unable to find.","PeriodicalId":91992,"journal":{"name":"2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE)","volume":"2 1","pages":"196-201"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSP-SPE.2015.7369552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Our goal is to find anomalous features in a dataset using the sparse coding concept of dictionary learning. Rather than using the averaged column ℓ2-norm for the dictionary update as is typically done in sparse coding, we explore using three matrix norms: ∥·∥1, ∥·∥2, and ∥·∥∞. Minimizing the matrix norms represents minimizing a maximum deviation in the reconstruction error rather than an average deviation, hopefully allowing us to find features that contribute significantly but infrequently to sample training points. We find that while solving for the dictionaries using matrix norm minimization takes longer to compute, all three methods are able to recover a known basis from a simple set of training data. In addition, the ∥·∥1 matrix norm is able to recover a known anomalous feature in the training data that the other norms (including the standard averaged ℓ2-norm) are unable to find.
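To make the dictionary-update idea concrete, below is a minimal sketch, not the authors' implementation, of an alternating scheme in which the dictionary step minimizes a chosen matrix norm of the residual X − DA rather than the usual averaged column ℓ2 (Frobenius) objective. The problem sizes, the crude thresholded least-squares coding step, and the generic derivative-free SciPy solver are all assumptions made for illustration; the paper does not specify these details.

```python
# Minimal sketch (assumptions noted in comments, not the authors' code):
# dictionary learning where the dictionary update minimizes a matrix norm of X - D A.
import numpy as np
from scipy.optimize import minimize

def dictionary_update(X, A, D0, norm_ord):
    """Minimize ||X - D A||_norm_ord over D, starting from D0.

    norm_ord=1      -> induced 1-norm (max absolute column sum), the norm that
                       recovered the anomalous feature in the paper
    norm_ord=2      -> induced 2-norm (largest singular value)
    norm_ord=np.inf -> induced inf-norm (max absolute row sum)
    norm_ord='fro'  -> Frobenius norm, comparable to the usual averaged-l2 update
    """
    m, k = D0.shape

    def objective(d_flat):
        D = d_flat.reshape(m, k)
        return np.linalg.norm(X - D @ A, ord=norm_ord)

    # Powell is a derivative-free choice that tolerates the non-smooth matrix norms;
    # the paper's actual optimization procedure may differ.
    res = minimize(objective, D0.ravel(), method='Powell', options={'maxiter': 200})
    D = res.x.reshape(m, k)
    return D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit-norm atoms


if __name__ == '__main__':
    rng = np.random.default_rng(0)
    m, k, n = 8, 5, 40                       # signal dim, atoms, samples (illustrative only)
    D_true = rng.standard_normal((m, k))
    D_true /= np.linalg.norm(D_true, axis=0)
    A_true = rng.standard_normal((k, n)) * (rng.random((k, n)) < 0.3)  # sparse codes
    X = D_true @ A_true

    D = rng.standard_normal((m, k))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(10):
        # Sparse-coding step: least squares plus hard thresholding, a crude stand-in
        # for OMP/lasso; the paper does not prescribe this particular coding step.
        A = np.linalg.lstsq(D, X, rcond=None)[0]
        A[np.abs(A) < 0.1] = 0.0
        D = dictionary_update(X, A, D, norm_ord=1)  # try 2, np.inf, or 'fro' to compare
    print('final ||X - DA||_1 =', np.linalg.norm(X - D @ A, ord=1))
```

The intuition the abstract gives carries over directly: a Frobenius-style update penalizes the average reconstruction error, so a large error confined to a few columns can be washed out, while the induced matrix norms bound a worst-case deviation, which is why the ∥·∥1 update can latch onto a feature that appears significantly but infrequently.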