Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

A. Cichocki, Namgil Lee, I. Oseledets, A. Phan, Qibin Zhao, D. Mandic
{"title":"Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions","authors":"A. Cichocki, Namgil Lee, I. Oseledets, A. Phan, Qibin Zhao, D. Mandic","doi":"10.1561/2200000059","DOIUrl":null,"url":null,"abstract":"Modern applications in engineering and data science are increasinglybased on multidimensional data of exceedingly high volume, variety,and structural richness. However, standard machine learning algorithmstypically scale exponentially with data volume and complexityof cross-modal couplings - the so called curse of dimensionality -which is prohibitive to the analysis of large-scale, multi-modal andmulti-relational datasets. Given that such data are often efficientlyrepresented as multiway arrays or tensors, it is therefore timely andvaluable for the multidisciplinary machine learning and data analyticcommunities to review low-rank tensor decompositions and tensor networksas emerging tools for dimensionality reduction and large scaleoptimization problems. Our particular emphasis is on elucidating that,by virtue of the underlying low-rank approximations, tensor networkshave the ability to alleviate the curse of dimensionality in a numberof applied areas. In Part 1 of this monograph we provide innovativesolutions to low-rank tensor network decompositions and easy to interpretgraphical representations of the mathematical operations ontensor networks. Such a conceptual insight allows for seamless migrationof ideas from the flat-view matrices to tensor network operationsand vice versa, and provides a platform for further developments, practicalapplications, and non-Euclidean extensions. It also permits theintroduction of various tensor network operations without an explicitnotion of mathematical expressions, which may be beneficial for manyresearch communities that do not directly rely on multilinear algebra.Our focus is on the Tucker and tensor train TT decompositions andtheir extensions, and on demonstrating the ability of tensor networksto provide linearly or even super-linearly e.g., logarithmically scalablesolutions, as illustrated in detail in Part 2 of this monograph.","PeriodicalId":431372,"journal":{"name":"Found. Trends Mach. Learn.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"378","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Found. Trends Mach. Learn.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1561/2200000059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 378

Abstract

Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so-called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors, it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large-scale optimization problems. Our particular emphasis is on elucidating that, by virtue of the underlying low-rank approximations, tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas. In Part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy-to-interpret graphical representations of the mathematical operations on tensor networks. Such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa, and provides a platform for further developments, practical applications, and non-Euclidean extensions. It also permits the introduction of various tensor network operations without an explicit notion of mathematical expressions, which may be beneficial for many research communities that do not directly rely on multilinear algebra. Our focus is on the Tucker and tensor train (TT) decompositions and their extensions, and on demonstrating the ability of tensor networks to provide linearly or even super-linearly (e.g., logarithmically) scalable solutions, as illustrated in detail in Part 2 of this monograph.
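To make the tensor train (TT) format mentioned in the abstract concrete, the sketch below implements the standard TT-SVD idea: a d-way array is factored into a chain of 3-way cores by sequential truncated SVDs of matrix unfoldings. This is a minimal illustration in NumPy, not code from the monograph; the function name `tt_svd`, the tolerance `eps`, and the rank-one test tensor are illustrative assumptions.

```python
# Minimal TT-SVD sketch: decompose a d-way tensor into TT cores of shape
# (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1, by sequential
# truncated SVDs. Names and the tolerance parameter are illustrative only.
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Return a list of TT cores approximating `tensor`."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1                                    # boundary TT rank r_0 = 1
    C = np.asarray(tensor, dtype=float)
    for k in range(d - 1):
        # Unfold the remaining tensor into a matrix and truncate its SVD.
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))   # keep significant singular values
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = np.diag(S[:r]) @ Vt[:r]               # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1)) # boundary TT rank r_d = 1
    return cores

# Usage: compress a rank-one 4-way tensor and check the reconstruction error.
X = np.einsum('i,j,k,l->ijkl', *(np.random.rand(n) for n in (4, 5, 6, 7)))
cores = tt_svd(X)
Y = cores[0]
for G in cores[1:]:
    Y = np.tensordot(Y, G, axes=([-1], [0]))      # contract adjacent TT cores
Y = Y.reshape(X.shape)                            # drop the singleton boundary ranks
print([G.shape for G in cores],
      np.linalg.norm(X - Y) / np.linalg.norm(X))  # TT ranks and relative error
```

For a tensor with d modes of size n and TT ranks bounded by r, storage drops from n^d entries for the full array to O(d n r^2) for the cores, which is the sense in which low-rank tensor networks alleviate the curse of dimensionality discussed above.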