Computing Krylov iterates in the time of matrix multiplication

Vincent Neiger, Clément Pernet, Gilles Villard
{"title":"在矩阵乘法时间内计算克雷洛夫迭代","authors":"Vincent Neiger, Clément Pernet, Gilles Villard","doi":"arxiv-2402.07345","DOIUrl":null,"url":null,"abstract":"Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an\n$n\\times n$ matrix $A$ and vectors $u_1,\\ldots,u_m$. The space spanned by all\niterates $A^k u_j$ admits a particular basis -- the \\emph{maximal Krylov basis}\n-- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\\ldots$,\nuntil reaching linear dependency, then iterating similarly the subsequent\nvectors until a basis is obtained. Finding minimal polynomials and Frobenius\nnormal forms is closely related to computing maximal Krylov bases. The fastest\nway to produce these bases was, until this paper, Keller-Gehrig's 1985\nalgorithm whose complexity bound $O(n^\\omega \\log(n))$ comes from repeated\nsquarings of $A$ and logarithmically many Gaussian eliminations. Here\n$\\omega>2$ is a feasible exponent for matrix multiplication over the base\nfield. We present an algorithm computing the maximal Krylov basis in\n$O(n^\\omega\\log\\log(n))$ field operations when $m \\in O(n)$, and even\n$O(n^\\omega)$ as soon as $m\\in O(n/\\log(n)^c)$ for some fixed real $c>0$. As a\nconsequence, we show that the Frobenius normal form together with a\ntransformation matrix can be computed deterministically in $O(n^\\omega\n\\log\\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed in\nthe latter complexity if $\\log(k) \\in O(n^{\\omega-1-\\varepsilon})$, for\n$\\varepsilon>0$. A key idea for these improvements is to rely on fast\nalgorithms for $m\\times m$ polynomial matrices of average degree $n/m$,\ninvolving high-order lifting and minimal kernel bases.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Computing Krylov iterates in the time of matrix multiplication\",\"authors\":\"Vincent Neiger, Clément Pernet, Gilles Villard\",\"doi\":\"arxiv-2402.07345\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an\\n$n\\\\times n$ matrix $A$ and vectors $u_1,\\\\ldots,u_m$. The space spanned by all\\niterates $A^k u_j$ admits a particular basis -- the \\\\emph{maximal Krylov basis}\\n-- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\\\\ldots$,\\nuntil reaching linear dependency, then iterating similarly the subsequent\\nvectors until a basis is obtained. Finding minimal polynomials and Frobenius\\nnormal forms is closely related to computing maximal Krylov bases. The fastest\\nway to produce these bases was, until this paper, Keller-Gehrig's 1985\\nalgorithm whose complexity bound $O(n^\\\\omega \\\\log(n))$ comes from repeated\\nsquarings of $A$ and logarithmically many Gaussian eliminations. Here\\n$\\\\omega>2$ is a feasible exponent for matrix multiplication over the base\\nfield. We present an algorithm computing the maximal Krylov basis in\\n$O(n^\\\\omega\\\\log\\\\log(n))$ field operations when $m \\\\in O(n)$, and even\\n$O(n^\\\\omega)$ as soon as $m\\\\in O(n/\\\\log(n)^c)$ for some fixed real $c>0$. 
As a\\nconsequence, we show that the Frobenius normal form together with a\\ntransformation matrix can be computed deterministically in $O(n^\\\\omega\\n\\\\log\\\\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed in\\nthe latter complexity if $\\\\log(k) \\\\in O(n^{\\\\omega-1-\\\\varepsilon})$, for\\n$\\\\varepsilon>0$. A key idea for these improvements is to rely on fast\\nalgorithms for $m\\\\times m$ polynomial matrices of average degree $n/m$,\\ninvolving high-order lifting and minimal kernel bases.\",\"PeriodicalId\":501033,\"journal\":{\"name\":\"arXiv - CS - Symbolic Computation\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Symbolic Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2402.07345\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Symbolic Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2402.07345","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an $n\times n$ matrix $A$ and vectors $u_1,\ldots,u_m$. The space spanned by all iterates $A^k u_j$ admits a particular basis -- the \emph{maximal Krylov basis} -- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\ldots$, until reaching linear dependency, then iterating similarly the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm whose complexity bound $O(n^\omega \log(n))$ comes from repeated squarings of $A$ and logarithmically many Gaussian eliminations. Here $\omega>2$ is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in $O(n^\omega\log\log(n))$ field operations when $m \in O(n)$, and even $O(n^\omega)$ as soon as $m\in O(n/\log(n)^c)$ for some fixed real $c>0$. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in $O(n^\omega \log\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed in the latter complexity if $\log(k) \in O(n^{\omega-1-\varepsilon})$, for $\varepsilon>0$. A key idea for these improvements is to rely on fast algorithms for $m\times m$ polynomial matrices of average degree $n/m$, involving high-order lifting and minimal kernel bases.
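
The Python sketch below is not from the paper; it only illustrates the greedy definition stated in the abstract: for each u_j in turn, keep appending the iterates A^k u_j until the new iterate becomes linearly dependent on the vectors collected so far. Exact rational arithmetic via fractions.Fraction is assumed as a stand-in for an arbitrary base field, and the names maximal_krylov_basis, matvec and _rank are illustrative only; the repeated rank tests make this far slower than the $O(n^\omega\log\log(n))$ algorithm of the paper.

    from fractions import Fraction

    def _rank(rows):
        """Rank of a list of row vectors over Q, by Gauss-Jordan elimination."""
        m = [list(r) for r in rows]
        if not m:
            return 0
        rank, ncols = 0, len(m[0])
        for col in range(ncols):
            pivot = next((i for i in range(rank, len(m)) if m[i][col] != 0), None)
            if pivot is None:
                continue
            m[rank], m[pivot] = m[pivot], m[rank]
            for i in range(len(m)):
                if i != rank and m[i][col] != 0:
                    f = Fraction(m[i][col]) / Fraction(m[rank][col])
                    m[i] = [a - f * b for a, b in zip(m[i], m[rank])]
            rank += 1
        return rank

    def matvec(A, v):
        """Matrix-vector product A*v with A given as a list of rows."""
        return [sum(a * x for a, x in zip(row, v)) for row in A]

    def maximal_krylov_basis(A, vectors):
        """Greedy maximal Krylov basis: for each u_j, append u_j, A u_j, A^2 u_j, ...
        while the collected vectors stay linearly independent, then move on to u_{j+1}."""
        n = len(A)
        basis = []
        for u in vectors:
            v = list(u)
            while len(basis) < n:
                if _rank(basis + [v]) == len(basis):  # v depends on the basis so far
                    break
                basis.append(v)
                v = matvec(A, v)
        return basis

    # Small demo: a 2x2 nilpotent matrix, starting vectors e_2 then e_1.
    A = [[0, 1], [0, 0]]
    print(maximal_krylov_basis(A, [[0, 1], [1, 0]]))  # -> [[0, 1], [1, 0]]

For contrast with this naive construction, Keller-Gehrig's 1985 algorithm reaches its $O(n^\omega \log(n))$ bound by repeatedly squaring $A$, which doubles the number of iterates produced per matrix multiplication, interleaved with logarithmically many Gaussian eliminations; the paper's improvement replaces those eliminations with fast algorithms on $m\times m$ polynomial matrices of average degree $n/m$, using high-order lifting and minimal kernel bases.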