Omar De la Cruz Cabrera, Jiafeng Jin, Lothar Reichel
{"title":"复杂网络的稀疏逼近","authors":"Omar De la Cruz Cabrera, Jiafeng Jin, Lothar Reichel","doi":"10.1016/j.apnum.2024.01.002","DOIUrl":null,"url":null,"abstract":"<div><div><span>This paper considers the problem of recovering a sparse approximation </span><span><math><mi>A</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></math></span><span> of an unknown (exact) adjacency matrix </span><span><math><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub></math></span> for a network from a corrupted version of a communicability matrix <span><math><mi>C</mi><mo>=</mo><mi>exp</mi><mo></mo><mo>(</mo><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub><mo>)</mo><mo>+</mo><mi>N</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></math></span>, where <strong>N</strong> denotes an unknown “noise matrix”. We consider two methods for determining an approximation <strong>A</strong> of <span><math><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub></math></span>: <span><math><mo>(</mo><mrow><mi>i</mi><mo>)</mo></mrow></math></span><span> a Newton method with soft-thresholding and line search, and </span><span><math><mo>(</mo><mrow><mi>ii</mi><mo>)</mo></mrow></math></span><span> a proximal gradient method with line search. These methods are applied to compute the solution of the minimization problem</span><span><span><span><math><munder><mrow><mi>arg</mi><mo></mo><mi>min</mi></mrow><mrow><mi>A</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></mrow></munder><mo>{</mo><msubsup><mrow><mo>‖</mo><mi>exp</mi><mo></mo><mo>(</mo><mi>A</mi><mo>)</mo><mo>−</mo><mi>C</mi><mo>‖</mo></mrow><mrow><mi>F</mi></mrow><mrow><mn>2</mn></mrow></msubsup><mo>+</mo><mi>μ</mi><msub><mrow><mo>‖</mo><mtext>vec</mtext><mo>(</mo><mi>A</mi><mo>)</mo><mo>‖</mo></mrow><mrow><mn>1</mn></mrow></msub><mo>}</mo><mo>,</mo></math></span></span></span> where <span><math><mi>μ</mi><mo>></mo><mn>0</mn></math></span><span> is a regularization parameter that controls the amount of shrinkage. We discuss the effect of </span><em>μ</em><span> on the computed solution, conditions for convergence, and the rate of convergence of the methods. 
Computed examples illustrate their performance when applied to directed and undirected networks.</span></div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 170-188"},"PeriodicalIF":2.2000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sparse approximation of complex networks\",\"authors\":\"Omar De la Cruz Cabrera, Jiafeng Jin, Lothar Reichel\",\"doi\":\"10.1016/j.apnum.2024.01.002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div><span>This paper considers the problem of recovering a sparse approximation </span><span><math><mi>A</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></math></span><span> of an unknown (exact) adjacency matrix </span><span><math><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub></math></span> for a network from a corrupted version of a communicability matrix <span><math><mi>C</mi><mo>=</mo><mi>exp</mi><mo></mo><mo>(</mo><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub><mo>)</mo><mo>+</mo><mi>N</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></math></span>, where <strong>N</strong> denotes an unknown “noise matrix”. We consider two methods for determining an approximation <strong>A</strong> of <span><math><msub><mrow><mi>A</mi></mrow><mrow><mtext>true</mtext></mrow></msub></math></span>: <span><math><mo>(</mo><mrow><mi>i</mi><mo>)</mo></mrow></math></span><span> a Newton method with soft-thresholding and line search, and </span><span><math><mo>(</mo><mrow><mi>ii</mi><mo>)</mo></mrow></math></span><span> a proximal gradient method with line search. These methods are applied to compute the solution of the minimization problem</span><span><span><span><math><munder><mrow><mi>arg</mi><mo></mo><mi>min</mi></mrow><mrow><mi>A</mi><mo>∈</mo><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></mrow></munder><mo>{</mo><msubsup><mrow><mo>‖</mo><mi>exp</mi><mo></mo><mo>(</mo><mi>A</mi><mo>)</mo><mo>−</mo><mi>C</mi><mo>‖</mo></mrow><mrow><mi>F</mi></mrow><mrow><mn>2</mn></mrow></msubsup><mo>+</mo><mi>μ</mi><msub><mrow><mo>‖</mo><mtext>vec</mtext><mo>(</mo><mi>A</mi><mo>)</mo><mo>‖</mo></mrow><mrow><mn>1</mn></mrow></msub><mo>}</mo><mo>,</mo></math></span></span></span> where <span><math><mi>μ</mi><mo>></mo><mn>0</mn></math></span><span> is a regularization parameter that controls the amount of shrinkage. We discuss the effect of </span><em>μ</em><span> on the computed solution, conditions for convergence, and the rate of convergence of the methods. 
Computed examples illustrate their performance when applied to directed and undirected networks.</span></div></div>\",\"PeriodicalId\":8199,\"journal\":{\"name\":\"Applied Numerical Mathematics\",\"volume\":\"208 \",\"pages\":\"Pages 170-188\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Numerical Mathematics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0168927424000023\",\"RegionNum\":2,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Numerical Mathematics","FirstCategoryId":"100","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0168927424000023","RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
引用次数: 0
Abstract
This paper considers the problem of recovering a sparse approximation $A \in \mathbb{R}^{n \times n}$ of an unknown (exact) adjacency matrix $A_{\text{true}}$ for a network from a corrupted version of a communicability matrix $C = \exp(A_{\text{true}}) + N \in \mathbb{R}^{n \times n}$, where $N$ denotes an unknown "noise matrix". We consider two methods for determining an approximation $A$ of $A_{\text{true}}$: (i) a Newton method with soft-thresholding and line search, and (ii) a proximal gradient method with line search. These methods are applied to compute the solution of the minimization problem
$$\arg\min_{A \in \mathbb{R}^{n \times n}} \left\{ \|\exp(A) - C\|_F^2 + \mu \, \|\operatorname{vec}(A)\|_1 \right\},$$
where $\mu > 0$ is a regularization parameter that controls the amount of shrinkage. We discuss the effect of $\mu$ on the computed solution, conditions for convergence, and the rate of convergence of the methods. Computed examples illustrate their performance when applied to directed and undirected networks.
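To illustrate the flavor of the second approach, the sketch below implements a proximal gradient iteration with backtracking line search for this objective. It uses entrywise soft-thresholding as the proximal map of the $\ell_1$ term and SciPy's `expm_frechet` to evaluate the gradient of the smooth term via the adjoint of the Fréchet derivative of the matrix exponential. This is a minimal, hypothetical sketch rather than the authors' implementation; the initial step size, stopping rule, and zero initialization are assumptions.

```python
import numpy as np
from scipy.linalg import expm, expm_frechet


def soft_threshold(X, tau):
    """Entrywise soft-thresholding: the proximal map of tau * ||vec(.)||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)


def sparse_expm_fit(C, mu, t0=1.0, max_iter=500, tol=1e-8):
    """Proximal gradient sketch for
        min_A ||expm(A) - C||_F^2 + mu * ||vec(A)||_1
    with backtracking line search (illustrative; not the paper's code)."""
    n = C.shape[0]
    A = np.zeros((n, n))  # zero initialization is an assumption

    def smooth(A):
        # smooth part of the objective
        return np.linalg.norm(expm(A) - C, 'fro') ** 2

    f_A = smooth(A)
    for _ in range(max_iter):
        R = expm(A) - C
        # Gradient of the smooth term: 2 * L_exp(A^T)[R], where L_exp(A^T, .) is
        # the adjoint of the Frechet derivative of the matrix exponential at A.
        grad = 2.0 * expm_frechet(A.T, R, compute_expm=False)

        t = t0
        for _ in range(50):  # backtracking on the proximal step
            A_new = soft_threshold(A - t * grad, t * mu)
            D = A_new - A
            # standard sufficient-decrease condition for proximal gradient
            if smooth(A_new) <= f_A + np.sum(grad * D) + np.sum(D * D) / (2.0 * t):
                break
            t *= 0.5

        if np.linalg.norm(A_new - A, 'fro') <= tol * max(1.0, np.linalg.norm(A, 'fro')):
            return A_new
        A, f_A = A_new, smooth(A_new)
    return A
```

As a quick sanity check, one can build a small adjacency matrix `A_true`, form `C = expm(A_true)` plus a small random perturbation, and compare the sparsity pattern of the recovered `A` with that of `A_true` for a few values of `mu`.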
Journal description:
The purpose of the journal is to provide a forum for the publication of high-quality research and tutorial papers in computational mathematics. In addition to the traditional issues and problems in numerical analysis, the journal also publishes papers describing relevant applications in such fields as physics, fluid dynamics, engineering, and other branches of applied science with a computational mathematics component. The journal strives to be flexible in the type of papers it publishes and their format. Equally desirable are:
(i) Full papers, which should be complete and relatively self-contained original contributions with an introduction that can be understood by the broad computational mathematics community. Both rigorous and heuristic styles are acceptable. Of particular interest are papers about new areas of research, in which arguments other than strictly mathematical ones may be important in establishing a basis for further developments.
(ii) Tutorial review papers, covering some of the important issues in Numerical Mathematics, Scientific Computing and their Applications. The journal will occasionally publish contributions that are longer than the usual format for regular papers.
(iii) Short notes, which present specific new results and techniques in a brief communication.