Applied and Computational Harmonic Analysis — Latest Articles

ANOVA-boosting for random Fourier features
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-06-18, DOI: 10.1016/j.acha.2025.101789
Daniel Potts, Laura Weidensager
{"title":"ANOVA-boosting for random Fourier features","authors":"Daniel Potts,&nbsp;Laura Weidensager","doi":"10.1016/j.acha.2025.101789","DOIUrl":"10.1016/j.acha.2025.101789","url":null,"abstract":"<div><div>We propose two algorithms for boosting random Fourier feature models for approximating high-dimensional functions. These methods utilize the classical and generalized analysis of variance (ANOVA) decomposition to learn low-order functions, where there are few interactions between the variables. Our algorithms are able to find an index set of important input variables and variable interactions reliably.</div><div>Furthermore, we generalize already existing random Fourier feature models to an ANOVA setting, where terms of different order can be used. Our algorithms have the advantage of being interpretable, meaning that the influence of every input variable is known in the learned model, even for dependent input variables. We provide theoretical as well as numerical results that our algorithms perform well for sensitivity analysis. The ANOVA-boosting step reduces the approximation error of existing methods significantly.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101789"},"PeriodicalIF":2.6,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
New results on sparse representations in unions of orthonormal bases
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-06-11, DOI: 10.1016/j.acha.2025.101786
Tao Zhang, Gennian Ge
{"title":"New results on sparse representations in unions of orthonormal bases","authors":"Tao Zhang ,&nbsp;Gennian Ge","doi":"10.1016/j.acha.2025.101786","DOIUrl":"10.1016/j.acha.2025.101786","url":null,"abstract":"<div><div>The problem of sparse representation has significant applications in signal processing. The spark of a dictionary plays a crucial role in the study of sparse representation. Donoho and Elad initially explored the spark, and they provided a general lower bound. When the dictionary is a union of several orthonormal bases, Gribonval and Nielsen presented an improved lower bound for spark. In this paper, we introduce a new construction of dictionary, achieving the spark bound given by Gribonval and Nielsen. More precisely, let <em>q</em> be a power of 2, we show that for any positive integer <em>t</em>, there exists a dictionary in <span><math><msup><mrow><mi>R</mi></mrow><mrow><msup><mrow><mi>q</mi></mrow><mrow><mn>2</mn><mi>t</mi></mrow></msup></mrow></msup></math></span>, which is a union of <span><math><mi>q</mi><mo>+</mo><mn>1</mn></math></span> orthonormal bases, such that the spark of the dictionary attains Gribonval-Nielsen's bound. Our result extends previously best known result from <span><math><mi>t</mi><mo>=</mo><mn>1</mn><mo>,</mo><mn>2</mn></math></span> to arbitrarily positive integer <em>t</em>, and our construction is technically different from previous ones. Their method is more combinatorial, while ours is algebraic, which is more general.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101786"},"PeriodicalIF":2.6,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144262223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generalization analysis of an unfolding network for analysis-based compressed sensing
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-06-06, DOI: 10.1016/j.acha.2025.101787
Vicky Kouni, Yannis Panagakis
{"title":"Generalization analysis of an unfolding network for analysis-based compressed sensing","authors":"Vicky Kouni ,&nbsp;Yannis Panagakis","doi":"10.1016/j.acha.2025.101787","DOIUrl":"10.1016/j.acha.2025.101787","url":null,"abstract":"<div><div>Unfolding networks have shown promising results in the Compressed Sensing (CS) field. Yet, the investigation of their generalization ability is still in its infancy. In this paper, we perform a generalization analysis of a state-of-the-art ADMM-based unfolding network, which jointly learns a decoder for CS and a sparsifying redundant analysis operator. To this end, we first impose a structural constraint on the learnable sparsifier, which parametrizes the network's hypothesis class. For the latter, we estimate its Rademacher complexity. With this estimate in hand, we deliver generalization error bounds – which scale like the square root of the number of layers – for the examined network. Finally, the validity of our theory is assessed and numerical comparisons to a state-of-the-art unfolding network are made, on synthetic and real-world datasets. Our experimental results demonstrate that our proposed framework complies with our theoretical findings and outperforms the baseline, consistently for all datasets.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101787"},"PeriodicalIF":2.6,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computing the proximal operator of the q-th power of the ℓ_{1,q}-norm for group sparsity
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-06-06, DOI: 10.1016/j.acha.2025.101788
Rongrong Lin, Shihai Chen, Han Feng, Yulan Liu
{"title":"Computing the proximal operator of the q-th power of the ℓ1,q-norm for group sparsity","authors":"Rongrong Lin ,&nbsp;Shihai Chen ,&nbsp;Han Feng ,&nbsp;Yulan Liu","doi":"10.1016/j.acha.2025.101788","DOIUrl":"10.1016/j.acha.2025.101788","url":null,"abstract":"<div><div>In this note, we comprehensively characterize the proximal operator of the <em>q</em>-th power of the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow></msub></math></span>-norm (denoted by <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow><mrow><mi>q</mi></mrow></msubsup></math></span>) with <span><math><mn>0</mn><mo>&lt;</mo><mi>q</mi><mo>&lt;</mo><mn>1</mn></math></span> by exploiting the well-known proximal operator of <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mi>q</mi></mrow></msup></math></span> on the real line. In particular, much more explicit characterizations can be obtained whenever <span><math><mi>q</mi><mo>=</mo><mn>1</mn><mo>/</mo><mn>2</mn></math></span> and <span><math><mi>q</mi><mo>=</mo><mn>2</mn><mo>/</mo><mn>3</mn></math></span> due to the existence of closed-form expressions for the proximal operators of <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup></math></span> and <span><math><mo>|</mo><mo>⋅</mo><msup><mrow><mo>|</mo></mrow><mrow><mn>2</mn><mo>/</mo><mn>3</mn></mrow></msup></math></span>. Numerical experiments demonstrate potential advantages of the <span><math><msubsup><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn><mo>,</mo><mi>q</mi></mrow><mrow><mi>q</mi></mrow></msubsup></math></span> regularization in the inter-group and intra-group sparse vector recovery.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101788"},"PeriodicalIF":2.6,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144241941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified stochastic framework for neural network quantization and pruning
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-06-02, DOI: 10.1016/j.acha.2025.101778
Haoyu Zhang, Rayan Saab
{"title":"Unified stochastic framework for neural network quantization and pruning","authors":"Haoyu Zhang ,&nbsp;Rayan Saab","doi":"10.1016/j.acha.2025.101778","DOIUrl":"10.1016/j.acha.2025.101778","url":null,"abstract":"<div><div>Quantization and pruning are two essential techniques for compressing neural networks, yet they are often treated independently, with limited theoretical analysis connecting them. This paper introduces a unified framework for post-training quantization and pruning using stochastic path-following algorithms. Our approach builds on the Stochastic Path Following Quantization (SPFQ) method, extending its applicability to pruning and low-bit quantization, including challenging 1-bit regimes. By incorporating a scaling parameter and generalizing the stochastic operator, the proposed method achieves robust error correction and yields rigorous theoretical error bounds for both quantization and pruning as well as their combination.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101778"},"PeriodicalIF":2.6,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A tighter generalization error bound for wide GCN based on loss landscape
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-05-21, DOI: 10.1016/j.acha.2025.101777
Xianchen Zhou, Kun Hu, Hongxia Wang
{"title":"A tighter generalization error bound for wide GCN based on loss landscape","authors":"Xianchen Zhou ,&nbsp;Kun Hu ,&nbsp;Hongxia Wang","doi":"10.1016/j.acha.2025.101777","DOIUrl":"10.1016/j.acha.2025.101777","url":null,"abstract":"<div><div>The generalization capability of Graph Convolutional Networks (GCNs) has been researched recently. The generalization error bound based on algorithmic stability is obtained for various structures of GCN. However, the generalization error bound computed by this method increases rapidly during the iteration since the algorithmic stability exponential depends on the number of iterations, which is not consistent with the performance of GCNs in practice. Based on the fact that the property of loss landscape, such as convex, exp-concave, or Polyak-Lojasiewicz* (PL*) leads to tighter stability and better generalization error bound, this paper focuses on the semi-supervised loss landscape of wide GCN. It shows that a wide GCN has a Hessian matrix with a small norm, which can lead to a positive definite training tangent kernel. Then GCN's loss can satisfy the PL* condition and lead to a tighter uniform stability independent of the iteration compared with previous work. Therefore, the generalization error bound in this paper depends on the graph filter's norm and layers, which is consistent with the experiments' results.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101777"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144108039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An eigenfunction approach to conversion of the Laplace transform of point masses on the real line to the Fourier domain
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-05-21, DOI: 10.1016/j.acha.2025.101776
Michael E. Mckenna, Hrushikesh N. Mhaskar, Richard G. Spencer
{"title":"An eigenfunction approach to conversion of the Laplace transform of point masses on the real line to the Fourier domain","authors":"Michael E. Mckenna ,&nbsp;Hrushikesh N. Mhaskar ,&nbsp;Richard G. Spencer","doi":"10.1016/j.acha.2025.101776","DOIUrl":"10.1016/j.acha.2025.101776","url":null,"abstract":"<div><div>Motivated by applications in magnetic resonance relaxometry, we consider the following problem: given samples of a function <span><math><mi>t</mi><mo>↦</mo><msubsup><mrow><mo>∑</mo></mrow><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>K</mi></mrow></msubsup><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub><mi>exp</mi><mo>⁡</mo><mo>(</mo><mo>−</mo><mi>t</mi><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>)</mo></math></span>, where <span><math><mi>K</mi><mo>≥</mo><mn>2</mn></math></span> is an integer, <span><math><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>∈</mo><mi>R</mi></math></span>, <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>&gt;</mo><mn>0</mn></math></span> for <span><math><mi>k</mi><mo>=</mo><mn>1</mn><mo>,</mo><mo>⋯</mo><mo>,</mo><mi>K</mi></math></span>, determine <em>K</em>, <span><math><msub><mrow><mi>A</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s and <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s. Unlike the case in which the <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s are purely imaginary, this problem is notoriously ill-posed. Our goal is to show that this problem can be transformed into an equivalent one in which the <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>'s are replaced by <span><math><mi>i</mi><msub><mrow><mi>λ</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span>. We show that this may be accomplished by approximation in terms of Hermite functions, and using the fact that these functions are eigenfunctions of the Fourier transform. We present a preliminary numerical exploration of parameter extraction from this formalism, including the effect of noise. The inherent ill-posedness of the original problem persists in the new domain, as reflected in the numerical results.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101776"},"PeriodicalIF":2.6,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144222723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Framelet message passing
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-05-12, DOI: 10.1016/j.acha.2025.101773
Xinliang Liu, Bingxin Zhou, Chutian Zhang, Yu Guang Wang
{"title":"Framelet message passing","authors":"Xinliang Liu ,&nbsp;Bingxin Zhou ,&nbsp;Chutian Zhang ,&nbsp;Yu Guang Wang","doi":"10.1016/j.acha.2025.101773","DOIUrl":"10.1016/j.acha.2025.101773","url":null,"abstract":"<div><div>Graph neural networks have achieved champions in wide applications. Neural message passing is a typical key module for feature propagation by aggregating neighboring features. In this work, we propose a new message passing based on multiscale framelet transforms, called Framelet Message Passing. Different from traditional spatial methods, it integrates framelet representation of neighbor nodes from multiple hops away in node message update. We also propose a continuous message passing using neural ODE solvers. Both discrete and continuous cases can provably mitigate oversmoothing and achieve superior performance. Numerical experiments on real graph datasets show that the continuous version of the framelet message passing significantly outperforms existing methods when learning heterogeneous graphs and achieves state-of-the-art performance on classic node classification tasks with low computational costs.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101773"},"PeriodicalIF":2.6,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144125322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An oracle gradient regularized Newton method for quadratic measurements regression
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-05-08, DOI: 10.1016/j.acha.2025.101775
Jun Fan, Jie Sun, Ailing Yan, Shenglong Zhou
{"title":"An oracle gradient regularized Newton method for quadratic measurements regression","authors":"Jun Fan ,&nbsp;Jie Sun ,&nbsp;Ailing Yan ,&nbsp;Shenglong Zhou","doi":"10.1016/j.acha.2025.101775","DOIUrl":"10.1016/j.acha.2025.101775","url":null,"abstract":"<div><div>Recovering an unknown signal from quadratic measurements has gained popularity due to its wide range of applications, including phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, we employ a least squares approach to reconstruct the signal and establish its non-asymptotic statistical properties. Our analysis shows that the estimator perfectly recovers the true signal in the noiseless case, while the error between the estimator and the true signal is bounded by <span><math><mi>O</mi><mo>(</mo><msqrt><mrow><mi>p</mi><mi>log</mi><mo>⁡</mo><mo>(</mo><mn>1</mn><mo>+</mo><mn>2</mn><mi>n</mi><mo>)</mo><mo>/</mo><mi>n</mi></mrow></msqrt><mo>)</mo></math></span> in the noisy case, where <em>n</em> is the number of measurements and <em>p</em> is the dimension of the signal. We then develop a two-phase algorithm, gradient regularized Newton method (GRNM), to solve the least squares problem. It is proven that the first phase terminates within finitely many steps, and the sequence generated in the second phase converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM is capable of exactly reconstructing the true signal in the noiseless case and achieving the stated error rate with a high probability in the noisy case. Numerical experiments demonstrate that GRNM offers a high level of recovery capability and accuracy as well as fast computational speed.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101775"},"PeriodicalIF":2.6,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143935916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A parameter-free two-bit covariance estimator with improved operator norm error rate
IF 2.6 · CAS Tier 2 · Mathematics
Applied and Computational Harmonic Analysis, Pub Date: 2025-05-02, DOI: 10.1016/j.acha.2025.101774
Junren Chen, Michael K. Ng
{"title":"A parameter-free two-bit covariance estimator with improved operator norm error rate","authors":"Junren Chen ,&nbsp;Michael K. Ng","doi":"10.1016/j.acha.2025.101774","DOIUrl":"10.1016/j.acha.2025.101774","url":null,"abstract":"<div><div>A covariance matrix estimator using two bits per entry was recently developed by Dirksen et al. (2022) <span><span>[11]</span></span>. The estimator achieves near minimax operator norm rate for general sub-Gaussian distributions, but also suffers from two downsides: theoretically, there is an essential gap on operator norm error between their estimator and sample covariance when the diagonal of the covariance matrix is dominated by only a few entries; practically, its performance heavily relies on the dithering scale, which needs to be tuned according to some unknown parameters. In this work, we propose a new 2-bit covariance matrix estimator that simultaneously addresses both issues. Unlike the sign quantizer associated with uniform dither in Dirksen et al., we adopt a triangular dither prior to a 2-bit quantizer inspired by the multi-bit uniform quantizer. By employing dithering scales varying across entries, our estimator enjoys an improved operator norm error rate that depends on the effective rank of the underlying covariance matrix rather than the ambient dimension, which is optimal up to logarithmic factors. Moreover, our proposed method eliminates the need of <em>any</em> tuning parameter, as the dithering scales are entirely determined by the data. While our estimator requires a pass of all unquantized samples to determine the dithering scales, it can be adapted to the online setting where the samples arise sequentially. Experimental results are provided to demonstrate the advantages of our estimators over the existing ones.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101774"},"PeriodicalIF":2.6,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143903641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0