{"title":"An augmented Lagrangian approach for cardinality constrained minimization applied to variable selection problems","authors":"N. Krejić , E.H.M. Krulikovski , M. Raydan","doi":"10.1016/j.apnum.2023.12.006","DOIUrl":"10.1016/j.apnum.2023.12.006","url":null,"abstract":"<div><div>To solve convex constrained minimization problems, that also include a cardinality constraint, we propose an augmented Lagrangian scheme combined with alternating projection ideas. Optimization problems that involve a cardinality constraint are NP-hard mathematical programs and typically very hard to solve approximately. Our approach takes advantage of a recently developed and analyzed continuous formulation that relaxes the cardinality constraint. Based on that formulation, we solve a sequence of smooth convex constrained minimization problems, for which we use projected gradient-type methods. In our setting, the convex constraint region can be written as the intersection of a finite collection of convex sets that are easy and inexpensive to project. We apply our approach to a variety of over and under determined constrained linear least-squares problems, with both synthetic and real data that arise in variable selection, and demonstrate its effectiveness.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 284-296"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138820449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the efficacy of conditioned and progressive Latin hypercube sampling in supervised machine learning","authors":"Ioannis Iordanis, Christos Koukouvinos, Iliana Silou","doi":"10.1016/j.apnum.2023.12.016","DOIUrl":"10.1016/j.apnum.2023.12.016","url":null,"abstract":"<div><div><span>In this paper, Latin Hypercube Sampling<span> (LHS) method is compared as per its effectiveness in supervised machine learning procedures. Employing LHS saves computer's processing time and in conjunction with Latin hypercube design properties and space filling ability, is considered as one of the most advanced mechanisms in terms of sampling. Although more data usually deliver better results, when using LHS techniques, same quality outputs can be produced with less data and, as a result, </span></span>storage cost<span> and training time are reduced. Conditioned Latin Hypercube Sampling (cLHS) is one of those techniques, successfully performing in supervised machine learning tasks. Unfortunately, the minimum sufficient training dataset size cannot be known in advance. In this case, progressive sampling is recommended since it begins with a small sample and progressively increases its size until model accuracy no longer improves. Combining Latin hypercube sampling and the idea of sequentially incrementing sampling, we test Progressive Latin Hypercube Sampling (PLHS) while monitoring the performance of the sampling-based training as the sample size grows. PLHS and cLHS algorithms are applied in datasets with discrete variables securing that each sample is provided with the Latin hypercube design properties and preserves the principal ability of LHS for space filling, as illustrated in respective sample projecting diagrams. The performance of the above LHS methods in supervised machine learning is evaluated by the degree of training of the model, which is certified through the accuracy of the produced confusion matrices in test files. The results from the use of the above Latin Hypercube Sampling techniques compared against benchmark sampling method empirically prove that machine learning training process becomes less costfull, while remaining reliable.</span></div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 256-270"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139094504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Regularisation and iterated regularisation of Hamiltonian systems of the second quasi-Painlevé equation","authors":"Galina Filipuk","doi":"10.1016/j.apnum.2024.10.012","DOIUrl":"10.1016/j.apnum.2024.10.012","url":null,"abstract":"<div><div>In this paper we consider several Hamiltonian functions for the second quasi-Painlevé equation. One of the features of these functions is that they give rise to the same final chart regular systems once using certain blowups and twists in the regularisation procedure. We also discuss what happens if we iterate the blowup process for these final chart systems. Using birational transformations between different Hamiltonian systems we show how to construct new Hamiltonian functions which give rise to the second quasi-Painlevé equation with shifted coefficients. We also give an explicit example of the Bäcklund transformation.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 290-300"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Orthogonal designs for computer experiments constructed from sequences with zero autocorrelation","authors":"Omar A. Alhelali , S.D. Georgiou , C. Koukouvinos , S. Stylianou","doi":"10.1016/j.apnum.2023.09.010","DOIUrl":"10.1016/j.apnum.2023.09.010","url":null,"abstract":"<div><div>Designs for computer experiments constitute an important class of experimental designs<span>. Computer experiments are used when the physical experiments are expensive or time-consuming and attracted a lot of attention in recent years. In this paper, we proposed a method for generating computer experiments with many factors and symmetric runs. These designs are suitable for computer experiments and are constructed using known sequences with zero autocorrelation function<span>, such as T-sequences, Bases sequences, Normal sequences, and other. The results appear to be encouraging as the methodologies can transform the known sequences into designs for computer experiments without the need for a computer search. The generated designs have some favorable properties, including the symmetry in their runs which results in all the even orders effects being orthogonal to the main effects.</span></span></div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 22-31"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134976513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new class of symplectic methods for stochastic Hamiltonian systems","authors":"Cristina Anton","doi":"10.1016/j.apnum.2024.01.021","DOIUrl":"10.1016/j.apnum.2024.01.021","url":null,"abstract":"<div><div>We propose a systematic approach to construct a new family of stochastic symplectic schemes for the strong approximation of the solution of stochastic Hamiltonian systems. Our approach is based both on B-series and generating functions. The proposed schemes are a generalization of the implicit midpoint rule, they require derivatives of the Hamiltonian functions of at most order two, and are constructed by defining a generating function. We construct some schemes with strong convergence order one and a half, and we illustrate numerically their long term performance.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 43-59"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139657336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Krylov subspace methods for large multidimensional eigenvalue computation","authors":"Anas El Hachimi , Khalide Jbilou , Ahmed Ratnani","doi":"10.1016/j.apnum.2024.01.017","DOIUrl":"10.1016/j.apnum.2024.01.017","url":null,"abstract":"<div><div><span>In this paper, we describe some Krylov subspace methods for computing eigentubes and </span>eigenvectors (eigenslices) for large and sparse third-order tensors. This work provides projection methods for computing some of the largest (or smallest) eigentubes and eigenslices using the t-product. In particular, we use the tensor Arnoldi's approach for the non-hermitian case and the tensor Lanczos's approach for f-hermitian tensors. We also use the tensor block Arnoldi method to approximate the extreme eigentubes of a large tensor. Computed examples are given to illustrate the effectiveness of these methods.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 205-221"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139657341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An inverse problem of determining the parameters in diffusion equations by using fractional physics-informed neural networks","authors":"M. Srati , A. Oulmelk , L. Afraites , A. Hadri , M.A. Zaky , A. Aldraiweesh , A.S. Hendy","doi":"10.1016/j.apnum.2024.10.016","DOIUrl":"10.1016/j.apnum.2024.10.016","url":null,"abstract":"<div><div>In this study, we address an inverse problem in nonlinear time-fractional diffusion equations using a deep neural network. The challenge arises from the equation's nonlinear behavior, the involvement of time-based fractional Caputo derivatives, and the need to estimate parameters influenced by space or the solution of the fractional PDE. Our solution involves a fractional physics-informed neural network (FPINN). Initially, we use FPINN to solve a straightforward problem. Then, we apply FPINN to the inverse problem of estimating parameter and model non-linearity. For the inverse problem, we enhance our method by including the mean square error of final observations in the FPINN's cost function. This adjustment helps effectively in tackling the unique challenges of the time-fractional diffusion equation. Numerical tests involving regular and singular examples demonstrate the effectiveness of the physics-informed neural network approach in accurately recovering parameters. We reinforce this finding through a numerical comparison with alternative methods such as the alternating direction multiplier method (ADMM), the gradient descent, and the DeepONets (deep operator networks) method.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 189-213"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two efficient iteration methods for solving the absolute value equations","authors":"Xiaohui Yu , Qingbiao Wu","doi":"10.1016/j.apnum.2024.10.009","DOIUrl":"10.1016/j.apnum.2024.10.009","url":null,"abstract":"<div><div>Two efficient iteration methods are proposed for solving the absolute value equation which are the accelerated generalized SOR-like (AGSOR-like) iteration method and the preconditioned generalized SOR-like (PGSOR-like) iteration method. We prove the convergence of the two proposed iterative methods after applying some qualification conditions to the parameters involved. We also discuss the optimal values of the parameters involved in the two methods. Also, some numerical experiments demonstrate the practicability, robustness and high efficiency of the two new methods. In addition, applying the optimal parameter values obtained from theoretical analysis to the PGSOR-like method, it can give solutions with high accuracy after a small number of iterations, demonstrating significant advantages.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 148-159"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143141569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Penalty hyperparameter optimization with diversity measure for nonnegative low-rank approximation","authors":"Nicoletta Del Buono , Flavia Esposito , Laura Selicato , Rafał Zdunek","doi":"10.1016/j.apnum.2024.10.002","DOIUrl":"10.1016/j.apnum.2024.10.002","url":null,"abstract":"<div><div>Learning tasks are often based on penalized optimization problems in which a sparse solution is desired. This can lead to more interpretative results by identifying a smaller subset of important features or components and reducing the dimensionality of the data representation, as well. In this study, we propose a new method to solve a constrained Frobenius norm-based nonnegative low-rank approximation, and the tuning of the associated penalty hyperparameter, simultaneously. The penalty term added is a particular diversity measure that is more effective for sparseness purposes than other classical norm-based penalties (i.e., <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> or <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn><mo>,</mo><mn>1</mn></mrow></msub></math></span> norms). As it is well known, setting the hyperparameters of an algorithm is not an easy task. Our work drew on developing an optimization method and the corresponding algorithm that simultaneously solves the sparsity-constrained nonnegative approximation problem and optimizes its associated penalty hyperparameters. We test the proposed method by numerical experiments and show its promising results on several synthetic and real datasets.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 189-204"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit EXP-RBF techniques for modeling unsaturated flow through soils with water uptake by plant roots","authors":"Mohamed Boujoudar , Abdelaziz Beljadid , Ahmed Taik","doi":"10.1016/j.apnum.2024.10.003","DOIUrl":"10.1016/j.apnum.2024.10.003","url":null,"abstract":"<div><div>Modeling unsaturated flow through soils with water uptake by plant root has many applications in agriculture and water resources management. In this study, our aim is to develop efficient numerical techniques for solving the Richards equation with a sink term due to plant root water uptake. The Feddes model is used for water absorption by plant roots, and the van-Genuchten model is employed for capillary pressure. We introduce a numerical approach that combines the localized exponential radial basis function (EXP-RBF) method for space and the second-order backward differentiation formula (BDF2) for temporal discretization. The localized RBF methods eliminate the need for mesh generation and avoid ill-conditioning problems. This approach yields a sparse matrix for the global system, optimizing memory usage and computational time. The proposed implicit EXP-RBF techniques have advantages in terms of accuracy and computational efficiency thanks to the use of BDF2 and the localized RBF method. Modified Picards iteration method for the mixed form of the Richards equation is employed to linearize the system. Various numerical experiments are conducted to validate the proposed numerical model of infiltration with plant root water absorption. The obtained results conclusively demonstrate the effectiveness of the proposed numerical model in accurately predicting soil moisture dynamics under water uptake by plant roots. The proposed numerical techniques can be incorporated in the numerical models where unsaturated flows and water uptake by plant roots are involved such as in hydrology, agriculture, and water management.</div></div>","PeriodicalId":8199,"journal":{"name":"Applied Numerical Mathematics","volume":"208 ","pages":"Pages 79-97"},"PeriodicalIF":2.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}