{"title":"A parallel implicit Runge-Kutta method for solving ordinary differential equations","authors":"M. Green","doi":"10.1145/503506.503540","DOIUrl":null,"url":null,"abstract":"Within the past few years, great advances have been made in computer technology, with a major emphasis being in the area of the large-scale integration of digital circuits. As prices continue to decrease and quality continues to increase, manufacturers have increased the speed of hardware components almost to the limit. The present challenge is to find new approaches to increased computational speed. Parallel architecture promises to provide such an increase, but this promise depends critically upon the development of numerical methods that can take advantage of the parallel structure. An obvious approach is to structure new algorithms in such a way that several independent computations can be carried out simultaneously.Most of the existing methods for solving ordinary differential equations are serial in nature. There has been some recent work in the area of modifying and extending serial methods for use on parallel or vector computers. Implicit single-step methods have been studied by Stoller and Morrison, Ceschino and Kuntzman, Butcher and others for implementation on serial computers. The standard techniques for solving the nonlinear implicit equations during each step are not parallel in nature. Miranker and Liniger, who give a general set of parallel linear multistep methods for any even number of arithmetic processors, also give an explicit Runge-Kutta formula which can be used in parallel. Rosser suggested obtaining a block of new values simultaneously, in which step information could be interchanged within the block. Fewer function evaluations per step are needed which makes the implicit methods more competitive. Rosser discusses a procedure for calculating four new values at each stage or function evaluation. Clippinger and Dimsdale have suggested a similar procedure, but with two new values at each stage. Worland has given modifications to sequential procedures which allow them to be executed in parallel. He also shows how these can capitalize effectively on the use of parallel or vector computers available today. Shampine and Watts have made studies on evenly-spaced block implicit single-step methods which are actually more suitable for parallel computation. They suggested that unequal spacing based upon a Lobatto quadrature formula might be used as effectively as equal spacing. This allows a higher-order result to be attained.The purpose of this paper is to present a method for solving ordinary differential equations using an implicit Runge-Kutta single-step formula with uneven spacing.Our primary concern will be the development of a Gauss-based implicit formula for the parallel solution of differential equations that could be used on vector computers (i.e., CDC STAR 100). Other similar techniques could be based on Lobatto or Radau quadrature.We focus on the Gauss forms (where no end points are involved) mainly because they have advantages when the computations are to be carried out on a truely parallel computer.An algorithm is developed which will produce simultaneous approximations for several steps at the same time, or basically in parallel. The algorithm is presented to solve ordinary differential equations using either of 2 Gauss implicit forms: the 3 point or the 9 point. All computations are carried out in double precision (17 digits) on a PDP 11/55. 
The algorithm's main purpose is to show a means of parallelism within a block. Since the PDP 11/55 is a serial computer, this is actually done serially; nevertheless, the algorithm is completely parallel. Vector computers such as the CDC STAR 100 could perform these parallel computations using vector instructions.Three test cases will be presented to show the quality of this method as opposed to now existing serial methods.","PeriodicalId":258426,"journal":{"name":"ACM-SE 17","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1979-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM-SE 17","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/503506.503540","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Within the past few years, great advances have been made in computer technology, with a major emphasis on the large-scale integration of digital circuits. As prices continue to decrease and quality continues to increase, manufacturers have pushed the speed of hardware components almost to its limit. The present challenge is to find new approaches to increasing computational speed. Parallel architecture promises to provide such an increase, but this promise depends critically upon the development of numerical methods that can take advantage of the parallel structure. An obvious approach is to structure new algorithms in such a way that several independent computations can be carried out simultaneously.

Most of the existing methods for solving ordinary differential equations are serial in nature. There has been some recent work on modifying and extending serial methods for use on parallel or vector computers. Implicit single-step methods have been studied by Stoller and Morrison, Ceschino and Kuntzman, Butcher, and others for implementation on serial computers, but the standard techniques for solving the nonlinear implicit equations at each step are not parallel in nature. Miranker and Liniger, who give a general set of parallel linear multistep methods for any even number of arithmetic processors, also give an explicit Runge-Kutta formula that can be used in parallel. Rosser suggested obtaining a block of new values simultaneously, with step information interchanged within the block; fewer function evaluations per step are needed, which makes the implicit methods more competitive. Rosser discusses a procedure for calculating four new values at each stage, or function evaluation; Clippinger and Dimsdale have suggested a similar procedure, but with two new values at each stage. Worland has given modifications to sequential procedures which allow them to be executed in parallel, and he shows how these can capitalize effectively on the parallel and vector computers available today. Shampine and Watts have studied evenly spaced block implicit single-step methods, which are actually more suitable for parallel computation. They suggested that unequal spacing based upon a Lobatto quadrature formula might be used as effectively as equal spacing; this allows a higher-order result to be attained.

The purpose of this paper is to present a method for solving ordinary differential equations using an implicit Runge-Kutta single-step formula with uneven spacing. Our primary concern is the development of a Gauss-based implicit formula for the parallel solution of differential equations that could be used on vector computers (e.g., the CDC STAR 100). Other similar techniques could be based on Lobatto or Radau quadrature. We focus on the Gauss forms (where no end points are involved) mainly because they have advantages when the computations are to be carried out on a truly parallel computer.

An algorithm is developed which produces approximations for several steps at the same time, that is, essentially in parallel. The algorithm is presented for solving ordinary differential equations using either of two Gauss implicit forms: the 3-point or the 9-point. All computations are carried out in double precision (17 digits) on a PDP 11/55. The algorithm's main purpose is to show a means of parallelism within a block; since the PDP 11/55 is a serial computer, the computations are actually done serially, but the algorithm itself is completely parallel. Vector computers such as the CDC STAR 100 could perform these computations using vector instructions. Three test cases are presented to show the quality of this method compared with existing serial methods.
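For orientation, the standard form of an s-stage implicit Runge-Kutta step for y' = f(t, y) is shown below, together with the nodes and weights of the usual 3-stage Gauss-Legendre rule. The "3-point Gauss implicit form" named in the abstract presumably corresponds to this standard choice; the paper's exact tableau is not reproduced here.

```latex
% General s-stage implicit Runge-Kutta step of size h
\begin{aligned}
  k_i     &= f\Bigl(t_n + c_i h,\; y_n + h \sum_{j=1}^{s} a_{ij}\, k_j\Bigr),
             \qquad i = 1, \dots, s,\\
  y_{n+1} &= y_n + h \sum_{i=1}^{s} b_i\, k_i .
\end{aligned}

% Standard 3-stage Gauss-Legendre nodes and weights (order 6);
% assumed to be what the "3-point" Gauss form refers to:
c = \Bigl(\tfrac12 - \tfrac{\sqrt{15}}{10},\ \tfrac12,\ \tfrac12 + \tfrac{\sqrt{15}}{10}\Bigr),
\qquad
b = \Bigl(\tfrac{5}{18},\ \tfrac{4}{9},\ \tfrac{5}{18}\Bigr).
```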
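The following is a minimal sketch of the parallelism-within-a-block idea, assuming the 3-point form uses the standard 3-stage Gauss-Legendre tableau and that the implicit stage equations are resolved by fixed-point sweeps; the abstract does not state the paper's actual iteration, and names such as gauss_irk3_step and the sweep count are illustrative, not taken from the paper. The point of interest is that within each sweep the three evaluations of f are mutually independent.

```python
import numpy as np

# Illustrative sketch only: one step of a 3-stage Gauss-Legendre implicit
# Runge-Kutta method, with the stage slopes k_i updated by fixed-point sweeps.
# Within each sweep the three evaluations of f are independent of one another,
# which is the kind of work a vector/parallel machine could do simultaneously.

S15 = np.sqrt(15.0)

# Standard 3-stage Gauss-Legendre tableau (order 6); assumed to correspond
# to the abstract's "3-point Gauss implicit form".
A = np.array([
    [5/36,          2/9 - S15/15, 5/36 - S15/30],
    [5/36 + S15/24, 2/9,          5/36 - S15/24],
    [5/36 + S15/30, 2/9 + S15/15, 5/36],
])
B = np.array([5/18, 4/9, 5/18])
C = np.array([0.5 - S15/10, 0.5, 0.5 + S15/10])


def gauss_irk3_step(f, t, y, h, sweeps=10):
    """Advance y' = f(t, y) one step of size h with the 3-stage Gauss form."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    k = np.tile(f(t, y), (3, 1))        # initial guess: slope at the step start
    for _ in range(sweeps):             # fixed-point sweeps (serial here)
        stage_y = y + h * (A @ k)       # all three stage arguments at once
        # The three evaluations below are mutually independent; a vector
        # computer could perform them in parallel.
        k = np.array([f(t + C[i] * h, stage_y[i]) for i in range(3)])
    return y + h * (B @ k)


# Usage example on y' = -y, y(0) = 1, whose exact solution is exp(-t).
if __name__ == "__main__":
    f = lambda t, y: -y
    t, y, h = 0.0, np.array([1.0]), 0.1
    for _ in range(10):
        y = gauss_irk3_step(f, t, y, h)
        t += h
    print(y[0], np.exp(-1.0))           # the two values should agree closely
```

On a serial machine like the PDP 11/55 the three stage evaluations in each sweep simply run one after another, exactly as the abstract notes; on a vector computer such as the CDC STAR 100 they could be issued together as vector operations.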