{"title":"n体问题的计算结构","authors":"J. Katzenelson","doi":"10.1137/0910048","DOIUrl":null,"url":null,"abstract":"This work considers tree algorithms for the N-body problem where the number of particles is on the order of a million. The main concern of this work is the organization and performance of these computations on parallel computers.This work introduces a formulation of the N-body problem as a set of recursive equations based on a few elementary functions. It is shown that both the algorithm of Barnes–Hut and that of Greengard–Rokhlin satisfy these equations using different elementary functions. The recursive formulation leads directly to a computational structure in the form of a pyramid-like graph, where each vertex is a process, and each arc a communication link.The pyramid is mapped to three different processor configurations: (1) a pyramid of processors corresponding to the processes pyramid graph; (2) a hypercube of processors, e.g., a connection-machine-like architecture; and (3) a rather small array, e.g., $2 \\times 2 \\times 2$, of processors faster than the ones considered in (1) and (2) above.The main conclusion is that simulations of this size can be performed on any of the three architectures in reasonable time. Approximately 24 seconds per timestep is the estimate for a million equally distributed particles using the Greengard-Rokhlin algorithm on the CM-2 connection machine. The smaller array of processors is quite competitive in performance.","PeriodicalId":200176,"journal":{"name":"Siam Journal on Scientific and Statistical Computing","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":"{\"title\":\"Computational structure of the N-body problem\",\"authors\":\"J. Katzenelson\",\"doi\":\"10.1137/0910048\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This work considers tree algorithms for the N-body problem where the number of particles is on the order of a million. The main concern of this work is the organization and performance of these computations on parallel computers.This work introduces a formulation of the N-body problem as a set of recursive equations based on a few elementary functions. It is shown that both the algorithm of Barnes–Hut and that of Greengard–Rokhlin satisfy these equations using different elementary functions. The recursive formulation leads directly to a computational structure in the form of a pyramid-like graph, where each vertex is a process, and each arc a communication link.The pyramid is mapped to three different processor configurations: (1) a pyramid of processors corresponding to the processes pyramid graph; (2) a hypercube of processors, e.g., a connection-machine-like architecture; and (3) a rather small array, e.g., $2 \\\\times 2 \\\\times 2$, of processors faster than the ones considered in (1) and (2) above.The main conclusion is that simulations of this size can be performed on any of the three architectures in reasonable time. Approximately 24 seconds per timestep is the estimate for a million equally distributed particles using the Greengard-Rokhlin algorithm on the CM-2 connection machine. 
This work considers tree algorithms for the N-body problem where the number of particles is on the order of a million. The main concern of this work is the organization and performance of these computations on parallel computers. This work introduces a formulation of the N-body problem as a set of recursive equations based on a few elementary functions. It is shown that both the algorithm of Barnes–Hut and that of Greengard–Rokhlin satisfy these equations using different elementary functions. The recursive formulation leads directly to a computational structure in the form of a pyramid-like graph, where each vertex is a process and each arc a communication link. The pyramid is mapped to three different processor configurations: (1) a pyramid of processors corresponding to the pyramid graph of processes; (2) a hypercube of processors, e.g., a Connection Machine-like architecture; and (3) a rather small array, e.g., $2 \times 2 \times 2$, of processors faster than the ones considered in (1) and (2) above. The main conclusion is that simulations of this size can be performed on any of the three architectures in reasonable time. The estimate for a million uniformly distributed particles using the Greengard–Rokhlin algorithm on the CM-2 Connection Machine is approximately 24 seconds per timestep. The smaller array of processors is quite competitive in performance.
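For context, the Barnes–Hut method cited in the abstract approximates far-field interactions by a recursive tree traversal: an upward pass aggregates the total mass and centre of mass of each cell, and a downward pass accepts a cell as a single point mass whenever it is small relative to its distance from the target particle. The following is a minimal 2-D sketch of that general idea in Python; it is not the paper's recursive formulation or its parallel pyramid mapping, and the names (Cell, build, accumulate, force) and the opening-angle parameter theta are illustrative choices, not taken from the paper.

# Minimal 2-D Barnes-Hut-style sketch (illustrative only, not the paper's code).
# Particles are (x, y, mass) tuples inside the unit square.
import math

class Cell:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size      # lower-left corner and side length
        self.children = []                          # 0 or up to 4 non-empty sub-cells
        self.particles = []                         # leaves keep their particles
        self.mass = 0.0
        self.cx = self.cy = 0.0                     # centre of mass

def build(cell, particles, leaf_size=1):
    """Recursively split a cell into quadrants until each leaf holds few particles."""
    cell.particles = particles
    if len(particles) <= leaf_size:
        return
    h = cell.size / 2.0
    for dx in (0.0, h):
        for dy in (0.0, h):
            sub = Cell(cell.x + dx, cell.y + dy, h)
            sub_particles = [p for p in particles
                             if sub.x <= p[0] < sub.x + h and sub.y <= p[1] < sub.y + h]
            if sub_particles:
                build(sub, sub_particles, leaf_size)
                cell.children.append(sub)

def accumulate(cell):
    """Upward pass: total mass and centre of mass of every cell."""
    if not cell.children:
        for x, y, m in cell.particles:
            cell.mass += m
            cell.cx += m * x
            cell.cy += m * y
    else:
        for sub in cell.children:
            accumulate(sub)
            cell.mass += sub.mass
            cell.cx += sub.mass * sub.cx
            cell.cy += sub.mass * sub.cy
    if cell.mass > 0:
        cell.cx /= cell.mass
        cell.cy /= cell.mass

def force(cell, px, py, theta=0.5, eps=1e-9):
    """Downward pass: gravitational force on a unit-mass test point at (px, py), G = 1."""
    dx, dy = cell.cx - px, cell.cy - py
    r = math.hypot(dx, dy)
    if r < eps:                                     # skip a cell centred on the point itself
        return (0.0, 0.0)
    if not cell.children or cell.size / r < theta:  # far enough: one point-mass interaction
        f = cell.mass / (r * r)
        return (f * dx / r, f * dy / r)
    fx = fy = 0.0
    for sub in cell.children:                       # otherwise descend into the sub-cells
        sfx, sfy = force(sub, px, py, theta, eps)
        fx += sfx
        fy += sfy
    return (fx, fy)

if __name__ == "__main__":
    pts = [(0.1, 0.1, 1.0), (0.9, 0.9, 1.0), (0.8, 0.2, 2.0), (0.3, 0.7, 1.5)]
    root = Cell(0.0, 0.0, 1.0)
    build(root, pts)
    accumulate(root)
    print(force(root, 0.5, 0.5))

The upward accumulation and the distance-dependent acceptance test are the parts that give the O(N log N) behaviour of tree codes; the Greengard–Rokhlin fast multipole method replaces the single point-mass summary with multipole and local expansions, which is what allows the further reduction in work discussed in the paper.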