{"title":"电磁散射有限元法的并行实现","authors":"P. Atlamazoglou, G. Pagiatakis, N. Uzunoglu","doi":"10.1109/AEM.1996.873072","DOIUrl":null,"url":null,"abstract":"The finite element method for the computation of electromagnetic fields is gaining popularity in recent times due to its ability to handle arbitrary geometries and its versatility in modeling inhomogeneities and material discontinuities. For electrically large and complex structures, massively parallel computers have to be used in order to obtain sufficiently accurate solutions in acceptable time fiames. In this paper we present the implementation details and the performance evaluation of a parallel three dimensional finite element code for open domain electromagnetic problems. In the finite element method the computational domain is divided into smaller nonoverlapping subvolumes, in our implementation tetrahedra. Within each tetrahedron the scattered electric field is represented using edge based vector basis hnctions. The finite element mesh is truncated artificially at some distance fiom the scatterer with the use of a second order absorbing boundary condition [ 11. The whole mathematical procedure leads to a linear system with symmetric complex sparse matrix. Only the nonzero elements of the upper triangular part of this matrix are stored using the compressed row storage format. The linear system is solved with the conjugate orthogonal conjugate gradient method. The parallel computer we use is the Parsytec We13612 of the Athens High Performance Computing Laboratory. It is a distributed memory machine with message passing architecture. It consists of 512 T805 transputers arranged on a two dimensional grid. In order to parallelize the finite element code using np processors, we divided the global matrix and the vectors into np sections, and assign one section to each processor. The data decomposition is performed in a manner that reduces interprocessor communication, while balancing the load on each processor. We organize the np processors in a virtual ring topology, and employ asynchronous communication that allows us to overlap message exchange with computations for better efficiency. We tested the parallel code for the case of a plane wave incident on a dielectric sphere. The near field values were in good agreement with those from a Me series solution, although the absorbing boundary surface was placed only a fraction of a wavelength away fiom the scatterer. We observed significant speedups for large numbers of processors. This means that the finite element method is well suited for parallelization in a massively parallel environment. We firther noticed that for a given problem size, we can always find an upper boundary of processors above which performance deteriorates, as the increased communication overhead exceeds the time saved by parallel execution of computations. However we can set this boundary arbitrarily high by scaling up sufficiently the size of the problem. References","PeriodicalId":445510,"journal":{"name":"Trans Black Sea Region Symposium on Applied Electromagnetism","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A parallel implementation of the finite element method for electromagnetic scattering\",\"authors\":\"P. Atlamazoglou, G. Pagiatakis, N. 
Uzunoglu\",\"doi\":\"10.1109/AEM.1996.873072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The finite element method for the computation of electromagnetic fields is gaining popularity in recent times due to its ability to handle arbitrary geometries and its versatility in modeling inhomogeneities and material discontinuities. For electrically large and complex structures, massively parallel computers have to be used in order to obtain sufficiently accurate solutions in acceptable time fiames. In this paper we present the implementation details and the performance evaluation of a parallel three dimensional finite element code for open domain electromagnetic problems. In the finite element method the computational domain is divided into smaller nonoverlapping subvolumes, in our implementation tetrahedra. Within each tetrahedron the scattered electric field is represented using edge based vector basis hnctions. The finite element mesh is truncated artificially at some distance fiom the scatterer with the use of a second order absorbing boundary condition [ 11. The whole mathematical procedure leads to a linear system with symmetric complex sparse matrix. Only the nonzero elements of the upper triangular part of this matrix are stored using the compressed row storage format. The linear system is solved with the conjugate orthogonal conjugate gradient method. The parallel computer we use is the Parsytec We13612 of the Athens High Performance Computing Laboratory. It is a distributed memory machine with message passing architecture. It consists of 512 T805 transputers arranged on a two dimensional grid. In order to parallelize the finite element code using np processors, we divided the global matrix and the vectors into np sections, and assign one section to each processor. The data decomposition is performed in a manner that reduces interprocessor communication, while balancing the load on each processor. We organize the np processors in a virtual ring topology, and employ asynchronous communication that allows us to overlap message exchange with computations for better efficiency. We tested the parallel code for the case of a plane wave incident on a dielectric sphere. The near field values were in good agreement with those from a Me series solution, although the absorbing boundary surface was placed only a fraction of a wavelength away fiom the scatterer. We observed significant speedups for large numbers of processors. This means that the finite element method is well suited for parallelization in a massively parallel environment. We firther noticed that for a given problem size, we can always find an upper boundary of processors above which performance deteriorates, as the increased communication overhead exceeds the time saved by parallel execution of computations. However we can set this boundary arbitrarily high by scaling up sufficiently the size of the problem. 
References\",\"PeriodicalId\":445510,\"journal\":{\"name\":\"Trans Black Sea Region Symposium on Applied Electromagnetism\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1996-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Trans Black Sea Region Symposium on Applied Electromagnetism\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AEM.1996.873072\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Trans Black Sea Region Symposium on Applied Electromagnetism","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AEM.1996.873072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A parallel implementation of the finite element method for electromagnetic scattering
The finite element method for the computation of electromagnetic fields has been gaining popularity in recent times due to its ability to handle arbitrary geometries and its versatility in modeling inhomogeneities and material discontinuities. For electrically large and complex structures, massively parallel computers have to be used in order to obtain sufficiently accurate solutions within acceptable time frames. In this paper we present the implementation details and the performance evaluation of a parallel three-dimensional finite element code for open-domain electromagnetic problems.

In the finite element method the computational domain is divided into smaller nonoverlapping subvolumes, in our implementation tetrahedra. Within each tetrahedron the scattered electric field is represented using edge-based vector basis functions. The finite element mesh is truncated artificially at some distance from the scatterer with the use of a second-order absorbing boundary condition [1]. The whole mathematical procedure leads to a linear system with a symmetric complex sparse matrix. Only the nonzero elements of the upper triangular part of this matrix are stored, using the compressed row storage format. The linear system is solved with the conjugate orthogonal conjugate gradient (COCG) method.

The parallel computer we use is the Parsytec GCel3/512 of the Athens High Performance Computing Laboratory. It is a distributed-memory machine with a message-passing architecture, consisting of 512 T805 transputers arranged on a two-dimensional grid. To parallelize the finite element code on np processors, we divide the global matrix and the vectors into np sections and assign one section to each processor. The data decomposition is performed in a manner that reduces interprocessor communication while balancing the load on each processor. We organize the np processors in a virtual ring topology and employ asynchronous communication, which allows us to overlap message exchange with computations for better efficiency.

We tested the parallel code for the case of a plane wave incident on a dielectric sphere. The near-field values were in good agreement with those from a Mie series solution, even though the absorbing boundary surface was placed only a fraction of a wavelength away from the scatterer. We observed significant speedups for large numbers of processors, which indicates that the finite element method is well suited for parallelization in a massively parallel environment. We further noticed that for a given problem size we can always find an upper bound on the number of processors above which performance deteriorates, as the increased communication overhead exceeds the time saved by parallel execution of the computations. However, this bound can be pushed arbitrarily high by scaling up the size of the problem sufficiently.
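To make the storage scheme concrete, below is a minimal sketch in C99 of compressed row storage restricted to the upper triangle of a symmetric complex sparse matrix, together with the matrix-vector product a Krylov solver needs. All names are illustrative, not taken from the paper's code. Because only the upper triangle is kept, each stored off-diagonal entry must contribute to two components of the product.

```c
#include <complex.h>
#include <stddef.h>

/* Compressed row storage (CRS) of the upper triangle (including the
   diagonal) of a symmetric complex sparse matrix. */
typedef struct {
    size_t n;              /* matrix dimension                        */
    size_t *row_ptr;       /* n+1 entries: start of each row in val[] */
    size_t *col_idx;       /* column index of each stored nonzero     */
    double complex *val;   /* stored nonzeros, row by row             */
} crs_sym;

/* y = A*x for symmetric A stored as its upper triangle: each stored
   off-diagonal entry a_ij contributes to both y_i and y_j.  Note the
   mirrored term uses val[k] itself, not its conjugate, because the
   matrix is complex symmetric, not Hermitian. */
void spmv_sym(const crs_sym *A, const double complex *x, double complex *y)
{
    for (size_t i = 0; i < A->n; ++i)
        y[i] = 0.0;
    for (size_t i = 0; i < A->n; ++i) {
        for (size_t k = A->row_ptr[i]; k < A->row_ptr[i + 1]; ++k) {
            size_t j = A->col_idx[k];
            y[i] += A->val[k] * x[j];
            if (j != i)                /* mirror the strictly upper part */
                y[j] += A->val[k] * x[i];
        }
    }
}
```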
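The COCG method is ordinary conjugate gradients with the conjugated inner product replaced by the unconjugated bilinear form x^T y, which is what makes it applicable to complex symmetric (rather than Hermitian) systems. The following sketch reuses crs_sym and spmv_sym from above; again it is an illustration of the standard algorithm, not the authors' implementation.

```c
#include <complex.h>
#include <math.h>
#include <stdlib.h>

/* Unconjugated bilinear form x^T y: the defining change from ordinary
   CG, which would use the conjugated inner product x^H y. */
static double complex dotu(size_t n, const double complex *x,
                           const double complex *y)
{
    double complex s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += x[i] * y[i];
    return s;
}

/* Euclidean norm, used only for the stopping test. */
static double norm2(size_t n, const double complex *x)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += creal(x[i] * conj(x[i]));
    return sqrt(s);
}

/* COCG for a complex symmetric system A x = b; returns the number of
   iterations performed.  x holds the initial guess on entry. */
int cocg(const crs_sym *A, const double complex *b, double complex *x,
         int maxit, double tol)
{
    size_t n = A->n;
    double complex *r = malloc(n * sizeof *r);
    double complex *p = malloc(n * sizeof *p);
    double complex *q = malloc(n * sizeof *q);

    spmv_sym(A, x, q);                        /* q = A x0          */
    for (size_t i = 0; i < n; ++i) {
        r[i] = b[i] - q[i];                   /* initial residual  */
        p[i] = r[i];
    }
    double complex rho = dotu(n, r, r);       /* unconjugated      */
    double stop = tol * norm2(n, b);

    int it = 0;
    while (it < maxit && norm2(n, r) > stop) {
        spmv_sym(A, p, q);                    /* q = A p           */
        double complex alpha = rho / dotu(n, p, q);
        for (size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * q[i];
        }
        double complex rho_new = dotu(n, r, r);
        double complex beta = rho_new / rho;
        rho = rho_new;
        for (size_t i = 0; i < n; ++i)
            p[i] = r[i] + beta * p[i];
        ++it;
    }
    free(r); free(p); free(q);
    return it;
}
```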
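One simple way to realize such a decomposition is to cut the matrix rows into np contiguous sections carrying roughly equal numbers of stored nonzeros, so that the per-processor matrix-vector work is balanced. The paper does not spell out its partitioning rule; the greedy sketch below is a hypothetical illustration.

```c
#include <stddef.h>

/* Split n matrix rows into np contiguous sections with roughly equal
   nonzero counts.  row_ptr is the CRS row pointer from crs_sym; on
   return, processor p owns rows first[p] .. first[p+1]-1.  A section
   can come out empty if a single row dominates the nonzero count. */
void partition_rows(size_t n, const size_t *row_ptr, int np, size_t *first)
{
    size_t total = row_ptr[n];   /* total number of stored nonzeros */
    size_t row = 0;

    first[0] = 0;
    for (int p = 1; p < np; ++p) {
        size_t target = (total * (size_t)p) / (size_t)np;
        /* advance until the cumulative nonzero count reaches target */
        while (row < n && row_ptr[row + 1] <= target)
            ++row;
        first[p] = row;
    }
    first[np] = n;
}
```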
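The original code ran on the transputers' native message-passing environment, whose API we do not attempt to reproduce. Purely as a modern analogue, the overlap of ring communication with computation could be expressed with nonblocking MPI calls along the following lines: each process posts the exchange of vector blocks with its ring neighbours first, computes with the data it already holds, and only then waits for the transfers to complete.

```c
#include <complex.h>
#include <mpi.h>

/* One step of a ring exchange overlapped with local work (modern MPI
   analogue; the 1996 code used the transputer environment instead).
   Each complex block is shipped as 2*blk_len doubles.  local_work()
   is a placeholder for the computation on data already in memory. */
void ring_step(double complex *send_blk, double complex *recv_blk,
               size_t blk_len, int rank, int np, MPI_Comm comm,
               void (*local_work)(void))
{
    int left  = (rank + np - 1) % np;
    int right = (rank + 1) % np;
    MPI_Request req[2];

    /* Post the exchange first ...                                   */
    MPI_Irecv(recv_blk, (int)(2 * blk_len), MPI_DOUBLE, left, 0,
              comm, &req[0]);
    MPI_Isend(send_blk, (int)(2 * blk_len), MPI_DOUBLE, right, 0,
              comm, &req[1]);

    /* ... then compute with the data on hand while messages travel. */
    local_work();

    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
}
```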
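The existence of such a performance-optimal processor count, and the fact that it grows with problem size, can be made concrete with a crude cost model (our own illustration, not an analysis from the paper). If one iteration costs $T_1/n_p$ in computation on $n_p$ processors while the ring exchange adds a latency-dominated overhead roughly proportional to $n_p$, then

$$T(n_p) \approx \frac{T_1}{n_p} + c\,n_p, \qquad \frac{\mathrm{d}T}{\mathrm{d}n_p} = 0 \;\Longrightarrow\; n_p^{*} = \sqrt{\frac{T_1}{c}}.$$

Since $T_1$ grows with the problem size while $c$ is essentially a machine constant, $n_p^{*}$ moves outward as the problem is scaled up, consistent with the behaviour reported above.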