
Parallel Multi-Zone Methods for Large-Scale Multidisciplinary Computational Physics Simulations

Ding Li, Guoping Xia, and Charles L. Merkle

Mechanical Engineering, Purdue University Chaffee Hall, 500 Allison Road, West Lafayette, IN 47907-2014

Email: [email protected]

Abstract.

A parallel multi-zone method for the simulation of large-scale multidisciplinary applications involving field equations from multiple branches of physics is outlined. The equations of mathematical physics are expressed in a unified form that enables a single algorithm and computational code to describe problems involving diverse, but closely coupled, physics. Specific sub-disciplines include fluid and plasma dynamics, electrodynamics, radiative energy transfer, thermal/mechanical stress and strain distributions and conjugate heat transfer in solids. Efficient parallel implementation of these coupled physics must take into account the different number of governing field equations in the various physical zones and the close coupling inside and between regions. This is accomplished by implementing the unified computational algorithm in terms of an arbitrary grid and a flexible data structure that allows load balancing by sub-clusters. Capabilities are demonstrated by a trapped vortex liquid spray combustor, an MHD power generator, combustor cooling in a rocket engine and a pulsed detonation engine-based combustion system for a gas turbine. The results show a variety of interesting physical phenomena and the efficacy of the computational implementation.

Introduction

High fidelity computational simulations of complex physical behavior are common in many fields of physics such as structures, plasma dynamics, fluid dynamics, electromagnetics, radiative energy transfer and neutron transport. Detailed three-dimensional simulations in any one of these fields can tax the capabilities of present-day parallel processing, but the looming challenge for parallel computation is to provide detailed simulations of systems that couple several or all of these basic physics disciplines into a single application.

In the simplest multidisciplinary problems, the physics are loosely coupled and individual codes from the several sub-disciplines can be combined to provide practical solutions. Many applications, however, arise in which the multidisciplinary physics are so intensely coupled that the equations from the various sub-branches of physics must likewise be closely coupled and solved simultaneously. This implies that the computational algorithm, data structure, message passing and load balancing steps must all be addressed simultaneously in conjunction with the physical aspects of the problem. In the present paper we outline a method for dealing with such multi-physics problems with emphasis on the computational formulation and parallel implementation. The focus is on applications that involve closely coupled physics from several or all of the sub-domains listed above. Because conservation laws expressed as field variable solutions to partial differential equations are central in all of these sub-domains of physics, the formulation is based upon a generalized implementation of the partial differential equations that unifies the various physical phenomena.

In the following sections we first present a general conservation law for field variables and outline the numerical solution procedure. Following this we describe the GEMS code in which these conservation laws are implemented along with a multi-physics zone method and companion data structure that is used to establish our parallel computing approach. Representative results are then presented that demonstrate the degree of efficiency of the parallel implementation as well as several practical applications including a trapped vortex combustor, an MHD power generator analysis, a conjugate heat transfer analysis for a rocket engine combustor and a constant volume combustion turbine system simulation.

Conservation Laws for Field Variables

The fundamental phenomena in nearly all fields of mathematical physics are described in terms of a set of coupled partial differential equations (pde's) augmented by a series of constitutive algebraic relations that are used to close the system. The pde's typically describe basic conservation relations for quantities such as mass, momentum, energy and electrical charge. In general, these pde's involve three types of vector operators (the curl, the divergence and the gradient) that appear individually or in combination. When they appear alone, they typically represent wave phenomena, while when they appear in combination (as the div-grad operator) they typically represent the effects of diffusion. An important feature in simulating multi-physical phenomena is that the structure of the conservation relations is essentially parallel for all branches of mathematical physics.

In contrast to the conservation equations, the constitutive relations are most often algebraic in nature. Constitutive relations are used to relate thermodynamic variables through appropriate thermal and caloric equations of state and pertinent property relations; to relate electric and/or magnetic fields to currents; and to relate stresses and strains to velocity and displacement. The partial differential character of the conservation relations implies that these pde's set the global structure of the computational algorithm and the code, while the algebraic nature of the constitutive relations implies that this auxiliary data can be provided in subroutine fashion as needed and does not impact the global structure of the code. These fundamental concepts provide much insight into parallel implementations as well.
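Because the constitutive data enters only through such subroutine-style evaluations, it can be hidden behind a narrow interface. The following minimal sketch (in C; the function names are ours and not the GEMS interface) shows how a thermal equation of state might be supplied as a plug-in routine that the conservation-law solver calls without knowing which model is behind it.

#include <stdio.h>

/* Hypothetical constitutive-relation interface: maps pressure and
   temperature to density.  Any thermal equation of state can be plugged
   in without touching the conservation-law solver. */
typedef double (*eos_density_fn)(double p, double T);

/* Ideal-gas equation of state, rho = p / (R * T), as one possible model. */
static double ideal_gas_density(double p, double T) {
    const double R = 287.0;   /* specific gas constant for air, J/(kg K) */
    return p / (R * T);
}

/* The "solver" only sees the generic interface. */
static void report_density(eos_density_fn eos, double p, double T) {
    printf("rho(p=%g Pa, T=%g K) = %g kg/m^3\n", p, T, eos(p, T));
}

int main(void) {
    report_density(ideal_gas_density, 101325.0, 300.0);
    return 0;
}

Swapping in a different equation of state then amounts to passing a different function, leaving the global code structure untouched.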

Mathematically, the conservation relations for a general branch of mathematical physics may be written as a generic set of partial differential equations of the form:

\frac{\partial Q}{\partial t} + \nabla\cdot F_D + \nabla\times F_C + \nabla\Phi = 0 \qquad (1)

where $Q$ and $\Phi$ are column vectors in the conservation system and $F_D$ and $F_C$ are tensors of similar column length whose rows correspond to the number of dimensions. (The subscripts $D$ and $C$ refer to 'Divergence' and 'Curl' tensors respectively.) Various components of these variables may be null. In writing these expressions, we have defined a generalized curl operator that applies to vectors of length larger than three. For computational purposes it is often useful to expand the divergence operator to four variables and include the temporal derivative as an additional 'spatial' variable in the divergence term. These equations also imply the existence of a complete set of independent variables (here denoted as $Q_p$ to represent the 'primitive' or most fundamental set of independent variables in the problem) such that the two vectors and tensors in the conservation relations may be expressed as complete functions of the vector $Q_p$ (i.e., $Q = Q(Q_p)$, $\Phi = \Phi(Q_p)$, $F_D = F_D(Q_p)$ and $F_C = F_C(Q_p)$).
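Two familiar special cases illustrate the notation (the identifications below are ours, chosen for illustration): the mass conservation equation of fluid dynamics uses only the divergence flux, while Faraday's law uses only the curl flux,

\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0 \quad\Longleftrightarrow\quad Q=\rho,\; F_D=\rho\,\mathbf{u},\; F_C=0,\; \Phi=0 ,

\frac{\partial \mathbf{B}}{\partial t} + \nabla\times\mathbf{E} = 0 \quad\Longleftrightarrow\quad Q=\mathbf{B},\; F_C=\mathbf{E},\; F_D=0,\; \Phi=0 .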

An important issue for computational purposes is that the length of the primitive variables vector can vary dramatically among different fields of physics. For example, in solids where mechanical and thermal stresses are to be determined, $Q_p$ will generally include three displacements, three velocities and one temperature for a total of seven equations. In simpler solid applications where only heat conduction is included, only one partial differential equation need be solved and $Q_p$ contains only one component. In simple fluid dynamics problems, the variables will include three velocity components, the pressure and the temperature for a total of five components, but in more complex applications there may be as many as six additional partial differential equations for turbulence modeling and an essentially unlimited number of species continuity and phasic equations to describe finite rate chemical reactions and phase change. In electromagnetics, there are typically two vectors of length three for a total of six, while when the MHD approximation is used it is possible to get by with three. Finally, for radiation, the number of equations can vary from one three-dimensional equation to coupled six-dimensional equations. A summary of the number of equations in various sub-disciplines is given in Table 1 below.
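A minimal data-structure sketch (in C, with hypothetical field names; the actual GEMS layout is not described at this level of detail) illustrates how the number of unknowns per cell can be carried as zone data rather than fixed at compile time:

#include <stdlib.h>

/* Hypothetical per-zone description: each physics zone carries its own
   number of primitive unknowns per cell, so a heat-conduction zone can
   use 1 while a reacting-flow zone uses 5 plus NS species, etc. */
typedef struct {
    int     n_cells;   /* cells assigned to this zone                   */
    int     n_eqs;     /* number of primitive variables per cell (Q_p)  */
    double *qp;        /* flattened state, length n_cells * n_eqs       */
} PhysicsZone;

static PhysicsZone zone_create(int n_cells, int n_eqs) {
    PhysicsZone z = { n_cells, n_eqs, NULL };
    z.qp = calloc((size_t)n_cells * (size_t)n_eqs, sizeof(double));
    return z;
}

int main(void) {
    PhysicsZone fluid = zone_create(100000, 5);  /* u, v, w, p, T        */
    PhysicsZone solid = zone_create(20000, 1);   /* conduction: T only   */
    free(fluid.qp);
    free(solid.qp);
    return 0;
}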


Table 1. Number of Partial Differential Equations in Various Fields

Physics Domain   | Sub-Domain                                        | Independent Variables                               | Total No. Eqs.
Electromagnetics | Maxwell Eqs.                                      | B (3) and E (3) fields                              | 6
Electromagnetics | MHD Approx.                                       | B (3) field                                         | 3
Electromagnetics | B-E with divergence control                       | B (3) and E (3) fields, scalar potentials (2)       | 8
Solid Mechanics  | Small displacements                               | Displacement (3)                                    | 3
Solid Mechanics  | Large displacement, thermal stress, phase change  | U_disp (3), U_vel (3) and T                         | 7
Fluid Dynamics   | Laminar flow                                      | U_vel (3), p and T                                  | 5
Fluid Dynamics   | Turbulent flow                                    | U_vel (3), p, T and NT turbulence eqs. (0<NT<7)     | 5-11
Combustion       | Reacting flow                                     | U_vel (3), p, T and NS species (0<NS<100)           | 5-100
Plasma Physics   | Fluids plus electromagnetics                      | Combination of fluid dynamics plus electromagnetics | >8
Radiation        | (Up to six dimensions)                            | Intensity                                           | 1-10

Numerical Discretization and Solution Procedure

Partial differential equations must be discretized before they can be solved numerically. Because the conservation laws from nearly all branches of physics lead to wide-banded matrices and, in addition, are nonlinear, iterative methods must be used to solve the resulting discretized systems. To accomplish the discretization and to define an appropriate iterative procedure, we add a pseudo-time derivative to the space-time conservation relations. The discretization procedure is then performed by integrating over a series of control volumes of finite size to obtain an integral relation of the form:

\Gamma \frac{\partial}{\partial\tau}\int_{\Omega} Q_p \, d\Omega + \frac{\partial}{\partial t}\int_{\Omega} Q \, d\Omega + \int_{\Omega} \nabla\cdot F_D \, d\Omega + \int_{\Omega} \nabla\times F_C \, d\Omega + \int_{\Omega} \nabla\Phi \, d\Omega = 0 \qquad (2)

By invoking the familiar theorems of Green, Stokes and Gauss, the volume integrals can be written as surface integrals,

\Gamma \frac{\partial}{\partial\tau}\int_{\Omega} Q_p \, d\Omega + \frac{\partial}{\partial t}\int_{\Omega} Q \, d\Omega + \oint_{\partial\Omega} n\cdot F_D \, d\Sigma + \oint_{\partial\Omega} n\times F_C \, d\Sigma + \oint_{\partial\Omega} n\,\Phi \, d\Sigma = 0 \qquad (3)

The surface integrals require the specification of a ‘numerical’ flux across each face of the selected control volume and indicate that the rate of change of the primitive variables in pseudo time is determined by the sum of fluxes across the several faces of each control volume. An upwind scheme is employed to evaluate the numerical flux across the faces. The curl, divergence and gradient operators each generate unique flux functions at the faces. A key issue in the discretization and in the application to a multi-disciplinary procedure is that discretized expressions must be defined for each of the three vector operators. As a part of the discretization step, the pseudo-time can also be used to define the numerical flux across each face of the control volume.
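The face-based residual assembly implied by Eq. (3) can be sketched as follows (in C; this is a schematic of the idea only, with a trivial placeholder flux in place of the actual upwind flux functions). Each face contributes its numerical flux, scaled by the face area, to the residual of the cell on one side and subtracts it from the cell on the other side, so the pseudo-time update of every control volume is driven by the sum of its face fluxes.

#include <stdio.h>

/* One face of an arbitrary (unstructured) grid: the two cells it separates,
   its area and its unit normal. */
typedef struct { int left, right; double area, n[3]; } Face;

/* Trivial placeholder "numerical flux" (a central average) used only to make
   the sketch self-contained; a real scheme would be upwind-biased and would
   include divergence-, curl- and gradient-type contributions. */
static void numerical_flux(const Face *f, const double *q_l, const double *q_r,
                           int n_eqs, double *flux) {
    (void)f;
    for (int k = 0; k < n_eqs; ++k) flux[k] = 0.5 * (q_l[k] + q_r[k]);
}

/* Accumulate face fluxes into cell residuals. */
static void accumulate_residuals(const Face *faces, int n_faces, const double *q,
                                 int n_eqs, double *res, double *scratch) {
    for (int i = 0; i < n_faces; ++i) {
        const Face *f = &faces[i];
        numerical_flux(f, &q[f->left * n_eqs], &q[f->right * n_eqs], n_eqs, scratch);
        for (int k = 0; k < n_eqs; ++k) {
            res[f->left  * n_eqs + k] += f->area * scratch[k];  /* flux leaving left cell  */
            res[f->right * n_eqs + k] -= f->area * scratch[k];  /* ...entering right cell  */
        }
    }
}

int main(void) {
    /* Two cells separated by a single unit-area face, one equation per cell. */
    Face face = { 0, 1, 1.0, {1.0, 0.0, 0.0} };
    double q[2] = { 1.0, 3.0 }, res[2] = { 0.0, 0.0 }, scratch[1];
    accumulate_residuals(&face, 1, q, 1, res, scratch);
    printf("res = %g %g\n", res[0], res[1]);   /* prints 2 and -2 */
    return 0;
}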

In addition to defining the discretized equations, the coefficient matrix, $\Gamma$, in the pseudo-time term introduces an artificial-properties procedure that allows the eigenvalues of the iterative system to be properly conditioned, thereby providing an efficient convergence algorithm for problems with widely differing time scales.
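The role of $\Gamma$ can be made concrete by a representative dual-time update (the particular second-order backward difference in physical time is our assumption; the exact discretization is not specified here):

\Gamma\,\frac{Q_p^{k+1}-Q_p^{k}}{\Delta\tau} + \frac{3\,Q^{k+1}-4\,Q^{n}+Q^{n-1}}{2\,\Delta t} + R\!\left(Q_p^{k+1}\right) = 0 ,

where $k$ counts pseudo-time iterations within physical time step $n+1$ and $R$ denotes the sum of the discrete divergence, curl and gradient fluxes. As the inner iterations converge, the pseudo-time term vanishes and the physical-time equation is recovered, while the choice of $\Gamma$ conditions the eigenvalues of the inner iteration.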

The introduction of the integral formulation for the discretized equation system allows the use of an arbitrary, structured/unstructured grid capability to enable applications to complex geometry. This unstructured grid allows parallel load balancing on a cell-by-cell basis while enabling efficient data transfer. Specific data and code structures are implemented in a fashion that mimics the conventional mathematical notation given above and the corresponding operations for tensors, vectors and scalar functions. To allow for different numbers of conservation equations in different problems, the number of equations is chosen at input. In addition, individual problems often contain multiple zones in which different conservation equations must be solved. To allow efficient computation and load balancing over such regions, we use a 'multi-zone' procedure, with cells from each 'zone' being transmitted to their own sub-cluster of processors. The computational code that incorporates these general equations, the arbitrary mesh and the multiple physical zones is referred to as the General Equation and Mesh Solver (GEMS) code.

GEMS: General Equation and Mesh Solver

The GEMS code uses contemporary numerical methods to solve coupled systems of partial differential equations and auxiliary constitutive relations for pertinent engineering field variables (Fig. 1) on the basis of a generalized unstructured grid format. After converting the pertinent conservation equations from differential to integral form as noted above, the spatial discretization is accomplished by a generalized Riemann approach for convective terms and a Galerkin approach for diffusion terms. The numerical solution of these equations is then obtained by employing a multi-level pseudo-time marching algorithm that controls artificial dissipation and anti-diffusion for maximum accuracy, non-linear convergence effects at the outset of a computation and convergence efficiency in linear regimes. The multi-time formulation has been adapted to handle convection-dominated or diffusion-dominated problems with similar effectiveness so that radically different field equations can be handled efficiently by a single algorithm. The solution algorithm complements the conservation equations by means of generalized constitutive relations such as arbitrary thermal and caloric equations of state for fluids and solution-dependent electrical and thermal conductivity for fluids and solids. For multi-disciplinary problems, GEMS divides the computational domain into distinct 'zones' to provide flexibility, promote load balancing in parallel implementations and ensure efficiency. The details of these techniques are given in the next section.

Multi-Physics Zone Method

In practical applications, we often face problems involving multiple media in which the pertinent phenomena are governed by different conservation laws. For such applications, we define a physics zone as a domain that is governed by a particular set (or sets) of conservation equations. For example, in a conjugate heat transfer problem there are two physics zones. The continuity, momentum and energy equations are solved in the 'fluid' zone, while only the energy equation is solved in the 'solid' zone. Similarly, MHD problems can be divided into four physical zones: the fluid, electric conductor, dielectric rings and surrounding vacuum zones (see Fig. 2). The continuity, momentum, energy, species and magnetic diffusion equations are solved in the fluid zone; the energy and magnetic diffusion equations are solved in the electric conductor and dielectric solid zones; and only the magnetic diffusion equation is solved in the vacuum zone. In an arc-heater problem, the inner plasma region in which fluids, radiation and electromagnetics co-exist would be one zone; the heater walls in which electromagnetics and conjugate heat transfer are desired would be a second zone; and the external environment where only the EM equations are solved would be a third zone. To accomplish this, the number (and type) of equations to be solved in each zone is an input quantity. This zonal approach provides economies in terms of machine storage and CPU requirements while also simplifying load balancing. Regions with larger numbers of conservation equations are distributed to more processors and allocated more storage elements per cell.

Fig. 1. Components of GEMS code

Fig. 2. Four physical zones in MHD power generator


This division into zones, coupled with the unstructured grid, makes load balancing on parallel computers quite straightforward. The complete computational domain can be subdivided into several sub-zones, each of which is computed on a separate processor. For example, Zone 1 in Fig. 3 is processed on a 4-processor sub-cluster, Zone 2 on a 3-processor sub-cluster, and Zone 3 on a 2-processor sub-cluster, so that the problem sketched in Fig. 3 uses a cluster of 9 processors. To optimize the parallel computing time, the loading of each processor has to be balanced. Because each physics zone has a different number of grid cells and a different number of equations, the combination of the number of equations and the number of cells has to be balanced and optimized, as sketched below. The interfaces between physics zones are treated as internal and external boundary conditions: sharing information between two physics zones satisfies the internal boundary conditions, while the external boundary conditions are treated as normal boundary conditions. The interface between processors within the sub-cluster for a given physics zone involves the same unknown variables and is treated as a normal inter-processor boundary. (Note that this physical zone definition is different from a multiple-block grid of the type obtained from structured grid generators. Normally, all grid zones in a multiple-block computation have the same conservation equations. There can be multiple grid blocks in each physical zone shown in Fig. 3.)
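A minimal sketch of this weighting idea (in C; the actual partitioning logic in GEMS is not documented at this level of detail, so this is illustrative only) assigns each zone a weight proportional to its cell count times its equation count and sizes each sub-cluster accordingly:

#include <stdio.h>

/* Assign processors to physics zones in proportion to (cells x equations),
   a simple proxy for per-zone work.  Illustrative only. */
static void size_subclusters(const int *n_cells, const int *n_eqs, int n_zones,
                             int n_procs, int *procs_per_zone) {
    double total = 0.0;
    for (int z = 0; z < n_zones; ++z) total += (double)n_cells[z] * n_eqs[z];
    int assigned = 0;
    for (int z = 0; z < n_zones; ++z) {
        double share = (double)n_cells[z] * n_eqs[z] / total;
        procs_per_zone[z] = (int)(share * n_procs + 0.5);
        if (procs_per_zone[z] < 1) procs_per_zone[z] = 1;  /* every zone gets a processor */
        assigned += procs_per_zone[z];
    }
    /* Hand any leftover (or excess) processors to the first zone. */
    procs_per_zone[0] += n_procs - assigned;
}

int main(void) {
    /* Example in the spirit of Fig. 3: three zones, nine processors
       (the cell and equation counts here are hypothetical). */
    int cells[3] = { 400000, 150000, 100000 };
    int eqs[3]   = { 8, 5, 3 };
    int procs[3];
    size_subclusters(cells, eqs, 3, 9, procs);
    for (int z = 0; z < 3; ++z)
        printf("zone %d -> %d processors\n", z + 1, procs[z]);
    return 0;
}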

PC-Linux Cluster Architecture

Our primary computational implementation has been carried out on two PC clusters. The SIMBA Linux cluster, built in early 2001, has a head node and 50 slave nodes. All nodes have a single Intel Pentium 4 1.8 GHz CPU with 1 GB of memory and a 10/100 BaseT network adapter and are linked by 60 ports on a 10/100 Fast Ethernet switch. SIMBA has a custom Beowulf cluster software installation including a Red Hat OS with a custom kernel for performance, optimized message passing software (MPICH), libraries, custom scripts for node reboot, power off and file copy across the cluster, C and Fortran compilers, the Ganglia cluster monitoring utility and graphical user interface tools, security patches including a preconfigured port mapper and IP chains, and the OpenPBS batch scheduling system.

The second cluster, which we are just bringing on line, is an Opteron cluster with 100 nodes, each of which has a dual AMD Opteron 24 1.6 GHz CPU with 4 GB of memory. These processors are connected by high-performance non-blocking InfiniBand network switches for low latency and high speed. A new 10 TB storage system is available for data repository and data mining. The cluster has Intel, Portland and Pathscale Fortran compilers optimized for the 64-bit AMD Opteron CPU.

Parallel Approach

As a parallel program, GEMS must communicate between different processors. Our multi-physics zone method uses a fully implicit algorithm that is highly coupled inside each processor but loosely coupled between processors, so that only the small portion of data adjacent to the interface between two partitions needs to be communicated between processors (see Fig. 4). A parallel point-to-point technique is used to reduce host control time and traffic between the nodes and to handle any number of nodes in the cluster architecture. This technique should not introduce significant overhead to the computation until the number of processors in the cluster is increased to over 100. With the emergence of massively parallel computing architectures with potential for teraflop performance, any code development activity must effectively utilize the computer architecture in achieving the proper load balance with minimum inter-nodal data communication. The massively parallel processing has been implemented in GEMS for cross-disciplinary simulations such as computational fluid dynamics, computational structural dynamics and computational electromagnetics for both structured and unstructured grid arrangements. The hybrid unstructured-grid-based finite-volume GEMS code was developed and optimized for distributed memory parallel architectures. The code handles inter-processor communication and other functions unique to the parallel implementation using MPI libraries, which allows the code to be executed on a variety of platforms such as a PC-Linux cluster, IBM SP2, Cray T3D and T3E, and SGI Origin, as well as on Windows workstation clusters. Very few MPI calls are used in the GEMS code because of the well-defined shared-data structure that is detailed later. Only the mpi_sendrecv subroutine is used for sending and receiving data between processors to update interface information after each iteration. GEMS loads the mesh and other data into a master processor and then distributes this data to the appropriate processors, thereby making the code portable and flexible.

Fig. 3. Sketch of multi-physical zones with their interfaces

Fig. 4. Diagram of the interface between partitions, showing the sending and receiving data adjacent to the interface of the current partition

The efficiency of the parallel methods used in GEMS rests upon the storage scheme used for the data shared between cluster nodes. Any single processor needs to send data to several other nodes while receiving data from a different set of nodes, and the amount of data sent from any processor is not necessarily equal to the amount it receives. To manage these inter-processor communications, we designed an exchange prototype matrix (see Fig. 5). In Figure 5 the row index denotes the sending processor and the column index the receiving processor. A zero element in the matrix implies there is no communication between the two processors represented by the row and column indexes, while a nonzero element gives the number of data packets sent from the row-index processor to the column-index processor. The sum of the elements in a row is thus the total amount of data sent by the row-index processor, while the sum of the elements in a column is the total amount of data received by the column-index processor. The diagonal elements of the matrix are always zero, as no data is sent or received within a single processor.
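One way such a matrix can be assembled, sketched here in C with MPI (the routine and variable names are ours, not GEMS internals), is for every rank to count the packets it will send to every other rank and then gather those rows so that each rank holds the full matrix:

#include <mpi.h>
#include <stdlib.h>

/* Build the full exchange matrix on every rank: entry [p][q] is the number
   of data packets rank p sends to rank q.  Diagonal entries stay zero.
   'send_counts' holds this rank's row of the matrix. */
static int *build_exchange_matrix(int *send_counts, MPI_Comm comm) {
    int nprocs;
    MPI_Comm_size(comm, &nprocs);
    int *matrix = malloc((size_t)nprocs * nprocs * sizeof(int));
    /* Every rank contributes its own row; afterwards each rank holds the
       whole matrix and can read its receive counts from its own column. */
    MPI_Allgather(send_counts, nprocs, MPI_INT, matrix, nprocs, MPI_INT, comm);
    return matrix;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Dummy send counts for demonstration: send 'rank+1' packets to the
       next rank, nothing to anyone else. */
    int *send_counts = calloc((size_t)nprocs, sizeof(int));
    if (nprocs > 1) send_counts[(rank + 1) % nprocs] = rank + 1;

    int *matrix = build_exchange_matrix(send_counts, MPI_COMM_WORLD);
    /* Column 'rank' of the matrix gives what this rank receives from each sender. */
    int recv_from_prev = matrix[((rank + nprocs - 1) % nprocs) * nprocs + rank];
    (void)recv_from_prev;

    free(matrix);
    free(send_counts);
    MPI_Finalize();
    return 0;
}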

Having set up this communication matrix, we can use an efficient compressed row storage (CRS) format to collect and pack the data into a contiguous pointer array sorted in the order of the processors. The pointer array links directly to the memory locations of the data. This exchange data structure is what makes the point-to-point message passing operation possible. In the GEMS code, two data pointer stacks, one for sending and one for receiving, are allocated to collect the exchange data between processors (Fig. 6). Each stack includes two arrays in CRS format: one is an index array and the other is a data array. The index array stores the amount of data to be sent to each individual processor; a nonzero value indicates that the current node has data to send to the corresponding node, and otherwise no message passing operation is issued.

Fig. 5. The exchange prototype matrix for sending and receiving data

Fig. 6. Shared data structure for data communication between nodes

As an example, consider the data stacks shown in Figure 6. The current node has 3 data messages to be sent to Node 1, 8 data messages to Node 4 and 5 data messages to Node 8. Similarly, the receiving stack has two arrays that record the data messages received from other nodes: Figure 6 shows that the current node receives 1 data message from Node 2, 3 data messages from Node 3, 2 data messages from Node 4 and 12 data messages from Node 6. This communication, which is sent directly from one node to another, brings significant communication efficiency to a distributed memory cluster, especially as the number of nodes in the cluster increases. In addition, the exchange data structure used in the pointer array allows the data in the physical domain to be updated automatically when the message passing operations are completed. No additional steps are required to accomplish the updating.
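A compact sketch of the two stacks and the exchange call (in C with MPI; mpi_sendrecv is the only MPI routine the text says GEMS uses for this update, but the structure and names below are illustrative):

#include <mpi.h>

/* Send/receive "stacks" in compressed form: counts[p] packets exchanged
   with rank p, and displs[p] giving where rank p's packets start inside
   the contiguous data array.  For the sending example above, counts would
   look like {0, 3, 0, 0, 8, 0, 0, 0, 5, ...}. */
typedef struct {
    int    *counts;   /* length nprocs            */
    int    *displs;   /* length nprocs            */
    double *data;     /* packed interface values  */
} ExchangeStack;

/* One pass of point-to-point interface updates after an iteration.
   Pairing send and receive in a single MPI_Sendrecv per partner avoids
   deadlock; ranks with a zero count simply exchange empty messages. */
static void exchange_interfaces(ExchangeStack *snd, ExchangeStack *rcv, MPI_Comm comm) {
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);
    for (int p = 0; p < nprocs; ++p) {
        if (p == rank) continue;   /* no self-communication */
        MPI_Sendrecv(&snd->data[snd->displs[p]], snd->counts[p], MPI_DOUBLE, p, 0,
                     &rcv->data[rcv->displs[p]], rcv->counts[p], MPI_DOUBLE, p, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}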

Figure 7 shows the total wall clock time and the wall clock time per cell and per iteration versus the number of processors for a two-dimensional hypersonic fluid flow calculation with about half a million cells. The computational domain is partitioned into 5, 10, 20 and 40 partitions and run on our SIMBA cluster. The wall clock time decreases as the number of processors increases, while the average wall clock time per cell and per iteration (wtime) would remain constant in the ideal case (in the absence of communication costs or other system operations). When the number of processors is less than 10, the wtime is almost constant, while as the number of processors is increased above 20 the wtime increases by about (wtime_30 - wtime_10)/wtime_10 = 1.5%.
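The per-cell, per-iteration metric quoted above is simply

\mathrm{wtime} = \frac{T_{\mathrm{wall}}}{N_{\mathrm{cells}} \times N_{\mathrm{iter}}}, \qquad \frac{\mathrm{wtime}_{30} - \mathrm{wtime}_{10}}{\mathrm{wtime}_{10}} \approx 1.5\%,

so near-ideal scaling corresponds to a wtime that stays constant as processors are added.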

Representative Applications

The multi-physics zone GEMS code has been successfully applied to a variety of applications including a trapped vortex combustor (TVC) with liquid fuel spray, an MHD power generator with a plasma channel enclosed by dielectric/conductor walls of sandwich construction and the surrounding atmosphere in which the magnetic field decays to zero, the conjugate heat transfer in a rocket engine combustor, and a combined pulsed detonation combustion system with unsteady ejectors operating in cyclic fashion. Results from these various cases are outlined below.

Trapped Vortex Combustor

The TVC concept is a revolutionary technology with potential payoffs in almost every category of gas turbine combustor performance including heat release rate, operability, weight, and cost. The TVC departs from a conventional gas turbine combustor in several ways, the most substantial of which is the mechanism that stabilizes the flame. The TVC stabilizes the flame by trapping a vortex in cavities located in the walls of the combustor as shown in Figure 8. Strategically placed air and fuel injection points in the forward and aft walls of the cavity create a vortex in the cavity. The resulting vortex recirculates the hot combustion gases within the cavity, which are then exhausted out of the cavity and transported along the face of the combustor. In this application, the grid of approximately 1 million points is partitioned and distributed to 40 processors. Figure 8 gives an overview of the stream particle traces near the cavity region with temperature contours and an iso-surface of the 800 K temperature. A chordwise vortex is observed inside the cavity, which then trails down the flame holder (bottom left). The top right shows the iso-surface of the 100 kPa pressure, while the bottom right charts show temperature contour slices.

Fig. 8. Trapped Vortex Combustor Simulation

Fig. 7. Wall clock time (s) and wall clock time per cell per iteration vs. number of processors


MHD Power Generator

Magnetohydrodynamic power generators use the flow of an ionized gas through a magnetic field to produce electrical power. Although several classical geometrical configurations are available, the gas in the generator typically passes through a flow channel composed of individual sections of high electrical conductivity separated from each other by insulating, dielectric sections. The particular arrangement of the conductor/dielectric sections distinguishes the types of MHD generator and their ensuing electrical output characteristics. A characteristic of MHD generators is that the combined electromagnetic/hydrodynamic fields are highly complex and inherently three-dimensional in nature.

Our approach is based upon a coupled three-dimensional solution of the magnetic diffusion equations and the Reynolds-averaged Navier-Stokes equations that provides the three-dimensional magnetic field, electric current and fluid flow characteristics of an MHD power generator. The formulation solves the Navier-Stokes equations in conjunction with the magnetic diffusion equation and a two-equation turbulence model on a hybrid unstructured grid. Physical difficulties with boundary conditions are removed by extending the computational domain to encompass the plasma channel, the conducting and dielectric walls and the surrounding air. The specific problem of interest in the present paper is a diagonal MHD channel immersed in the field of an external magnet. Supersonic flow enters the channel from one end and exits through the other. The air is seeded with potassium to provide electrical conductivity. The channel is composed of a series of conducting regions separated from each other by strips of insulator stacked along the channel and oriented at an angle with respect to the axis. The multiple diagonal sections in combination with the electrically conducting plasma give the advantage of increased output voltage through what is effectively a series connection. Representative conditions of interest involve low magnetic Reynolds numbers and large Hall parameters. Figure 9 shows a combination of applications: an MHD power generator, an electric conductor channel and the magnetic field in a wire.

Constant Volume Combustion Turbine System

Constant volume combustion has the potential to provide substantial performance improvements in air-breathing propulsion systems, although these improvements are difficult to realize in practical systems. In the present example, we look at a series of pulsed detonation tubes as a possible means for implementing constant volume combustion in a gas turbine system. The analysis involves both reacting and non-reacting flow solutions to the Navier-Stokes equations using the GEMS code, which enables generalized fluids and generalized grids. The pressure contours of an unsteady, three-dimensional constant volume combustor are shown in Fig. 11 along with detailed diagnostics of the flow through a single PDE tube that is combined with a straight-tube ejector. The latter solution is compared with experiment to validate the PDE simulations.

Fig. 11. Constant volume hybrid combustion system for turbine engine application

Fig. 9. Magnetohydrodynamics Application


Summary

We have described a unified parallel framework for dealing with multi-physics problems with emphasis on the computational implementation. The equations of motion are written in a generalized form with divergence, curl and gradient operators that allows solids, liquids, gases, supercritical fluids and multi-phase or multi-component mixtures to be treated in a common manner. The partial differential equations are complemented by arbitrary thermodynamic and caloric equations of state in fluid phases and by constitutive equations for solid phases. The computational implementation of the general solver uses an arbitrary, structured/unstructured grid capability to enable applications to complex geometry, while the general conservation formulation allows simulations of arbitrary materials. Data and code structures are implemented in a fashion that mimics conventional mathematical notation and operations for tensors, vectors and partial differential equations. A parallel point-to-point technique is used to reduce host control time and traffic between the processors and to handle any number of processors in the cluster architecture, and has potential application in the fast-growing grid computing arena. Timings on the parallel clusters indicate efficient operation up to 40 processors; testing on cluster sizes exceeding 100 processors is in progress. The broad capabilities of the unified framework are demonstrated using a series of test cases that involve a trapped vortex combustor, an MHD power generator analysis and a constant volume combustion turbine system simulation. Each of the examples showcases the effectiveness of parallel computing for handling complex multidisciplinary physics problems. The results show a variety of interesting physical phenomena and the efficacy of the computational implementation.

Acknowledgement

Portions of this work have been supported by the Air Force Office of Scientific Research Test and Evaluation Program (AFOSR).
