CLIMATE MODELING WITH SPHERICAL GEODESIC GRIDS

A new approach to climate simulation uses geodesic grids generated from an icosahedron and could become an attractive alternative to current models.

DAVID A. RANDALL, TODD D. RINGLER, AND ROSS P. HEIKES
Colorado State University

PHIL JONES AND JOHN BAUMGARDNER
Los Alamos National Laboratory

Most scientists believe that the increasing concentrations of carbon dioxide and other greenhouse gases from fossil fuel combustion and other human activities dramatically affect the Earth's climate, but exactly what changes in climate can we expect in the coming years? What will the impact be, for example, on agriculture, unmanaged ecosystems, and sea levels? Which regions of the world will be most affected? We can't learn the answers to these questions in a lab setting, so computer-based climate simulations are essential to understanding and forecasting potential climate changes.

Climate simulation has come a long way since the 1970s when the US Department of Energy first started sponsoring research in the field. However, one thing hasn't changed: the climate-modeling community still needs the capability to quickly run multiple simulations using ever more detailed and accurate coupled models.

We have implemented an atmospheric general circulation model (AGCM) using a geodesic discretization of the sphere (see Figure 1). Our model uses the message-passing interface and runs efficiently on massively parallel machines. We wanted to design and construct an architecturally unified climate-modeling framework based on spherical geodesic grids. This framework lends itself to the creation of a comprehensive, conservative, accurate, portable, and highly scalable coupled climate model. We believe that this approach is the future of climate modeling.

Models and methods

Over the past several decades, meteorologists and oceanographers have developed various methods for solving the equations that describe fluid flow on a sphere. Such equations are used in both AGCMs and ocean general circulation models (OGCMs).

Some AGCMs use finite-difference methods, in which the ratios of differences approximate the derivatives that appear in the governing differential equations. The so-called pole problem arises when we apply finite-difference methods on latitude–longitude grids, in which the meridians (lines of constant longitude) converge to points at the two poles (see Figure 2). Small time steps could be used, at great expense, to maintain computational stability near the poles, but the high resolution in the east–west direction near them would be wasted because the model uses lower resolution elsewhere. Solutions to the pole problem are sometimes based on longitudinal filtering, but these methods have some unfortunate side effects and scale badly as model resolution increases.

OGCMs also use finite-difference methods.1 In the Parallel Ocean Program (POP),2 stretching the spherical coordinates so that the coordinate system's north pole is on a landmass (for example, Greenland) rather than in the Arctic Ocean eliminates the pole problem. Although this ad hoc approach alleviates the problem in the OGCM, the peculiarities of the stretched ocean model grid further complicate the coupling of the atmosphere, ocean, and land surface submodels. Moreover, the stretched grid is tailored for the particular arrangement of the continents and oceans on Earth and thus lacks generality.

Spectral AGCMs are based on spherical harmonic expansions.3 This approach also avoids the pole problem but has problems with almost-discontinuous functions such as steep terrain and the sharp boundaries of clouds. Moreover, spectral models poorly simulate the transports of highly variable nonnegative scalars, such as the concentration of moisture.4 This weakness has caused most spectral AGCMs to evolve into hybrid models in which grids represent most or all of the advective processes.

The spectral method (which is based on spherical harmonics) is not applicable to ocean modeling because of the ocean basins' complicated geometry. Researchers are exploring spectral elements based on local expansions, generally of high-order accuracy, as an alternative approach and for possible use in atmosphere modeling.5

We found that the best way to avoid the pole problem is to use quasi-uniform spherical geodesic grids: tessellations of the sphere generated from icosahedra or other Platonic solids.6

Icosahedral grids, first tried in the 1960s, give almost homogeneous and quasi-isotropic coverage of the sphere.7,8 Modern numerical algorithms, including fast elliptic solvers and mimetic discretization schemes, have made these grids more attractive, renewing interest in the idea.9,10

Figure 1. An example of a geodesic grid with a color-coded plot of the observed sea-surface temperature distribution. The continents are depicted in white. This grid has 10,242 cells, each of which is roughly 240 km across. Twelve of the cells are pentagons; the rest are hexagons.

Figure 2. An example of a latitude–longitude grid. A pole is at the top. The black dots represent grid cell centers that are equally spaced in longitude (west to east) and latitude (south to north). The red lines are the Equator and two lines of constant longitude.

Grid generation

To construct an icosahedral grid, we begin with an icosahedron (see Figure 3), which has 20 triangular faces and 12 vertices. Euler's famous theorem tells us that the number of edges is E = F + V − 2 = 20 + 12 − 2 = 30.

Our next step is to construct Voronoi cells centered on the 12 vertices. A Voronoi cell consists of the set of all points that are closer to a given vertex than to any other vertex. The Voronoi cells based on the icosahedron give us a discretization of the sphere that consists of 12 pentagonal faces. However, 12 faces do not give us enough resolution to represent the Earth system's complicated processes; we need finer grids.

As Figure 4 shows, we can halve the linear cell dimensions of each of the icosahedron's triangular faces and produce a finer grid. Bisecting the edges divides a triangular face into four smaller triangles and increases the number of faces by a factor of four. The bisection points become new vertices, so the number of vertices increases by the number of edges. The new vertices are popped out onto the sphere (see Figure 4), and the bisection process is repeated as needed to generate arbitrarily fine grids. We can generalize the grid-generation algorithm outlined earlier to allow a broad variety of possible horizontal resolutions.
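For illustration, the bisection step can be sketched in a few lines. The following is a minimal sketch (not the model's actual grid generator), assuming NumPy and triangles stored as triples of unit vectors on the sphere:

```python
# Minimal sketch of one bisection pass (illustrative only, not the model code).
# Each triangle is a tuple of three unit vectors on the sphere.
import numpy as np

def bisect(triangles):
    """Split every spherical triangle into four, projecting midpoints onto the sphere."""
    refined = []
    for a, b, c in triangles:
        # Edge midpoints, "popped out" onto the unit sphere by normalization.
        ab = (a + b) / np.linalg.norm(a + b)
        bc = (b + c) / np.linalg.norm(b + c)
        ca = (c + a) / np.linalg.norm(c + a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return refined
```

Applying this pass repeatedly to the icosahedron's 20 faces quadruples the number of triangles each time; the triangle vertices serve as the generating points for the Voronoi cells described earlier.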

The geodesic grid's most obvious advantage is that all grid cells are nearly the same size. Variational tweaking of the grid can improve the mesh's uniformity (see Table 1, which includes the resolutions that we used for climate simulation). The number of cells NC satisfies NC = 5 × 2^(2n+1) + 2, where n ≥ 0 is the "counter" listed in the table's first column. Because of the uniform cell size, computational stability for advection is not an issue, even with conservative finite-volume schemes. For example, with a grid cell 100 km across and a wind speed of 100 m/s, the allowed time step is 1,000 seconds. Time steps much longer than this would be incompatible with accurate simulations of the day–night cycle and fast adjustment processes (such as thunderstorms), which have short intrinsic timescales.
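As a quick check (an illustration only; the helper name is ours), the cell-count formula and the advective time-step estimate quoted above can be verified directly:

```python
# Check the cell-count formula and the advective time-step example from the text.
def cell_count(n):
    """Number of cells after n bisections: NC = 5 * 2**(2*n + 1) + 2."""
    return 5 * 2 ** (2 * n + 1) + 2

print([cell_count(n) for n in range(7)])  # [12, 42, 162, 642, 2562, 10242, 40962]

cell_width = 100.0e3   # grid cell 100 km across
wind_speed = 100.0     # wind speed of 100 m/s
print(cell_width / wind_speed)  # allowed advective time step: 1000.0 seconds
```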

Even with a quasi-homogeneous grid, rapidly propagating waves can still significantly limit the time step. The fastest waves in our AGCM travel at about 300 m/s. Semi-implicit time differencing permits a 1,000-second time step without sacrificing the accurate simulation of large-scale atmospheric circulation.

This semi-implicit method creates an elliptic problem that must be solved at each time step. Having an efficient solution to minimize the computational overhead is also important. One version of our geodesic AGCM uses a parallel multigrid method to implement a semi-implicit time-differencing scheme.
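For illustration only, semi-implicit differencing of a wave that travels at speed c leads, each time step Δt, to a Helmholtz-type solve of the general form (I − (cΔt)²∇²)x = r. The sketch below is our stand-in, not the AGCM's solver: it assumes a 1D periodic domain and uses SciPy's sparse direct solver in place of the parallel multigrid method.

```python
# Sketch of the per-step elliptic (Helmholtz-type) solve produced by
# semi-implicit time differencing; illustrative 1D stand-in, not the AGCM solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dx = 400, 100.0e3        # 400 points, 100-km spacing (assumed values)
c, dt = 300.0, 1000.0       # fastest wave speed and time step quoted in the text

lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
lap[0, -1] = lap[-1, 0] = 1.0     # periodic boundaries
lap = lap.tocsr() / dx ** 2

helmholtz = (sp.identity(n, format="csr") - (c * dt) ** 2 * lap).tocsc()
rhs = np.ones(n)                  # stand-in for the explicit part of the update
x = spla.spsolve(helmholtz, rhs)  # a parallel multigrid solver does this job in the model
```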

Figure 3. An icosahedron, which has 20 triangular faces, 12 vertices, and 30 edges.

Figure 4. The first bisection of the icosahedron: (a) the icosahedron; (b) bisect each edge and connect the dots; (c) pop the new vertices out onto the unit sphere.


In all ocean models, realizing efficient time integration requires splitting the fast barotropic dynamics (the rapidly propagating deep waves) from the slower baroclinic dynamics (the more slowly propagating shallower waves). Our ocean model uses a simple subcycling approach to advance the faster barotropic dynamics and a second-order-accurate staggered time-stepping scheme for the slower baroclinic modes. Within this coming year, we plan to explore the reduced-gravity approach,11 which permits long time steps without subcycling, solving an elliptic problem, or creating significant communication overhead. Our goal is to use a time step of 20 to 30 minutes for both the AGCM and OGCM.

Geodesic grids are quasi-isotropic. Only three regular polygons tile the plane: equilateral triangles, squares, and hexagons. Figure 5 shows planar grids made up of each of these three possible polygonal elements. On the triangular and square grids, some of a given cell's neighbors lie directly across cell walls whereas others lie across cell vertices. As a result, finite-difference operators constructed on these grids tend to use wall neighbors and vertex neighbors in different ways. For example, the simplest second-order finite-difference approximation to the gradient, on a square grid, uses only wall neighbors; vertex neighbors are ignored. Although constructing finite-difference operators on square (and triangular) grids using the information from all neighboring cells is possible, these grids' essential asymmetries remain unavoidably manifested in the forms of the finite-difference operators. In contrast, hexagonal grids have the property that all of a given cell's neighbors lie across cell walls; no vertex neighbors exist. Therefore, finite-difference operators constructed on hexagonal grids treat all neighboring cells in the same way, making the operators as symmetrical and isotropic as possible. A geodesic grid on a sphere has 12 pentagonal cells plus many hexagonal cells, yet each geodesic grid cell has only wall neighbors; no vertex neighbors exist anywhere on the sphere.

We can cut a geodesic grid into 10 rectangular panels, each corresponding to a pair of triangular faces from the original icosahedron. Figure 6 shows the 10 panels paired to make five logically rectangular panels consisting of rows and columns that form a rectangular array. As the figure shows, two leftover cells correspond to the points where the panels meet at the two poles, and both of these points are pentagons.

Figure 5. Small grids made up of (a) equilateral triangles, (b) squares, and (c) hexagons; these are the only regular polygons that tile the plane. A triangular cell has 12 neighbors, only 3 of which are wall neighbors; a square cell has 8 neighbors, 4 of which are wall neighbors; a hexagonal cell has 6 neighbors, all of which are wall neighbors. The hexagonal grid has the highest symmetry: all neighboring cells of a given hexagonal cell are located across cell walls, whereas with either triangles or squares, some neighbors are across walls and others are across corners.

Table 1. A summary of the properties of geodesic grids of several different resolutions.

n | Number of cells | Number of cells along Equator | Average cell area (km²) | Area ratio (smallest to largest) | Average distance between cell centers (km) | Ratio of smallest to largest distance between cell centers
0 | 12     | 0   | 4.25e7 | 1     | 3717.4 | 1
1 | 42     | 10  | 1.21e7 | 0.885 | 3717.4 | 0.881
2 | 162    | 20  | 3.14e6 | 0.916 | 1909.5 | 0.820
3 | 642    | 40  | 7.94e5 | 0.942 | 961.6  | 0.799
4 | 2,562  | 80  | 1.99e5 | 0.948 | 481.6  | 0.790
5 | 10,242 | 160 | 4.98e4 | 0.951 | 240.9  | 0.789
6 | 40,962 | 320 | 1.24e4 | 0.952 | 120.5  | 0.788


The rectangular panels in the figure correspond to the logically rectangular organization of the gridded data in the computer's memory.

We recognize the need to make the geodesic grid easy and convenient for others to work with. For this purpose, we have created "canned" routines that can

• Generate geodesic grids
• Interpolate data from conventional latitude–longitude grids onto geodesic grids
• Plot data that is defined on geodesic grids

These routines are intended to help newcomers adopt geodesic grids, for both modeling and data analysis (see http://kiwi.atmos.colostate.edu/BUGS/projects/geodesic).

Although we already have an AGCM based on a geodesic grid, we are still developing an OGCM that can use a similar grid of higher resolution. To our knowledge, this will be the first OGCM based on a spherical geodesic grid.

Coding strategy

Currently, the codes we have developed are being used in an atmospheric model. We plan to simulate the fluid motions of both the atmosphere and ocean using the same source-code modules. Some of these modules could also help model sea ice dynamics. An advantage of using similar discretizations across all components of a coupled model is that the various model components can share software components. We plan to implement our geodesic climate model using a layered architecture in which common utilities and data types are pushed to lower layers.

Figure 6. We can cut (a) a spherical geodesic grid into (b) logically rectangular panels, which offers a convenient way to organize the data in a computer's memory.



Component models comprise the upper layer and are built using shared code from the lower layers, which consist of low-level utilities and machine-specific code (for example, timers and communications primitives). The next layer contains higher-level shared components, including modules to support the common horizontal discretization, common solvers, higher-level data-motion abstractions, and model-coupling utilities. Layers above this include the component models and the full coupled climate model itself.

Such a layered approach requires well-defined interfaces to perform stand-alone testing of individual utilities and modules. Moreover, a well-defined interface lets us use standard utilities developed through other efforts. Using shared modules for solvers and horizontal differencing will permit performance optimization of well-defined kernels, thus improving all models and reducing workload.

Current computer architectures are typically networked clusters of nodes consisting of moderate numbers of cache-based microprocessors. Due to hardware constraints, it's often best to use a message-passing paradigm at the cluster level and a shared-memory paradigm (for example, OpenMP or other thread-based approaches) among the processors on a node. Typically, these levels of parallelism should occur at a very high level in the code to provide enough work to amortize any overhead. Recent industry developments motivated us to maintain support for vector supercomputers. We believe that the data structure described later gives us the needed flexibility.

We can represent a 2D field on the spherical geodesic grid by using a data decomposition that consists of a collection of logically rectangular 2D arrays. Figure 7 depicts two examples of such data structures. In Figure 7a, we decompose the grid into 40 blocks; Figure 7b uses 160. We can represent 2D fields as arrays of dimension (ni, nj, nsdm), where ni and nj are the lengths of a block's sides, and nsdm is the total number of blocks. At present, we use square blocks of data, for which ni = nj. This limits nsdm to the values of 10, 40, 160, 640, and so on. If the need arises, we can generalize the algorithm to accommodate rectangular blocks of data for which ni ≠ nj.

We can distribute the nsdm blocks over nsdm processes by assigning one block of data to each process. Alternatively, we can use fewer than nsdm processes and assign multiple blocks of data to each process. Each process sees a data structure of size (ni, ni, psdm), where the value of psdm generally varies from process to process, and the sum of psdm across all processes equals nsdm. The generalization from 2D to 3D fields is trivial; each process sees a data decomposition of size (ni, ni, nk, psdm), where nk is the number of vertical levels.
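A minimal sketch of this layout (ours; the block size, number of vertical levels, and round-robin block assignment are assumed for illustration, while the names ni, nk, nsdm, and psdm follow the text):

```python
# Sketch of the block data structure: nsdm square blocks distributed over
# processes; each process holds an array of shape (ni, ni, nk, psdm).
import numpy as np

ni, nk, nsdm, nprocs = 16, 26, 160, 40      # e.g. the 160-block decomposition of Figure 7b

# Round-robin assignment of blocks to processes (one simple possibility).
owned = {p: [b for b in range(nsdm) if b % nprocs == p] for p in range(nprocs)}

rank = 0                                    # pretend we are process 0
psdm = len(owned[rank])                     # number of blocks this process owns
field3d = np.zeros((ni, ni, nk, psdm))      # local portion of one 3D field
print(field3d.shape)                        # (16, 16, 26, 4)
```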

This domain decomposition strategy has many advantages. First, the algorithm is equally applicable to atmospheric and ocean modeling; in fact, the Ocean Modeling Group at Los Alamos National Laboratory has independently developed and implemented the same domain decomposition strategy.12

Second, this approach limits the number of messages and the total size of the messages required to update data at each block's boundaries. This makes the algorithm somewhat insensitive to the particular type of interconnect fabric used.

Third, we built a load-balancing mechanism into the algorithm that lets each process own an arbitrary number of data blocks. Overloaded processes can give one or more blocks of data to underused processes. Currently, we load balance at compile time. In the future, we intend to explore dynamic load balancing. Some blocks of data will not contain any ocean points, so we can cull these blocks to increase the algorithm's efficiency.

Figure 7. A 2D field decomposed into (a) 40 logically square blocks of data and (b) 160 logically square blocks of data.

Fourth, we can adjust block size to adapt to a given machine's memory hierarchy. For cache-based microprocessors, we can choose the block size to be small enough to remain cache resident for most of the time step; for vector machines, we can choose the block size to be as large as possible for better vector efficiency.

Finally, our approach allows the seamless use of multitasking threads (such as OpenMP) under the message-passing interface library. Because every process owns psdm blocks of data, and each of these blocks can be manipulated independently of all other blocks of data, we can multitask over the psdm index. Threads are spawned at the beginning of each time step and are not closed until near the end of the time step. This implies that each thread is open for a relatively long time, which mitigates the cost of spawning the thread. The multitasking capability lets us best exploit architectures that use both shared and distributed memory paradigms.
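The following sketch (ours, with Python threads standing in for OpenMP underneath MPI) shows the shape of the idea: a pool of threads is created once per time step and works through the process's independent blocks.

```python
# Sketch of multitasking over the block index within one time step.
# Python threads stand in for OpenMP; blocks are independent within a step.
from concurrent.futures import ThreadPoolExecutor

def advance_block(block):
    """Advance a single block of data through one time step (placeholder)."""
    pass

def time_step(local_blocks, num_threads=4):
    # Threads are spawned once per step and reused for all owned blocks,
    # which amortizes the cost of spawning them.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(advance_block, local_blocks))
```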

Figure 8 shows the geophysical fluid dynamics algorithm's scaling efficiency as implemented in our atmospheric dynamical core. When we discretize the sphere by using 2,562 grid cells, the nominal distance between grid cell centers is approximately 450 km; the 10,242 and 40,962 grids have grid spacings of approximately 225 km and 112 km, respectively. As Figure 8 shows, the model using the finest resolution considered here (40,962 cells) scales linearly out to approximately 80 processes and shows a speedup of 110 when we use 160 processes. The superlinear speedup of the model using the 40,962-cell grid at around 20 processes is due to data caching.

Flux coupling

Climate models must include representations of the atmosphere, ocean, sea ice, and land surface. For reasons having mainly to do with ocean dynamics, the grid used to represent the ocean is usually finer than the one used to represent the atmosphere. Important exchanges of energy, mass, and momentum exist across the Earth's surface; for example, water evaporates from the ocean and appears as water vapor in the atmosphere. In a climate model, an interface routine called a flux coupler computes these exchanges (see Figure 9).

In an earlier stage of our work, we used a latitude–longitude finite-difference model coupled with the stretched POP grid mentioned earlier. We then revised our atmospheric model to use the geodesic grid, still with the POP grid for the ocean. We designed a flux coupler that let the curvilinear POP grid communicate with the geodesic AGCM grid accurately and conservatively.

Figure 8. The current atmospheric dynamical core's scaling efficiency (speedup versus number of processes for the 2,562-, 10,242-, and 40,962-cell grids). Both panels show the same data: (a) the scaling out to 40 processes; (b) extending the x-axis to 160 processes. We performed these tests on an IBM SP-2.

Figure 9. A flux coupler handles communications between the atmosphere (AGCM) and the underlying surface, which includes the land-surface, ocean (OGCM), and sea-ice models. The flux coupler must take into account differences between the grids used for the atmosphere and the surface.


Figure 10 summarizes the grid overlap that the flux coupler must handle. The POP curvilinear grid represents the ocean, sea ice, land surface, and land ice submodels. We represent the atmosphere using the spherical geodesic grid. Surface fluxes (for example, evaporation from the ocean surface) are computed on the POP grid. To do this, we must interpolate the atmospheric state variables from the geodesic atmosphere grid to the surface grid and average the computed surface fluxes from the surface grid to the atmosphere grid. We performed these interpolation and averaging steps by using the methods Philip W. Jones developed.13

An attraction of using geodesic grids for both the atmosphere and the ocean models is that they greatly simplify the design of an efficient, parallel flux coupler. To ensure satisfactory coupling of the ocean and the atmosphere, we must compute the air–sea fluxes due to turbulence at the ocean model's higher resolution. We must interpolate the various atmospheric variables needed to compute these fluxes from the atmosphere model's coarse grid to the ocean model's fine one. We must then average the fluxes back to the coarser atmosphere grid in such a way that the globally averaged surface flux is the same on both grids. Moreover, we must interpolate atmospheric fluxes due to precipitation and radiation, computed on the atmosphere model's coarse grid, to the ocean model's fine grid in such a way that the globally averaged surface flux is the same on both. Similar comments apply to communications between the atmosphere and land surface models. The computational machinery needed to perform these various steps is encapsulated in a flux coupler, which we have designed to facilitate communications and interactions among the various submodels.
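As an illustration of the averaging step (a sketch of ours, not the remapping library itself), assume each fine surface cell lies under a single coarse atmosphere cell identified by a parent index; area weighting then makes the globally integrated flux identical on both grids:

```python
# Area-weighted averaging of fluxes from the fine surface grid back to the
# coarse atmosphere grid; the globally integrated flux is preserved.
import numpy as np

def average_to_coarse(fine_flux, fine_area, parent, n_coarse):
    """parent[i] is the coarse cell containing fine cell i (assumed one-to-one mapping)."""
    coarse_flux = np.zeros(n_coarse)
    coarse_area = np.zeros(n_coarse)
    np.add.at(coarse_flux, parent, fine_flux * fine_area)
    np.add.at(coarse_area, parent, fine_area)
    # The area-weighted sum of the returned coarse fluxes equals the
    # area-weighted sum of the fine fluxes, so the global average is unchanged.
    return coarse_flux / coarse_area
```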

In our geodesic climate model, the surface and atmosphere grids are both spherical geodesic grids, albeit with different resolutions. The ocean, sea ice, land surface, and land ice submodels share a single geodesic surface grid.

Each process owns one or more (generally more) atmosphere grid blocks and one or more corresponding surface grid blocks. Atmosphere and surface blocks nearly coincide; that is, one nearly congruent surface block underlies each atmosphere block. The number of grid cells in a surface block is normally much larger than the number of grid cells in the atmosphere block above it because the surface grid is normally finer than the atmosphere grid.

In our model, each process owns N atmosphere points and M surface points, where N and M are the same for all processes. In most if not all existing coupled models, this is virtually impossible to arrange because the atmosphere and ocean grids completely differ from each other. In our geodesic climate model, we can make N and M the same for all processes because the atmosphere and ocean grids have the same shape.

Multiple blocks that belong to a process need not be geographically contiguous. For example, a particular process might own a block in North America, a block in the south Atlantic, a block in Africa, and a block in the tropical Pacific.

Each process will work on both the atmosphere above and the surface below; here, surface means ocean and land surface. This is a single-executable paradigm. In contrast, a multiple-executable paradigm might let the ocean, atmosphere, and land surface models run as distinct executables. Our single-executable approach minimizes the message passing needed to compute the fluxes between the atmosphere and the surface.

Because a process can own multiple atmosphere blocks and multiple surface blocks, we can load balance the global model by allocating blocks to processes so that similar mixes of land and ocean points are assigned to all processes. On average, the cells that each process works on will be about two-thirds ocean and one-third land, simply because the continents cover about one-third of the Earth's surface.
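One simple way to obtain such a mix (an illustrative heuristic of ours, not necessarily the scheme used in the model) is to sort blocks by their ocean fraction and deal them out to processes in round-robin order:

```python
# Assign surface blocks to processes so that each process gets a similar
# mix of ocean-dominated and land-dominated blocks (illustrative heuristic).
def balance_blocks(ocean_fraction, nprocs):
    """ocean_fraction: one value per block, the fraction of its cells over ocean."""
    order = sorted(range(len(ocean_fraction)),
                   key=lambda b: ocean_fraction[b], reverse=True)
    assignment = {p: [] for p in range(nprocs)}
    for i, block in enumerate(order):
        assignment[i % nprocs].append(block)
    return assignment
```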

Figure 10. The surface grids used by the Parallel Ocean Program (dashed lines) and the global atmospheric grid (solid lines). The grids communicate in an accurate and conservative manner using a library called SCRIP.


Because a flux coupler naturally communicates among a climate model's various components, it is a potential computational bottleneck. In particular, when the atmosphere and ocean models are parallelized using 2D domain decomposition, parallelization of the flux coupler itself becomes important. A fast flux coupler permits frequent communication among the component models, for example, by permitting resolution of the diurnal cycle, which is crucial over land and is also thought to be important for the atmosphere's interactions with the tropical upper ocean. With a serial flux coupler, we can choose less frequent (for example, daily) communication to improve computational speed, but at the cost of sacrificing physical realism. We intend to provide frequent flux coupling between our AGCM and OGCM to permit realistic simulations of the diurnal cycle and other high-frequency variability affecting both the atmosphere and ocean. Such frequent flux coupling is practical with the flux coupler design outlined earlier.

In addition to effective load balancing, an efficient flux coupler should require minimal interprocessor communication. Communication is required when a grid cell, generally at the processor's horizontal grid domain boundary, is shared with one or more other processors. The spherical geodesic grids are constructed so that any grid cell on the fine (ocean) grid will overlap with at most three grid cells from the coarse (atmosphere) grid. This property keeps interprocessor communication to a minimum and, along with effective load balancing, helps the flux coupler scale to many processors.

Climate models improve with time, but the model development process is never finished. Geodesic grids can solve some of our modeling problems, but others remain. A particularly difficult problem is realistically simulating the distributions of water vapor and clouds and how these distributions change when the climate changes. Our next round of model development will be aimed at improved representations of water vapor and clouds, in part through the use of floating model layers that follow the moisture as it moves vertically through a column of air.

Acknowledgments
This research is supported through a cooperative agreement between the Climate Change Prediction Program of the US Department of Energy and Colorado State University. Our co-investigators on this project are Akio Arakawa (University of California, Los Angeles), Albert J. Semtner, Jr. (US Naval Postgraduate School), Wayne Schubert (Colorado State University), and Scott Fulton (Clarkson University).

References
1. A.J. Semtner, "Modeling Ocean Circulation," Science, vol. 269, no. 5229, June 1995, pp. 1379–1385.
2. R.D. Smith, S. Kortas, and B. Meltz, Curvilinear Coordinates for Global Ocean Models, tech. report LA-UR-95-1146, Los Alamos Nat'l Lab., Los Alamos, N.M., 1995.
3. M. Jarraud and A.J. Simmons, "The Spectral Technique," Seminar on Numerical Methods for Weather Prediction, European Centre for Medium Range Weather Forecasts, Reading, UK, 1983, pp. 1–59.
4. D.L. Williamson and P.J. Rasch, "Water Vapor Transport in the NCAR CCM2," Tellus, vol. 46A, no. 1, Jan. 1994, pp. 34–51.
5. D.B. Haidvogel et al., "Global Modeling of the Ocean and Atmosphere Using the Spectral Element Method," Numerical Methods in Atmospheric and Oceanic Modeling, C.A. Lin, R. Laprise, and H. Ritchie, eds., European Centre for Medium Range Weather Forecasts, Reading, UK, 1999, pp. 505–531.
6. R.P. Heikes and D.A. Randall, "Numerical Integration of the Shallow Water Equations on a Twisted Icosahedral Grid, Part 1: Basic Design and Results of Tests," Monthly Weather Rev., vol. 123, no. 6, June 1995, pp. 1862–1880.
7. D.L. Williamson, "Integration of the Barotropic Vorticity Equation on a Spherical Geodesic Grid," Tellus, vol. 20, no. 4, Oct. 1968, pp. 642–653.
8. R. Sadourny, A. Arakawa, and Y. Mintz, "Integration of the Nondivergent Barotropic Equation with an Icosahedral Hexagonal Grid for the Sphere," Monthly Weather Rev., vol. 96, no. 6, June 1968, pp. 351–356.
9. J.R. Baumgardner and P.O. Frederickson, "Icosahedral Discretization of the Two-Sphere," SIAM J. Numerical Analysis, vol. 22, no. 6, June 1985, pp. 1107–1115.
10. T.D. Ringler and D.A. Randall, "A Potential Enstrophy and Energy Conserving Numerical Scheme for Solution of the Shallow-Water Equations on a Geodesic Grid," Monthly Weather Rev., vol. 130, no. 5, May 2002, pp. 1397–1410.
11. T.G. Jensen, "Artificial Retardation of Barotropic Waves in Layered Ocean Models," Monthly Weather Rev., vol. 124, no. 6, June 1996, pp. 1272–1283.
12. P.W. Jones, How to Use the Parallel Ocean Program (POP): A Tutorial for Users and Developers, tech. report LA-UR-02-502, Los Alamos Nat'l Lab., Los Alamos, N.M., 2002.
13. P.W. Jones, "First- and Second-Order Conservative Remapping Schemes for Grids in Spherical Coordinates," Monthly Weather Rev., vol. 127, no. 9, Sept. 1999, pp. 2204–2210.

David A. Randall is a professor of atmospheric science at Colorado State University. His research interests include general circulation model design and the role of clouds in climate. He received his PhD in atmospheric science from the University of California, Los Angeles. He is a fellow of the American Meteorological Society, the American Geophysical Union, and the American Association for the Advancement of Science. Contact him at the Dept. of Atmospheric Science, Colorado State Univ., Fort Collins, CO 80523-1371; [email protected].


Todd D. Ringler is a research scientist in the Department of Atmospheric Science at Colorado State University. His research focus is on modeling the large-scale motions of the atmosphere and ocean by developing numerical models of the climate system. He received a BS in aerospace engineering from West Virginia University and an MS and PhD in aerospace engineering from Cornell University. Contact him at the Dept. of Atmospheric Science, Colorado State Univ., Fort Collins, CO 80523-1371; [email protected].

Ross P. Heikes is a research scientist at Colorado State University. His research interests include numerical methods and atmospheric general circulation. He received a BS in mathematics and atmospheric science and a PhD in atmospheric science from Colorado State University. Contact him at the Dept. of Atmospheric Science, Colorado State Univ., Fort Collins, CO 80523-1371; [email protected].

Phil Jones is a technical staff member in the Theoretical Fluid Dynamics Group at Los Alamos National Laboratory. His research interests include coupled climate modeling, ocean modeling, remapping and interpolation, and computational performance of climate models. He holds a PhD in astrophysical, planetary, and atmospheric sciences from the University of Colorado and a BS in physics and math from Iowa State University. He is a member of the American Geophysical Union and the American Meteorological Society. Contact him at T-3 MS B216, Los Alamos Nat'l Lab, Los Alamos, NM 87544; [email protected]; http://home.lanl.gov/pwjones.

John Baumgardner is a technical staff member in the Fluid Dynamics Group of the Theoretical Division at Los Alamos National Laboratory. His research interests include development of a hybrid vertical coordinate global ocean model and mantle dynamics modeling of terrestrial planets. He has a PhD in geophysics and space physics from UCLA and is a member of the American Geophysical Union. Contact him at MS B216, Los Alamos Nat'l Lab., Los Alamos, NM 87545; [email protected].

For more information on this or any other computing topic, please visit our Digital Library at http://computer.org/publications/dlib.
