Post-processing analysis of climate simulation data using Python and MPI


John Dennis (dennis@ucar.edu), Dave Brown (dbrown@ucar.edu), Kevin Paul (kpaul@ucar.edu), Sheri Mickelson (mickelso@ucar.edu)


Motivation

Post-processing consumes a surprisingly large fraction of simulation time for high-resolution runs

Post-processing analysis is not typically parallelized

Can we parallelize post-processing using existing software?
◦ Python
◦ MPI
◦ pyNGL: Python interface to NCL graphics
◦ pyNIO: Python interface to the NCL I/O library


Consider a “piece” of the CESM post-processing workflow: conversion of time-slice to time-series

Time-slice
◦ Generated by the CESM component model
◦ All variables for a particular time-slice in one file

Time-series
◦ Form used for some post-processing and CMIP
◦ A single variable over a range of model time

Single most expensive post-processing step for the CMIP5 submission
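To make the conversion concrete, here is a minimal sketch of pulling one variable out of a set of monthly time-slice files and writing it as a single time-series file. netCDF4 is used as a stand-in I/O library (the deck itself uses pyNIO), and the file pattern, the variable name "T", and the "time" record dimension are illustrative assumptions.

# Slice-to-series sketch; netCDF4 stands in for pyNIO, and the file
# pattern, variable name, and "time" dimension are assumptions.
from netCDF4 import MFDataset, Dataset

# Aggregate all monthly time-slice files along the record dimension
slices = MFDataset("case.cam.h0.*.nc")
var = slices.variables["T"]

# Write a single-variable file spanning the full model time range
# (coordinate variables and attributes are omitted for brevity)
out = Dataset("T_series.nc", "w")
for name, size in zip(var.dimensions, var.shape):
    out.createDimension(name, None if name == "time" else size)
series = out.createVariable("T", var.dtype, var.dimensions)
series[:] = var[:]
out.close()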


The experiment: convert 10 years of monthly time-slice files into time-series files

Different methods:
◦ netCDF Operators (NCO)
◦ NCAR Command Language (NCL)
◦ Python using pyNIO (the NCL I/O library)
◦ Climate Data Operators (CDO)
◦ ncReshaper-prototype (Fortran + PIO)


Dataset characteristics: 10 years of monthly output

Dataset       # of 2D vars   # of 3D vars   Input total size (Gbytes)
CAMFV-1.0          40              82                 28.4
CAMSE-1.0          43              89                 30.8
CICE-1.0          117               -                  8.4
CAMSE-0.25        101              97               1077.1
CLM-1.0           297               -                  9.0
CLM-0.25          150               -                 84.0
CICE-0.1          114               -                569.6
POP-0.1            23              11               3183.8
POP-1.0            78              36                194.4

(A "-" marks datasets with no 3D variables listed.)


Duration: Serial NCO

[Chart: per-dataset conversion durations for serial NCO, with annotated times of 14 hours and 5 hours.]

Throughput: Serial methods

[Chart: throughput of the serial methods.]

Approaches to Parallelism

Data-parallelism:
◦ Divide a single variable across multiple ranks
◦ The parallelism used by large simulation codes: CESM, WRF, etc.
◦ The approach used by the ncReshaper-prototype code

Task-parallelism:
◦ Divide independent tasks across multiple ranks
◦ Climate models output a large number of different variables: T, U, V, W, PS, etc.
◦ The approach used by the Python + MPI code

The two approaches are contrasted in the sketch below.
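Here is a minimal sketch of the difference using mpi4py; the module choice, the variable names, and the time-step count are illustrative assumptions, not the deck's code.

# Contrast of the two approaches; mpi4py, the variable names, and
# the time-step count are illustrative assumptions.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Data-parallelism: every rank works on the same variable but owns
# a disjoint set of its time steps
ntime = 120                          # e.g. 10 years of monthly output
my_steps = range(rank, ntime, size)

# Task-parallelism: every rank owns whole variables and processes
# them independently of the other ranks
variables = ["T", "U", "V", "W", "PS"]
my_vars = variables[rank::size]

Because each variable's conversion is independent of the others, the task-parallel version needs no communication between ranks.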


Single-source Python approach

Create a dictionary describing which tasks need to be performed

Partition the dictionary across MPI ranks

The utility module ‘parUtils.py’ is the only difference between parallel and serial execution


Example Python code

import parUtils as par
…
rank = par.GetRank()

# Construct global dictionary 'varsTimeseries' for all variables
varsTimeseries = ConstructDict()
…
# Partition dictionary into local piece
lvars = par.Partition(varsTimeseries)

# Iterate over all variables assigned to this MPI rank
for k, v in lvars.iteritems():
    …
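The deck does not show parUtils.py itself; the following is a minimal sketch of what its two calls might look like, assuming mpi4py underneath and a round-robin partitioning strategy.

# parUtils.py -- a minimal sketch; mpi4py and the round-robin
# strategy are assumptions, not the module's real internals.
from mpi4py import MPI

_comm = MPI.COMM_WORLD

def GetRank():
    # Rank of this process (0 in a serial, single-process run)
    return _comm.Get_rank()

def Partition(tasks):
    # Return the round-robin share of the task dictionary owned by
    # this rank; with a single rank this is the whole dictionary
    rank, size = _comm.Get_rank(), _comm.Get_size()
    keys = sorted(tasks)[rank::size]
    return dict((k, tasks[k]) for k in keys)

With one MPI rank, Partition returns every task, which is how a single source file covers both serial and parallel execution.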


Throughput: Parallel methods (4 nodes, 16 cores)

[Chart: throughput of the task-parallel and data-parallel methods.]

Throughput: pyNIO + MPI with compression

[Chart: throughput of pyNIO + MPI with compression enabled.]

Duration: NCO versus pyNIO + MPI with compression

[Chart: conversion durations; pyNIO + MPI achieves a 7.9x speedup over serial NCO on 3 nodes and a 35x speedup on 13 nodes.]

Conclusions

Large amounts of “easy parallelism” are present in post-processing operations

Single-source Python scripts can be written to achieve task-parallel execution

Speedups of 8x to 35x are possible

Need the ability to exploit both task- and data-parallelism

Exploring broader use within the CESM workflow

Expose the entire NCL capability to Python?
