Toward a supernodal sparse direct solver over DAG runtimes
CEMRACS'2012, Luminy - July 30, 2012
X. Lacoste, P. Ramet, M. Faverge, I. Yamazaki, G. Bosilca, J. Dongarra
Pierre RAMET, BACCHUS team, Inria Bordeaux Sud-Ouest


Source: http://calcul.math.cnrs.fr/Documents/Ecoles/CEMRACS2012/cemracs_PRamet.pdf


Outline

1. Context and goals
2. Kernels: panel factorization; trailing supernodes update (CPU version); sparse GEMM on GPU
3. Runtimes
4. Results: matrices and machines; multicore results; GPU results
5. Conclusion and extra tools

Pierre RAMET - CEMRACS'2012, July 30, 2012


1. Context and goals


Context and goals

Mixed/hybrid direct-iterative methods

The "spectrum" of linear algebra solvers

Direct solvers:
- Robust/accurate for general problems
- BLAS-3 based implementation
- Memory/CPU prohibitive for large 3D problems
- Limited parallel scalability

Iterative solvers:
- Problem-dependent efficiency / controlled accuracy
- Only mat-vec required, fine-grain computation
- Less memory consumption, possible trade-off with CPU
- Attractive "built-in" parallel features


Context and goals

Possible solutions for many-cores
- Multicore: PaStiX is already finely tuned to use MPI and POSIX threads;
- Multiple GPUs and many-cores, two solutions:
  - Manually handle the GPUs: a lot of work; heavy maintenance.
  - Use a dedicated runtime: may lose the performance obtained on many-core; easy to add new computing devices.

Elected solution, runtimes:
- StarPU: RUNTIME team, Inria Bordeaux Sud-Ouest;
- DAGuE: ICL, University of Tennessee, Knoxville.


Context and goals

Major steps for solving sparse linear systems

1. Analysis: the matrix is preprocessed to improve its structural properties (A'x' = b' with A' = PnPDrADcQPT)
2. Factorization: the matrix is factorized as A = LU, LLT or LDLT
3. Solve: the solution x is computed by means of forward and backward substitutions
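As a concrete illustration of steps 2 and 3, here is a minimal dense sketch (illustrative names, not PaStiX code; a real sparse solver operates only on the nonzero structure computed during analysis) of an LU factorization followed by forward and backward substitution:

```c
#include <assert.h>
#include <math.h>

/* Toy dense illustration: Doolittle LU without pivoting, then
 * forward and backward substitution. */

/* Factor A (n x n, row-major) in place: L strictly below the diagonal
 * (unit diagonal implicit), U on and above it. */
static void lu_factor(double *a, int n) {
    for (int k = 0; k < n; k++)
        for (int i = k + 1; i < n; i++) {
            a[i*n + k] /= a[k*n + k];              /* multiplier l_ik */
            for (int j = k + 1; j < n; j++)
                a[i*n + j] -= a[i*n + k] * a[k*n + j];
        }
}

/* Solve LUx = b: forward substitution Ly = b, then backward Ux = y. */
static void lu_solve(const double *a, int n, const double *b, double *x) {
    for (int i = 0; i < n; i++) {                  /* unit lower L */
        x[i] = b[i];
        for (int j = 0; j < i; j++) x[i] -= a[i*n + j] * x[j];
    }
    for (int i = n - 1; i >= 0; i--) {             /* upper U */
        for (int j = i + 1; j < n; j++) x[i] -= a[i*n + j] * x[j];
        x[i] /= a[i*n + i];
    }
}

/* Factor and solve a fixed 3x3 system with exact solution (1, 2, 3);
 * returns component idx of the computed solution. */
double demo_solve(int idx) {
    double a[9] = { 4, 1, 0,
                    1, 4, 1,
                    0, 1, 4 };
    double b[3] = { 6, 12, 14 };
    double x[3];
    lu_factor(a, 3);
    lu_solve(a, 3, b, x);
    return x[idx];
}
```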


Context and goals

Symmetric matrices and graphs
- Assumptions: A symmetric, pivots are chosen on the diagonal
- The structure of a symmetric A is represented by the graph G = (V, E)
- Vertices are associated to columns: V = {1, ..., n}
- Edges E are defined by: (i, j) ∈ E ⇔ aij ≠ 0
- G is undirected (symmetry of A)
- Number of nonzeros in column j = |AdjG(j)|
- A symmetric permutation ≡ renumbering the graph

Figure: a 5×5 symmetric matrix and its corresponding graph
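The matrix-to-graph correspondence can be sketched in a few lines; the 5×5 pattern below is hypothetical (the slide's figure is not reproduced here) and `graph_degree` is an illustrative helper:

```c
#include <assert.h>

/* Sketch of the matrix-to-graph view: edges of G are the off-diagonal
 * nonzeros of a symmetric A, stored once as (row, col) pairs with
 * row > col. Pattern values are hypothetical. */
enum { NV = 5, NE = 6 };
static const int erow[NE] = { 2, 4, 3, 4, 3, 4 };
static const int ecol[NE] = { 0, 0, 1, 1, 2, 3 };

/* |AdjG(j)|: the number of off-diagonal nonzeros in column j of A. */
int graph_degree(int j) {
    int d = 0;
    for (int e = 0; e < NE; e++)
        if (erow[e] == j || ecol[e] == j) d++;
    return d;
}
```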


Context and goals

Fill-in theorem and elimination tree

Theorem. Any Aij = 0 will become a nonzero entry Lij or Uij ≠ 0 in A = LU if and only if there exists a path in GA(V, E) from vertex i to vertex j that only goes through vertices with a lower number than i and j.

Definition. Let A be a symmetric positive-definite matrix; G+(A) is the filled graph (graph of L + LT) where A = LLT (Cholesky factorization).

Definition. The elimination tree of A is a spanning tree of G+(A) satisfying the relation PARENT[j] = min{i > j | lij ≠ 0}.
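The PARENT relation can be computed without forming G+(A) explicitly; the sketch below uses Liu's classic elimination-tree algorithm with path compression on a hypothetical 5×5 pattern (CSC-like arrays `colptr`/`rowind` holding, for each column j, the nonzero rows k < j, are illustrative names):

```c
#include <assert.h>

/* Liu's elimination-tree algorithm with path compression on a
 * hypothetical symmetric pattern. */
enum { N = 5 };
static const int colptr[N + 1] = { 0, 0, 0, 1, 3, 6 };
static const int rowind[6]     = { 0,   1, 2,   0, 1, 3 };

/* Fill parent[0..N-1] with PARENT[j]; the root gets -1. */
void etree(int parent[N]) {
    int ancestor[N];
    for (int j = 0; j < N; j++) { parent[j] = -1; ancestor[j] = -1; }
    for (int j = 0; j < N; j++)
        for (int p = colptr[j]; p < colptr[j + 1]; p++) {
            int r = rowind[p];            /* nonzero a_rj with r < j */
            while (ancestor[r] != -1 && ancestor[r] != j) {
                int next = ancestor[r];
                ancestor[r] = j;          /* path compression */
                r = next;
            }
            if (ancestor[r] == -1) { ancestor[r] = j; parent[r] = j; }
        }
}

/* Convenience accessor for checking single parents. */
int etree_parent(int v) { int p[N]; etree(p); return p[v]; }
```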


Context and goals

Direct Method


Context and goals

PaStiX features

- LLT, LDLT, LU: supernodal implementation (BLAS-3)
- Static pivoting + refinement: CG/GMRES
- Simple/double precision + real/complex arithmetic
- Requires only C + MPI + POSIX threads (PETSc driver)
- MPI/threads (cluster/multicore/SMP/NUMA)
- Dynamic NUMA scheduling (static mapping)
- Supports external ordering libraries (PT-Scotch/METIS)
- Multiple RHS (direct factorization)
- Incomplete factorization with ILU(k) preconditioner
- Schur complement computation (hybrid methods MaPHYS or HIPS)
- Out-of-core implementation (shared memory only)


Context and goals

Direct solver highlights (MPI)

Main users
- Electromagnetism and structural mechanics at CEA-DAM
- MHD plasma instabilities for ITER at CEA-Cadarache
- Fluid mechanics at Bordeaux

TERA CEA supercomputer
The direct solver PaStiX has been successfully used to solve huge symmetric complex sparse linear systems arising from a 3D electromagnetism code:
- 45 million unknowns: required 1.4 PFlop and was completed in half an hour on 2048 processors.
- 83 million unknowns: required 5 PFlop and was completed in 5 hours on 768 processors.


Context and goals

Direct solver highlights (multicore), SGI 160 cores

    Name  N         NNZ(A)    Fill ratio  Fact
    Audi  9.44e+05  3.93e+07  31.28       float   LLT
    10M   1.04e+07  8.91e+07  75.66       complex LDLT

    Audi       8     64    128   2x64    4x32    8x16   160
    Facto (s)  103   21.1  17.8  18.6    13.8    13.4   17.2
    Mem (GB)   11.3  12.7  13.4  2x7.68  4x4.54  8x2.69 14.5
    Solve (s)  1.16  0.31  0.40  0.32    0.21    0.14   0.49

    10M        10    20    40    80    160
    Facto (s)  3020  1750  654   356   260
    Mem (GB)   122   124   127   133   146
    Solve (s)  24.6  13.5  3.87  2.90  2.89


Context and goals

Static Scheduling Gantt Diagram

- 10-million test case on IDRIS IBM Power6 with 4 MPI processes of 32 threads each (colour is the level in the tree)


Context and goals

Dynamic Scheduling Gantt Diagram

- Reduces time by 10-15% (will increase with the NUMA factor)


2. Kernels


Kernels Panel factorization

Panel factorization (CPU only)
- Factorization of the diagonal block (xxTRF);
- TRSM on the extra-diagonal blocks (i.e. solves X × bd = bi, i > d, where bd is the diagonal block).

Figure: panel update (panel storage; 1: xxTRF on the diagonal block, 2: TRSM on the extra-diagonal blocks)


Kernels Trailing supernodes update (CPU version)

Trailing supernodes update

- One global GEMM into a temporary buffer;
- Scatter addition (many AXPYs).

Figure: panel update (panel P1 updates P2; target blocks are compacted in memory)
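The GEMM-then-scatter pattern can be sketched as follows; `rowmap` is a hypothetical index map standing in for the real block row indices of the target supernode:

```c
#include <assert.h>

/* The dense product is formed contiguously in tmp, then each row is
 * subtracted from the target supernode at the position rowmap gives. */
enum { NR = 3, NC = 2, TR = 6 };
static const int rowmap[NR] = { 1, 2, 5 };

static void scatter_sub(double tmp[NR][NC], double target[TR][NC]) {
    for (int r = 0; r < NR; r++)           /* one AXPY per row */
        for (int c = 0; c < NC; c++)
            target[rowmap[r]][c] -= tmp[r][c];
}

/* Start from a target filled with 10.0, apply a fixed update, and
 * return one entry for checking. */
double scatter_demo(int r, int c) {
    double tmp[NR][NC] = { {1, 1}, {2, 2}, {3, 3} };
    double target[TR][NC];
    for (int i = 0; i < TR; i++)
        for (int j = 0; j < NC; j++) target[i][j] = 10.0;
    scatter_sub(tmp, target);
    return target[r][c];
}
```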


Kernels Sparse GEMM on GPU

Why a new kernel?
- Each BLAS call ⇒ a CUDA startup cost is paid;
- Many AXPY calls ⇒ loss of performance.

⇒ We need a GPU kernel that computes all the updates from P1 on P2 at once.


Kernels Sparse GEMM on GPU

How? An auto-tuned GEMM CUDA kernel
- Auto-tuning done by the ASTRA framework, developed by Jakub Kurzak for MAGMA and inspired by ATLAS;
- computes C ← αAX + βB, with AX split into a 2D tiled grid;
- a block of threads computes each tile;
- each thread computes several entries of the tile in shared memory and subtracts them from C in global memory.

Sparse GEMM CUDA kernel
- Based on the auto-tuned GEMM CUDA kernel;
- Two arrays added, giving the first and last row of each block of P1 and P2;
- Computes an offset used when adding to global memory.


Kernels Sparse GEMM on GPU

Sparse GEMM on GPU

Figure: panel update on GPU (tiled A × X applied to P2; each block k of panel p is described by its first row frp,k and last row lrp,k)

    blocknbr  = 3;
    blocktab  = [ fr1,1, lr1,1, fr1,2, lr1,2, fr1,3, lr1,3 ];
    fblocknbr = 2;
    fblocktab = [ fr2,1, lr2,1, fr2,2, lr2,2 ];

    sparse_gemm_cuda( char TRANSA, char TRANSB, int m, int n, int k,
                      cuDoubleComplex alpha,
                      const cuDoubleComplex *d_A, int lda,
                      const cuDoubleComplex *d_B, int ldb,
                      cuDoubleComplex beta,
                      cuDoubleComplex *d_C, int ldc,
                      int blocknbr,  const int *blocktab,
                      int fblocknbr, const int *fblocktab,
                      CUstream stream );
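A hedged CPU sketch of what the added arrays encode: each pair in `fblocktab` is the first and last row of a facing block, and the kernel must translate a global row into an offset inside the compacted target panel (names and values here are illustrative, not the GPU implementation):

```c
#include <assert.h>

/* fblocktab = [fr1, lr1, fr2, lr2, ...]: return the offset of global
 * row r inside the compacted target panel, or -1 if r faces no block. */
int target_offset(int r, int fblocknbr, const int *fblocktab) {
    int off = 0;
    for (int b = 0; b < fblocknbr; b++) {
        int fr = fblocktab[2*b], lr = fblocktab[2*b + 1];
        if (r >= fr && r <= lr) return off + (r - fr);
        off += lr - fr + 1;               /* rows held by this block */
    }
    return -1;                            /* r not covered by the panel */
}

/* Two hypothetical facing blocks: rows 10..13 and rows 20..21, so the
 * compacted panel has 6 rows at offsets 0..5. */
static const int fblocktab_demo[4] = { 10, 13, 20, 21 };
int demo_offset(int r) { return target_offset(r, 2, fblocktab_demo); }
```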


Kernels Sparse GEMM on GPU

GPU kernel experimentation

Parameters:
- NcolA = 100;
- NcolB = NrowA11 = 100;
- NrowA varies from 100 to 2000;
- random number and size of blocks in A;
- random blocks in B matching A;
- mean time of 10 runs for a fixed NrowA, with different block distributions.

Figure: GPU kernel experimentation (sparse block product A11 × A11T, dimensions NrowA × NcolA)


Kernels Sparse GEMM on GPU

GPU kernel performance

Figure: sparse kernel timing with 100 columns; time (s, log scale from 1e-5 to 1e-2) against the number of rows (0 to 2000), for GPU time, GPU time with transfer, and CPU time.


3. Runtimes


Runtimes

Runtimes

- Task-based programming model;
- Tasks scheduled on computing units (CPUs, GPUs, ...);
- Data transfer management;
- Performance models of the kernels built dynamically;
- New scheduling strategies can be added as plugins;
- Information available on idle times and load balance.


Runtimes

StarPU task submission

Algorithm 1: StarPU task submission

    forall supernodes S1 do
        submit_panel(S1)                 /* update of the panel */
        forall extra-diagonal blocks Bi of S1 do
            S2 ← supernode_in_front_of(Bi)
            submit_gemm(S1, S2)          /* sparse GEMM Bk,k≥i × BiT subtracted from S2 */
        end
    end
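The submission loop above can be sketched serially (an illustrative stand-in: instead of real StarPU calls, we only count the tasks that would be submitted over a toy symbolic structure; all names and sizes are hypothetical):

```c
#include <assert.h>

/* One panel task per supernode, one sparse GEMM per extra-diagonal
 * block; the runtime then schedules these tasks by data dependencies. */
enum { SNNBR = 3 };
static const int blocknbr[SNNBR] = { 2, 1, 0 };  /* extra-diag blocks */
static const int facing[3]       = { 1, 2, 2 };  /* supernode in front */

int submit_all(void) {
    int tasks = 0, b = 0;
    for (int s1 = 0; s1 < SNNBR; s1++) {
        tasks++;                         /* submit_panel(s1) */
        for (int i = 0; i < blocknbr[s1]; i++, b++) {
            int s2 = facing[b];          /* supernode_in_front_of(Bi) */
            (void)s2;                    /* submit_gemm(s1, s2) */
            tasks++;
        }
    }
    return tasks;                        /* 3 panels + 3 GEMMs */
}
```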


Runtimes

DAGuE's parameterized task graph

    panel(j) [high_priority = on]
      /* execution space */
      j = 0 .. cblknbr-1
      /* extra parameters */
      firstblock = diagonal_block_of( j )
      lastblock  = last_block_of( j )
      lastbrow   = last_brow_of( j )  /* last block generating an update on j */
      /* locality */
      : A(j)
      RW A ← leaf ? A(j) : C gemm(lastbrow)
           → A gemm(firstblock+1..lastblock)
           → A(j)

Figure: panel factorization description in DAGuE


4. Results


Results Matrices and Machines

Matrices and Machines

Matrices

    Name  N         NNZ(A)    Fill ratio  OPC       Fact
    MHD   4.86e+05  1.24e+07  61.20       9.84e+12  float   LU
    Audi  9.44e+05  3.93e+07  31.28       5.23e+12  float   LLT
    10M   1.04e+07  8.91e+07  75.66       1.72e+14  complex LDLT

Machines

    Processors                        Frequency  GPUs           RAM
    AMD Opteron 6180 SE (4 × 12)      2.50 GHz   Tesla T20 (×2) 256 GiB


Results Multicore results

CPU-only results on Audi

Figure: LLT decomposition on Audi (double precision); factorization time (s) against the number of threads (1 to 48), for PaStiX, PaStiX with StarPU, and PaStiX with DAGuE.


Results Multicore results

CPU-only results on MHD

Figure: LU decomposition on MHD (double precision); factorization time (s) against the number of threads (1 to 48), for PaStiX, PaStiX with StarPU, and PaStiX with DAGuE.


Results Multicore results

CPU-only results on 10M

Figure: LDLT decomposition on 10M (double complex); factorization time (s) against the number of threads (1 to 48), for PaStiX, PaStiX with StarPU, and PaStiX with DAGuE.


Results GPU results

Audi: GPU results on Romulus (StarPU)

Figure: Audi LLT decomposition with GPUs on Romulus (double precision); factorization time (s) against the number of threads (1 to 36), with 0, 1, and 2 CUDA devices.


Results GPU results

MHD: GPU results on Romulus (StarPU)

Figure: MHD LU decomposition with GPUs on Romulus (double precision); factorization time (s) against the number of threads (1 to 36), with 0, 1, and 2 CUDA devices.


5. Conclusion and extra tools


Conclusion and extra tools

Conclusion
- Timing equivalent to PaStiX on medium-size test cases;
- Quite good scaling;
- Speedup obtained with one GPU and a small number of cores;
- Released in PaStiX 5.2 (http://pastix.gforge.inria.fr).

Future work
- Study the effect of the block size for GPUs;
- Write the solve step with the runtime;
- Distributed implementation (MPI);
- Panel factorization on GPU;
- Add contexts to reduce the number of candidates for each task;
- Bit-compatibility for a given number of processors?


Conclusion and extra tools

Block ILU(k): supernode amalgamation algorithm
Derive a block incomplete LU factorization from the supernodal parallel direct solver.

- Based on the existing package PaStiX
- Level-3 BLAS incomplete factorization implementation
- Fill-in strategy based on level-of-fill among block structures identified thanks to the quotient graph
- Amalgamation strategy to enlarge block sizes

Highlights
- Handles high levels of fill efficiently
- Solve time can be 2-4× faster than with scalar ILU(k)
- Scalable parallel implementation
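The scalar level-of-fill rule underlying ILU(k) can be sketched densely (a toy version with a hypothetical 4×4 pattern; the block variant described above applies the same idea to block structures found via the quotient graph):

```c
#include <assert.h>

/* lev(i,j) = 0 for original nonzeros; eliminating k sets
 * lev(i,j) = min(lev(i,j), lev(i,k) + lev(k,j) + 1).
 * ILU(k) keeps exactly the entries with level <= k. */
enum { N = 4, INF = 1000 };

static void ilu_levels(int lev[N][N]) {
    for (int k = 0; k < N; k++)
        for (int i = k + 1; i < N; i++)
            for (int j = k + 1; j < N; j++) {
                int l = lev[i][k] + lev[k][j] + 1;
                if (l < lev[i][j]) lev[i][j] = l;
            }
}

/* Hypothetical pattern: diagonal plus symmetric nonzeros (0,1), (0,2),
 * (1,3). Eliminating 0 fills (1,2) at level 1; eliminating 1 then
 * fills (2,3) at level 2, so ILU(1) keeps (1,2) but drops (2,3). */
int demo_level(int i, int j) {
    static const int e[6][2] = { {0,1},{1,0},{0,2},{2,0},{1,3},{3,1} };
    int lev[N][N];
    for (int a = 0; a < N; a++)
        for (int b = 0; b < N; b++) lev[a][b] = (a == b) ? 0 : INF;
    for (int k = 0; k < 6; k++) lev[e[k][0]][e[k][1]] = 0;
    ilu_levels(lev);
    return lev[i][j];
}
```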


Conclusion and extra tools

Block ILU(k): some results on the AUDI matrix (N = 943,695, NNZ = 39,297,771)

Figure: numerical behaviour


Conclusion and extra tools

Block ILU(k): some results on the AUDI matrix (N = 943,695, NNZ = 39,297,771)

Figure: preconditioner setup time


Conclusion and extra tools

HIPS: hybrid direct-iterative solver

Based on a domain decomposition with an interface one node wide (no overlap in DD lingo):

    A = ( AB  F  )
        ( E   AC )

- B: interior nodes of the subdomains (direct factorization).
- C: interface nodes.

Special decomposition and ordering of the subset C.
Goal: build a global Schur complement preconditioner (ILU) from the local domain matrices only.


Conclusion and extra tools

HIPS: preconditioners

Main features
- Iterative or "hybrid" direct/iterative methods are implemented.
- Mixes direct supernodal (BLAS-3) and sparse ILUT factorizations in a seamless manner.
- Memory/load balancing: distribute the domains over the processors (more domains than processors).


Conclusion and extra tools

HIPS vs Additive Schwarz (from PETSc)

Experimental conditions
These curves compare HIPS (hybrid) with Additive Schwarz from PETSc. Parameters were tuned so that the results are compared at a very similar fill-in.

Figures: Haltere and MHD test cases.


Conclusion and extra tools

BACCHUS software

Graph/mesh partitioning and ordering:
    http://scotch.gforge.inria.fr

Sparse linear system solvers:
    http://pastix.gforge.inria.fr
    http://hips.gforge.inria.fr


Thanks !

Pierre RAMET, INRIA Bacchus team

CEMRACS’2012 - Luminy