Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication using Compressed Sparse Blocks
Aydın Buluç (UCSB), Jeremy T. Fineman (MIT), Matteo Frigo (Cilk Arts), John R. Gilbert (UCSB), Charles E. Leiserson (MIT & Cilk Arts)
Slides: aydin/talks/csb-spaa.pdf (21 pages)

TRANSCRIPT

Page 1: Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication using Compressed Sparse Blocks (aydin/talks/csb-spaa.pdf)

1

Parallel Sparse Matrix-Vector and Matrix-

Transpose-Vector Multiplication using

Compressed Sparse Blocks

Aydın Buluç (UCSB)

Jeremy T. Fineman (MIT)

Matteo Frigo (Cilk Arts)

John R. Gilbert (UCSB)

Charles E. Leiserson (MIT & Cilk Arts)

Page 2:

2

Sparse Matrix-Dense Vector

Multiplication (SpMV)

Applications:

• Iterative methods for solving linear systems: Krylov subspace methods based on Lanczos biorthogonalization:

Biconjugate gradients (BCG) & quasi-minimal residual (QMR)

• Graph analysis: Betweenness centrality computation

y ← Ax and y ← Aᵀx

A is an n-by-n sparse matrix with nnz << n² nonzeros

Page 3:

3

The Landscape:

Where does our work fit?

• Equally fast y = Ax and y = Aᵀx (simultaneously)

• Plenty of parallelism (for any nonzero distribution)

• Hardware-specific optimizations (prefetching, TLB blocking, vectorization)

• Matrix-specific optimizations (permutations, index/value compression, register blocking)

[Diagram: our contribution occupies the first plane, equally fast multiplication with plenty of parallelism: "This is our plane of focus!"]

Page 4:

4

Theoretical and Experimental:

Main Results

Our parallel algorithms for y ← Ax and y ← Aᵀx using the new compressed sparse blocks (CSB) layout have

• Θ(nnz) work and O(√n lg n) span,

• yielding Θ(nnz / (√n lg n)) parallelism.

[Chart: MFlops/sec vs. number of processors (1 to 8), comparing our CSB algorithms against Star-P (CSR + blockrow distribution) and serial naïve CSR.]

Page 5:

5

Compressed Sparse Rows (CSR): A Standard Layout

• Stores entries in row-major order.

• Uses n lg nnz + nnz lg n bits of index data.

• Reading rows in parallel is easy, but reading columns is hard.

[Diagram: an n×n matrix with nnz nonzeros stored as a dense collection of "sparse rows": a row-pointer array, a column-index array (colind), and a data array.]
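As a concrete sketch of the layout above (illustrative C++, not the authors' implementation; the field names rowptr/colind/data mirror the slide's arrays):

```cpp
#include <cassert>
#include <vector>

// Illustrative CSR container: row pointers, column indices, values.
struct CSR {
    int n;                     // matrix is n-by-n
    std::vector<int> rowptr;   // size n+1; row i occupies [rowptr[i], rowptr[i+1])
    std::vector<int> colind;   // size nnz
    std::vector<double> data;  // size nnz
};

// Serial y = A*x. Each row writes only y[i], so rows are independent
// and parallelizing this outer loop is easy.
std::vector<double> spmv(const CSR& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (int i = 0; i < A.n; ++i)
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            y[i] += A.data[k] * x[A.colind[k]];
    return y;
}
```

Note that the column-parallel (transpose) direction has no such row-disjointness, which is exactly the difficulty the next slide discusses.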

Page 6:

6

Parallelizing SpMV_T is hard using the standard CSR format

y ← Aᵀx

CSR_SPMV_T(A, x, y)
  for i ← 0 to n−1
    do for k ← A.rowptr[i] to A.rowptr[i+1]−1
      do y[A.colind[k]] ← y[A.colind[k]] + A.data[k]∙x[i]

1. Parallelize the outer loop?
   × Race conditions on vector y.
     a. Locking on y is not scalable.
     b. Using p copies of y is work-inefficient.

2. Parallelize the inner loop?
   × Span is Θ(n), thus parallelism is at most O(nnz/n).
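The pseudocode above translates directly into serial C++ (a sketch, not the authors' code). The scatter through A.colind[k] is exactly the write that races when the outer loop runs in parallel:

```cpp
#include <cassert>
#include <vector>

// Illustrative CSR container, as on the previous slide.
struct CSR {
    int n;
    std::vector<int> rowptr, colind;
    std::vector<double> data;
};

// Serial y = A^T * x using the CSR of A: each row i of A scatters into y,
// so two rows touching the same column would race if rows ran concurrently.
std::vector<double> spmv_t(const CSR& A, const std::vector<double>& x) {
    std::vector<double> y(A.n, 0.0);
    for (int i = 0; i < A.n; ++i)
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            y[A.colind[k]] += A.data[k] * x[i];
    return y;
}
```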

Page 7:

7

Compressed Sparse Blocks (CSB)

• Store blocks in row-major order (*)

• Store entries within each block in Z-Morton order (*)

• For β = √n, CSB matches CSR on storage.

Reading blockrows or blockcolumns in parallel is now easy.

[Diagram: an n×n matrix with nnz nonzeros partitioned into β×β blocks (β = 4 shown), stored as a dense collection of "sparse blocks": a block-pointer array plus per-entry rowind, colind, and data arrays.]
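Z-Morton order amounts to bit-interleaving the row and column offsets of each entry within its block. A minimal sketch (16-bit coordinates for brevity; illustrative, not the authors' code):

```cpp
#include <cassert>
#include <cstdint>

// Interleave the bits of row r and column c so that sorting entries by the
// resulting key places them in Z-Morton order within a block.
uint32_t morton(uint16_t r, uint16_t c) {
    uint32_t key = 0;
    for (int b = 0; b < 16; ++b) {
        key |= (uint32_t)((r >> b) & 1u) << (2 * b + 1);  // row bits: odd positions
        key |= (uint32_t)((c >> b) & 1u) << (2 * b);      // col bits: even positions
    }
    return key;
}
```

Sorting entries by this key gives the Z traversal: within any power-of-two-aligned subblock, each of the four quadrants occupies one contiguous range of keys.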

Page 8:

8

CSB Matrix-Vector Multiplication

Our algorithm uses three levels of parallelism:

1) Multiply each blockrow in parallel, each writing to a disjoint output subvector.

2) If a blockrow is "dense," parallelize the blockrow multiplication.

3) If a single block is dense, parallelize the block multiplication.

Page 9:

9

Blockrow-Vector Multiplication

[Diagram: a blockrow with r nonzeros is split by nonzero count; recurse in parallel, then sum the results.]

• Divide-and-conquer based on the nonzero count, not on spatial position.

• Allocation and accumulation costs of temporary vectors are amortized.

• Lemma: For β = Θ(√n), our parallel blockrow-vector multiplication has Θ(r) work and O(√n lg n) span on a blockrow containing r nonzeros.

Page 10:

10

Block-Vector Multiplication

• With Z-Morton ordering, spatial division into quadrants takes O(lg dim) time on a dim×dim (sub)block, using three binary searches.

• Lemma: For β = Θ(√n), our parallel block-vector multiplication has Θ(r) work and O(√n) span on a block with r nonzeros.

For any (sub)block, first perform A00 and A11 in parallel; then A01 and A10 in parallel. Updates on y are race-free.

[Diagram: 2×2 quadrant decomposition of a block into A00, A01, A10, A11.]
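Because Morton keys keep each quadrant contiguous, the three binary searches mentioned above can be sketched with std::lower_bound (assuming row bits occupy the odd key positions; illustrative, not the authors' code):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Given nonzeros sorted by their 2k-bit Morton key within a 2^k x 2^k block,
// the quadrants A00, A01, A10, A11 occupy contiguous key ranges. Three binary
// searches on the top two key bits find the boundaries; quadrant q spans
// indices [b[q], b[q+1]).
std::array<size_t, 5> quadrant_bounds(const std::vector<uint32_t>& keys, int k) {
    std::array<size_t, 5> b{};
    b[0] = 0;
    b[4] = keys.size();
    for (uint32_t q = 1; q <= 3; ++q) {
        uint32_t lo = q << (2 * k - 2);  // smallest key belonging to quadrant q
        b[q] = std::lower_bound(keys.begin(), keys.end(), lo) - keys.begin();
    }
    return b;
}
```

The recursion can then run the A00 and A11 ranges in parallel, followed by A01 and A10 in parallel, matching the race-free schedule on the slide.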


Page 13:

Main Theorem and The Choice of β

13

Theorem: Our parallel matrix-vector multiplication has Θ(nnz) work and O(√n lg n) span, yielding Θ(nnz / (√n lg n)) parallelism, on an n×n CSB matrix containing nnz nonzeros.

Sensitivity to β in theory: the choice β = Θ(√n) yields this parallelism of Θ(nnz / (√n lg n)).

Sensitivity to β in practice: on our test matrices, parallelism ranges from 186 to 3498.
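To make the parallelism bound concrete, here is a small calculator; the sample sizes in the test are hypothetical, not the talk's test matrices:

```cpp
#include <cassert>
#include <cmath>

// Evaluates the theorem's parallelism bound nnz / (sqrt(n) * lg n)
// for given (hypothetical) matrix dimensions.
double csb_parallelism(double n, double nnz) {
    return nnz / (std::sqrt(n) * std::log2(n));
}
```

For example, n = 2^20 and nnz = 10^7 give a denominator of 1024 · 20 = 20480, i.e. parallelism of roughly 488, comfortably inside the 186 to 3498 range observed in practice.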

Page 14:

14

Ax and Aᵀx perform equally well

4-socket dual-core 2.2 GHz AMD Opteron 8214. Most test matrices had similar performance when multiplying by the matrix and its transpose.

[Charts: MFlops/sec for the matrix-vector product and the matrix-transpose-vector product, each on 1, 2, 4, and 8 processors.]

Page 15:

15

Test Matrices and Performance Overview

[Chart: MFlops/sec vs. number of processors (1 to 8) for CSB_SpMV, CSB_SpMV_T, Star-P_SpMV, Star-P_SpMV_T, serial CSR_SpMV, and serial CSR_SpMV_T.]

Page 16:

16

Reality Check and

Related Work

FACT: Sparse matrix-dense vector multiplication (and the transpose) is bandwidth limited.

This work: motivated by multicore/manycore architectures where parallelism and memory bandwidth are key resources.

• Previous work mostly focused on reducing communication volume in distributed memory, often by using graph or hypergraph partitioning [Catalyurek & Aykanat ’99].

• Great optimization work for SpMV on multicores by Williams et al. '09, but without parallelism guarantees or multiplication with the transpose (SpMV_T).

• Blocking for sparse matrices is not new, but has mostly targeted cache performance, not parallelism [Im et al. '04, Nishtala et al. '07].

Page 17:

17

Good Speedup Until Bandwidth Limit

• We artificially slowed down the processors (by introducing extra instructions) to factor memory effects out of the test.

• This shows the algorithm scales well given sufficient memory bandwidth.

[Chart: speedup vs. number of processors (up to 16), run on the smallest (and one of the most irregular) of the test matrices.]

Page 18:

18

All about Bandwidth: Harpertown vs. Nehalem

Intel Xeon X5460 @ 3.16 GHz, dual-socket quad-core, FSB @ 1333 MHz

Intel Core i7 920 @ 2.66 GHz, single-socket quad-core, QuickPath + Hyper-Threading

Page 19:

Conclusions & Future Work

• CSB allows for efficient multiplication of a sparse matrix and its transpose by a dense vector.

Future Work:

• Does CSB work well with other computations? Sparse LU decomposition? Sparse matrix-matrix multiplication?

• For a symmetric matrix, need only store upper triangle of matrix. Can we multiply with one read (i.e., ½ the bandwidth)?

Code (in C++ and Cilk++) available from:

http://gauss.cs.ucsb.edu/~aydin/software.html

Page 20:

Thank You !

Page 21:

CSB Space Usage

Lemma: For β = √n, CSB uses n lg nnz + nnz lg n bits of index data, matching CSR.

Proof:

• (n/β) × (n/β) = n blocks ⇒ n block pointers. Each block pointer uses lg nnz bits, for n lg nnz bits in total.

• Each row (or column) offset requires lg β = (lg n)/2 bits. With nnz row offsets (rowind) and nnz column offsets (colind), this is nnz lg n bits in total. ∎
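The two bit counts in the lemma can be checked numerically; this sketch treats lg as exact log2 and ignores rounding to whole bits:

```cpp
#include <cassert>
#include <cmath>

// Index-data bit counts from the lemma, with beta = sqrt(n):
// CSR: n row pointers of lg(nnz) bits + nnz column indices of lg(n) bits.
// CSB: n block pointers of lg(nnz) bits + 2*nnz offsets of (lg n)/2 bits each.
double csr_bits(double n, double nnz) {
    return n * std::log2(nnz) + nnz * std::log2(n);
}
double csb_bits(double n, double nnz) {
    return n * std::log2(nnz) + 2.0 * nnz * (std::log2(n) / 2.0);
}
```

Since 2 · nnz · (lg n)/2 = nnz · lg n, the two totals coincide for any n and nnz, which is the lemma's claim that CSB matches CSR on storage.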