
Page 1: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

1

Database Tuning: Principles, Experiments and Troubleshooting Techniques
http://www.mkp.com/dbtune

Dennis Shasha ([email protected])

Philippe Bonnet ([email protected])

Page 2: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

2

Database Tuning

Database Tuning is the activity of making a database application run more quickly. “More quickly” usually means higher throughput, though it may mean lower

response time for time-critical applications.

Page 3: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

3

[Diagram: who tunes what. The application programmer (e.g., business analyst, data architect) works against the application; the sophisticated application programmer (e.g., SAP admin) against the query processor; the DBA/tuner against the storage subsystem/indexes, concurrency control, recovery, the operating system, and the hardware (processors, disks, memory).]

Page 4: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

4

Outline

1. Basic Principles
2. Tuning the Guts
3. Indexes
4. Relational Systems
5. Application Interface
6. E-commerce Applications
7. Data Warehouse Applications
8. Distributed Applications
9. Troubleshooting

Page 5: database tuning in ppt

1 - Basic Principles © Dennis Shasha, Philippe Bonnet - 2002

5

Goal of the Tutorial

• To show:
– Tuning principles that port from one system to the other and to new technologies
– Experimental results to show the effect of these tuning principles
– Troubleshooting techniques for chasing down performance problems

Page 6: database tuning in ppt

1 - Basic Principles © Dennis Shasha, Philippe Bonnet - 2002

6

Tuning Principles Leitmotifs

• Think globally, fix locally (does it matter?)

• Partitioning breaks bottlenecks (temporal and spatial)

• Start-up costs are high; running costs are low (disk transfer, cursors)

• Be prepared for trade-offs (indexes and inserts)

Page 7: database tuning in ppt

1 - Basic Principles © Dennis Shasha, Philippe Bonnet - 2002

7

Experiments -- why and where

• Simple experiments to illustrate the performance impact of tuning principles.

• http://www.diku.dk/dbtune/experiments to get the SQL scripts, the data and a tool to run the experiments.

Page 8: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

8

Experimental DBMS and Hardware

• Results presented throughout this tutorial obtained with:

– SQL Server 7, SQL Server 2000, Oracle 8i, Oracle 9i, DB2 UDB 7.1
– Three configurations:
1. Dual Xeon (550MHz, 512Kb), 1Gb RAM, internal RAID controller from Adaptec (80Mb), 2 Ultra 160 channels, 4x18Gb drives (10000RPM), Windows 2000.
2. Dual Pentium II (450MHz, 512Kb), 512Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.
3. Pentium III (1GHz, 256Kb), 1Gb RAM, Adaptec 39160 with 2 channels, 3x18Gb drives (10000RPM), Linux Debian 2.4.

Page 9: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

9

Tuning the Guts

• Concurrency Control– How to minimize lock contention?

• Recovery– How to manage the writes to the log (to dumps)?

• OS– How to optimize buffer size, process scheduling, …

• Hardware– How to allocate CPU, RAM and disk subsystem

resources?

Page 10: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

10

Isolation

• Correctness vs. Performance– Number of locks held by each transaction– Kind of locks– Length of time a transaction holds locks

Page 11: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

11

Isolation Levels
• Read Uncommitted (no lost updates)
– Exclusive locks for write operations are held for the duration of the transaction
– No locks for reads
• Read Committed (no dirty retrieval)
– Shared locks are released as soon as the read operation terminates
• Repeatable Read (no unrepeatable reads for read/write)
– Two-phase locking
• Serializable (read/write/insert/delete model)
– Table locking or index locking to avoid phantoms
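How the level is selected is system specific; a minimal sketch in SQL Server's T-SQL (Oracle accepts a similar SET TRANSACTION ISOLATION LEVEL form) -- the setting must be in place before the reads it is meant to govern:

set transaction isolation level serializable
begin transaction
  select sum(balance) from accounts;
commit transaction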

Page 12: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

12

Snapshot isolation

[Timeline: X=Y=Z=0 initially; one transaction writes Y:=1, another writes X:=2 and Z:=3, and a third transaction's reads return Y=1, X=0 and Z=0 -- it sees only the versions committed before it started.]

• Each transaction executes against the version of the data items that was committed when the transaction started:
– No locks for reads
– Costs space (old copies of the data must be kept)
• Almost a serializable level:
– T1: x:=y; T2: y:=x
– Initially x=3 and y=17
– Serial execution: x,y=17 or x,y=3
– Snapshot isolation: x=17, y=3 if both transactions start at the same time.
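A minimal sketch of the anomaly, assuming a hypothetical one-row table xy(x,y) initialized to x=3, y=17; under snapshot isolation both updates read the starting snapshot, so the result is x=17, y=3, which no serial execution produces:

-- T1 (starts before T2 commits):
update xy set x = y;
commit;
-- T2 (starts before T1 commits):
update xy set y = x;
commit;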

Page 13: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

13

Value of Serializability -- Data

Settings:accounts( number, branchnum, balance);

create clustered index c on accounts(number);

– 100000 rows

– Cold buffer; same buffer size on all systems.

– Row level locking

– Isolation level (SERIALIZABLE or READ COMMITTED)

– SQL Server 7, DB2 v7.1 and Oracle 8i on Windows 2000

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 14: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

14

Value of Serializability -- transactions

Concurrent Transactions:– T1: summation query [1 thread]

select sum(balance) from accounts;

– T2: swap balance between two account numbers (in order of scan to avoid deadlocks) [N threads]

valX := select balance from accounts where number=X;
valY := select balance from accounts where number=Y;
update accounts set balance=valX where number=Y;
update accounts set balance=valY where number=X;

Page 15: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

15

Value of Serializability -- results

• With SQL Server and DB2 the scan returns incorrect answers if the read committed isolation level is used (default setting)

• With Oracle correct answers are returned (snapshot isolation), but beware of swapping

[Charts: ratio of correct answers vs. number of concurrent update threads, for Oracle and SQL Server, comparing read committed and serializable.]

Page 16: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

16

Cost of Serializability

Because the update conflicts with the scan, correct answers are obtained at the cost of decreased concurrency and thus decreased throughput.

[Charts: throughput (trans/sec) vs. number of concurrent update threads, for Oracle and SQL Server, comparing read committed and serializable.]

Page 17: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

17

Locking Overhead -- data

Settings:accounts( number, branchnum, balance);

create clustered index c on accounts(number);

– 100000 rows– Cold buffer– SQL Server 7, DB2 v7.1 and Oracle 8i on Windows 2000– No lock escalation on Oracle; Parameter set so that there is no

lock escalation on DB2; no control on SQL Server.

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 18: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

18

Locking Overhead -- transactions

No Concurrent Transactions:– Update [10 000 updates]update accounts set balance = Val;

– Insert [10 000 transactions], e.g. typical one:insert into accounts values(664366,72255,2296.12);

Page 19: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

19

Locking Overhead

Row locking is barely more expensive than table locking because the recovery overhead dominates the locking overhead.
– Exception: updates on DB2, where table locking is distinctly less expensive than row locking.

[Chart: throughput ratio (row locking / table locking) for updates and inserts on DB2, SQL Server and Oracle.]

Page 20: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

20

Logical Bottleneck: Sequential Key generation

• Consider an application in which one needs a sequential number to act as a key in a table, e.g. invoice numbers for bills.

• Ad hoc approach: a separate table holding the last invoice number. Fetch and update that number on each insert transaction.

• Counter approach: use facility such as Sequence (Oracle)/Identity(MSSQL).
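A minimal sketch of the counter facility on each system (the identifiers and cache size are illustrative, not the experiment's actual settings):

-- Oracle: a sequence, with many numbers cached to avoid a log write per nextval
create sequence seq cache 1000;
-- used as: insert into accounts values (seq.nextval, 94496, 2789);

-- SQL Server: an identity column generated by the system on insert
create table accounts (number int identity(1,1), branchnum int, balance float);
-- used as: insert into accounts values (94496, 2789);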

Page 21: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

21

Counter Facility -- data

Settings:

– default isolation level: READ COMMITTED; Empty tables

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

accounts( number, branchnum, balance);create clustered index c on accounts(number);

counter ( nextkey );insert into counter values (1);

Page 22: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

22

Counter Facility -- transactions

No Concurrent Transactions:
– System [100 000 inserts, N threads]
• SQL Server 7 (uses Identity column): insert into accounts values (94496,2789);
• Oracle 8i: insert into accounts values (seq.nextval,94496,2789);
– Ad hoc [100 000 inserts, N threads]:
begin transaction
  NextKey := select nextkey from counter;
  update counter set nextkey = NextKey+1;
commit transaction
begin transaction
  insert into accounts values (NextKey,?,?);
commit transaction

Page 23: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

23

Avoid Bottlenecks: Counters

• System generated counter (system) much better than a counter managed as an attribute value within a table (ad hoc).

• The Oracle counter can become a bottleneck if every update is logged to disk, but caching many counter numbers is possible.

• Counters may miss ids.

[Charts: throughput (statements/sec) vs. number of concurrent insertion threads, for SQL Server and Oracle, comparing the system counter with the ad hoc counter.]

Page 24: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

24

Insertion Points -- transactions

No Concurrent Transactions:
– Sequential [100 000 inserts, N threads]: insertions into the account table with a clustered index on ssnum; data is sorted on ssnum; single insertion point.
– Non-sequential [100 000 inserts, N threads]: insertions into the account table with a clustered index on ssnum; data is not sorted (uniform distribution); 100 000 insertion points.
– Hashing key [100 000 inserts, N threads]: insertions into the account table with an extra attribute att and a clustered index on (ssnum, att); att contains a hash key (1021 possible values); 1021 insertion points.

Page 25: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

25

Insertion Points

• Page locking: single insertion point is a source of contention (sequential key with clustered index, or heap)

• Row locking: No contention between successive insertions.

• DB2 v7.1 and Oracle 8i do not support page locking.

[Charts: throughput (tuples/sec) vs. number of concurrent insertion threads, under row locking and under page locking, for sequential, non-sequential and hashing-key insertion points.]

Page 26: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

26

Atomicity and Durability

• Every transaction either commits or aborts. It cannot change its mind

• Even in the face of failures:– Effects of committed

transactions should be permanent;

– Effects of aborted transactions should leave no trace.

[State diagram: BEGIN TRANS leads to ACTIVE (running, waiting); COMMIT leads to COMMITTED; ROLLBACK leads to ABORTED.]

Page 27: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

27

[Diagram: the log buffer and database buffer (pages Pi, Pj; log records lri, lrj) sit in unstable storage; log records are written to the log on stable storage before commit; modified data pages are written to the data files after commit; recovery uses the log to restore the data.]

Page 28: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

28

Log IO -- data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER , L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

– READ COMMITTED isolation level

– Empty table

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 29: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

29

Log IO -- transactions

No Concurrent Transactions:

Insertions [300 000 inserts, 10 threads], e.g., insert into lineitem values (1,7760,401,1,17,28351.92,0.04,0.02,'N','O','1996-03-13','1996-02-12','1996-03-22','DELIVER IN PERSON','TRUCK','blithely regular ideas caj');

Page 30: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

30

Group Commits

• DB2 UDB v7.1 on Windows 2000

• Log records of many transactions are written together
– Increases throughput by reducing the number of writes
– At the cost of increased minimum response time.
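A sketch of how the group-commit size can be varied, assuming DB2 UDB's MINCOMMIT database configuration parameter (the database name is illustrative):

-- DB2 command line processor: group up to 25 commits per log write
update database configuration for tunedb using MINCOMMIT 25;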

[Chart: throughput (tuples/sec) for group commit sizes of 1 and 25.]

Page 31: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

31

Put the Log on a Separate Disk

• DB2 UDB v7.1 on Windows 2000

• 5 % performance improvement if log is located on a different disk

• Controller cache hides negative impact– mid-range server, with

Adaptec RAID controller (80Mb RAM) and 2x18Gb disk drives.

[Chart: throughput (tuples/sec) with and without the controller cache, comparing log on the same disk vs. log on a separate disk.]

Page 32: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

32

Tuning Database Writes

• Dirty data is written to disk:
– When the number of dirty pages is greater than a given parameter (Oracle 8)
– When the number of dirty pages crosses a given threshold (less than 3% of free pages in the database buffer for SQL Server 7)
– When the log is full, a checkpoint is forced. This can have a significant impact on performance.

Page 33: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

33

Tune Checkpoint Intervals

• Oracle 8i on Windows 2000

• A checkpoint (partial flush of dirty pages to disk) occurs at regular intervals or when the log is full:
– Impacts the performance of on-line processing
+ Reduces the size of the log
+ Reduces the time to recover from a crash

[Chart: throughput ratio with 0 checkpoints vs. 4 checkpoints.]

Page 34: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

34

Database Buffer Size

• Buffer too small, then hit ratio too small

hit ratio = (logical acc. - physical acc.) / (logical acc.)

• Buffer too large: paging
• Recommended strategy: monitor the hit ratio and increase the buffer size until the hit ratio flattens out. If there is still paging, then buy memory.
[Diagram: database processes access the database buffer in RAM; the log and data files are on disk; an oversized buffer spills to the paging disk.]
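A worked instance of the hit ratio formula (the access counts are illustrative): with 100 000 logical accesses of which 5 000 went to disk, hit ratio = (100000 - 5000) / 100000 = 0.95.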

Page 35: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

35

Buffer Size -- data

Settings:employees(ssnum, name, lat, long, hundreds1,

hundreds2);

clustered index c on employees(lat); (unused)– 10 distinct values of lat and long, 100 distinct values of hundreds1

and hundreds2

– 20000000 rows (630 Mb);

– Warm Buffer

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000 RPM), Windows 2000.

Page 36: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

36

Buffer Size -- queries

Queries:– Scan Queryselect sum(long) from employees;

– Multipoint queryselect * from employees where lat = ?;

Page 37: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

37

Database Buffer Size

• SQL Server 7 on Windows 2000

• Scan query:
– LRU (least recently used) does badly when the table spills to disk, as Stonebraker observed 20 years ago.
• Multipoint query:
– Throughput increases with buffer size until all data is accessed from RAM.

[Charts: throughput (queries/sec) vs. buffer size (Mb) for the multipoint query and the scan query.]

Page 38: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

38

Scan Performance -- data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER , L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

– 600 000 rows – Lineitem tuples are ~ 160 bytes long– Cold Buffer– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller

from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 39: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

39

Scan Performance -- queries

Queries:select avg(l_discount) from lineitem;

Page 40: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

40

Prefetching

• DB2 UDB v7.1 on Windows 2000

• Throughput increases up to a certain point when prefetching size increases.

[Chart: scan throughput (trans/sec) for prefetching sizes of 32Kb, 64Kb, 128Kb and 256Kb.]

Page 41: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

41

Usage Factor

• DB2 UDB v7.1 on Windows 2000

• Usage factor is the percentage of the page used by tuples and auxiliary data structures (the rest is reserved for future data)

• Scan throughput increases with usage factor.

[Chart: scan throughput (trans/sec) for usage factors of 70, 80, 90 and 100%.]

Page 42: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

42

RAID Levels

• RAID 0: striping (no redundancy)
• RAID 1: mirroring (2 disks)
• RAID 5: parity checking
– Read: stripes read from multiple disks (in parallel)
– Write: 2 reads + 2 writes
• RAID 10: striping and mirroring
• Software vs. hardware RAID:
– Software RAID: runs on the server's CPU
– Hardware RAID: runs on the RAID controller's CPU

Page 43: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

43

Why 4 read/writes when updating a single stripe using RAID 5?

• Read old data stripe; read parity stripe (2 reads)

• XOR old data stripe with replacing one.

• Take result of XOR and XOR with parity stripe.

• Write new data stripe and new parity stripe (2 writes).
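The same update written as a formula (D is the data stripe being replaced, P the parity stripe): P_new = P_old XOR D_old XOR D_new, which is why one logical write costs two reads (D_old, P_old) and two writes (D_new, P_new).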

Page 44: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

44

RAID Levels -- data

Settings:accounts( number, branchnum, balance);create clustered index c on accounts(number);

– 100000 rows

– Cold Buffer

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 45: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

45

RAID Levels -- transactions

No Concurrent Transactions:– Read Intensive: select avg(balance) from accounts;

– Write Intensive, e.g. typical insert:insert into accounts values (690466,6840,2272.76);

Writes are uniformly distributed.

Page 46: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

46

RAID Levels
• SQL Server 7 on Windows 2000 (SoftRAID means striping/parity at the host)
• Read-intensive:
– Using multiple disks (RAID 0, RAID 10, RAID 5) increases throughput significantly.
• Write-intensive:
– Without cache, RAID 5 suffers. With cache, it is OK.

[Charts: throughput (tuples/sec) for the write-intensive and read-intensive workloads on Soft-RAID 5, RAID 5, RAID 0, RAID 10, RAID 1 and a single disk.]

Page 47: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

47

RAID Levels

• Log file
– RAID 1 is appropriate
• Fault tolerance with high write throughput. Writes are synchronous and sequential. No benefit in striping.
• Temporary files
– RAID 0 is appropriate.
• No fault tolerance. High throughput.
• Data and index files
– RAID 5 is best suited for read-intensive apps or if the RAID controller cache is effective enough.
– RAID 10 is best suited for write-intensive apps.

Page 48: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

48

Controller Prefetching no, Write-back yes.

• Read-ahead:
– Prefetching at the disk controller level.
– No information on the access pattern.
– Better to let the database management system do it.
• Write-back vs. write-through:
– Write-back: the transfer is terminated as soon as data is written to the cache.
• Batteries to guarantee write-back in case of power failure
– Write-through: the transfer is terminated as soon as data is written to disk.

Page 49: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

49

SCSI Controller Cache -- data

Settings:employees(ssnum, name, lat, long, hundreds1,hundreds2);

create clustered index c on employees(hundreds2);

– Employees table partitioned over two disks; Log on a separate disk; same controller (same channel).

– 200 000 rows per table– Database buffer size limited to 400 Mb.– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID

controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 50: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

50

SCSI (not disk) Controller Cache -- transactions

No Concurrent Transactions:update employees set lat = long, long = lat where hundreds2 = ?;

– cache friendly: update of 20,000 rows (~90Mb)

– cache unfriendly: update of 200,000 rows (~900Mb)

Page 51: database tuning in ppt

2 - Tuning the Guts © Dennis Shasha, Philippe Bonnet - 2002

51

SCSI Controller Cache

• SQL Server 7 on Windows 2000.

• Adaptec ServerRaid controller:
– 80 Mb RAM
– Write-back mode
• Updates
• Controller cache increases throughput whether the operation is cache friendly or not.
– Efficient replacement policy!

[Chart: 2 disks, cache size 80Mb -- throughput (tuples/sec) for the cache-friendly (90Mb) and cache-unfriendly (900Mb) updates, with and without the controller cache.]

Page 52: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

52

Index Tuning

• Index issues
– Indexes may be better or worse than scans
– Multi-table joins that run on for hours because the wrong indexes are defined
– Concurrency control bottlenecks
– Indexes that are maintained and never used

Page 53: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

53

Clustered / Non clustered index

• Clustered index (primary index)– A clustered index on

attribute X co-locates records whose X values are near to one another.

• Non-clustered index (secondary index)– A non clustered index does

not constrain table organization.

– There might be several non-clustered indexes per table.


Page 54: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

54

Dense / Sparse Index

• Sparse index– Pointers are associated to

pages

• Dense index– Pointers are associated to

records

– Non clustered indexes are dense


Page 55: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

55

Index Implementations in some major DBMS

• SQL Server
– B+-Tree data structure
– Clustered indexes are sparse
– Indexes maintained as updates/insertions/deletes are performed
• DB2
– B+-Tree data structure, spatial extender for R-tree
– Clustered indexes are dense
– Explicit command for index reorganization
• Oracle
– B+-Tree, hash, bitmap, spatial extender for R-Tree
– Clustered index:
• Index-organized table (unique/clustered)
• Clusters used when creating tables
• TimesTen (main-memory DBMS)
– T-tree
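A minimal sketch of an Oracle index-organized table, which is how Oracle approximates a clustered index (table and column names follow the accounts example used earlier):

create table accounts (
  number    int primary key,
  branchnum int,
  balance   float
) organization index;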

Page 56: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

56

Types of Queries

• Point query:
SELECT balance FROM accounts WHERE number = 1023;
• Multipoint query:
SELECT balance FROM accounts WHERE branchnum = 100;
• Range query:
SELECT number FROM accounts WHERE balance > 10000 and balance <= 20000;
• Prefix match query:
SELECT * FROM employees WHERE name = 'J*';

Page 57: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

57

More Types of Queries

• Extremal query:
SELECT * FROM accounts WHERE balance = (select max(balance) from accounts);
• Ordering query:
SELECT * FROM accounts ORDER BY balance;
• Grouping query:
SELECT branchnum, avg(balance) FROM accounts GROUP BY branchnum;
• Join query:
SELECT distinct branch.adresse FROM accounts, branch WHERE accounts.branchnum = branch.number and accounts.balance > 10000;

Page 58: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

58

Index Tuning -- data

Settings:employees(ssnum, name, lat, long, hundreds1,

hundreds2);

clustered index c on employees(hundreds1) with fillfactor = 100;

nonclustered index nc on employees (hundreds2);

index nc3 on employees (ssnum, name, hundreds2);

index nc4 on employees (lat, ssnum, name);– 1000000 rows ; Cold buffer

– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb), 4x18Gb drives (10000RPM), Windows 2000.

Page 59: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

59

Index Tuning -- operations

Operations:
– Update: update employees set name = 'XXX' where ssnum = ?;
– Insert: insert into employees values (1003505,'polo94064',97.48,84.03,4700.55,3987.2);
– Multipoint query: select * from employees where hundreds1 = ?; select * from employees where hundreds2 = ?;
– Covered query: select ssnum, name, lat from employees;
– Range query: select * from employees where long between ? and ?;
– Point query: select * from employees where ssnum = ?;

Page 60: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

60

Clustered Index

• Multipoint query that returns 100 records out of 1000000.

• Cold buffer• Clustered index is

twice as fast as non-clustered index and orders of magnitude faster than a scan.

[Chart: throughput ratio for SQL Server, Oracle and DB2 with a clustered index, a non-clustered index, and no index.]

Page 61: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

61

Index “Face Lifts”

• Index is created with fillfactor = 100.

• Insertions cause page splits and extra I/O for each query

• Maintenance consists in dropping and recreating the index

• With maintenance performance is constant while performance degrades significantly if no maintenance is performed.

[Chart: SQL Server throughput (queries/sec) vs. % increase in table size, with and without index maintenance.]

Page 62: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

62

Index Maintenance

• In Oracle, clustered indexes are approximated by an index defined on a clustered table

• No automatic physical reorganization

• Index defined with pctfree = 0

• Overflow pages cause performance degradation

[Chart: Oracle throughput (queries/sec) vs. % increase in table size, with no maintenance.]

Page 63: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

63

Covering Index - defined

• Select name from employee where department = “marketing”

• Good covering index would be on (department, name)

• Index on (name, department) less useful.

• Index on department alone moderately useful.
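A minimal sketch of the good covering index for this query (SQL Server syntax; the table and columns follow the example above):

create index cov on employee (department, name);
-- answered entirely from the index, without touching the table:
select name from employee where department = 'marketing';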

Page 64: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

64

Covering Index - impact

• Covering index performs better than clustering index when first attributes of index are in the where clause and last attributes in the select.

• When attributes are not in order then performance is much worse.

[Chart: SQL Server throughput (queries/sec) for covering, covering with attributes not in order, non-clustering, and clustering indexes.]

Page 65: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

65

Scan Can Sometimes Win

• IBM DB2 v7.1 on Windows 2000

• Range query
• If a query retrieves 10% of the records or more, scanning is often better than using a non-clustering, non-covering index. The crossover is above 10% when records are large or the table is fragmented on disk, since the scan cost increases.

[Chart: throughput (queries/sec) vs. % of selected records, comparing a scan with a non-clustering index.]

Page 66: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

66

Index on Small Tables

• Small table: 100 records, i.e., a few pages.

• Two concurrent processes perform updates (each process works for 10ms before it commits)

• No index: the table is scanned for each update. No concurrent updates.

• A clustered index allows the updates to take advantage of row locking.

[Chart: throughput (updates/sec) with no index vs. with an index.]

Page 67: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

67

Bitmap vs. Hash vs. B+-Tree

Settings:
employees(ssnum, name, lat, long, hundreds1, hundreds2);
create cluster c_hundreds (hundreds2 number(8)) PCTFREE 0;
create cluster c_ssnum (ssnum integer) PCTFREE 0 SIZE 60;
create cluster c_hundreds (hundreds2 number(8)) PCTFREE 0 HASHKEYS 1000 SIZE 600;
create cluster c_ssnum (ssnum integer) PCTFREE 0 HASHKEYS 1000000 SIZE 60;
create bitmap index b on employees (hundreds2);
create bitmap index b2 on employees (ssnum);

– 1000000 rows ; Cold buffer– Dual Xeon (550MHz,512Kb), 1Gb RAM, Internal RAID controller from Adaptec (80Mb),

4x18Gb drives (10000RPM), Windows 2000.

Page 68: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

68

Multipoint query: B-Tree, Hash Tree, Bitmap

• There is an overflow chain in a hash index

• In a clustered B-Tree index records are on contiguous pages.

• Bitmap is proportional to size of table and non-clustered for record access.

[Chart: multipoint query throughput (queries/sec) for the B-tree, hash and bitmap indexes.]

Page 69: database tuning in ppt

3 - Index Tuning © Dennis Shasha, Philippe Bonnet - 2002

69

• Hash indexes don’t help when evaluating range queries

• Hash index outperforms B-tree on point queries

B-Tree, Hash Tree, Bitmap
[Charts: range query throughput (queries/sec) for the B-tree, hash and bitmap indexes; point query throughput (queries/sec) for the B-tree and hash indexes.]

Page 70: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

70

Tuning Relational Systems

• Schema tuning
– Denormalization
• Query tuning
– Query rewriting
– Materialized views

Page 71: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

71

Denormalizing -- data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

region( R_REGIONKEY, R_NAME, R_COMMENT );

nation( N_NATIONKEY, N_NAME, N_REGIONKEY, N_COMMENT,);

supplier( S_SUPPKEY, S_NAME, S_ADDRESS, S_NATIONKEY, S_PHONE, S_ACCTBAL, S_COMMENT);

– 600000 rows in lineitem, 25 nations, 5 regions, 500 suppliers

Page 72: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

72

Denormalizing -- transactions

lineitemdenormalized ( L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT, L_REGIONNAME);

– 600000 rows in lineitemdenormalized

– Cold Buffer

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 73: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

73

Queries on Normalized vs. Denormalized Schemas

Queries:
select L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT, R_NAME
from LINEITEM, REGION, SUPPLIER, NATION
where L_SUPPKEY = S_SUPPKEY
and S_NATIONKEY = N_NATIONKEY
and N_REGIONKEY = R_REGIONKEY
and R_NAME = 'EUROPE';

select L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT, L_REGIONNAME
from LINEITEMDENORMALIZED
where L_REGIONNAME = 'EUROPE';

Page 74: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

74

Denormalization

• TPC-H schema
• Query: find all lineitems whose supplier is in Europe.
• With a normalized schema this query is a 4-way join.
• If we denormalize lineitem and add the name of the region for each lineitem (foreign key denormalization), throughput improves 30%.

[Chart: throughput (queries/sec) on the normalized vs. denormalized schema.]

Page 75: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

75

Queries

Settings: employee(ssnum, name, dept, salary, numfriends);student(ssnum, name, course, grade);techdept(dept, manager, location);

clustered index i1 on employee (ssnum);nonclustered index i2 on employee (name);nonclustered index i3 on employee (dept);

clustered index i4 on student (ssnum);nonclustered index i5 on student (name);

clustered index i6 on techdept (dept);

– 100000 rows in employee, 100000 students, 10 departments; Cold buffer– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives

(10000RPM), Windows 2000.

Page 76: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

76

Queries – View on Join

View Techlocation:
create view techlocation as
select ssnum, techdept.dept, location
from employee, techdept
where employee.dept = techdept.dept;

Queries:
– Original:
select dept from techlocation where ssnum = ?;
– Rewritten:
select dept from employee where ssnum = ?;

Page 77: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

77

Query Rewriting - Views

• All systems expand the selection on a view into a join

• The difference between a plain selection and a join (on a primary key-foreign key) followed by a projection is greater on SQL Server than on Oracle and DB2 v7.1.

[Chart: throughput improvement (percent) from rewriting the query on the view, for SQL Server 2000, Oracle 8i and DB2 v7.1.]

Page 78: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

78

Queries – Correlated Subqueries
Queries:
– Original:
select ssnum from employee e1
where salary = (select max(salary) from employee e2 where e2.dept = e1.dept);
– Rewritten:
select max(salary) as bigsalary, dept
into TEMP
from employee
group by dept;

select ssnum
from employee, TEMP
where salary = bigsalary
and employee.dept = TEMP.dept;

Page 79: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

79

Query Rewriting – Correlated Subqueries

• SQL Server 2000 does a good job at handling the correlated subqueries (a hash join is used as opposed to a nested loop between query blocks)– The techniques implemented

in SQL Server 2000 are described in “Orthogonal Optimization of Subqueries and Aggregates” by C.Galindo-Legaria and M.Joshi, SIGMOD 2001.

[Chart: throughput improvement (percent) from rewriting the correlated subquery -- small on SQL Server 2000, greater than 1000% on one of the other systems and greater than 10000% on the other.]

Page 80: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

80

Aggregate Maintenance -- data

Settings:orders( ordernum, itemnum, quantity, purchaser, vendor );create clustered index i_order on orders(itemnum);

store( vendor, name );

item(itemnum, price);create clustered index i_item on item(itemnum);

vendorOutstanding( vendor, amount);

storeOutstanding( store, amount);

– 1000000 orders, 10000 stores, 400000 items; Cold buffer– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives

(10000RPM), Windows 2000.

Page 81: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

81

Aggregate Maintenance -- triggers

Triggers for Aggregate Maintenance:
create trigger updateVendorOutstanding on orders for insert as
update vendorOutstanding
set amount =
  (select vendorOutstanding.amount + sum(inserted.quantity*item.price)
   from inserted, item
   where inserted.itemnum = item.itemnum)
where vendor = (select vendor from inserted);

create trigger updateStoreOutstanding on orders for insert as
update storeOutstanding
set amount =
  (select storeOutstanding.amount + sum(inserted.quantity*item.price)
   from inserted, item
   where inserted.itemnum = item.itemnum)
where store = (select store.name from inserted, store where inserted.vendor = store.vendor);

Page 82: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

82

Aggregate Maintenance -- transactions

Concurrent Transactions:
– Insertions:
insert into orders values (1000350,7825,562,'xxxxxx6944','vendor4');
– Queries (first without, then with redundant tables):
select orders.vendor, sum(orders.quantity*item.price)
from orders, item
where orders.itemnum = item.itemnum
group by orders.vendor;
vs. select * from vendorOutstanding;

select store.name, sum(orders.quantity*item.price)
from orders, item, store
where orders.itemnum = item.itemnum
and orders.vendor = store.vendor
group by store.name;
vs. select * from storeOutstanding;

Page 83: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

83

Aggregate Maintenance

• SQLServer 2000 on Windows 2000

• Using triggers for view maintenance

• If queries frequent or important, then aggregate maintenance is good.

[Chart: percentage of gain with aggregate maintenance -- inserts lose about 62%, while the vendor-total and store-total queries gain roughly 21 900% and 31 900%.]

Page 84: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

84

Superlinearity -- data

Settings:sales( id, itemid, customerid, storeid, amount, quantity);item (itemid);customer (customerid);store (storeid);

A sale is successful if all foreign keys are present.

successfulsales(id, itemid, customerid, storeid, amount, quantity);

unsuccessfulsales(id, itemid, customerid, storeid, amount, quantity);

tempsales( id, itemid, customerid, storeid, amount,quantity);

Page 85: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

85

Superlinearity -- indexes

Settings (non-clustering, dense indexes):index s1 on item(itemid);

index s2 on customer(customerid);

index s3 on store(storeid);

index succ on successfulsales(id);

– 1000000 sales, 400000 customers, 40000 items, 1000 stores

– Cold buffer

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 86: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

86

Superlinearity -- queries

Queries:
– Insert / delete (with or without index):
insert into successfulsales
select sales.id, sales.itemid, sales.customerid, sales.storeid, sales.amount, sales.quantity
from sales, item, customer, store
where sales.itemid = item.itemid
and sales.customerid = customer.customerid
and sales.storeid = store.storeid;

insert into unsuccessfulsales select * from sales;
go
delete from unsuccessfulsales
where id in (select id from successfulsales);

Page 87: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

87

Superlinearity -- batch queries

Queries:
– Small batches:
DECLARE @Nlow INT;
DECLARE @Nhigh INT;
DECLARE @INCR INT;
set @INCR = 100000
set @Nlow = 0
set @Nhigh = @INCR
WHILE (@Nlow <= 500000)
BEGIN
  insert into tempsales select * from sales where id between @Nlow and @Nhigh
  set @Nlow = @Nlow + @INCR
  set @Nhigh = @Nhigh + @INCR
  delete from tempsales where id in (select id from successfulsales);
  insert into unsuccessfulsales select * from tempsales;
  delete from tempsales;
END

Page 88: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

88

Superlinearity -- outer join

Queries:
– Outer join:
insert into successfulsales
select sales.id, item.itemid, customer.customerid, store.storeid, sales.amount, sales.quantity
from ((sales left outer join item on sales.itemid = item.itemid)
left outer join customer on sales.customerid = customer.customerid)
left outer join store on sales.storeid = store.storeid;

insert into unsuccessfulsales
select * from successfulsales
where itemid is null
or customerid is null
or storeid is null;
go
delete from successfulsales
where itemid is null
or customerid is null
or storeid is null;

Page 89: database tuning in ppt

4 - Relational Systems © Dennis Shasha, Philippe Bonnet - 2002

89

Circumventing Superlinearity

• SQL Server 2000
• Outer join achieves the best response time.
• Small batches do not help because the overhead of crossing the application interface is higher than the benefit of joining with smaller tables.

[Chart: response time (sec) for the small and large cases, comparing insert/delete without index, insert/delete with index, small batches, and outer join.]

Page 90: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

90

Tuning the Application Interface

• 4GL
– Power++, Visual Basic
• Programming language + call-level interface
– ODBC: Open DataBase Connectivity
– JDBC: Java-based API
– OCI (C++/Oracle), CLI (C++/DB2), Perl/DBI
• In the following experiments, the client program is located on the database server site. Overhead is due to crossing the application interface.

Page 91: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

91

Looping can hurt -- data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

– 600 000 rows; warm buffer.

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 92: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

92

Looping can hurt -- queries
• Queries:
– No loop:
sqlStmt = "select * from lineitem where l_partkey <= 200;"
odbc->prepareStmt(sqlStmt);
odbc->execPrepared(sqlStmt);
– Loop:
sqlStmt = "select * from lineitem where l_partkey = ?;"
odbc->prepareStmt(sqlStmt);
for (int i=1; i<100; i++) {
  odbc->bindParameter(1, SQL_INTEGER, i);
  odbc->execPrepared(sqlStmt);
}

Page 93: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

93

Looping can Hurt

• SQL Server 2000 on Windows 2000

• Crossing the application interface has a significant impact on performance.

• Why would a programmer use a loop instead of relying on set-oriented operations: object-orientation?

[Chart: throughput (records/sec) with the loop vs. without the loop.]

Page 94: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

94

Cursors are Death -- data

Settings:employees(ssnum, name, lat, long, hundreds1,

hundreds2);

– 100000 rows ; Cold buffer

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 95: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

95

Cursors are Death -- queries

Queries:
– No cursor:
select * from employees;
– Cursor:
DECLARE d_cursor CURSOR FOR select * from employees;
OPEN d_cursor
FETCH NEXT from d_cursor
while (@@FETCH_STATUS = 0)
BEGIN
  FETCH NEXT from d_cursor
END
CLOSE d_cursor
DEALLOCATE d_cursor
go

Page 96: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

96

Cursors are Death

• SQL Server 2000 on Windows 2000

• Response time is a few seconds with a SQL query and more than an hour iterating over a cursor.

[Chart: throughput (records/sec) with the cursor vs. the plain SQL query.]

Page 97: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

97

Retrieve Needed Columns Only - data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

create index i_nc_lineitem on lineitem (l_orderkey, l_partkey, l_suppkey, l_shipdate, l_commitdate);

– 600 000 rows; warm buffer.– Lineitem records are ~ 10 bytes long– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives

(10000RPM), Windows 2000.

Page 98: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

98

Retrieve Needed Columns Only - queries

Queries:– All

Select * from lineitem;

– Covered subsetSelect l_orderkey, l_partkey, l_suppkey, l_shipdate, l_commitdate from lineitem;

Page 99: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

99

Retrieve Needed Columns Only

• Avoid transferring unnecessary data

• May enable use of a covering index.

• In the experiment the subset contains ¼ of the attributes.– Reducing the amount of

data that crosses the application interface yields significant performance improvement.

[Chart: throughput (queries/msec) without and with the index, for retrieving all attributes vs. the covered subset.]

Experiment performed on Oracle8iEE on Windows 2000.

Page 100: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

100

Bulk Loading Data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY,

L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

– Initially the table is empty; 600 000 rows to be inserted (138Mb)

– The table sits on one disk. No constraint or index is defined.

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 101: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

101

Bulk Loading Queries

Oracle 8i:
sqlldr directpath=true control=load_lineitem.ctl data=E:\Data\lineitem.tbl

load data
infile "lineitem.tbl"
into table LINEITEM append
fields terminated by '|'
(
L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_RETURNFLAG, L_LINESTATUS, L_SHIPDATE DATE "YYYY-MM-DD", L_COMMITDATE DATE "YYYY-MM-DD", L_RECEIPTDATE DATE "YYYY-MM-DD", L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT
)

Page 102: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

102

Direct Path

• Direct path loading bypasses the query engine and the storage manager. It is orders of magnitude faster than for conventional bulk load (commit every 100 records) and inserts (commit for each record).

[Chart: throughput (records/sec) for conventional load, direct path load, and inserts.]

Experiment performed on Oracle8iEE on Windows 2000.

Page 103: database tuning in ppt

5 - Tuning the API © Dennis Shasha, Philippe Bonnet - 2002

103

Batch Size

• Throughput increases steadily as the batch size increases to 100000 records. Throughput remains constant afterwards.

• Trade-off between performance and amount of data that has to be reloaded in case of problem.

[Chart: throughput (records/sec) vs. batch size, from 0 to 600000 records.]

Experiment performed on SQL Server 2000 on Windows 2000.

Page 104: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

104

Tuning E-Commerce Applications

Database-backed web-sites:– Online shops– Shop comparison portals– MS TerraServer

Page 105: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

105

E-commerce Application Architecture

[Diagram: clients connect to web servers (with web caches), which call application servers, which query the database server (with DB caches).]

Page 106: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

106

E-commerce Application Workload

• Touristic searching (frequent, cached)
– Access the top few pages. Pages may be personalized. Data may be out-of-date.
• Category searching (frequent, partly cached and need for timeliness guarantees)
– Down some hierarchy, e.g., men's clothing.
• Keyword searching (frequent, uncached, need for timeliness guarantees)
• Shopping cart interactions (rare, but transactional)
• Electronic purchasing (rare, but transactional)

Page 107: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

107

Design Issues
• Need to keep historic information
– Electronic payment acknowledgements get lost.
• Preparation for variable load
– Regular patterns of web site accesses during the day and within a week.
• Possibility of disconnections
– State information transmitted to the client (cookies)
• Special consideration for low bandwidth
• Schema evolution
– Representing e-commerce data as attribute-value pairs (IBM WebSphere)

Page 108: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

108

Caching
• Web cache:
– Static web pages
– Caching fragments of dynamically created web pages
• Database cache (Oracle9iAS, TimesTen's FrontTier)
– Materialized views to represent cached data.
– Queries are executed either using the database cache or the database server. Updates are propagated to keep the cache(s) consistent.
– Note to vendors: it would be good to have queries distributed between cache and server.

Page 109: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

109

Ecommerce -- setting

Settings:shoppingcart( shopperid, itemid, price, qty);

– 500000 rows; warm buffer

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 110: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

110

Ecommerce -- transactions

Concurrent Transactions:– Mix

insert into shoppingcart values (107999,914,870,214);update shoppingcart set Qty = 10 where shopperid = 95047 and itemid = 88636;

delete from shoppingcart where shopperid = 86123 and itemid = 8321;

select shopperid, itemid, qty, price from shoppingcart where shopperid = ?;

– Queries Onlyselect shopperid, itemid, qty, price from shoppingcart where shopperid = ?;

Page 111: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

111

Connection Pooling (no refusals)
• Each thread establishes a connection and performs 5 insert statements.
• If a connection cannot be established, the thread waits 15 seconds before trying again.
• The number of connections is limited to 60 on the database server.
• Using connection pooling, the requests are queued and serviced when possible. There are no refused connections.

[Chart: response time vs. number of client threads, comparing connection pooling with simple connections.]

Experiment performed on Oracle8i on Windows 2000

Page 112: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

112

Indexing

• Using a clustered index on shopperid in the shopping cart provides:– Query speed-up

– Update/Deletion speed-up

Experiment performed on SQL Server 2000 on Windows 2000

[Chart: throughput (queries/sec) for the mixed workload and the queries-only workload, with no index vs. a clustered index.]

Page 113: database tuning in ppt

6 - Ecommerce © Dennis Shasha, Philippe Bonnet - 2002

113

Capacity Planning

• Arrival rate
– A1 is given as an assumption
– A2 = (0.4 x A1) + (0.5 x A2)
– A3 = 0.1 x A2
• Service time (S)
– S1, S2, S3 are measured
• Utilization
– U = A x S
• Response time
– R = U/(A(1-U)) = S/(1-U) (assuming Poisson arrivals)

[Diagram: arrivals enter at Entry (S1); 0.4 of them go on to Search (S2); 0.5 of Search arrivals return to Search; 0.1 go to Checkout (S3).]
Getting the demand assumptions right is what makes capacity planning hard.
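A worked instance of these formulas, under assumed numbers (not from the tutorial): if A1 = 10 requests/sec, then A2 = 0.4 x 10 + 0.5 x A2, so A2 = 8 and A3 = 0.8. With a measured S2 = 0.1 sec, U2 = A2 x S2 = 0.8 and R2 = S2/(1-U2) = 0.5 sec -- a five-fold inflation of the bare service time at 80% utilization.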

Page 114: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

114

Datawarehouse Tuning

• Aggregate (strategic) targeting:– Aggregates flow up from a wide selection of

data, and then – Targeted decisions flow down

• Examples:– Riding the wave of clothing fads– Tracking delays for frequent-flyer customers

Page 115: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

115

Data Warehouse Workload

• Broad
– Aggregate queries over ranges of values, e.g., find the total sales by region and quarter.
• Deep
– Queries that require precise individualized information, e.g., which frequent flyers have been delayed several times in the last month?
• Dynamic (vs. static)
– Queries that require up-to-date information, e.g., which nodes have the highest traffic now?

Page 116: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

116

Tuning Knobs

• Indexes

• Materialized views

• Approximation

Page 117: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

117

Bitmaps -- data

Settings:lineitem ( L_ORDERKEY, L_PARTKEY , L_SUPPKEY, L_LINENUMBER,

L_QUANTITY, L_EXTENDEDPRICE , L_DISCOUNT, L_TAX , L_RETURNFLAG, L_LINESTATUS , L_SHIPDATE, L_COMMITDATE, L_RECEIPTDATE, L_SHIPINSTRUCT , L_SHIPMODE , L_COMMENT );

create bitmap index b_lin_2 on lineitem(l_returnflag);

create bitmap index b_lin_3 on lineitem(l_linestatus);

create bitmap index b_lin_4 on lineitem(l_linenumber);

– 100000 rows ; cold buffer

– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives (10000RPM), Windows 2000.

Page 118: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

118

Bitmaps -- queries

Queries:– 1 attribute

select count(*) from lineitem where l_returnflag = 'N';

– 2 attributesselect count(*) from lineitem where l_returnflag = 'N' and l_linenumber > 3;

– 3 attributesselect count(*) from lineitem where l_returnflag =

'N' and l_linenumber > 3 and l_linestatus = 'F';

Page 119: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

119

Bitmaps

• Order of magnitude improvement compared to scan.

• Bitmaps are best suited for multiple conditions on several attributes, each having a low selectivity.
[Chart: throughput (queries/sec) for the 1-, 2- and 3-attribute conditions, comparing a linear scan with the bitmap indexes on l_returnflag and l_linestatus.]

Page 120: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

120

Multidimensional Indexes -- data

Settings:
create table spatial_facts( a1 int, a2 int, a3 int, a4 int, a5 int, a6 int, a7 int, a8 int, a9 int, a10 int, geom_a3_a7 mdsys.sdo_geometry );
create index r_spatialfacts on spatial_facts(geom_a3_a7) indextype is mdsys.spatial_index;
create bitmap index b2_spatialfacts on spatial_facts(a3,a7);

– 500000 rows ; cold buffer– Dual Pentium II (450MHz, 512Kb), 512 Mb RAM, 3x18Gb drives

(10000RPM), Windows 2000.

Page 121: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

121

Multidimensional Indexes -- queries

Queries:
– Point queries:
select count(*) from fact where a3 = 694014 and a7 = 928878;
select count(*) from spatial_facts where SDO_RELATE(geom_a3_a7, MDSYS.SDO_GEOMETRY(2001, NULL, MDSYS.SDO_POINT_TYPE(694014, 928878, NULL), NULL, NULL), 'mask=equal querytype=WINDOW') = 'TRUE';
– Range queries:
select count(*) from spatial_facts where SDO_RELATE(geom_a3_a7, mdsys.sdo_geometry(2003, NULL, NULL, mdsys.sdo_elem_info_array(1,1003,3), mdsys.sdo_ordinate_array(10,800000,1000000,1000000)), 'mask=inside querytype=WINDOW') = 'TRUE';
select count(*) from spatial_facts where a3 > 10 and a3 < 1000000 and a7 > 800000 and a7 < 1000000;

Page 122: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

122

Multidimensional Indexes

• Oracle 8i on Windows 2000

• Spatial extension:
– 2-dimensional data
– Spatial functions used in the query
• The R-tree does not perform well because of the overhead of the spatial extension.

[Chart: response time (sec) for the point query and the range query, comparing the bitmap indexes on two attributes with the R-tree.]

Page 123: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

123

Multidimensional Indexes

R-tree plan:
SELECT STATEMENT
  SORT AGGREGATE
    TABLE ACCESS BY INDEX ROWID SPATIAL_FACTS
      DOMAIN INDEX R_SPATIALFACTS

Bitmap plan:
SELECT STATEMENT
  SORT AGGREGATE
    BITMAP CONVERSION COUNT
      BITMAP AND
        BITMAP INDEX SINGLE VALUE B_FACT7
        BITMAP INDEX SINGLE VALUE B_FACT3

Page 124: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

124

Materialized Views -- data

Settings:orders( ordernum, itemnum, quantity, purchaser, vendor );

create clustered index i_order on orders(itemnum);

store( vendor, name );

item(itemnum, price);

create clustered index i_item on item(itemnum);

– 1000000 orders, 10000 stores, 400000 items; Cold buffer

– Oracle 9i

– Pentium III (1 GHz, 256 Kb), 1Gb RAM, Adapter 39160 with 2 channels, 3x18Gb drives (10000RPM), Linux Debian 2.4.

Page 125: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

125

Materialized Views -- data

Settings:

create materialized view vendorOutstanding

build immediate

refresh complete

enable query rewrite

as

select orders.vendor, sum(orders.quantity*item.price)

from orders,item

where orders.itemnum = item.itemnum

group by orders.vendor;

Page 126: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

126

Materialized Views -- transactions

Concurrent Transactions:

– Insertions
insert into orders values (1000350,7825,562,'xxxxxx6944','vendor4');

– Queries
select orders.vendor, sum(orders.quantity*item.price)
from orders, item
where orders.itemnum = item.itemnum
group by orders.vendor;

select * from vendorOutstanding;

Page 127: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

127

Materialized Views

• Graph:
– Oracle 9i on Linux
– Total sales by vendor are materialized

• Trade-off between query speed-up and view maintenance (see the refresh clauses sketched below):
– The impact of incremental maintenance on performance is significant.
– Rebuild maintenance achieves a good throughput.
– A static data warehouse offers a good trade-off.

[Chart: insertion throughput (statements/sec) -- materialized view with fast refresh on commit vs. materialized view with complete refresh on demand vs. no materialized view]

[Chart: query response time (sec) -- materialized view vs. no materialized view]
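The two maintenance policies compared in the graph correspond to different refresh clauses on the materialized view. A minimal sketch, assuming Oracle 9i syntax; the view names are illustrative, and fast refresh on commit additionally requires materialized view logs on the base tables plus further restrictions on aggregate views (such as including count(*) in the select list):

-- incremental maintenance: the view is updated as part of each committing transaction
create materialized view log on orders with rowid;
create materialized view log on item with rowid;

create materialized view vendorOutstanding_fast
build immediate
refresh fast on commit
as
select orders.vendor, sum(orders.quantity*item.price) as total, count(*) as cnt
from orders, item
where orders.itemnum = item.itemnum
group by orders.vendor;

-- rebuild maintenance: the view is recomputed from scratch only when explicitly requested,
-- e.g., with dbms_mview.refresh('VENDOROUTSTANDING_ONDEMAND')
create materialized view vendorOutstanding_ondemand
build immediate
refresh complete on demand
as
select orders.vendor, sum(orders.quantity*item.price) as total
from orders, item
where orders.itemnum = item.itemnum
group by orders.vendor;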

Page 128: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

128

Materialized View Maintenance

• A problem arises when there is a large number of views to maintain.

• The order in which views are maintained is important:
– A view can be computed from an existing view instead of being recomputed from the base relations (total per region can be computed from total per nation; see the sketch below).

• Let the views and base tables be nodes v_i.

• Let there be an edge from v_1 to v_2 if it is possible to compute the view v_2 from v_1. Associate the cost of computing v_2 from v_1 with this edge.

• Compute all-pairs shortest paths, where the start nodes are the set of base tables.

• The result is an acyclic graph A. Take a topological sort of A and let that be the order of view construction.
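A sketch of the idea on a TPC-H-like schema: the per-region view is defined over the per-nation view rather than over the base relations. The view names are illustrative, and restrictions on nested materialized views vary by product and version:

create materialized view total_per_nation
build immediate refresh complete
as
select n_nationkey, n_regionkey, sum(l_extendedprice * (1 - l_discount)) as total
from lineitem, orders, customer, nation
where l_orderkey = o_orderkey
  and o_custkey = c_custkey
  and c_nationkey = n_nationkey
group by n_nationkey, n_regionkey;

-- cheaper: aggregate the small per-nation view instead of re-joining the base tables
create materialized view total_per_region
build immediate refresh complete
as
select r_regionkey, sum(total) as total
from total_per_nation, region
where n_regionkey = r_regionkey
group by r_regionkey;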

Page 129: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

129

Approximations -- data

Settings:
– TPC-H schema
– Approximations:

insert into approxlineitem
select top 6000 *
from lineitem
where l_linenumber = 4;

insert into approxorders
select O_ORDERKEY, O_CUSTKEY, O_ORDERSTATUS, O_TOTALPRICE, O_ORDERDATE, O_ORDERPRIORITY, O_CLERK, O_SHIPPRIORITY, O_COMMENT
from orders, approxlineitem
where o_orderkey = l_orderkey;

Page 130: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

130

Approximations -- queries

insert into approxsupplier
select distinct S_SUPPKEY, S_NAME, S_ADDRESS, S_NATIONKEY, S_PHONE, S_ACCTBAL, S_COMMENT
from approxlineitem, supplier
where s_suppkey = l_suppkey;

insert into approxpart
select distinct P_PARTKEY, P_NAME, P_MFGR, P_BRAND, P_TYPE, P_SIZE, P_CONTAINER, P_RETAILPRICE, P_COMMENT
from approxlineitem, part
where p_partkey = l_partkey;

insert into approxpartsupp
select distinct PS_PARTKEY, PS_SUPPKEY, PS_AVAILQTY, PS_SUPPLYCOST, PS_COMMENT
from partsupp, approxpart, approxsupplier
where ps_partkey = p_partkey and ps_suppkey = s_suppkey;

insert into approxcustomer
select distinct C_CUSTKEY, C_NAME, C_ADDRESS, C_NATIONKEY, C_PHONE, C_ACCTBAL, C_MKTSEGMENT, C_COMMENT
from customer, approxorders
where o_custkey = c_custkey;

insert into approxregion select * from region;

insert into approxnation select * from nation;

Page 131: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

131

Approximations -- more queries

Queries:

– Single table query on lineitem
select l_returnflag, l_linestatus,
  sum(l_quantity) as sum_qty,
  sum(l_extendedprice) as sum_base_price,
  sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
  sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
  avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc, count(*) as count_order
from lineitem
where datediff(day, l_shipdate, '1998-12-01') <= '120'
group by l_returnflag, l_linestatus
order by l_returnflag, l_linestatus;

Page 132: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

132

Approximations -- still more

Queries:

– 6-way join
select n_name, avg(l_extendedprice * (1 - l_discount)) as revenue
from customer, orders, lineitem, supplier, nation, region
where c_custkey = o_custkey
  and l_orderkey = o_orderkey
  and l_suppkey = s_suppkey
  and c_nationkey = s_nationkey
  and s_nationkey = n_nationkey
  and n_regionkey = r_regionkey
  and r_name = 'AFRICA'
  and o_orderdate >= '1993-01-01'
  and datediff(year, o_orderdate, '1993-01-01') < 1
group by n_name
order by revenue desc;

Page 133: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

133

Approximation accuracy

• Good approximation for query Q1 on lineitem

• The aggregate values obtained on a query with a 6-way join are significantly different from the actual values -- for some applications this may still be good enough.

[Chart: TPC-H query Q5 (6-way join) -- approximated aggregate values as a percentage of the base aggregate values, for each group returned by the query, with 1% and 10% samples]

[Chart: TPC-H query Q1 (lineitem) -- approximated aggregate values as a percentage of the base aggregate values, for each aggregate, with 1% and 10% samples]

Page 134: database tuning in ppt

7 - Datawarehouse © Dennis Shasha, Philippe Bonnet - 2002

134

Approximation Speedup

• Aqua approximation on the TPC-H schema:
– 1% and 10% lineitem samples propagated.

• The query speed-up obtained with approximated relations is significant.

[Chart: response time (sec) for TPC-H queries Q1 and Q5 -- base relations vs. 1% sample vs. 10% sample]

Page 135: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

135

Tuning Distributed Applications

• Queries across multiple databases
– Federated Datawarehouse
– IBM’s DataJoiner, now integrated into DB2 v7.2

• The source should perform as much work as possible and return as little data as possible for processing at the federated server (a sketch follows below).

• Processing data across multiple databases
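A sketch of the principle using a plain database link; the federated middleware's mechanisms (e.g., DB2 nicknames) differ, but the goal is the same. sales_link is an illustrative link name, and whether the work is actually pushed down depends on the optimizer:

-- good: filter and aggregate at the source, ship only one row per vendor
select vendor, sum(quantity) as total_qty
from orders@sales_link
where itemnum = 7825
group by vendor;

-- bad: ship the whole remote table and reduce it at the federated server,
-- e.g., by applying a local function to a remote column, which prevents pushing the work down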

Page 136: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

136

A puzzle

• Two databases X and Y:
– X records inventory data to be used for restocking
– Y contains sales data about shipments

• You want to improve shipping:
– Certain data of X should be postprocessed on Y shortly after it enters X
– You are allowed to add/change transactions on X and Y
– You want to avoid losing data from X, and you want to avoid double-processing data on Y, even in the face of failures.

Page 137: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

137

Two-Phase Commit

[Diagram: two-phase commit between the source X (coordinator and participant) and the destination Y (participant); both sides commit in coordination]

+ Commits are coordinated between the source and the destination

- If one participant fails then blocking can occur

- Not all db systems support a prepare-to-commit interface

Page 138: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

138

Replication Server

+ Destination within a few seconds of being up-to-date

+ Decision support queries can be asked on destination db

- An administrator is needed when the network connection breaks!

[Diagram: data replicated from source X to destination Y]

Page 139: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

139

Staging Tables

+ No specific mechanism is necessary at source or destination

- Coordination of transactions on X and Y

[Diagram: source X writes into a staging table at destination Y]

Page 140: database tuning in ppt

8 - Distributed Apps © Dennis Shasha, Philippe Bonnet - 2002

140

Staging Tables

[Diagram, steps 1 and 2: database X holds source table S (rows marked Y / unprocessed); database Y holds staging table I, empty in step 1. M1 reads the unprocessed rows from the source table; M2 writes them to the destination staging table, where they appear as unprocessed in step 2.]

Page 141: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

141

Staging Tables

[Diagram, steps 3 and 4: M3 updates the staging table I on the destination, marking the transferred rows processed (a transaction that updates the destination site). M4 then deletes those rows from the source table S and afterwards from the destination staging table (a transaction that updates the source site). A SQL sketch of the protocol follows.]

Page 142: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

142

Troubleshooting Techniques (Alberto Lerner)

Consumer-producer chain framework

Query plan monitors

Performance monitors

Event monitors

Page 143: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

143

A Consumer-Producer Chain of a DBMS’s Resources

[Diagram: SQL commands flow down a consumer-producer chain, with probing spots for indicators at each level --
High-level consumers: parser, optimizer, execution subsystem;
Intermediate resources/consumers: disk subsystem, cache manager, logging subsystem, locking subsystem;
Primary resources: network, disk/controller, CPU, memory]

Page 144: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

144

Recurrent Patterns of Problems

• An overloading high-level consumer

• A poorly parameterized subsystem

• An overloaded primary resource

Effects are not always felt first where the cause is!

Page 145: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

145

A Systematic Approach to Monitoring

Extract indicators to answer the following questions:

• Question 1: Are critical queries being served in the most efficient manner?

• Question 2: Are subsystems making optimal use of resources?

• Question 3: Are there enough primary resources available?

Page 146: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

146

Investigating High Level Consumers

• Answer question 1: “Are critical queries being served in the most efficient manner?”

1. Identify the critical queries

2. Analyze their access plans

3. Profile their execution

Page 147: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

147

Identifying Critical Queries

Critical queries are usually those that:

• Take a long time

• Are frequently executed

Often, a user complaint will tip us off.

Page 148: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

148

Event Monitors to Identify Critical Queries

• If no user complains...
• Capture usage measurements at the end of each query and then sort by usage x time (a sketch follows below).
• Less overhead than other types of tools because the indicators are usually a by-product of the operations monitored.
• Typical measures include CPU used, IO used, locks obtained, etc.
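A hedged sketch of this kind of measurement against Oracle's v$sql view; the columns are cumulative over all executions of each statement, and names and units vary across versions (in 9i, cpu_time and elapsed_time are in microseconds):

select sql_text, executions, buffer_gets, disk_reads, cpu_time, elapsed_time
from v$sql
order by elapsed_time desc;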

Page 149: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

149

An example Event Monitor

• CPU indicators sorted by Oracle’s Trace Data Viewer

• Similar tools: DB2’s Event Monitor and MSSQL’s Server Profiler

Page 150: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

150

An example Plan Explainer

• Access plan according to MSSQL’s Query Analyzer

• Similar tools: DB2’s Visual Explain and Oracle’s SQL Analyze Tool

Page 151: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

151

Finding Strangeness in Access Plans

What to pay attention to in a plan

• Access paths for each table

• Sorts or intermediary results

• Order of operations

• Algorithms used in the operators

Page 152: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

152

To Index or not to index?

select c_name, n_name
from CUSTOMER join NATION on c_nationkey = n_nationkey
where c_acctbal > 0

Which plan performs best? (nation_pk is a non-clustered index over n_nationkey, and similarly acctbal_ix over c_acctbal; definitions are sketched below)
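For reference, a sketch of the index definitions the question assumes, in T-SQL (nonclustered is the default when no clustering is specified):

create index nation_pk  on NATION(n_nationkey);
create index acctbal_ix on CUSTOMER(c_acctbal);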

Page 153: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

153

Non-clustering indexes can be trouble

For a low selectivity predicate, each access to the index generates a random access to the table -- possibly to a page that was already read! The number of pages read from the table ends up greater than the table's size, i.e., a table scan is way better.

                       Table Scan       Index Scan
CPU time               5 sec            76 sec
data logical reads     143,075 pages    272,618 pages
data physical reads    6,777 pages      131,425 pages
index logical reads    136,319 pages    273,173 pages
index physical reads   7 pages          552 pages

Page 154: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

154

An example Performance Monitor (query level)

• Details of buffer and CPU consumption for a query according to DB2’s Benchmark tool

• Similar tools: MSSQL’s SET STATISTICS switch and Oracle’s SQL Analyze Tool

Statement number: 1

select C_NAME, N_NAME

from DBA.CUSTOMER join DBA.NATION on C_NATIONKEY = N_NATIONKEY

where C_ACCTBAL > 0

Number of rows retrieved is: 136308

Number of rows sent to output is: 0

Elapsed Time is: 76.349 seconds

Buffer pool data logical reads = 272618

Buffer pool data physical reads = 131425

Buffer pool data writes = 0

Buffer pool index logical reads = 273173

Buffer pool index physical reads = 552

Buffer pool index writes = 0

Total buffer pool read time (ms) = 71352

Total buffer pool write time (ms) = 0

Summary of Results

==================

Elapsed Agent CPU Rows Rows

Statement # Time (s) Time (s) Fetched Printed

1 76.349 6.670 136308 0

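The MSSQL switch mentioned above produces comparable per-query indicators; a minimal sketch in T-SQL:

set statistics io on;
set statistics time on;

select C_NAME, N_NAME
from CUSTOMER join NATION on C_NATIONKEY = N_NATIONKEY
where C_ACCTBAL > 0;
-- the message output then reports logical reads, physical reads and read-ahead reads per table,
-- plus CPU time and elapsed time for the statement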

Page 155: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

155

An example Performance Monitor (system level)

• An IO indicator’s consumption evolution (qualitative and quantitative) according to DB2’s System Monitor

• Similar tools: Windows’ Performance Monitor and Oracle’s Performance Manager

Page 156: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

156

Investigating High Level Consumers: Summary

[Flowchart: find critical queries -> none found? investigate the lower levels; otherwise answer Q1 over them -> over-consumption? tune the problematic queries; otherwise investigate the lower levels]

Page 157: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

157

Investigating Primary Resources

• Answer question 3:“Are there enough primary resources available for a DBMS to consume?”

• Primary resources are: CPU, disk/controllers, memory, and network

• Analyze specific OS-level indicators to discover bottlenecks.

• A system-level Performance Monitor is the right tool here

Page 158: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

158

CPU Consumption Indicators at the OS Level

[Chart: CPU % utilization over time, plotting total usage and system usage against a 70% threshold]

• Sustained utilization over 70% should trigger an alert.
• System utilization shouldn't be more than 40%.
• The DBMS (on a non-dedicated machine) should be getting a decent time share.

Page 159: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

159

Disk Performance Indicators at the OS Level

[Diagram: new requests enter a wait queue (average queue size) before being served as disk transfers/second]

• The average queue size should be close to zero.
• Wait times should also be close to zero.
• An idle disk with pending requests? Check for controller contention.
• Also, transfers should be balanced among disks/controllers.

Page 160: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

160

Memory Consumption Indicators at the OS Level

[Diagram: virtual memory = real memory + pagefile]

• Page faults per unit of time should be close to zero. If paging happens, it should at least not hit DB cache pages.
• The % of the pagefile in use (it is a fixed file/partition) tells you how much memory is "lacking".

Page 161: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

161

Investigating Intermediate Resources/Consumers

• Answer question 2:“Are subsystems making optimal use of resources?”

• Main subsystems: Cache Manager, Disk subsystem, Lock subsystem, and Log/Recovery subsystem

• Similarly to Q3, extract and analyze relevant Performance Indicators

Page 162: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

162

Cache Manager Performance Indicators

[Diagram: a table scan issues readpage() calls to the cache manager, which serves them from its data pages using free page slots, a pick-victim strategy, and page reads/writes against disk]

• If the page is not in the cache, a readpage (logical read) generates an actual IO (physical read). The ratio of readpages that did not generate a physical IO should be 90% or more (a sketch follows below).
• Pages are regularly saved to disk to make free space. The number of free slots should always be > 0.
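A hedged sketch of how that ratio can be computed, assuming Oracle's v$sysstat view (statistic names differ across products and versions):

select 1 - phy.value / (db.value + cons.value) as buffer_cache_hit_ratio
from v$sysstat phy, v$sysstat db, v$sysstat cons
where phy.name  = 'physical reads'
  and db.name   = 'db block gets'
  and cons.name = 'consistent gets';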

Page 163: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

163

Disk Manager Performance Indicators

[Diagram: simplified storage hierarchy: rows, page, extent, file, disk]

• Row displacement: should be kept under 5% of rows.
• Free space fragmentation: pages with little free space should not be in the free list.
• Data fragmentation: ideally, files that store DB objects (table, index) should be in one or a few (<5) contiguous extents.
• File position: should balance the workload evenly among all disks.

Page 164: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

164

Lock Manager Performance Indicators

[Diagram: a lock request (object, lock type, transaction ID) is placed on the lock list or, if it cannot be granted, on the pending-locks list]

• Deadlocks and timeouts should seldom happen (no more than 1% of the transactions).
• Lock wait time for a transaction should be a small fraction of the whole transaction time.
• The number of locks on wait should be a small fraction of the number of locks on the lock list (a sketch follows below).
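A hedged sketch of that last indicator, assuming Oracle's v$lock view, where a non-zero request column marks a lock that is being waited for:

select count(case when request > 0 then 1 end) as locks_on_wait,
       count(*)                                as locks_on_list
from v$lock;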

Page 165: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

165

Investigating Intermediate and Primary Resources: Summary

[Flowchart: answer Q3 -> problems at the OS level? tune the low-level resources; otherwise answer Q2 -> problematic subsystems? tune those subsystems; otherwise investigate the upper level]

Page 166: database tuning in ppt

9 - Troubleshooting © Dennis Shasha, Philippe Bonnet - 2002

166

Troubleshooting Techniques

• Monitoring a DBMS’s performance should be based on queries and resources.
– The consumption chain helps distinguish problems’ causes from their symptoms.
– Existing tools help extract the relevant performance indicators.

Page 167: database tuning in ppt

© Dennis Shasha, Philippe Bonnet - 2002

167

Recall Tuning Principles

• Think globally, fix locally (troubleshoot to see what matters)

• Partitioning breaks bottlenecks (find parallelism in processors, controllers, caches, and disks)

• Start-up costs are high; running costs are low (batch size, cursors)

• Be prepared for trade-offs (unless you can rethink the queries)