
Page 1: Collaborate vdb performance

Performance boosting with Database Virtualization

Kyle Hailey, http://dboptimizer.com

Page 2: Collaborate vdb performance

• Technology
  – Full Cloning
  – Thin Provision Cloning
  – Database Virtualization

• IBM & Delphix Benchmark
  – OLTP
  – DSS
  – Concurrent databases

• Problems, Solutions, Tools
  – Oracle
  – Network
  – I/O

Page 3: Collaborate vdb performance

Problem

[Diagram: a single "first copy" of Production must serve Developers, QA and UAT, and Reports]

• CERN, the European Organization for Nuclear Research
  – 145 TB database
  – 75 TB growth each year
  – Dozens of developers want copies (see the storage math below)
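Illustrative math with the numbers above: ten full copies of a 145 TB database would take 10 x 145 TB = 1.45 PB of storage, before the 75 TB/year growth is even counted; hence the interest in thin cloning.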

Page 4: Collaborate vdb performance

[Diagram: Clone 1, Clone 2, and Clone 3: 99% of blocks are identical across the clones]
Page 5: Collaborate vdb performance
Page 6: Collaborate vdb performance

Thin Provision

[Diagram: Clone 1, Clone 2, and Clone 3 share a single set of common blocks]

Page 7: Collaborate vdb performance

2. Thin Provision Cloning

I. clonedb
II. Copy on Write
   a) EMC BCV
   b) EMC SRDF
   c) VMware
III. Allocate on Write
   a) NetApp (EMC VNX)
   b) ZFS
   c) DxFS

Page 8: Collaborate vdb performance

I. clonedb

[Diagram: an RMAN backup serves as the read-only base image; each clone is a sparse file accessed over dNFS]

A minimal usage sketch follows.
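A minimal clonedb sketch (illustrative assumptions, not from the slides: CLONEDB=TRUE is set in the clone's init.ora, the controlfile has already been recreated against the backup image copies, and the file names are hypothetical; in practice Oracle's clonedb.pl generates these steps):

  sqlplus / as sysdba <<EOF
  startup mount
  -- point the clone's sparse datafile, on a dNFS mount, at the RMAN image copy:
  -- reads fall through to the backup, writes land in the sparse file
  exec dbms_dnfs.clonedb_renamefile('/backup/soe01.dbf', '/clone/soe01.dbf');
  alter database open resetlogs;
  EOF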

Page 9: Collaborate vdb performance

III. Allocate on Write: a) NetApp

[Diagram: the production database's LUNs on a NetApp filer are snapshotted at the file-system level; SnapMirror replicates the snapshots to a second filer, where FlexClone creates Clones 1-4 for Targets A, B, and C, orchestrated by Snapshot Manager for Oracle]

Page 10: Collaborate vdb performance

III. Allocate on Write: b) ZFS (Oracle ZFS Storage Appliance + RMAN)

[Diagram: an RMAN copy of the physical database is written to an NFS mount on the ZFS Storage Appliance; a snapshot of that copy is cloned and presented back over NFS to Target A as Clone 1]

Page 11: Collaborate vdb performance

Review: Part I

1. Full Cloning
2. Thin Provision Cloning
   I. clonedb
   II. Copy on Write
   III. Allocate on Write
      a) NetApp (also EMC VNX)
      b) ZFS
      c) DxFS
3. Database Virtualization: SMU, Delphix

Page 12: Collaborate vdb performance

Virtualization Layer

[Diagram: a virtualization layer (SMU, Delphix) sits between storage and the database hosts]

Page 13: Collaborate vdb performance

Virtualization Layer

SMU

Runs on x86 hardware; allocate storage of any type.

The layer could be NetApp, but NetApp is not automated and, AFAIK, NetApp doesn't share blocks in memory.

Page 14: Collaborate vdb performance

One-time backup of the source database

[Diagram: the production instance's database and file system are backed up once into the virtualization layer via the RMAN APIs]

Page 15: Collaborate vdb performance

Delphix compresses the data

[Diagram: the production database's files are ingested into the Delphix file system]

Data is typically compressed to 1/3 of its original size.

Page 16: Collaborate vdb performance

Incremental-forever change collection

[Diagram: changes flow from the production instance's file system into the virtualization layer]

Changes are collected automatically, forever; data older than the retention window is freed.

Page 17: Collaborate vdb performance

Typical Architecture

[Diagram: Production, Development, QA, and UAT each have their own instance, database, and file system, i.e. full separate copies]

Page 18: Collaborate vdb performance

Clones share duplicate blocks

[Diagram: the source database is attached to the production instance over Fibre Channel; the Development, QA, and UAT instances each mount a vDatabase over NFS, and these virtual databases are clone copies of the source that share duplicate blocks]

Page 19: Collaborate vdb performance

Benchmark

IBM 3690 X5, Intel Xeon E7 @ 2.00 GHz, 2 sockets x 10 cores, 256 GB RAM

EMC CLARiiON CX4-120: 3GB read cache, 600MB write cache; 5 x 146GB Seagate ST314635 CLAR146 disks on 4Gb Fibre Channel

Page 20: Collaborate vdb performance

[Diagram: with the database virtualization layer, a 200GB cache sits in front of the 200GB database, backed by the SAN's 3GB cache; without it, the same 200GB database runs directly against the SAN's 3GB cache]

Page 21: Collaborate vdb performance

[Diagram: both databases share the same 200GB cache]

Page 22: Collaborate vdb performance
Page 23: Collaborate vdb performance

Tests with Swingbench

• OLTP on original vs virtual
• OLTP on 2 original vs 2 virtual
• DSS on original vs virtual
• DSS on 2 virtual

Page 24: Collaborate vdb performance

IBM 3690, 256GB RAM

VMware ESX 5.1

1. Install VMware ESX 5.1

Page 25: Collaborate vdb performance

IBM 3690, 256GB RAM

VMware ESX 5.1

EMC CLARiiON, 5 disks striped, 8Gb FC

Page 26: Collaborate vdb performance

IBM 3690, 256GB RAM
VMware ESX 5.1

Linux Source: 20GB RAM, 4 vCPU
Oracle 11.2.0.3

1. Create the Linux host (RHEL 6.2)
2. Install Oracle 11.2.0.3

Page 27: Collaborate vdb performance

IBM 3690, 256GB RAM
VMware ESX 5.1

Linux Source: 20GB RAM, 4 vCPU
Oracle 11.2.0.3
Swingbench: 60 GB dataset, 180 GB datafiles

1. Create the 180 GB Swingbench database

Page 28: Collaborate vdb performance

IBM 3690, 256GB RAM
VMware ESX 5.1

Linux Source: 20GB RAM, 4 vCPU
Delphix: 192 GB RAM, 4 vCPU

1. Install Delphix with 192GB RAM

Page 29: Collaborate vdb performance

IBM 3690, 256GB RAM
VMware ESX 5.1

Linux Source: 20GB RAM, 4 vCPU
Delphix: 192 GB RAM, 4 vCPU

1. Link to the source database over the RMAN APIs (the copy is compressed to 1/3 on average)

Page 30: Collaborate vdb performance

IBM 3690, 256GB RAM, VMware ESX 5.1
Delphix: 192 GB RAM, 4 vCPU

Linux Source: 20GB, 4 vCPU (the "Original")   Linux Target: 20GB, 4 vCPU

1. Provision a "virtual database" on the target Linux host

Page 31: Collaborate vdb performance

IBM 3690, 256GB RAM, VMware ESX 5.1
Delphix: 192 GB RAM, 4 vCPU

Linux Source: 20GB, 4 vCPU   Linux Target: 20GB, 4 vCPU

Benchmark setup ready:
• Run the "physical" benchmark on the source database
• Run the "virtual" benchmark on the target virtual database

Page 32: Collaborate vdb performance

charbench -cs 172.16.101.237:1521:ibm1  # machine:port:SID
          -dt thin                      # driver
          -u soe                        # username
          -p soe                        # password
          -uc 100                       # user count
          -min 10                       # min think time
          -max 200                      # max think time
          -rt 0:1                       # run time
          -a                            # run automatic
          -v users,tpm,tps              # collect statistics

http://dominicgiles.com/commandline.html

Page 33: Collaborate vdb performance

Author  : Dominic Giles
Version : 2.4.0.845

Results will be written to results.xml.

Time        Users     TPM   TPS
3:11:51 PM  [0/30]      0     0
3:11:52 PM  [30/30]    49    49
3:11:53 PM  [30/30]   442   393
3:11:54 PM  [30/30]   856   414
3:11:55 PM  [30/30]  1146   290
3:11:56 PM  [30/30]  1355   209
3:11:57 PM  [30/30]  1666   311
3:11:58 PM  [30/30]  2015   349
3:11:59 PM  [30/30]  2289   274
3:12:00 PM  [30/30]  2554   265
3:12:01 PM  [30/30]  2940   386
3:12:02 PM  [30/30]  3208   268
3:12:03 PM  [30/30]  3520   312
3:12:04 PM  [30/30]  3835   315
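As a quick sanity check on such a run log, the per-second TPS column can be averaged with a one-liner (a generic sketch; results.log is a hypothetical capture of the output above):

  awk '$2 == "PM" {sum += $NF; n++} END {if (n) print "avg TPS:", sum/n}' results.log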

Page 34: Collaborate vdb performance

[Chart: OLTP physical vs virtual, cold cache; x-axis: Users, y-axis: Transactions Per Minute (TPM)]

Page 35: Collaborate vdb performance

[Chart: OLTP physical vs virtual, warm cache; x-axis: Users, y-axis: Transactions Per Minute (TPM)]

Page 36: Collaborate vdb performance

Part Two: 2 physical vs 2 virtual

IBM 3690, 256GB RAM, VMware ESX 5.1
2 Linux Sources (20GB each), Delphix (192 GB RAM), 2 Linux Targets (20GB each)

• 2 source databases
• 2 virtual databases that share the same common blocks

Page 37: Collaborate vdb performance

[Chart: 2 concurrent databases, physical vs virtual; x-axis: Users]

Page 38: Collaborate vdb performance

[Chart: Physical vs virtual, full table scans (DSS); y-axis: seconds]

Page 39: Collaborate vdb performance

[Chart: Two virtual databases, full table scans; y-axis: seconds]

Page 40: Collaborate vdb performance

Problems

Swingbench connections timed out: Java reads /dev/random, which blocks when the entropy pool runs dry. Workaround: replace /dev/random with the non-blocking /dev/urandom:

  rm /dev/random
  ln -s /dev/urandom /dev/random

Couldn't connect via the listener: iptables was blocking the connections. Fix: stop and disable the firewall:

  service iptables stop
  chkconfig iptables off
  iptables -F
  service iptables save
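Before symlinking /dev/random away, it's worth confirming entropy starvation is really the culprit (a generic check, not from the slides):

  cat /proc/sys/kernel/random/entropy_avail   # values near zero mean reads from /dev/random will block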

Page 41: Collaborate vdb performance

Tools: on GitHub

• Oracle
  – oramon.sh : Oracle I/O latency
  – moats.sql : Oracle monitor, by Tanel Poder
• I/O
  – fio.sh : benchmark disks
  – ioh.sh : show NFS, ZFS, I/O latency and throughput
• Network
  – netio : benchmark network (not on GitHub; see also netperf, ttcp)
  – tcpparse.sh : parse tcpdumps

http://github.com/khailey

Page 42: Collaborate vdb performance

MOATS: The Mother Of All Tuning Scripts v1.0 by Adrian Billington & Tanel Poder http://www.oracle-developer.net & http://www.e2sn.com

+ INSTANCE SUMMARY ---------------------------------------------------------------------------------+
| Instance: V1              | Execs/s: 3050.1  | sParse/s: 205.5  | LIOs/s: 28164.9 | Read MB/s: 46.8  |
| Cur Time: 18-Feb 12:08:22 | Calls/s: 633.1   | hParse/s: 9.1    | PhyRD/s: 5984.0 | Write MB/s: 12.2 |
| History: 0h 0m 39s        | Commits/s: 446.7 | ccHits/s: 3284.6 | PhyWR/s: 1657.4 | Redo MB/s: 0.8   |
+----------------------------------------------------------------------------------------------------+
| event name            avg ms    1ms   2ms   4ms  8ms 16ms 32ms 64ms 128ms 256ms 512ms 1s 2s+ 4s+    |
| db file scattered rea   .623      1                                                                  |
| db file sequential re  1.413  13046  8995  2822  916  215    7    1                                  |
| direct path read       1.331     25    13     3    1                                                 |
| direct path read temp  1.673                                                                         |
| direct path write      2.241     15    12    14    3                                                 |
| direct path write tem  3.283                                                                         |
| log file parallel wri                                                                                |
| log file sync                                                                                        |
+----------------------------------------------------------------------------------------------------+

+ TOP SQL_ID (child#) -----+ TOP SESSIONS --------+   + TOP WAITS --------------------+ WAIT CLASS -+
| 19% | ()                 |                      |   | 60% | db file sequential read | User I/O   |
| 19% | c13sma6rkr27c (0)  | 245,147,374,386,267  |   | 17% | ON CPU                  | ON CPU     |
| 17% | bymb3ujkr3ubk (0)  | 131,10,252,138,248   |   | 15% | log file sync           | Commit     |
|  9% | 8z3542ffmp562 (0)  | 133,374,252,250      |   |  6% | log file parallel write | System I/O |
|  9% | 0yas01u2p9ch4 (0)  | 17,252,248,149       |   |  2% | read by other session   | User I/O   |
+--------------------------------------------------+   +-------------------------------------------+

+ TOP SQL_ID ----+ PLAN_HASH_VALUE + SQL TEXT ---------------------------------------------------------------+
| c13sma6rkr27c  | 2583456710      | SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, CATEGORY |
|                |                 | _ID, WEIGHT_CLASS, WARRANTY_PERIOD, SUPPLIER_ID, PRODUCT_STATUS, LIST_P |
+ ------------------------------------------------------------------------------------------------------------ +
| bymb3ujkr3ubk  | 494735477       | INSERT INTO ORDERS(ORDER_ID, ORDER_DATE, CUSTOMER_ID, WAREHOUSE_ID) VAL |
|                |                 | UES (ORDERS_SEQ.NEXTVAL + :B3 , SYSTIMESTAMP , :B2 , :B1 ) RETURNING OR |
+ ------------------------------------------------------------------------------------------------------------ +
| 8z3542ffmp562  | 1655552467      | SELECT QUANTITY_ON_HAND FROM PRODUCT_INFORMATION P, INVENTORIES I WHERE |
|                |                 | I.PRODUCT_ID = :B2 AND I.PRODUCT_ID = P.PRODUCT_ID AND I.WAREHOUSE_ID   |
+ ------------------------------------------------------------------------------------------------------------ +
| 0yas01u2p9ch4  | 0               | INSERT INTO ORDER_ITEMS(ORDER_ID, LINE_ITEM_ID, PRODUCT_ID, UNIT_PRICE, |
|                |                 | QUANTITY) VALUES (:B4 , :B3 , :B2 , :B1 , 1)                             |
+ ------------------------------------------------------------------------------------------------------------ +

MOATS

Page 43: Collaborate vdb performance

RUN_TIME=-1
COLLECT_LIST=
FAST_SAMPLE=iolatency
TARGET=172.16.102.209:V2
DEBUG=0

Connected, starting collect at Wed Dec 5 14:59:24 EST 2012
starting stats collecting

   single block    logfile write    multi block    direct read    direct read temp    direct write temp
   ms     IOP/s    ms     IOP/s     ms     IOP/s   ms     IOP/s   ms        IOP/s     ms         IOP/s
   3.53     .72    16.06    .17     4.64    .00    115.37  3.73   .00           0
   1.66  487.33     2.66  138.50    4.84  33.00    .00            .00           0
   1.71  670.20     3.14  195.00    5.96  42.00    .00            .00           0
   2.19  502.27     4.61  136.82   10.74  27.00    .00            .00           0
   1.38  571.17     2.54  177.67    4.50  20.00    .00            .00           0

   single block    logfile write    multi block    direct read    direct read temp    direct write temp
   ms     IOP/s    ms     IOP/s     ms     IOP/s   ms     IOP/s   ms        IOP/s     ms         IOP/s
   3.22  526.36     4.79  135.55    .00            .00            .00           0
   2.37  657.20     3.27  192.00    .00            .00            .00           0
   1.32  591.17     2.46  187.83    .00            .00            .00           0
   2.23  668.60     3.09  204.20    .00            .00     .00    .00           0

oramon.sh

Page 44: Collaborate vdb performance

Benchmark: network and I/O

[Diagram: the I/O path from Oracle down through NFS, TCP, the network, TCP, NFS, cache/SAN, Fibre Channel, and cache/spindle; netio benchmarks the network leg, fio.sh the disk leg]

Page 45: Collaborate vdb performance

netio

Server: netio -t -s
Client: netio -t server_name

Client send/receive:

Packet size  1k bytes:  51.30 MByte/s Tx,   6.17 MByte/s Rx.
Packet size  2k bytes: 100.10 MByte/s Tx,  12.29 MByte/s Rx.
Packet size  4k bytes:  96.48 MByte/s Tx,  18.75 MByte/s Rx.
Packet size  8k bytes: 114.38 MByte/s Tx,  30.41 MByte/s Rx.
Packet size 16k bytes: 112.20 MByte/s Tx,  19.46 MByte/s Rx.
Packet size 32k bytes: 114.53 MByte/s Tx,  35.11 MByte/s Rx.

Page 46: Collaborate vdb performance

netperf.sh

mss: 1448
local_recv_size  (beg,end): 128000 128000
local_send_size  (beg,end): 49152 49152
remote_recv_size (beg,end): 87380 3920256
remote_send_size (beg,end): 16384 16384

mn_ms av_ms max_ms s_KB r_KB r_MB/s s_MB/s <100u <500u <1ms <5ms <10ms <50ms <100m <1s >1s p90 p99
.08 .12 10.91 15.69 83.92 .33 .38 .01 .01 .12 .54
.10 .16 12.25 8 48.78 99.10 .30 .82 .07 .08 .15 .57
.10 .14 5.01 8 54.78 99.04 .88 .96 .15 .60
.22 .34 63.71 128 367.11 97.50 1.57 2.42 .06 .07 .01 .35 .93
.43 .60 16.48 128 207.71 84.86 11.75 15.04 .05 .10 .90 1.42
.99 1.30 412.42 1024 767.03 .05 99.90 .03 .08 .03 1.30 2.25
1.77 2.28 15.43 1024 439.20 99.27 .64 .73 2.65 5.35

Page 47: Collaborate vdb performance

fio.sh

test      users  size     MB        ms      IOPS  50us 1ms 4ms 10ms 20ms 50ms .1s 1s 2s 2s+
read          1    8K r   28.299     0.271  3622    99   0   0    0
read          1   32K r   56.731     0.546  1815    97   1   1    0    0    0
read          1  128K r   78.634     1.585   629    26  68   3    1    0    0
read          1    1M r   91.763    10.890    91    14  61  14    8    0    0
read          8    1M r   50.784   156.160    50     3  25  31   38    2
read         16    1M r   52.895   296.290    52     2  24  23   38   11
read         32    1M r   55.120   551.610    55     0  13  20   34   30
read         64    1M r   58.072  1051.970    58     3   6  23   66    0
randread      1    8K r    0.176    44.370    22     0   1   5    2   15   42  20  10
randread      8    8K r    2.763    22.558   353     0   2  27   30   30    6   1
randread     16    8K r    3.284    37.708   420     0   2  23   28   27   11   6
randread     32    8K r    3.393    73.070   434     1  20  24   25   12   15
randread     64    8K r    3.734   131.950   478     1  17  16   18   11   33
write         1    1K w    2.588     0.373  2650    98   1   0    0    0
write         1    8K w   26.713     0.289  3419    99   0   0    0    0
write         1  128K w   11.952    10.451    95    52  12  16    7   10    0   0   0
write         4    1K w    6.684     0.581  6844    90   9   0    0    0    0
write         4    8K w   15.513     2.003  1985    68  18  10    1    0    0   0
write         4  128K w   34.005    14.647   272     0  34  13   25   22    3   0
write        16    1K w    7.939     1.711  8130    45  52   0    0    0    0   0   0
write        16    8K w   10.235    12.177  1310     5  42  27   15    5    2   0   0
write        16  128K w   13.212   150.080   105     0   0   3   10   55   26   0   2
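fio.sh benchmarks disks, presumably via the standard fio tool; a raw fio invocation approximating the "randread 8 8K" row might look like this (an assumption, since fio.sh's actual options aren't shown, and the directory is hypothetical):

  fio --name=randread8k --rw=randread --bs=8k --numjobs=8 \
      --size=1g --runtime=60 --time_based --direct=1 \
      --directory=/mnt/external --group_reporting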

Page 48: Collaborate vdb performance
Page 49: Collaborate vdb performance
Page 50: Collaborate vdb performance


Page 51: Collaborate vdb performance

Measurements

[Diagram: the same I/O stack, Oracle, NFS, TCP, network, TCP, NFS, ZFS, cache/spindle, Fibre Channel, cache/spindle; oramon.sh measures at the Oracle layer, ioh.sh at the NFS/ZFS server layer]

Page 52: Collaborate vdb performance

ioh.sh

date: 1335282287 , 24/3/2012 15:44:47
TCP out: 8.107 MB/s, in: 5.239 MB/s, retrans: MB/s ip discards:
--------------- |      MB/s |   avg_ms | avg_sz_kb |  count
----------------|-----------|----------|-----------|--------
R |         io: |     0.005 |    24.01 |     4.899 |      1
R |        zfs: |     7.916 |     0.05 |     7.947 |   1020
C |      nfs_c: |           |          |           |      .
R |        nfs: |     7.916 |     0.09 |     8.017 |   1011
W |         io: |     9.921 |    11.26 |    32.562 |    312
W |    zfssync: |     5.246 |    19.81 |    11.405 |    471
W |        zfs: |     0.001 |     0.05 |     0.199 |      3
W |        nfs: |           |          |           |      .
W |   nfssyncD: |     5.215 |    19.94 |    11.410 |    468
W |   nfssyncF: |     0.031 |    11.48 |    16.000 |      2

[Diagram: ioh.sh reports the NFS, ZFS, and cache/SAN layers on both the read and write paths]

Page 53: Collaborate vdb performance

[Diagram: the I/O path from Oracle through NFS, TCP, the network, TCP, NFS, down to cache/SAN, Fibre Channel, and cache/spindle]

Measured latency by layer:

              Linux (ms)   Solaris (ms)
Oracle            58           47
NFS/TCP            ?            ?
Network            ?            ?
TCP/NFS            ?            ?
NFS server        .1            2

Page 54: Collaborate vdb performance

snoop / tcpdump

[Diagram: TCP traffic is captured on both sides of the network, with snoop on the virtualization layer's NFS server and tcpdump on the Oracle NFS client]

Page 55: Collaborate vdb performance

Wireshark: analyze TCP dumps
• yum install wireshark
• wireshark + perl: find the NFS requests common to the NFS client trace and the NFS server trace, then display the times seen by the NFS client, the NFS server, and the delta between them

https://github.com/khailey/tcpdump/blob/master/parsetcp.pl
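A capture session for this comparison might look like the following (a sketch: interface names are hypothetical, and parsetcp.pl's exact arguments are an assumption based on the trace file names on the next slide):

  # on the NFS client (Linux):
  tcpdump -i eth0 -s 0 -w client.cap port 2049
  # on the NFS server (Solaris):
  snoop -d nxge0 -o nfs_server.cap port 2049
  # then compare the two traces:
  perl parsetcp.pl nfs_server.cap client.cap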

Page 56: Collaborate vdb performance

Parsing nfs server trace: nfs_server.cap
type     avg ms   count
READ :   44.60,   7731

Parsing client trace: client.cap
type     avg ms   count
READ :   46.54,   15282

==================== MATCHED DATA ============
READ
type       avg ms
server :   48.39,
client :   49.42,
diff   :    1.03,
Processed 9647 packets (Matched: 5624  Missed: 4023)

Page 57: Collaborate vdb performance

Parsing NFS server trace: nfs_server.cap
type     avg ms   count
READ :   1.17,    9042

Parsing client trace: client.cap
type     avg ms   count
READ :   1.49,    21984

==================== MATCHED DATA ============
READ
type       avg ms
server :   1.03
client :   1.49
diff   :   0.46

Page 58: Collaborate vdb performance

Tool and latency data source per layer:

                        Oracle on Linux   Oracle on Solaris   tool          latency data source
Oracle                  58 ms             47 ms               oramon.sh     "db file sequential read" wait (basically a timing of "pread" for 8k random reads)
TCP trace, NFS client   1.5 ms            45 ms               tcpparse.sh   tcpdump on Linux, snoop on Solaris
Network                 0.5 ms            1 ms                              delta between client and server traces
TCP trace, NFS server   1 ms              44 ms               tcpparse.sh   snoop
NFS server              .1 ms             2 ms                DTrace        dtrace nfs:::op-read-start / op-read-done

Note the Linux gap between Oracle (58 ms) and the client TCP trace (1.5 ms): the time is being lost in the Linux NFS client, which motivates the RPC queue fix on the next slide.

Page 59: Collaborate vdb performance

Issues: Linux RPC queue

On Linux, in /etc/sysctl.conf set

sunrpc.tcp_slot_table_entries = 128

then apply it:

sysctl -p

then check the setting:

sysctl -A | grep sunrpc

NFS partitions will have to be unmounted and remounted. The setting is not persistent across reboot.
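One way to make the slot-table setting survive a reboot (an assumption, not from the slides) is to set it as a module option so it is applied whenever the sunrpc module loads:

  echo "options sunrpc tcp_slot_table_entries=128" > /etc/modprobe.d/sunrpc.conf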

Page 60: Collaborate vdb performance

Issues: Solaris NFS server threads

sharectl get -p servers nfs
sharectl set -p servers=512 nfs
svcadm refresh nfs/server

Page 61: Collaborate vdb performance

Linux tools: iostat.py

$ ./iostat.py -1

172.16.100.143:/vdb17 mounted on /mnt/external:

   op/s   rpc bklog
   4.20        0.00

read:  ops/s   kB/s   kB/op   retrans    avg RTT (ms)  avg exe (ms)
       0.000  0.000   0.000   0 (0.0%)   0.000         0.000
write: ops/s   kB/s   kB/op   retrans    avg RTT (ms)  avg exe (ms)
       0.000  0.000   0.000   0 (0.0%)   0.000         0.000

Page 62: Collaborate vdb performance

Memory Prices

• EMC sells memory at ~$1000/GB
• x86 memory costs ~$30/GB

• A TB of RAM on an x86 host costs around $32,000
• A TB of RAM on an EMC VMAX 40K costs around $1,000,000 (see the arithmetic below)
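A quick check of those round numbers: 1 TB = 1024 GB, so 1024 GB x $30/GB ≈ $31K for x86 host RAM, versus 1024 GB x $1000/GB ≈ $1M on the array.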

Page 63: Collaborate vdb performance

Memory on Hosts

[Diagram: 200GB of RAM on each of five hosts]

Page 64: Collaborate vdb performance

Memory on SAN

[Diagram: 1000 GB of memory on the SAN]

Page 65: Collaborate vdb performance

Memory on Virtualization Layer

[Diagram: a single 200GB cache shared through the virtualization layer]

Page 66: Collaborate vdb performance

Memory Location vs Price vs Perf

                memory     price     speed      notes
Hosts           1000 GB    $32K      < 1 us     off-loads the SAN
Virtual layer    200 GB    $6K       < 500 us   off-loads the SAN; shared disk; fast clone
SAN             1000 GB    $1000K    < 100 us

72% of all Delphix customers run databases of 1TB or less, and on those databases the buffer cache represents about 0.5% of the database size, roughly 5GB.

Page 67: Collaborate vdb performance

IBM 3690, 256GB RAM, VMware ESX 5.1
2 Linux Sources (20GB each), Delphix (192 GB RAM), 2 Linux Targets (20GB each)

• Leverage new solid-state storage more efficiently
• Smaller space footprint

Page 68: Collaborate vdb performance

Oracle 12c

Page 69: Collaborate vdb performance

80MB buffer cache ?

Page 70: Collaborate vdb performance

[Chart: Transactions/min (y-axis to 5000) and latency (y-axis to 300 ms) vs. users (1 to 200)]

Page 71: Collaborate vdb performance

200GB Cache

Page 72: Collaborate vdb performance

[Chart: Transactions/min (y-axis to 5000) and latency (y-axis to 300 ms) vs. users (1 to 200)]

Page 73: Collaborate vdb performance

200GB Cache

Page 74: Collaborate vdb performance

[Chart: Transactions/min (y-axis to 8000) and latency (y-axis to 600 ms) vs. users (1 to 200)]