Data Life Cycle Management for Oracle @ CERN with partitioning, compression, and archive. Luca Canali, CERN. Orcan Conference, Stockholm, May 2010


Page 1:

Data Life Cycle Management for Oracle @ CERN with partitioning, compression, and archive

Luca Canali, CERN
Orcan Conference, Stockholm, May 2010

Page 2:

Outline

Physics DB Services at CERN
Motivations for Data Life Cycle Management activities
Techniques used
Sharing our experience with examples

Page 3:

LHC data

LHC data correspond to about 20 million CDs each year!
(Figure: a CD stack with 1 year of LHC data would be ~20 km tall; for scale, a balloon reaches ~30 km, Concorde flies at ~15 km, and Mt. Blanc is 4.8 km high)

RDBMS play a key role for the analysis of LHC data

Page 4:

Databases and LHC

Relational DBs play today a key role in the LHC production chains: online acquisition, offline production, data (re)processing, data distribution, analysis
• SCADA, conditions, geometry, alignment, calibration, file bookkeeping, file transfers, etc.

Grid Infrastructure and Operation services
• Monitoring, Dashboards, User-role management, ...

Data Management Services
• File catalogues, file transfers and storage management, ...

Metadata and transaction processing for the custom tape storage system of physics data

Accelerator logging and monitoring systems

Page 5:

The RDBMS Workload

Most applications are of OLTP type
Oracle used mainly for its transactional engine
Concurrency and multi-user environment

Index-based access paths and nested-loop (NL) joins very important

Sequential workloads / DW-type queries are not the main use case

Another way of looking at it: the RDBMS stores 'metadata'

Page 6:

Page 7:

CERN Databases in Numbers

CERN database services – global numbers:
Global user community of several thousand users
~100 Oracle RAC database clusters (2–6 nodes)
Currently over 3300 disk spindles providing more than 1 PB raw disk space (NAS and SAN)

Some notable DBs at CERN:
Experiment databases – 13 production databases
• Currently between 1 and 9 TB in size
• Expected growth between 1 and 19 TB / year
LHC accelerator logging database (ACCLOG) – ~30 TB
• Expected growth up to 30 TB / year
... several more DBs in the range 1–2 TB

Page 8:

Data Lifecycle Management

Motivated by large data volumes produced by LHC experiments

Large amounts of data will be collected and stored for several years

Different requirements on performance and SLA can often be found for 'current' and 'old' data sets

Proactively attack 'issues' of databases that grow 'too large':
Administration
Performance
Cost

Page 9:

Digression

What is a VLDB? "... VLDB means bigger than you are comfortable managing" (Cary Millsap)

Part of it has to do with the sheer DB size. The threshold seems to be moving with time and technology progress

Not too long ago 1 TB Oracle DBs were classed as high end...

Page 10:

Administration of VLDB

VLDB and consolidation advantages:
Data consolidation, application consolidation
Some data sets are very large by nature

VLDB and consolidation disadvantages:
DB-wide operations can become slow (backup, stats gathering, full scans of the largest tables)
Dependencies between applications

Task: identify how to make the advantages of consolidation coexist with the idea of having 'lean' DBs for manageability and performance

Page 11:

Attack the problem from multiple sides

No out-of-the-box solutions available. Attack the problem where possible:
Applications
Oracle and DB features
HW architecture

Application layer:
focus on discussing with developers
build life cycle concepts into the applications

Oracle layer:
Leverage partitioning and compression
Movement of data to an external 'archival DB'

Page 12:

Commodity HW

Dual-CPU quad-core DELL 2950 servers, 16 GB memory, Intel 5400-series "Harpertown", 2.33 GHz clock
Dual power supplies, mirrored local disks, 4 NICs (2 private / 2 public), dual HBAs, "RAID 1+0 like" with ASM

Page 13:

High-capacity storage, resiliency, and low cost

Low-cost HA storage with ASM
Latest HW acquisition:
492 disks of 2 TB each -> almost 1 PB of raw storage
SATA disks for price/performance and high capacity
ASM can take care of mirroring
Destroking can be used (external part of the disks for data)

(Diagram: two ASM disk groups, DATA_DG1 and RECO_DG1, each mirrored across four failgroups; a creation sketch follows below)
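For illustration only (disk paths and group names are hypothetical, not the actual CERN setup), an ASM disk group mirrored across failgroups can be created like this:

  -- Normal redundancy: ASM keeps two copies of each extent,
  -- always placed in different failgroups.
  CREATE DISKGROUP DATA_DG1 NORMAL REDUNDANCY
    FAILGROUP failgroup1 DISK '/dev/mapper/disk01', '/dev/mapper/disk02'
    FAILGROUP failgroup2 DISK '/dev/mapper/disk03', '/dev/mapper/disk04'
    FAILGROUP failgroup3 DISK '/dev/mapper/disk05', '/dev/mapper/disk06'
    FAILGROUP failgroup4 DISK '/dev/mapper/disk07', '/dev/mapper/disk08';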

Page 14:

Backup challenges

Backup/recovery over LAN becoming a problem with databases exceeding tens of TB
Days required to complete a backup or recovery
Some storage managers support so-called LAN-free backup
• Backup data flows to tape drives directly over SAN
• Media management server used only to register backups
• Very good performance observed during tests (FC saturation, e.g. 400 MB/s)
Alternative: using 10 Gb Ethernet

(Diagram: backup data flows from the database server to the tape drives directly over FC, while only metadata goes to the media manager server over 1 GbE)

Page 15:

Application Layer

Data Life Cycle policies cannot easily be implemented from the DBA side only

We make sure to discuss with application developers and application owners:
To reduce the amount of data produced
To allow for a DB structure that can more easily allow archiving
To define data availability agreements for online data and archive
Joint sessions to identify how to leverage Oracle features

Page 16:

Use Case: Transactional Application with Historical Data

Data has an active part (high DML activity)
Older data is made read-only (or read-mostly)
As data ages, it becomes less and less used

Page 17:

Active Dataset

Many Physics applications are structured as write-once read-many
At a given time, typically only a subset of the data is actively used
Natural optimization: having large amounts of data that are set read-only
Can be used to simplify administration
Replication and backup can profit too

Problem: not all apps are ready for this type of optimization

Page 18:

Time-Organized Data

Several key database tables are naturally time-organized
This leads to range-based partitioning
Another solution is a 'manual split', i.e. multiple similar tables in different schemas

Advantages:
Partitions can be treated as separate tables for bulk operations
Full scan operations, if they happen, do not span all tables

Page 19:

Techniques: Oracle Partitioning

Range partitioning on timestamp attributes
Note: unique indexes and local partitioning – the partitioning key must be part of the index
Partitions for 'future time ranges':
Currently pre-allocated
11g interval partitions will come in handy
11g reference partitioning:
Not used yet although interesting, will be tested
(a sketch of a range-partitioned table follows below)
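As a minimal sketch of the technique (table and column names are hypothetical, not a CERN schema): a time-range-partitioned table with a local unique index; the 11g INTERVAL clause removes the need to pre-allocate partitions for future time ranges:

  -- Range partitioning by time; with INTERVAL (11g) Oracle creates
  -- new monthly partitions automatically as data arrives.
  CREATE TABLE event_history (
      event_id    NUMBER,
      event_time  DATE NOT NULL,
      payload     VARCHAR2(4000)
  )
  PARTITION BY RANGE (event_time)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION p_2010_01 VALUES LESS THAN (TO_DATE('2010-02-01', 'YYYY-MM-DD')));

  -- For a unique index to be locally partitioned, the partitioning
  -- key must be part of the index key.
  CREATE UNIQUE INDEX event_history_uk
      ON event_history (event_time, event_id) LOCAL;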

Page 20:

Partitioning Issues

Index strategy:
Indexes need to be locally partitioned in the ideal case, to fully make use of 'partition isolation'
Not always possible, depends on the application
Sometimes global partitioning is better for performance

Data movement issues:
Using 'transportable tablespaces' for single partitions is not straightforward

Query tuning:
App owners and DBAs need to make sure there are no 'stray queries' that run over multiple partitions by mistake (see the sketch below)
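One way to spot such stray queries (a sketch, reusing the hypothetical event_history table above) is to check the Pstart/Pstop columns of the execution plan:

  -- A predicate on the partitioning key enables partition pruning;
  -- without it the plan shows PARTITION RANGE ALL.
  EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM event_history
    WHERE event_time >= TO_DATE('2010-04-01', 'YYYY-MM-DD')
      AND event_time <  TO_DATE('2010-05-01', 'YYYY-MM-DD');

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);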

Page 21:

'Manual' Partitioning

Range partitioning obtained by creating multiple schemas and sets of tables
Flexible, does not require the partitioning option
• And is not subject to partitioning limitations
More work goes into the application layer
• The application needs to keep track of a 'catalog' of partitions

CERN production examples:
PVSS (commercial SCADA system)
COMPASS (custom development at CERN)

Page 22:

Details of PVSS

The main 'event tables' are monitored to stay within a configurable maximum size
A new table is created after the size threshold is reached
PVSS metadata keep track of current and historical tables
Additional partitioning by list on sys_id for insert performance

Historical data can be post-processed with compression

Current size (Atlas offline): 4.6 TB

Page 23:

Details of COMPASS

Each week of data is a separate table
In a separate schema too
• DST and RAW also separated

IOTs (index-organized tables) used
Up to 4.6 billion rows per table
Key compression used for the IOTs

Current size: ~10 TB

Page 24:

Details of PANDA Archive

Migration to Oracle:
The 'jobsarchived' table consolidates many smaller tables previously in MySQL
Historical data coming from production
Range partitioning by time
One partition per month
One tablespace per year

Performance:
Application modified to add a time range in all queries -> to use partition pruning
All indexes are local
Compromise change -> the unique index on panda_id changed to be non-unique

Page 25:

Details of MDT DCS

DATA tables:
Contain live and historical data
Range partitioned, 1 partition per month
Additional table compression for historical data

Indexes:
The primary key is locally partitioned (prefixed with the time column)
Other indexes are local too
Key compression on indexes

Current size: ~300 GB

Page 26:

Details of LCG SAME and GRIDVIEW

Critical tables are partitioned
Range partitioned using a timestamp
1 partition per month
Contain live and historical data
All indexes are local
LCG_SAME makes use of partitioned LOBs

Current size: 1.7 TB for LCG_SAME and 1.2 TB for GRIDVIEW

Page 27:

Techniques: Schema Reorganization

When downtime of part of the application can be afforded:
ALTER TABLE ... MOVE (or CTAS)
ALTER INDEX ... REBUILD

More sophisticated: DBMS_REDEFINITION (see the sketch below)
Allows reorganization of tables online (adding partitioning, for example)
User experience: works well, but it has let us down a couple of times in the presence of high transaction rates
• hard to debug and test ahead
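As an illustrative sketch (schema SCOTT, table MYTABLE, and the pre-created partitioned interim table MYTABLE_PART are hypothetical), an online redefinition follows this pattern:

  BEGIN
    -- Check the table can be redefined online using its primary key.
    DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'MYTABLE',
        DBMS_REDEFINITION.CONS_USE_PK);
    -- Start copying the data into the partitioned interim table.
    DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'MYTABLE', 'MYTABLE_PART');
    -- (Dependent objects - indexes, constraints, grants - are copied
    -- with COPY_TABLE_DEPENDENTS; omitted here for brevity.)
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'MYTABLE', 'MYTABLE_PART');
    -- Swap the two tables; only a short lock is needed at the end.
    DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'MYTABLE', 'MYTABLE_PART');
  END;
  /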

Page 28:

Use Case: Archive DB

Move data from production to a separate archive DB
Cost reduction: the archive DB is sized for capacity instead of IOPS
Maintenance: reduces the impact of production DB growth
Operations: the archive DB is less critical for HA than production

Page 29:

Archive DB Service

Proposal of an archive DB:
First presented Q4 2008, in production Q4 2009
It's an additional DB service to archive pieces of applications
Data in the archive DB is mainly for read-only workload (with lower performance)

How to move data to the archive, and possibly back in case of restore?
Most details depend on the application owner
It's complicated by referential constraints
Area of custom work, still in progress

Page 30:

Archive DB in Practice

Detach 'old' partitions from prod and load them on the archive DB (see the sketch below)
Can use partition exchange to table
Transportable tablespaces are also a tool that can help
Archive DB post-move jobs can implement compression for archive (Exadata, 11gR2)
Post-move jobs may be implemented to drop indexes

Difficult point:
One needs to move a consistent set of data
Applications need to be developed to support this move
• Access to archived data needs to be validated with application owners/developers
• New releases of the software need to be able to read archived data
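A minimal sketch of the partition-exchange step (table and partition names are hypothetical): the 'old' partition is swapped with an empty staging table, which can then be moved to the archive DB (e.g. with data pump or transportable tablespaces):

  -- Empty staging table with the same structure as the partitioned table.
  CREATE TABLE event_history_2010_01 AS
      SELECT * FROM event_history WHERE 1 = 0;

  -- Swap the partition's data segment with the staging table.
  ALTER TABLE event_history
      EXCHANGE PARTITION p_2010_01
      WITH TABLE event_history_2010_01
      WITHOUT VALIDATION;

  -- The partition is now empty and can be dropped from production.
  ALTER TABLE event_history DROP PARTITION p_2010_01;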

Page 31:

Example of Data Movement

Page 32:

Techniques: Data Movement

expdp/impdp:
Very useful, although performance issues were found
impdp over a DB link in particular

Partitioning and data movement:
Exchange partition with table

Transportable tablespaces:
Very fast Oracle-to-Oracle data movement
Requires the tablespace to be set read-only
• Can be a problem in production
Workarounds:
• Use of a standby or a restored backup to move data from (see the sketch below)
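In outline, a transportable-tablespace move looks like this (a hedged sketch; tablespace name, directory object, and file paths are hypothetical, and the full procedure has more steps):

  -- On the source: set the tablespace read-only, export its metadata.
  ALTER TABLESPACE arch_2010 READ ONLY;
  -- $ expdp system DIRECTORY=dp_dir DUMPFILE=arch_2010.dmp
  --     TRANSPORT_TABLESPACES=arch_2010

  -- Copy the datafiles to the destination, then plug in the metadata:
  -- $ impdp system DIRECTORY=dp_dir DUMPFILE=arch_2010.dmp
  --     TRANSPORT_DATAFILES='/oradata/archdb/arch_2010_01.dbf'

  -- On the destination, if the data must be writable:
  ALTER TABLESPACE arch_2010 READ WRITE;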

Page 33:

Compression

The 'high level' view:
Databases are growing fast, beyond the TB scale
CPUs are becoming faster
There is an opportunity to reduce storage cost by using compression techniques
Gaining in performance while doing that too
Oracle provides compression in the RDBMS

Page 34:

Making it Work in the Real World

Evaluate gains case by case:
Not all applications can profit
Not all data models can allow for it
Compression can give significant gains for some applications
In some other cases applications can be modified to take advantage of compression

Comment:
Our experience of deploying partitioning goes along the same track

Implementation involves developers and DBAs

Page 35:

Evaluating Compression Benefits

Compressing segments in Oracle:
Save disk space
• Can save cost in HW
• Beware that capacity is often not as important as the number of disks, which determines max IOPS
Compressed segments need fewer blocks, so:
• Less physical IO required for full scans
• Less logical IO / less space occupied in the buffer cache
• Beware: compressed segments will make you consume more CPU

Page 36:

Compression and Expectations

Can a 10 TB DB be shrunk to 1 TB of storage with 10x compression? Not really, unless one can:
get rid of indexes (!)
be data-warehouse-like, with only FULL SCAN operations
have data that is very rarely read (data on demand, almost taken offline)

Licensing costs:
The Advanced Compression option is required for anything but basic compression
Exadata storage is required for hybrid columnar compression

Page 37:

Archive DB and Compression

Non-active data can be compressed:
To save storage space
In some cases to speed up queries doing full scans
Compression can be applied as post-processing (see the sketch below)
• On read-mostly data partitions (Ex: Atlas' PVSS, Atlas MDT DCS, LCG's SAME)
• With ALTER TABLE ... MOVE or online redefinition

Active data:
Not compressed, or compressed for OLTP (11g)
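A sketch of that post-processing step (names hypothetical, reusing the event_history example): a historical partition is rebuilt compressed, after which its local index partitions must be rebuilt:

  -- Rebuild one historical partition with basic table compression.
  ALTER TABLE event_history
      MOVE PARTITION p_2010_01 COMPRESS;

  -- Moving a partition leaves its local index partitions unusable;
  -- rebuild them (local index partitions default to the table
  -- partition's name).
  ALTER INDEX event_history_uk
      REBUILD PARTITION p_2010_01;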

Page 38:

Data Archive and Indexes

Indexes do not compress well
Drop indexes in the archive when possible
Risk: the archive compression factor is dominated by index segments

Important details when using partitioning:
Locally partitioned indexes preferred
• for ease of maintenance and performance
• Note limitation: the columns in unique indexes need to be a superset of the partitioning key
• May require some index changes for the archive
– Disable PKs in the archive table (Ex: Atlas PANDA)
– Or change the PK to add the partitioning key

Page 39:

Oracle Segment Compression – What is Available

Heap table compression:
Basic (from 9i)
For OLTP (from 11gR1)
Hybrid columnar (11gR2, Exadata)

Other compression technologies:
Index compression
• Key factoring
• Applies also to IOTs
SecureFiles (LOB) compression
• 11g compression and deduplication
Compressed external tables (11gR2)
Details not covered here; the syntax variants are sketched below
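For reference, a sketch of the corresponding syntax (table names hypothetical; availability as listed above, with the FOR QUERY/ARCHIVE variants requiring Exadata storage):

  -- Heap table compression variants.
  CREATE TABLE t_basic (x NUMBER) COMPRESS BASIC;            -- basic
  CREATE TABLE t_oltp  (x NUMBER) COMPRESS FOR OLTP;         -- 11gR1
  CREATE TABLE t_hccq  (x NUMBER) COMPRESS FOR QUERY HIGH;   -- 11gR2 Exadata
  CREATE TABLE t_hcca  (x NUMBER) COMPRESS FOR ARCHIVE HIGH; -- 11gR2 Exadata

  -- Index key compression (key factoring; also applies to IOTs).
  CREATE INDEX t_basic_ix ON t_basic (x) COMPRESS 1;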

Page 40:

Table Compression In Practice

Measured compression factors for tables:
About 3x for BASIC and OLTP
• In prod at CERN, examples: PVSS, LCG SAME, Atlas TAGs, Atlas MDT DCS
10-20x for hybrid columnar (archive), more details in the following

Compression can be of help for the use cases described above
Worth investigating the technology more
Compression for archive is very promising

Page 41:

Compression for Archive – Some Tests and Results

Tests of hybrid columnar compression

Exadata V1, ½ rack
Oracle 11gR2
Courtesy of Oracle
Remote access to a test machine in Reading (Q3 2009)

Page 42:

Advanced Compression Tests

Representative subsets of data from production exported to the Exadata V1 machine. Applications:
PVSS (slow control system for the detector and accelerator)
GRID monitoring applications
File transfer applications (PANDA)
Log application for ATLAS
Exadata machine accessed remotely in Reading, UK, for a 2-week test

Tests focused on:
OLTP and hybrid columnar compression factors
Query speedup

Page 43:

Hybrid Columnar Compression on Oracle 11gR2 and Exadata

(Chart: measured compression factors for selected Physics applications, comparing no compression, OLTP compression, and hybrid columnar compression for Query Low, Query High, Archive Low, and Archive High. Datasets: PVSS (261M rows, 18 GB), LCG GRID Monitoring (275M rows, 7 GB), LCG TESTDATA 2007 (103M rows, 75 GB), ATLAS PANDA FILESTABLE (381M rows, 120 GB), ATLAS LOG MESSAGES (323M rows, 66 GB). Data from Svetozar Kapusta – Openlab (CERN).)

Page 44:

Full Scan Speedup – a Basic Test

Full table scan speedup of compressed tables for count(*) operation (speed=1 for ‘no compression’)

(Chart: full-table-scan speedup for count(*) on compressed tables, relative to no compression, for BASIC, OLTP, QUERY LOW, QUERY HIGH, ARCHIVE LOW, and ARCHIVE HIGH. Datasets: PVSS (261M rows, 18 GB), LCG GRID Monitoring (275M rows, 7 GB), ATLAS PANDA FILESTABLE (381M, 120 GB), ATLAS LOG MESSAGES (323M rows, 78 GB), LCG TESTDATA (103M rows, 75 GB). Data from Svetozar Kapusta – Openlab (CERN).)

Page 45:

IO Reduction for Full Scan Operations on Compressed Tables

Hypothetical full-scan speedup for count(*) operations, obtained by disabling cell offloading in Exadata.

(Chart: hypothetical full-scan speedup for count(*) operations, relative to no compression, for BASIC, OLTP, QUERY LOW, QUERY HIGH, ARCHIVE LOW, and ARCHIVE HIGH. Datasets: PVSS (261M rows, 18 GB), LCG GRID Monitoring (275M rows, 7 GB), ATLAS PANDA FILESTABLE (381M, 120 GB), LCG TESTDATA (103M rows, 75 GB), ATLAS LOG MESSAGES (323M rows, 78 GB). Data from Svetozar Kapusta – Openlab (CERN).)

Page 46:

Time to Create Compressed Tables

Table creation time for various compression types of various physics applications. T = 1 for ‘no compression’.

(Chart: table creation time factors for OLTP, BASIC, QUERY LOW, QUERY HIGH, ARCHIVE LOW, and ARCHIVE HIGH compression, relative to no compression. Datasets: PVSS (261M rows, 18 GB), LCG GRID Monitoring (275M rows, 7 GB), LCG TESTDATA (103M rows, 75 GB), ATLAS PANDA FILESTABLE (381M, 120 GB), ATLAS LOG MESSAGES (323M rows, 78 GB). Data from Svetozar Kapusta – Openlab (CERN).)

Page 47:

A Closer Look at Hybrid Columnar Compression

Data is stored in compression units (CUs), a collection of blocks (around 32K)
Each compression unit stores data internally 'by column': this enhances compression

(Diagram: a logical compression unit, with a CU header followed by data stored column-by-column (C1..C8) across several blocks, each with its own block header)


Picture and info from: B. Hodak, Oracle (OOW09 presentation on OTN).

Page 48:

Compression Factors

An artificial test to put the compression algorithms to work
Two different types of test table:
Constant: each row contains a repetition of a given string
Random: each row contains a random string
100M rows of about 200 bytes each
Details of the test in the notes for this slide; a sketch follows below
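The exact test is in the slide notes (not reproduced here); a minimal sketch of the idea, with the row count scaled down and the compression type chosen arbitrarily (FOR ARCHIVE HIGH requires Exadata), could look like this:

  -- 'Constant' rows: highly repetitive data, compresses extremely well.
  CREATE TABLE t_constant COMPRESS FOR ARCHIVE HIGH AS
      SELECT RPAD('x', 200, 'x') AS val
      FROM dual CONNECT BY LEVEL <= 100000;

  -- 'Random' rows: high-entropy data, barely compresses at all.
  CREATE TABLE t_random COMPRESS FOR ARCHIVE HIGH AS
      SELECT DBMS_RANDOM.STRING('p', 200) AS val
      FROM dual CONNECT BY LEVEL <= 100000;

  -- Compare the space used by the two segments.
  SELECT segment_name, blocks
  FROM   user_segments
  WHERE  segment_name IN ('T_CONSTANT', 'T_RANDOM');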

Page 49:

Compression Factors

Table Type      Compression Type        Blocks Used   Comp Factor
Constant Table  no compression          2637824       1
Constant Table  comp basic              172032        15.3
Constant Table  comp for oltp           172032        15.3
Constant Table  comp for archive high   3200          824.3
Constant Table  comp for query high     3200          824.3
Random Table    no compression          2711552       1
Random Table    comp basic              2708352       1.0
Random Table    comp for oltp           2708352       1.0
Random Table    comp for archive high   1277952       2.1
Random Table    comp for query high     1449984       1.9

Page 50:

Hybrid Columnar and gzip

Compression for archive reaches high compression. How does it compare with gzip?
A simple test to give a 'rough idea'
Test: used a table populated with dba_objects
• Results: ~20x compression in both cases

Method                     Uncompressed     Compressed     Ratio
gzip -9                    13763946 bytes   622559 bytes   22
compress for archive high  896 blocks       48 blocks      19

Page 51:

Compression and DML

What happens when running row updates on compressed tables? What about locks?

BASIC and OLTP:
The updated row stays in the compressed block
'Usual' Oracle row-level locks

Hybrid columnar:
The updated row is moved, as in a delete + insert
• How to see that? With the dbms_rowid package (see the sketch below)
The new row is OLTP compressed
The lock affects the entire CU that contains the row
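A sketch of that check (hypothetical table t_hcc with a numeric id column): if the block number changes after the update, the row was physically moved:

  -- Record the row's physical location before the update.
  SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block#,
         DBMS_ROWID.ROWID_ROW_NUMBER(rowid)   AS row#
  FROM   t_hcc WHERE id = 42;

  UPDATE t_hcc SET val = 'updated' WHERE id = 42;
  COMMIT;

  -- Re-run the location query: on hybrid columnar tables the block
  -- number typically changes (delete + insert), unlike BASIC/OLTP.
  SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block#,
         DBMS_ROWID.ROWID_ROW_NUMBER(rowid)   AS row#
  FROM   t_hcc WHERE id = 42;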

Page 52:

Compression and Single-row Index Range Scan Access

                       Consistent gets   Consistent gets
Table Compr Type       (select *)        (select owner)
no compression         4                 4
basic compression      4                 4
comp for oltp          4                 4
comp for query high    9                 5
comp for query low     9                 5
comp for archive low   10                5
comp for archive high  24                5

Test SQL: select from a copy of dba_objects with an index on object_id
Predicate: where object_id = 100
Note: 'owner' is the first column of the test table
(a sketch of the test setup follows below)
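A sketch of the test setup as described above (assumed details; SET AUTOTRACE is a SQL*Plus command):

  CREATE TABLE t_test COMPRESS FOR QUERY HIGH AS
      SELECT * FROM dba_objects;
  CREATE INDEX t_test_ix ON t_test (object_id);

  -- Show consistent gets for each query.
  SET AUTOTRACE TRACEONLY STATISTICS
  SELECT *     FROM t_test WHERE object_id = 100;
  SELECT owner FROM t_test WHERE object_id = 100;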

Page 53:

Conclusions

Data Life Cycle Management experience at CERN:
Proactively address the issues of growing DBs: manageability, performance, cost
Involvement of application owners is fundamental

Techniques within Oracle that can help:
Partitioning
Archival DB service
Compression

Page 54:

Acknowledgments

CERN-IT DB group and in particular: Jacek Wojcieszuk, Dawid Wojcik, Maria Girone

Oracle, for the opportunity of testing Oracle Exadata: Monica Marinucci, Bill Hodak, Kevin Jernigan

More info:
http://cern.ch/it-dep/db
http://cern.ch/canali
