Exadata Snapshot Databases
Peter Brink
DOAG 2018, November 2018
Agenda
§ Database Virtualization
§ Exadata Test Environments
§ Exadata Snapshot Databases (12.1)
§ Exadata Snapshot Databases (12.2)
§ Sparse Clones in the Cloud
§ Snapshot Performance
§ Discussion
§ Appendix
  § Pre-Requisites
  § Creating and Managing Snapshot Databases (12.1)
DOAG 2018, Nuremberg, November 2018 Peter Brink 2
Database Virtualization
Database copies are needed to support the business and efficient development, but are hard to sustain using traditional database cloning methods.
Providing an adequate number of database copies increases productivity, shortens time to market and improves product quality.
Multiple DB copies
§ INCREASED PRODUCTIVITY
  • Avoid sharing between project streams
  • DBs for the business to test
§ INCREASED QUALITY
  • Databases for unit test, CI, QA, UAT etc.
  • Reproduce production behaviour
§ REDUCED COST
  • Faster time to market
  • Fewer bugs, less support

Challenges with traditional database refreshes
§ DECREASED PRODUCTIVITY
  • Time and effort for database refreshes
§ DECREASED QUALITY
  • Sharing of DBs impacts users
  • Data subsetting reduces quality
§ INCREASED COST
  • High storage cost
  • Higher development and support cost
  • Opportunity cost
[Chart: productivity, quality and profit gains across environments (UAT, CCI, EII, FRS, Dev)]
Database Virtualization
Technologies offering thin provisioning with a copy-on-write paradigm provide a fast, cost-efficient method for database cloning, increasing productivity.
§ Most data in test environments is identical; there is no need for multiple copies
§ Only data that is unique to a test database gets written
§ Fast provisioning
§ Enables self-service
§ Multiple different hardware and software options, e.g. NetApp, ZFS-SA, Delphix, EMC, CloneDB
Solution: Thin Provisioning
[Diagram: a Clone Master holding Blocks 1, 2 and 3 is shared by Clone DB 1 and Clone DB 2. Annotations: "Block 1 CM - Block 1 updated on Clone Master", "Block 3 C1 - Block 3 updated on Clone 1", "Block 4 C1 - Block 4 written on Clone 1".]
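The copy-on-write idea behind thin provisioning can be sketched as a toy model (plain Python, not Oracle's implementation; class and block names are illustrative only): reads fall through to the shared clone master until a block is changed or newly written on the clone.

```python
# Toy model of copy-on-write thin provisioning (illustrative only,
# not Oracle's implementation).
class CloneMaster:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block number -> content, read-only

class ThinClone:
    def __init__(self, master):
        self.master = master
        self.local = {}             # only changed/new blocks live here

    def read(self, n):
        # Unchanged blocks are served from the shared master copy.
        return self.local.get(n, self.master.blocks.get(n))

    def write(self, n, data):
        # The first write to a block allocates private space on the clone.
        self.local[n] = data

master = CloneMaster({1: "A", 2: "B", 3: "C"})
clone = ThinClone(master)
clone.write(3, "C'")        # block 3 updated on the clone
clone.write(4, "D")         # block 4 newly written on the clone
print(clone.read(1))        # "A" -- shared, unchanged block from the master
print(len(clone.local))     # 2   -- only two blocks consume clone storage
```

Only the two blocks touched on the clone consume clone storage; everything else remains a reference to the master.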
Database Virtualization
Through sharing of read-only data between databases, virtualization technologies enable fast, space-efficient cloning.
Production data changes slowly - daily block changes (%):

Date         App MA   App TS   App MY
11-05-2017      0.9      3.5      2.7
12-05-2017      1.8      4.0      3.7
13-05-2017      1.7      1.4      3.2
14-05-2017      0.7        -        -
15-05-2017      0.8      1.0      0.4
16-05-2017      2.2      1.7      3.0
17-05-2017      2.3      4.3      3.5
18-05-2017      2.2      1.1      3.5
19-05-2017      2.3      1.6      3.6
20-05-2017      1.8      0.7      3.1
21-05-2017      0.5        -        -
22-05-2017      1.5      0.9      0.4
23-05-2017      3.2      3.3      3.0
Average         1.7      2.1      2.4
Test databases don't change a lot:

Application   Size of Prod (TB)   Number of Clones   Avg % Clone Change
App MA                       43                 91                  1.3
App TS                        4                 64                  6.7
App MY                       44                 17                  0.9
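From the figures above, the storage saving of thin provisioning is easy to estimate. The following back-of-the-envelope calculation (illustrative Python; it ignores ASM redundancy and metadata overhead) uses the App MA row: 91 full copies of a 43 TB database versus one master plus roughly 1.3 % of changed blocks per clone.

```python
def full_storage_tb(prod_tb, n_clones):
    """Storage for n full copies of the production database."""
    return prod_tb * n_clones

def sparse_storage_tb(prod_tb, n_clones, avg_change_pct):
    """Rough estimate: one full master copy plus the changed fraction per clone."""
    return prod_tb + n_clones * prod_tb * avg_change_pct / 100

# App MA from the table above: 43 TB, 91 clones, 1.3 % average change
full = full_storage_tb(43, 91)              # 3913 TB for full copies
sparse = sparse_storage_tb(43, 91, 1.3)     # about 94 TB with thin provisioning
print(round(full), round(sparse))
```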
Exadata Test Environments
Cloning Exadata DBs either sacrificed Exadata features by running clones on another platform, or used a non-thin-provisioning method on Exadata.

§ Non-virtualized databases on Exadata result in
  § Limited number of environments
  § Sharing of test databases and data subsetting
  § Time and effort to refresh
  § Increased storage demand
§ Using virtualization technologies, e.g. ZFS-SA
  § Differentiating features of Exadata like Storage Offloading and HCC are not available
  § Test environment users suffer from poor performance
  § Not suited for performance testing
  § Difficult to develop and test against
12.1.0.2: fast, space-efficient snapshot databases on Exadata
[Diagram: Production with DR Standby on Exadata; a Clone Master on ZFS-SA serving Clone DB 1-4]
12.1 Exadata Snapshot Databases
Snapshot databases give all the advantages of thin provisioning while retaining the performance of the Exadata platform.
§ Thin provisioning of test and development snapshot databases
§ Snapshot databases can use all Storage Server Software features
§ Snapshots are created from a read-only database
§ Multiple snapshot databases hanging off the same clone master can share space
§ Integration with the multitenant option provides a simple workflow to create PDB snapshot databases. Snapshots can be created for individual PDBs or a CDB / non-container database
[Diagram: Production and DR Standby feed a read-only Clone Master on Exadata via Data Guard / RMAN; Clone DB 1-4 hang off the Clone Master]
ASM Sparse Disk Groups
New sparse disk groups are the key invention enabling thin-provisioned Exadata clones.
[Diagram: Production and DR Standby; a read-only Clone Master with Clone DB 1-4 whose changed blocks are stored in a Sparse Disk Group]
§ Sparse files are used to store changed / new blocks of snapshot databases
§ A pointer to the parent file gives access to unchanged blocks
§ Sparse data files can only be created in sparse disk groups
§ Sparse grid disks have a physical and a virtual size, with limits of 4 TB and 100 TB respectively
§ Sparse disk groups can contain sparse and non-sparse files
§ Control files, temp files and redo logs are not sparse
Lifecycle of 12.1 Snapshot Databases
The lifespan of a snapshot database is limited by the lifetime of the Clone Master: a refresh with the latest data wipes out the Clone Master and its clones.

1. Prepare the clone master: reconfigure the Data Guard standby / RMAN backup/restore
2. Run any pre-clone script on the clone master, e.g. data masking
3. Create clones
4. Refresh the clone master:
   • drop all snapshot databases
   • sync the clone master

Different refresh requirements are likely to require multiple Clone Masters with different refresh cycles.
[Diagram: 1) the Standby on the Production / DR Exadata syncs 2) a read-only Clone Master on the test Exadata with Clone DB 1-5 hanging off it; 3) a refresh re-syncs the Clone Master]
12.2 Hierarchical Snapshot Databases
Hierarchical snapshots allow creating a clone from a clone, giving more flexibility for testing and maintaining all snapshots from a single standby.
Convert to R/O Test Master to create child clones
[Diagram: Production / DR and Standby feed a read-only Test Master with Clone DB 1-3; a clone line (Prj V1, Prj V2, Prj V2.2) becomes a further read-only Test Master (RO TM 2) in a sparse disk group]
12.2 – Data Guard Standby as Test Master
Using a Data Guard physical standby as test master database makes refreshes easy.
[Diagram: a R/W Test Master is kept in sync with Production / DR via Data Guard. For each refresh cycle it is converted to a read-only Test Master (RO TM 1 with the full, original data files; later refreshes add sparse test masters such as RO TM 42) and user clones (Peter's, David's, Marcin's, Steve's, Adam's, ...) hang off the read-only masters. A clone can itself be made read-only, e.g. Marcin's Clone (RO) parenting Marcin's Clone 2, or Adam's Clone (RO) parenting Adam's Clone 2 and 3.]
12.2 – Data Guard Standby as Test Master
Implications of the hierarchical structure might prevent you from configuring a Data Guard standby as Test Master.
§ It is not possible to delete individual test masters
§ Performance degrades when accessing data blocks up the hierarchy – clones hanging off the oldest test masters perform best

              Original data   Clone data
Original               5.38            -
TM4                    7.01         5.51
TM8                   10.35         5.55
TM11                  11.92
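Using the timings in the table above (same units as the original measurement), the penalty for reading original data through a deeper hierarchy can be expressed as a slowdown factor relative to the original (a quick illustrative calculation):

```python
# Timings for reading original data, taken from the table above.
baseline = 5.38
original_data = {"TM4": 7.01, "TM8": 10.35, "TM11": 11.92}

# Slowdown relative to reading the original directly.
slowdown = {tm: round(t / baseline, 2) for tm, t in original_data.items()}
print(slowdown)   # TM4 ~1.3x, TM8 ~1.9x, TM11 ~2.2x slower
```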
Sparse Clones in the Cloud
Prerequisites
• Create a Sparse Disk Group when setting up the Exadata Cloud Service instance (check "Create sparse disk group")
• Databases must be 12c or higher
Exadata Cloud Service – Create Snapshot Master
[Screenshots of the "Create Snapshot Master" workflow in the Exadata Cloud Service console]
Exadata Cloud Service – Create Clone Database
[Screenshots of the "Create Clone Database" workflow in the Exadata Cloud Service console]
Snapshot Performance – Hardware Configuration

Exadata X5 half-rack
  Cluster nodes:             4
  CPU count (total):         Intel Xeon E5-2699 v3 (2.3 GHz), 4 x 36 = 144
  DRAM memory (total, GB):   4 x 756 = 3024
  Database instance memory:  SGA + PGA: 4 x (227 + 80) = 1228
  Storage:                   7 Exadata storage servers, HC disk, 44 TB raw flash
  Storage connectivity:      proprietary InfiniBand network (Oracle iDB over 40 Gb/sec fabric)

Exadata snapshot clone (as above, apart from)
  Cluster nodes:             1
  Database instance memory:  SGA + PGA: 1 x (60 + 20) = 80

Conventional Oracle RDBMS
  Cluster nodes:             2
  CPU count (total):         Intel Xeon E7-4890 v2 (2.8 GHz), 2 x 60 = 120
  DRAM memory (total, GB):   2 x 1024 = 2048
  Database instance memory:  SGA + PGA: 2 x (600 + 200) = 1600
  Storage:                   HDS VSP/G1000 SAN storage
  Storage connectivity:      2 x 2 HBA (4 x 8 Gb/sec = 32 Gb/sec aggregate throughput capacity)
Snapshot Performance Sample 1
Snapshot Performance Sample 2
Snapshot Performance Sample 3
Appendix
• Pre-Requisites for using Snapshot Databases
• Creating and Managing Snapshots (12.1)
  1. Create sparse grid disks
  2. Create an ASM disk group for the sparse grid disks
  3. Create the Clone Master database
  4. Create the Snapshot database
Pre-Requisites for using Snapshot Databases

§ Storage nodes must be X3 or later
§ Exadata Storage Software 12.1.2.1.0 or later on Exadata storage and database servers
§ Oracle Grid Infrastructure 12.1.0.2.0 BP5 or later
§ ASM disk groups must have COMPATIBLE.RDBMS and COMPATIBLE.ASM set to 12.1.0.2 or later
§ Oracle Database 12.1.0.2.0 BP5 or later
§ Cell 12.1.2.1
§ Data files for snapshot databases and the parent database must be in the same ASM cluster
§ db_block_size must be a multiple of 4k
§ Hierarchical snapshot databases require Oracle Database and Grid Infrastructure 12.2.0.1 and Oracle Exadata 12.2.1.1 or higher
Create Sparse Grid Disk

§ CellCLI> create griddisk SPARSE celldisk=CD_00_ed2pcell1, size=100G, virtualsize=1000G

§ Some grid disk attributes from the list griddisk detail command:

  CellCLI> list griddisk SPARSE detail
           name:          sptest
           size:          100G
           sparse:        TRUE
           virtualSize:   1000G
§ Total physical size: ASM redundancy * (sum(size of clone masters in the sparse ASM disk group) + sum(expected changes written by all snapshots))
§ Total virtual size: ASM redundancy * (sum(size of clone masters in the sparse ASM disk group) + sum(full virtual size of all snapshots))
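The two sizing formulas can be written out as a small helper (an illustrative sketch; the example numbers below are hypothetical, and actual values depend on your redundancy level and expected snapshot churn):

```python
def sparse_dg_physical_size(redundancy, master_sizes, expected_changes):
    """Physical size: redundancy * (clone masters + expected snapshot writes)."""
    return redundancy * (sum(master_sizes) + sum(expected_changes))

def sparse_dg_virtual_size(redundancy, master_sizes, snapshot_virtual_sizes):
    """Virtual size: redundancy * (clone masters + full virtual size of snapshots)."""
    return redundancy * (sum(master_sizes) + sum(snapshot_virtual_sizes))

# Hypothetical example: normal redundancy (2), one 1000 GB clone master,
# ten snapshots each expected to write 50 GB against a 1000 GB virtual size.
phys = sparse_dg_physical_size(2, [1000], [50] * 10)      # 3000 GB physical
virt = sparse_dg_virtual_size(2, [1000], [1000] * 10)     # 22000 GB virtual
print(phys, virt)
```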
Create Sparse Grid Disk Group

§ SQL> create diskgroup SPARSE normal redundancy disk 'o/*/SPARSE*'
    2  attribute
    3    'compatible.asm'='12.1.0.2',
    4    'compatible.rdbms'='12.1.0.2',
    5    'cell.smart_scan_capable'='true',
    6    'cell.sparse_dg'='allsparse',
    7    'au_size'='4M';
Create Clone Master Database

Create the Clone Master from a Data Guard standby, or use any other method to create a database clone.

1. Enable access control on the disk group containing the data files of the clone master:

   alter diskgroup DATA_PB set attribute 'access_control.enabled'='true';

2. Define the OS user to own the disk group:

   alter diskgroup DATA_PB add user 'peter';

3. Change the ownership of the data files:

   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/pb_data.275.881838523';
   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/sysaux.261.881832971';
   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/system.260.881832969';
   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/undotbs1.262.881832971';
   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/users.267.881832977';
   alter diskgroup DATA_PB set ownership owner = 'peter' for file '+DATA_PB/PB/DATAFILE/pb_data.275.881832971';
4. Change the write permissions of the data files:

   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/pb_data.275.881838523'
   /
   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/sysaux.261.881832971'
   /
   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/system.260.881832969'
   /
   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/undotbs1.262.881832971'
   /
   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/users.267.881832977'
   /
   alter diskgroup DATA_PB set permission owner = read ONLY, group = read ONLY, other = none for file '+DATA_PB/PB/DATAFILE/pb_data.275.881832971'
   /
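Rather than typing one statement per file, the per-file ownership and permission commands can be generated. A sketch of such a generator (illustrative Python; disk group, owner and file names are placeholders for your environment, and the same effect can be had with a spooled SELECT from v$datafile):

```python
# Generate the per-file ownership and read-only permission statements for a
# clone master (hypothetical helper; adapt diskgroup, owner and file list).
def clone_master_ddl(files, diskgroup="DATA_PB", owner="peter"):
    stmts = []
    for f in files:
        stmts.append(f"alter diskgroup {diskgroup} set ownership "
                     f"owner = '{owner}' for file '{f}';")
        stmts.append(f"alter diskgroup {diskgroup} set permission "
                     f"owner = read ONLY, group = read ONLY, other = none "
                     f"for file '{f}';")
    return stmts

ddl = clone_master_ddl(["+DATA_PB/PB/DATAFILE/system.260.881832969"])
print("\n".join(ddl))
```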
Create Snapshot Database

1. Create a sample control file script for the snapshot database from the existing control file. Log into the clone master as sysdba and back up the control file:

   SQL> select value from v$diag_info where name = 'Default Trace File';

   VALUE
   --------------------------------------------------------------------------
   /u02/app/PB/diag/rdbms/PB/PB1/trace/PB1_ora_36379.trc

   SQL> alter database backup controlfile to trace;

2. Create an init.ora file for the clone from the init.ora of the clone master. Change the following:
   • db_name = CL1
   • control_files = '+RECO_PB/PB/CONTROLFILE/control1.f'

3. Change all references to the clone master to the new snapshot database in the init file
4. Generate a rename_files.sql script to map the 'parent' data files of the clone master to the clone data files in the SPARSE disk group:

   spool rename_files.sql
   select 'EXECUTE dbms_dnfs.clonedb_renamefile ('||''''||
          name||''''||','||''''||
          replace(replace(replace(name,'.','_'),'CLONEMASTER','CL1'),'DATA_PB','SPARSE')||
          ''''||');'
   from v$datafile;

   This generates commands like the following:

   EXECUTE dbms_dnfs.clonedb_renamefile (
     '+DATA_PB/CLONEMASTER/DATAFILE/system.260.881832969',
     '+SPARSE/CL1/DATAFILE/system_260_881832969');
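The name transformation performed by the generated script can be checked in isolation (a small illustrative helper mirroring the nested replace calls above; the CL1 and SPARSE names follow the example):

```python
# Mirror of the nested replace() calls in rename_files.sql: dots become
# underscores, and the clone-master path components are swapped for the
# snapshot's (CL1 / SPARSE as in the example above).
def sparse_child_name(parent):
    return (parent.replace(".", "_")
                  .replace("CLONEMASTER", "CL1")
                  .replace("DATA_PB", "SPARSE"))

parent = "+DATA_PB/CLONEMASTER/DATAFILE/system.260.881832969"
print(sparse_child_name(parent))
# +SPARSE/CL1/DATAFILE/system_260_881832969
```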
5. Shut down the clone master
6. Copy the trace file generated in step 1 to CFSnapClone.ora and change the database name to the new snapshot database:

   CREATE CONTROLFILE REUSE SET DATABASE CL1 RESETLOGS

7. Mount the snapshot database:

   $ sqlplus / as sysdba
   SQL> startup nomount pfile=clone_init.ora
   Oracle instance started.

8. Create the control file for the snapshot database:

   SQL> @CFSnapClone.ora
   Control file created.

9. Run rename_files.sql from step 4. Directories used for the child data files are not created automatically; create them before running the script.

10. Open the database with resetlogs:

   SQL> alter database open resetlogs;
   Database altered.
11. Verify that the parent/child relationship for the files has been created:

   SQL> select filenumber num, CLONEFILENAME child, SNAPSHOTFILENAME parent
        from x$ksfdsscloneinfo;

12. Add temp files to the TEMP tablespace (temp files aren't sparse):

   SQL> alter tablespace TEMP add tempfile '+DATA' size 20G;