Unleash the Power of Ceph Across the Data Center
TUT18972: FC/iSCSI for Ceph
Ettore Simone, Alchemy LAB
[email protected]


Page 1

Unleash the Power of Ceph Across the Data Center
TUT18972: FC/iSCSI for Ceph

Ettore Simone, Alchemy LAB
[email protected]

Page 2

Agenda

• Introduction

• The Bridge

• The Architecture

• Benchmarks

• A Light Hands-On

• Some Optimizations

• Software and Hardware

• Q&A

Page 3

Introduction

Page 4

About Ceph

“Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.” (http://ceph.com/)

FUT19336 - SUSE Enterprise Storage Overview and Roadmap

TUT20074 - SUSE Enterprise Storage Design and Performance

Page 5

Ceph timeline

• 2004: project start at UCSC

• 2006: open source

• 2010: mainline Linux kernel

• 2011: OpenStack integration

• Q2 2012: launch of Inktank

• 2012: CloudStack integration

• Q3 2012: production ready

• 2013: Xen integration

• Q1 2015: SUSE Storage 1.0

• Q4 2015: SUSE Enterprise Storage 2.0

Page 6

Some facts

Common data center storage is built mainly on top of Fibre Channel (yes, and NAS too).

Source: Wikibon Server SAN Research Project 2014

Page 7

Is the storage mindset changing?

New/Cloud (SCALE-OUT):

‒ Micro-services composed applications

‒ NoSQL and distributed databases (lazy commit, replication)

‒ Object and distributed storage

Classic (SCALE-UP):

‒ Traditional application → relational DB → traditional storage

‒ Transactional process → commit on DB → commit on disk

Page 8

Is the storage mindset changing? No!

New/Cloud ‒ the natural playground of Ceph:

‒ Micro-services composed applications

‒ NoSQL and distributed databases (lazy commit, replication)

‒ Object and distributed storage

Classic ‒ where we want to introduce Ceph!

‒ Traditional application → relational DB → traditional storage

‒ Transactional process → commit on DB → commit on disk

Page 9

Is the new kid on the block so noisy?

Ceph is cool but I cannot rearchitect my storage!

And what about my shiny big disk arrays?

I have already N protocols, why another one?

<Add your own fear here>

Page 10

Our goal

How to achieve a non-disruptive introduction of Ceph into a traditional storage infrastructure?

‒ SAN: SCSI over FC

‒ NAS: NFS/SMB/iSCSI over Ethernet

‒ Ceph: RBD/S3 over Ethernet

Page 11

How can Ceph coexist happily in your data center with the existing neighborhood (traditional workloads, legacy servers, FC switches, etc.)?

Page 12

The Bridge

Page 13

FC/iSCSI gateway

iSCSI
‒ Out-of-the-box feature of SES 2.0
‒ TUT16512 - Ceph RBD Devices and iSCSI

Fibre Channel
‒ The focus of today's session

Page 14

Back to our goal

How to achieve a non disruptive introduction of Ceph into a traditional storage infrastructure?

RBDSAN NAS

Page 15

Linux-IO Target (LIO™)

The standard open-source SCSI target in Linux, running in kernel space.

Fabrics: FC, FCoE, FireWire, iSCSI, iSER, SRP, loop, vHost

Backstores: FILEIO, IBLOCK, pSCSI, RAMDISK, TCMU

Page 16

Linux-IO command line

TODO:

• better define user-space interaction

• targetcli examples
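Pending those examples, a minimal targetcli session might look like the sketch below. It exports an RBD-backed block device over iSCSI using the image, IQN, and portal address from the lab configuration; the backstore name "disk0" and the /dev/rbd path are illustrative assumptions, not the session's final material.

```shell
# Map an RBD image into LIO and export it over iSCSI (illustrative names;
# assumes image "data" in pool "rbd" is already mapped as a local RBD device).
targetcli /backstores/block create name=disk0 dev=/dev/rbd/rbd/data
targetcli /iscsi create iqn.2015-09.ceph:sn
targetcli /iscsi/iqn.2015-09.ceph:sn/tpg1/luns create /backstores/block/disk0
targetcli /iscsi/iqn.2015-09.ceph:sn/tpg1/portals create 10.20.0.101 3260
targetcli saveconfig
```

In practice SES drives these steps through lrbd rather than by hand, which is what the JSON fragments later in this deck configure.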

Page 17

The Architecture

Page 18

Solution Architecture

Page 19

The LAB Architecture

Page 20

Benefits

Smooth Transition

Unlimited scalability

Cheaper

But what about performance?

Page 21

Benchmarks

Page 22

Erasure Code and Cache Tiering

• TBD

Page 23

Erasure Code and Cache Tiering

• TBD

Page 24

Benchmarks and Tests: Standard Replication with and without Cache Tiering

CONFIGURATION   SSD             HDD       IOPS   THROUGHPUT
R3SATA          none            3 SATA    TBD    TBD
RJ3SATA         journal         3 SATA    TBD    TBD
RC3SATA         cache           3 SATA    TBD    TBD
RJC3SATA        journal+cache   3 SATA    TBD    TBD
R10SAS          none            10 SAS    TBD    TBD
RJ10SAS         journal         10 SAS    TBD    TBD
RC10SAS         cache           10 SAS    TBD    TBD
RJC10SAS        journal+cache   10 SAS    TBD    TBD

Page 25

Benchmarks and Tests: Erasure Code and Cache Tiering

CONFIGURATION   SSD             HDD       IOPS   THROUGHPUT
E3SATA          none            3 SATA    N/A    N/A
EJ3SATA         journal         3 SATA    N/A    N/A
EC3SATA         cache           3 SATA    TBD    TBD
EJC3SATA        journal+cache   3 SATA    TBD    TBD
E10SAS          none            10 SAS    N/A    N/A
EJ10SAS         journal         10 SAS    N/A    N/A
EC10SAS         cache           10 SAS    TBD    TBD
EJC10SAS        journal+cache   10 SAS    TBD    TBD

(N/A: RBD cannot sit directly on an erasure-coded pool, so the configurations without a cache tier are not testable.)

Page 26

Workloads to Test

• 2-node cluster with VMware ESX
‒ GNU/Linux VMs
‒ Microsoft Windows 2008/2012 VMs
‒ TBD: number of VMs, iozone test for Linux; which test for MS Windows?

• 3-node cluster with Oracle RAC
‒ Data warehouse DB
‒ Other DB types?

• Other: TBD

Page 27

Benchmark Results

• TBD

• First results will be available in October

Page 28

A Light Hands-On

Page 29

A Vagrant LAB for Ceph and iSCSI

• 3 all-in-one nodes (MON+OSD+iSCSI Target)

• 1 admin node (Calamari) acting as iSCSI initiator with multipath

• 3 disks per OSD node

• 2 replicas

• Placement Groups: (3 nodes × 3 OSDs × 100) / 2 replicas = 450 → rounded up to 512
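The sizing rule above (100 PGs per OSD, divided by the replica count, then rounded up to the next power of two) can be checked with a few lines of shell:

```shell
# PG heuristic: (OSDs * 100) / replicas, rounded up to the next power of two.
osds=9        # 3 nodes x 3 disks
replicas=2
pgs=$(( osds * 100 / replicas ))
pow=1
while [ "$pow" -lt "$pgs" ]; do pow=$(( pow * 2 )); done
echo "$pgs -> $pow"    # prints "450 -> 512"
```

This matches the osd_pool_default_pg_num = 512 used in the ceph.conf on the next slide.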

Page 30

Ceph Initial Configuration: log in to ceph-admin and create the initial ceph.conf

# ceph-deploy install ceph-{admin,1,2,3}
# ceph-deploy new ceph-{1,2,3}
# cat <<-EOD >>ceph.conf
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 512
osd_pool_default_pgp_num = 512
EOD

Page 31

Ceph Deploy: log in to ceph-admin and create the Ceph cluster

# ceph-deploy mon create-initial
# ceph-deploy osd create ceph-{1,2,3}:sdb
# ceph-deploy osd create ceph-{1,2,3}:sdc
# ceph-deploy osd create ceph-{1,2,3}:sdd
# ceph-deploy admin ceph-{admin,1,2,3}
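After deployment it is worth confirming the cluster state before layering iSCSI on top. A few standard checks, run from ceph-admin against the lab cluster (a sketch; exact output depends on the running cluster):

```shell
# Basic post-deploy sanity checks (require a running Ceph cluster).
ceph -s                         # overall health and monitor quorum
ceph osd tree                   # should list 9 OSDs across ceph-1..ceph-3
ceph osd pool get rbd pg_num    # PG count of the rbd pool
```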

Page 32

Linux-IO Target Initial Configuration: TBD

# Not yet defined

Page 33

LRBD “auth”

"auth": [ { "authentication": "none", "target": "iqn.2015-09.ceph:sn" }]

Page 34

LRBD “targets”

"targets": [ { "hosts": [ { "host": "ceph-1", "portal": "portal1" }, { "host": "ceph-2", "portal": "portal2" }, { "host": "ceph-3", "portal": "portal3" } ], "target": "iqn.2015-09.ceph:sn" }]

Page 35

LRBD “portals”

"portals": [ { "name": "portal1", "addresses": [ "10.20.0.101" ] }, { "name": "portal2", "addresses": [ "10.20.0.102" ] }, { "name": "portal3", "addresses": [ "10.20.0.103" ] }]

Page 36

LRBD “pools”

"pools": [ { "pool": "rbd", "gateways": [ { "target": "iqn.2015-09.ceph:sn", "tpg": [ { "image": "data", "initiator": "iqn.1996-04.suse:cl" } ] } ] }]

Page 37

Multipath

# cat /etc/multipath.conf
devices {
        device {
                vendor "(LIO-ORG|SUSE)"
                product "RBD"
                path_grouping_policy "multibus"
                path_checker "tur"
                features "0"
                hardware_handler "1 alua"
                prio "alua"
                failback "immediate"
                rr_weight "uniform"
                no_path_retry 12
                rr_min_io 100
        }
}
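On the initiator side, the three portals can then be discovered and logged in with open-iscsi before multipath aggregates the paths. A sketch using the addresses and IQN from the earlier slides (not the lab's exact procedure):

```shell
# Discover targets on one portal, log in to the discovered target,
# then inspect the resulting multipath topology.
iscsiadm -m discovery -t sendtargets -p 10.20.0.101
iscsiadm -m node -T iqn.2015-09.ceph:sn --login
multipath -ll
```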

Page 38

Screenshot Here

Page 39

Some Optimizations

Page 40

Design optimizations

• SSD on monitor nodes for LevelDB

• SSD journals decrease I/O latency and roughly triple IOPS

Page 41

Software optimizations

• deadline elevator on physical disks

• noop elevator on RAID0 devices
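These scheduler choices can be applied at runtime through sysfs; the device names below are illustrative (make the setting persistent via udev rules or kernel boot parameters):

```shell
# Set the deadline elevator on a physical disk and noop on a RAID0 device.
echo deadline > /sys/block/sda/queue/scheduler   # rotating disk behind an OSD
echo noop > /sys/block/md0/queue/scheduler       # software RAID0 device
cat /sys/block/sda/queue/scheduler               # verify: active entry in brackets
```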

Page 42

Software and Hardware

Page 43

Software Projects Used

• SUSE Linux Enterprise Storage 2.0 Beta 3

• LRBD
‒ Some modifications needed to manage FC

• Intel VSM
‒ Some plugins written to manage FC and iSCSI

Page 44

Software Details

• Write detailed usage of previous listed software

Page 45

Hardware Details

• Write Hardware details

• OSD Nodes

• Monitor Nodes

• Gateway Nodes

Page 46

Q&A

Page 47

[email protected]

http://alchemy.solutions/lab

Page 48

Corporate Headquarters
Maxfeldstrasse 5
90409 Nuremberg
Germany

+49 911 740 53 0 (Worldwide)
www.suse.com

Join us on:
www.opensuse.org


Page 49

Unpublished Work of SUSE LLC. All Rights Reserved.
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.

General Disclaimer
This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.