TRANSCRIPT
Windows Server 2012 R2 Software Defined Storage: Lenovo ThinkServer HA Solutions
Rui Freitas, OEM Partner Strategist
What’s New in Storage
An enterprise-class storage platform built on Windows: protecting your data, maximizing your storage investments, simplified manageability.
Storage Spaces
Thin and trim provisioning
Data Deduplication
SMB application support
SMB Direct
Storage QoS
ReFS/NTFS enhancements
Cluster Shared Volume 2
SMB Multichannel
ODX
NFS enhancements
Windows PowerShell
SC Management Packs
BPA
New/Improved in Windows Server 2012 R2
SMB transparent failover
Live storage migration
Virtual Fibre Channel in Hyper-V
Windows Cluster in a Box
iSCSI Software Target
SMI-S / SM-API
Storage Tiering
Persistent write-back cache
VHDX online resize
SC VMM Management
SC DPM
Windows Azure Backup
Hyper-V Recovery Manager
Mirrored and Parity Storage Spaces
Storage Spaces: Windows Server 2012 R2 Capabilities
Capabilities Overview (1)
• Pooling of disks
• Flexible, resilient virtual disks
[Diagram: physical disks from shared SAS JBODs are grouped into two storage pools (“Pepsi” and “Coke”); virtual disks (Spaces) are provisioned from each pool with flexible allocation.]
Capabilities Overview (1, continued)
• Pooling of disks
• Flexible, resilient virtual disks
• Enclosure awareness with Spaces-certified hardware
[Diagram: a Mirror or Parity Space places two copies of the data (Data Copy 1, Data Copy 2) across the storage pool.]
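The pooling and provisioning flow above maps directly onto the Storage Spaces cmdlets. A minimal PowerShell sketch (the pool and space names are illustrative, and the commands assume a Windows Server 2012 R2 machine with eligible, unpooled disks):

```powershell
# Find physical disks that are eligible for pooling (e.g. from a shared SAS JBOD)
$disks = Get-PhysicalDisk -CanPool $true

# Group them into a storage pool ("Pool1" is an example name)
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Provision a flexible, resilient virtual disk (a two-way mirror Space) from the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "MirrorSpace1" `
    -ResiliencySettingName Mirror `
    -Size 2TB -ProvisioningType Thin
```

Thin provisioning lets the Space be larger than the currently allocated capacity; use -ProvisioningType Fixed to reserve the space up front.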
Capabilities Overview (2)
• Data Integrity Scanner: a periodic background scan detects latent corruptions; integrates with both NTFS (good) and ReFS (better)
• Operational simplicity: PowerShell, Server Manager, and SCVMM 2012 R2
• Continuous availability with a Windows File Server Cluster
Spaces & Windows File Server Cluster
[Diagram: Hyper-V compute nodes reach clustered file servers (\\SRV\VDI, \\SRV\CRM, \\SRV\DB) over SMB Direct; the file servers expose a unified Cluster Shared Volume namespace over a clustered storage pool and Mirror Spaces, backed by two 60-bay shared SAS JBOD arrays on 96 Gbps shared SAS links.]
CSV aggregates the namespace for data access across volumes, keeping the compute/storage split architecturally similar to a traditional storage deployment.
Scaling the Windows File Server Cluster
[Diagram: physical or virtualized workloads on Hyper-V compute nodes connect over a high-speed network (10GbE/InfiniBand) with SMB Direct to clustered file servers exposing a unified CSV namespace (\\SRV\VDI_Mrktg, \\SRV\VDI_Dev, \\SRV\Folders, \\SRV\DB); the clustered storage pool spans four Mirror Spaces across four 60-bay shared SAS JBOD arrays, each on 96 Gbps shared SAS links.]
Storage Spaces R2 Objectives
Building upon the foundation in Windows Server 2012 to further deliver:
• Minimized $/TB and capex
• Minimized opex
• Maximized IOPS/$
Dual Parity: Minimizing $/TB
• Optimized space utilization for archival workloads
• Efficient rebuild times
• Supported with Windows Clustering
• Integrated journaling improves random workload performance
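Dual parity is requested through the same New-VirtualDisk cmdlet: a parity Space with a physical-disk redundancy of 2 survives two simultaneous drive failures. A sketch (pool name and size are illustrative; a dual-parity Space needs at least seven physical disks):

```powershell
# Dual parity: tolerates two drive failures, with archival-friendly space overhead
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "ArchiveSpace" `
    -ResiliencySettingName Parity `
    -PhysicalDiskRedundancy 2 `
    -Size 10TB -ProvisioningType Fixed
```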
Tiered Storage: Maximizing IOPS/$
• Utilize the best characteristics of SSDs and HDDs in a single storage space
• Frequently accessed data is moved to the SSD tier; less frequently accessed data is moved to the HDD tier
• Admins can assign files to specific storage tiers
[Diagram: Hyper-V compute nodes read/write a storage space that accumulates data-activity statistics; hot data lands on the SSD tier (e.g. 400GB eMLC SAS SSDs) and cold data on the HDD tier (e.g. 4TB 7200RPM SAS HDDs).]
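Tiers are defined per pool and combined when the volume is created, and a file can be pinned to a tier explicitly (the admin-assignment case above). A sketch with illustrative pool, tier, and path names:

```powershell
# Define an SSD tier and an HDD tier within the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored volume: 100GB of SSD for hot data, 2TB of HDD for cold data
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
    -ResiliencySettingName Mirror -FileSystem NTFS -AccessPath "T:" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB

# Pin a specific file to the SSD tier regardless of measured activity
Set-FileStorageTier -FilePath "T:\VMs\gold-image.vhdx" `
    -DesiredStorageTierFriendlyName "SSDTier"
```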
Write-Back Cache: Maximizing IOPS/$
• Absorbs spikes in random write activity
• Seamless integration and familiar management
• Complements Tiered Storage
[Diagram: as above, with the SSD tier also hosting the write-back cache in front of the HDD tier.]
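The write-back cache is carved from SSD capacity in the pool when the virtual disk is created; a sketch (names and the 1GB cache size are illustrative, and the cache size cannot be changed after creation):

```powershell
# Reserve a 1GB SSD-backed write-back cache to absorb random-write spikes
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "VMSpace" `
    -ResiliencySettingName Mirror `
    -Size 2TB `
    -WriteCacheSize 1GB
```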
Efficient storage through Data Deduplication
[Chart: average savings with data deduplication by workload type (operating system VHDs, VHD library, software deployment share, general file share), on a 0–100% scale. Source: ESG Lab and Microsoft internal testing.]
Maximize capacity by removing duplicate data:
• Works with live VHD/VHDX files on remote VDI storage
• Increased scale and performance: better VM performance in VDI scenarios, low CPU and memory impact, configurable compression schedule, transparent to the primary server workload
• Improved reliability and integrity: redundant metadata and critical data, checksums and integrity checks, increased availability through redundancy
• Faster file download times with BranchCache
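Deduplication is enabled per volume; the VDI support above corresponds to the HyperV usage type, new in R2. A sketch (the volume letter is illustrative):

```powershell
# Install the deduplication role service, then enable dedup on a data volume
Install-WindowsFeature FS-Data-Deduplication

# UsageType HyperV (new in R2) tunes dedup for live VHD/VHDX files in VDI scenarios
Enable-DedupVolume -Volume E: -UsageType HyperV

# Kick off an optimization job now instead of waiting for the schedule
Start-DedupJob -Volume E: -Type Optimization

# Check space savings once the job completes
Get-DedupStatus -Volume E:
```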
File and iSCSI Storage: Windows Server 2012 R2 Capabilities
Scenarios for File and iSCSI Storage
• Windows client: SMB for information workers. Main features: Continuous Availability, directory leasing, encryption.
• Non-Windows client: NFS for information workers. Main features: Continuous Availability, support for both NFS v3 and v4.1.
• Windows Server: Hyper-V over SMB, SQL Server over SMB, host boot from iSCSI Target. Main features: Continuous Availability, Scale-Out, SMB Multichannel, SMB Direct, SMB Encryption.
• Non-Windows server: VMware over NFS, VMware over iSCSI. Main features: Continuous Availability.
Cloud
Windows Server 2012 R2 is cloud optimized for private clouds, hosted clouds, and cloud service providers, reducing capital and operational storage and availability costs.
Cloud Deployment Storage Vision
[Diagram: Hyper-V hosts access a Scale-out File Server over SMB; the file server is built on Storage Spaces, which provides the block storage layer.]
Focus: Enabling Hosters and the Private Cloud
Scalability, reliability, reducing costs, manageability, performance.
Main New Features
• Performance: SMB Direct v2 increases IOPS; DFS Replication improvements
• Diagnosability: improved SMB eventing; new SMB event channels, on by default
• Manageability: new/improved PowerShell for DFS-R and SMB; built-in SMI-S for iSCSI Target
• Optimized: automatic SMB Scale-Out client rebalancing; iSCSI Target uses the VHDX format
SMB Direct Performance Enhancements
• Focused on efficiency for small-IO workloads
• Increased 8KB IOPS from ~300K per interface in Windows Server 2012 to ~450K per interface (or higher) in the new release
• Key to supporting small IO with high-speed NICs, including 40Gbps Ethernet and 56Gbps InfiniBand
• Implemented through several techniques, including:
  o Batching of operations between SMB and the NIC
  o Use of RDMA remote invalidation
  o Further NUMA optimizations across the stack
• SMB Direct v2 leverages the new NDKPI v1.2 (RDMA)
• Backward compatible with the original NDKPI v1.1
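Whether SMB Direct is actually in use can be verified from PowerShell (output naturally depends on the hardware in the box):

```powershell
# Is each NIC RDMA-capable, and is RDMA enabled on it?
Get-NetAdapterRdma

# Which client-side interfaces does SMB consider RDMA-capable?
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Are active SMB connections really using RDMA on both ends?
Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable
```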
Scale-Out File Server: Multiple SMB Instances
• Each node in a Scale-Out File Server has two instances of the SMB Server
• One instance handles incoming traffic from SMB clients accessing regular file shares
• One instance handles only inter-node CSV traffic (metadata access or redirected traffic)
• Separate data structures (locks/queues) for regular client traffic and inter-node traffic
• Improves scalability and reliability of inter-node traffic between CSV nodes
[Diagram: two Hyper-V hosts connect as SMB clients to File Server 1 and File Server 2 over shared SAS storage; each file server runs an SMB Server default instance plus a CSV instance and its own SMB client. File Server 1 is the metadata owner for CSV1, File Server 2 for CSV2.]
Scale-Out File Server: SMB Scale-out Automatic Rebalancing
• Scale-Out File Server clients are now redirected to the “best” node for access
• Avoids unnecessary redirection traffic
• Driven by ownership of Cluster Shared Volumes (the coordination node)
• SMB connections are managed per share (not per file server) when direct I/O is not available on the volume
• Moves dynamically as CSV volume ownership changes; clustering now also balances CSV volumes automatically
• Automatic behavior; no administrator action required
[Diagram: an SMB client reaches \\SOFS\Share1 and \\SOFS\Share2 on File Server 1 and File Server 2; the shares sit on Space1 and Space2, built with Storage Spaces over five disks of shared SAS storage.]
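The rebalancing is automatic, but it can be observed and, if needed, overridden through the witness cmdlets (the client and node names below are illustrative):

```powershell
# Show which cluster node each SMB client is currently directed to
Get-SmbWitnessClient

# Manually move a client's connections to another node
# (normally unnecessary, since R2 rebalances automatically)
Move-SmbWitnessClient -ClientName "HV1" -DestinationNode "FS2"
```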
SMB Bandwidth Limits
• SMB traffic is divided into three pre-defined categories: Default, VirtualMachine, and LiveMigration
• Set a bandwidth limit per category, using PowerShell cmdlets or WMI:
  • Get-SmbBandwidthLimit [–Category X]
  • Set-SmbBandwidthLimit –Category X –BytesPerSecond Y
  • Remove-SmbBandwidthLimit –Category X
• Addresses the new scenario where you can use Live Migration over SMB and RDMA
• Matching performance counters let you observe traffic per category
[Diagram: two Hyper-V hosts use a file server cluster for live VHD storage and a file server for library storage; Live Migration traffic is capped at 500MB/s, Default at 100MB/s, and Virtual Machine traffic is unlimited.]
Set-SmbBandwidthLimit –Category Default –BytesPerSecond 100MB
Set-SmbBandwidthLimit –Category LiveMigration –BytesPerSecond 500MB
SMB Delegation
• Simplified cmdlets for enabling delegated administration of file servers
• Useful in certain Hyper-V over SMB scenarios
• Configured per SMB client/server pair:
  • Get-SmbDelegation –SmbServer X
  • Enable-SmbDelegation –SmbServer X –SmbClient Y
  • Disable-SmbDelegation –SmbServer X [–SmbClient Y]
• Does not require Domain Administrator rights
• Requires the 2012 forest functional level, since it leverages resource-based Kerberos constrained delegation
[Diagram: a management client (CL1) configures delegation against domain controller DC1 so that Hyper-V hosts HV1 and HV2, running VMs stored on file server FS1, can be administered remotely.]
Enable-SmbDelegation –SmbServer FS1 –SmbClient HV1
Enable-SmbDelegation –SmbServer FS1 –SmbClient HV2
NFS Server
• File sharing for non-Windows platforms: interop with major UNIX/Linux clients and VMware
• Shared access in heterogeneous environments: the same share can be accessed via both SMB and NFS
• Improved ID mapping: different methods, including Kerberos support, for better security and ease of management
• Supports multiple versions: NFSv2, NFSv3, NFSv4.1
• NFS v3: Continuous Availability for VMware
• NFS v4.1: high availability for client workloads
[Diagram: UNIX/Linux NFS clients (Linux, Solaris, Mac OS) access a Windows File Server Cluster over NFS v2/v3/v4.1.]
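Provisioning an NFS share for, say, a VMware host can be sketched with the NFS cmdlets (share, path, and host names are illustrative):

```powershell
# Create an NFS share with UNIX-style (AUTH_SYS) authentication
New-NfsShare -Name "VMware1" -Path "D:\Shares\VMware1" `
    -Authentication Sys -EnableUnmappedAccess $true

# Grant a specific ESXi host read/write access, including root access
Grant-NfsSharePermission -Name "VMware1" -ClientName "esx01" `
    -ClientType Host -Permission ReadWrite -AllowRootAccess $true
```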
iSCSI Target: Enhancements through the GUI
New in Windows Server 2012 R2:
• New virtual disks are VHDX-formatted
• Provision virtual disks with sizes up to 64TB
• Secure zeroing-out on allocation of fixed virtual disks
• Dynamically expanding virtual disks
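The same operations are scriptable; creating a VHDX-backed LUN and exposing it to an initiator can be sketched as follows (paths, target name, and the initiator IQN are illustrative):

```powershell
# New virtual disks are VHDX-formatted in R2; sizes up to 64TB are supported
New-IscsiVirtualDisk -Path "D:\iSCSI\LUN1.vhdx" -SizeBytes 10TB

# Create a target restricted to one initiator, then map the LUN to it
New-IscsiServerTarget -TargetName "Target1" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.contoso.com"
Add-IscsiVirtualDiskTargetMapping -TargetName "Target1" -Path "D:\iSCSI\LUN1.vhdx"
```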
iSCSI SMI-S Provider for SCVMM
• The iSCSI Target SMI-S provider enables end-to-end storage automation for an SCVMM-managed private or hosted cloud
• Easy to install: simply install the iSCSI Target role service
• Designed for active-active iSCSI target clusters (the previous version was limited to active-passive clusters)
• Asynchronous job management via SCVMM
• Scenarios enabled:
  • Discover SCSI targets and their properties
  • Discover SCSI LUs and their properties
  • Create or delete SCSI targets
  • Create or delete SCSI LUs
  • Add capacity to a Hyper-V cluster
  • List, create, and delete LU snapshots
  • Mask/unmask LUs to a SCSI target
[Diagram: the SCVMM server talks to the SMI-S provider on an iSCSI Target Server cluster, which fronts the iSCSI service consumed by a Hyper-V host.]
Cluster in a Box: Volume Platform for Availability
Looking back to Windows Server 2012:
• “Continuous Availability” was identified by customers as critical, and major investments were made throughout Windows Server
• Problem: high-availability server hardware is costly and too difficult to buy, install, and manage
• Solution: the “Cluster in a Box” (CiB) program, in which Microsoft works with hardware partners to enable their release of new highly-available systems for a volume market
Customer Demand for Continuous Availability
Customer Focused Design (CFD) and areas of investigation:
• 2+ years of research across 6,000+ customers
• 200 Customer Focused Design sessions
• 22 areas of investigation
• 900+ customer-prioritized buckets
• 6,000+ Voice of the Customer statements
Continuous availability of the OS, applications, and data was ranked by customers worldwide (US, Germany, and Japan) as a must-have feature.
Extending the Market for Continuous Availability
New systems expand choices of cost and features: CiB extends the existing market downward in cost and required IT expertise.
Target markets for Cluster in a Box solutions: SMB, branch offices, and private cloud.
• Cluster in a Box (IT generalist, PB 1–3): high availability, simple out-of-box experience, Storage Spaces configurations, hardware RAID capability, SSD performance capability; single-node servers with optional JBOD storage expansion
• Scale-out server cluster (IT specialist, PB 4+, higher cost and feature set): extended scale-out, advanced replication, advanced power management, advanced performance options, and more; storage arrays (iSCSI/FC, SMB/NFS)
Cluster in a Box Design Considerations
• Availability: at least one node and its storage always available, despite failure or replacement of any component; dual power domains
• Simplicity: pre-wired internal interconnects between nodes, controllers, and storage
• Flexibility: PCIe slots for flexible LAN options; external SAS ports for JBOD expansion; office-level power, cooling, and acoustics to fit under a desk
Cluster in a Box: What Is Inside?
Design Example (with Direct-Attached SAS)
[Diagram: a server enclosure contains Server A and Server B, each with a CPU, a storage controller on x8 PCIe, and a SAS expander; the two servers cross-connect to both expanders over x4 SAS through the midplane, with a 1/10G Ethernet cluster connect (also through the midplane) and external 1/10G Ethernet or InfiniBand uplinks. Each expander’s A/B ports extend over x4 SAS to additional external JBODs, each with its own pair of SAS expanders serving the drive slots.]
Configuring for End-to-End Storage Performance
Balancing performance end-to-end: be aware of system bottlenecks, and look for balanced IOPS as well as bandwidth.
[Diagram: the same two-server enclosure with external JBOD as in the preceding design example, annotated with the per-link bandwidths below.]

Example* | NIC        | PCIe         | SAS     | Drives      | NIC GB/s | PCIe GB/s | SAS GB/s | Drive GB/s
1        | 2x 1GbE    | x8 PCIe 2.0  | x4 6Gb  | 12 7.2K RPM | 0.23     | 3.4       | 2.3      | 1.5
2        | 2x 10GbE   | x8 PCIe 3.0  | x8 6Gb  | 24 7.2K RPM | 2.3      | 6.8       | 4.7      | 3
3        | 2x 40GbE   | x8 PCIe 3.0  | x8 6Gb  | 24 6Gb SSD  | 9.2      | 6.8       | 4.7      | 12
4        | 2x 56Gb IB | x16 PCIe 3.0 | x16 6Gb | 24 6Gb SSD  | 13       | 13.6      | 9.3      | 12

*Example components (per system); these are illustrations only, not recommendations.
Cluster in a Box as a Storage Building Block: Scaling storage capacity and connectivity
• Scale vertically: add internal storage, add JBOD expansion, add RDMA NICs
• Scale horizontally: expand capacity with additional clusters; scale out with an asymmetrical cluster
• Live Migration: Hyper-V storage
[Diagram: Hyper-V servers live-migrate storage across multiple CiB units, each scaled vertically with JBOD expansion.]

Cluster in a Box as a Hyper-V Server Building Block: Scaling guest VM capacity
• Scale vertically: add internal storage, add JBOD expansion, add CPU/memory
• Scale horizontally: expand capacity with additional clusters; scale out with an asymmetrical cluster
• Live Migration: Hyper-V guests and Hyper-V storage
• Disaster recovery: Hyper-V Replication
[Diagram: guests and storage live-migrate, and guests replicate, across CiB units with JBOD expansion.]
CiB: Key Scenarios
• New configurations using Parity Spaces: disk utilization can increase from 50% to 87% (mirrored Space vs. 8-column Parity Space, single-drive redundancy)
• Two baseline configurations are recommended for evaluation:
  • All HDDs: 12 drives, single pool, two 8-column Parity Spaces; or 24 drives, single pool, four 8-column Parity Spaces
  • HDDs + SSDs: 8 HDDs + 4 SSDs, single pool, two 8-column Parity Spaces; or 20 HDDs + 4 SSDs, single pool, four 8-column Parity Spaces
• SMB3 File Server: performance evaluation
• Scale-Out File Server: VDI server using Dedup and CSV
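An 8-column parity Space like the ones in these baseline configurations can be requested explicitly (pool name and size are illustrative; 8 columns with single parity gives the 7/8 ≈ 87% utilization cited above):

```powershell
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "Parity8" `
    -ResiliencySettingName Parity `
    -NumberOfColumns 8 `
    -PhysicalDiskRedundancy 1 `
    -Size 10TB
```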
Q&A
Thank You!