Washington DC Area Technical Update
OpenVMS Update
March 28, 2001
Brian Allison (brian.allison1@compaq.com)


Page 1

Washington DC Area Technical Update

OpenVMS Update

March 28, 2001

Brian Allison
brian.allison1@compaq.com

Page 2

Discussion Topics

FC basics

HBVS & DRM

SANs

Fibre Channel Tape Support

SCSI/Fibre Channel Fast Path

FC 2001 Plans

FC Futures

Page 3

Fibre Channel

ANSI standard network and storage interconnect

– OpenVMS, and most others, use it for SCSI storage

1.06 gigabit/sec, full-duplex, serial interconnect

– 2 Gb in late 2001; 10 Gb over the next several years

Long distance

– OpenVMS supports 500 m multi-mode fiber and 100 km single-mode fiber

– Longer distances with inter-switch ATM links, if DRM is used

Large scale

– Switches provide connectivity and bandwidth aggregation to support hundreds of nodes
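As a quick sanity check on the 1.06 gigabit figure (assuming the standard Fibre Channel 8b/10b line encoding, which the slide does not spell out), the usable payload rate works out to roughly 100 MB/s each way:

\[ 1.0625~\text{Gbaud} \times \tfrac{8}{10} = 0.85~\text{Gbit/s} \approx 106~\text{MB/s per direction} \]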

Page 4

Topologies

Arbitrated loop FC-AL (NT/UNIX today)

– Uses Hubs (or new switch hubs)

– Maximum number of nodes is fixed at 126

– Shared bandwidth

Switched (SAN - VMS / UNIX / NT)

– Highly scalable

– Multiple concurrent communications

– Switch can connect other interconnect types

Page 5

Fibre Channel Link Technologies

Multi-mode fiber

– 62.5 micron, 200 m

– 50 micron, 500 m (widely used)

Single-mode fiber for Inter-Switch Links (ISLs)

– 9 micron, 100 km

DRM supports ISL gateway

– T1/E1, T3/E3 or ATM/OC3

DRM also supports ISLs with Wavelength Division Multiplexers (WDM) and Dense Wavelength Division Multiplexing (DWDM)

Page 6

Current Configurations

Up to twenty switches (8- or 16-port) per FC fabric

AlphaServer 800, 1000A*, 1200, 4100, 4000, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS80, GS140, GS160 & GS320

Maximum adapters per host determined by platform type: 2, 4, 8, or 26

Multipath support: no single point of failure

100 km max length

* The AS1000A does not have console support for FC.

Page 7

Long-Distance Storage Interconnect

FC is the first long-distance storage interconnect

– New possibilities for disaster tolerance

Host-Based Volume Shadowing (HBVS)

Data Replication Manager (DRM)

Page 8

A Multi-site FC Cluster

[Diagram: two sites, each with FC hosts and HSG controller pairs behind an FC switch; inter-site FC links carry host-to-host cluster communication; 100 km max]

Page 9

HBVS: Multi-site FC Clusters (Q4 2000)

[Diagram: two sites, each with Alpha hosts and HSG controller pairs behind FC switches, joined by inter-site FC links (100 km); host-to-host cluster traffic runs over CI, DSSI, Memory Channel, FDDI, or Gigabit Ethernet, with FDDI bridged by GIGAswitch over T3 or ATM; disks at the two sites form a host-based shadow set]

Page 10

HBVS Multi-site FC Pro and Con

Pro

– High performance, low latency

– Symmetric access

– Fast failover

Con

– ATM bridges not supported until some time in late 2001

– Full shadow copies and merges are required today (HSG write logging will address this after V7.3)

– More CPU overhead
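To make the host-based approach concrete, here is a minimal DCL sketch of mounting a shadow set with one member at each site (the virtual unit number, device names, and volume label are illustrative, not from the slides):

$ ! Shadow set virtual unit DSA42: over one FC disk at each site
$ MOUNT/SYSTEM DSA42: /SHADOW=($1$DGA101:,$1$DGA201:) DATA_VOL

Every write is issued to both members across the fabric, which is why a returning member currently costs a full shadow copy; that is the limitation the write-logging item above addresses.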

Page 11

DRM Configuration

[Diagram: two sites, each with FC hosts and HSG controller pairs behind FC switches; the remote site's hosts are cold stand-by nodes; host-to-host cluster communication links the sites; 100 km max]

Page 12

DRM Configuration

[Diagram: two sites with Alpha hosts and HSG controller pairs behind FC switches; host-to-host traffic runs over LAN/CI/DSSI/MC; an inter-site FC link (100 km single-mode) carries a controller-based remote copy set; the remote Alphas are cold stand-by nodes]

Page 13

DRM Pro and Con

Pro

– High performance, low latency

– No shadow merges

– Supported now, and enhancements are planned

Con

– Asymmetric access

– Cold standby

– Manual failover; 15 min. is typical

Page 14

Storage Area Networks (SAN)

Fibre Channel, switches, and HSG together offer SAN capabilities

– First components of Compaq's ENSA vision

Supports non-cooperating heterogeneous and homogeneous operating systems, and multiple OS/cluster instances, through:

– Switch zoning
Controls which FC nodes can see each other
Not required by OpenVMS

– Selective Storage Presentation (SSP)
HSG controls which FC hosts can access a storage unit
Uses an HSG access ID command
More interoperability, with support for transparent failover
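To illustrate SSP, a hedged sketch at the HSG console using the access-path form of the command (the connection names and unit number are illustrative, and exact syntax varies by ACS version; the slide itself only says "an HSG access ID command"):

HSG80> SET D101 DISABLE_ACCESS_PATH=ALL          ! start with unit D101 hidden from all hosts
HSG80> SET D101 ENABLE_ACCESS_PATH=(VMS_A,VMS_B) ! present D101 only to these two host connections

Switch zoning can then be layered on top, but as noted above OpenVMS does not require it.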

Page 15

Zoning, SSP, and Switch-to-Switch Fabrics

[Diagram: four systems (Sys1-Sys4) attached through four interconnected switches to one HSG; Sys1 and Sys2 are in Zone A, Sys3 and Sys4 in Zone B]

The HSG ensures that Sys1 and Sys2 get one disk, and Sys3 and Sys4 get the other.

Page 16

Cascaded SAN

8 Switch Cascaded

8x2 Switch Cascaded - 2 Fabrics

Well suited for applications where the majority of data access is local (e.g., multiple departmentals)

Scales easily for additional connectivity

Supports from 2 to 20 switches (~200 ports)

Supports centralized management and backup

Server/storage switch connectivity is optimized for higher performance

Design could be used for centralized or distributed access, provided that traffic patterns are well understood and factored into the design

Supports multiple fabrics for higher availability

Page 17

Meshed SAN

8 Switch Meshed

8x2 Switch Meshed - 2 Fabrics

Provides higher availability since all switches are interconnected; the topology provides multiple paths between switches in case of a link failure

Ideal for situations where data access is a mix of local and distributed requirements

Scales easily

Supports centralized management and backup

Supports from 2 to 20 switches

Supports multiple fabrics for higher availability

Page 18

Ring SAN

8 Switch Ring

8x2 Switch Ring - 2 Fabrics

Provides at least two paths to any given switch

Well suited for applications where data access is localized, yet provides the benefits of SAN integration to the whole organization

Scaling is easy, logical, and economical

Modular design

Centralized management and backup

Non-disruptive expansion

Supports from 2 to 14 switches, and multiple fabrics

Page 19

Skinny Tree Backbone SAN

10 Switch Skinny Tree

10x2 Switch Skinny Tree 2 Fabrics

Highest fabric performance

Best for "many-to-many" connectivity and evenly distributed bandwidth throughout the fabric

Offers maximum flexibility for implementing mixed access types (local, distributed, centralized)

Supports centralized management and backup

Can be implemented across wide areas with inter-switch distances up to 10 km

Can be implemented with different availability levels, including multiple fabrics

Can be an upgrade path from other designs

Supports 2 to 20 switches

Page 20

Fibre Channel Tape Support (V7.3)

Modular Data Router (FireFox)

– Fibre Channel to parallel SCSI bridge

– Connects to a single Fibre Channel port on a switch

Multi-host, but not multi-path

Can be served to the cluster via TMSCP

Supported as a native VMS tape device by COPY, BACKUP, etc.

ABS, MRU, SLS support is planned
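Once the MDR presents a drive, it behaves like any other VMS tape from DCL; a short sketch (the $2$MGA0: name is illustrative, following the naming scheme on page 23):

$ INITIALIZE $2$MGA0: TAPE01           ! write a volume label
$ MOUNT/FOREIGN $2$MGA0:               ! BACKUP needs a foreign mount
$ BACKUP/LOG DKA100:[USER...] $2$MGA0:NIGHTLY.BCK/SAVE_SET
$ DISMOUNT $2$MGA0: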

Page 21

Fibre Channel Tape Pictorial

[Diagram: several OpenVMS Alpha hosts on an FC switch, which also connects a RAID array disk controller and the MDR (FireFox); the MDR bridges to a SCSI tape library; OpenVMS Alpha or VAX systems without FC reach the tape over cluster host-to-host (TMSCP-served) traffic]

Page 22

Fibre Channel Tape Support (V7.3)

Planned device support

– DLT 35/70

– TL891

– TL895

– ESL 9326D

– SSL2020 (AIT drives 40/80)

– New libraries with DLT8000 drives

Page 23

Fibre Channel Tape Device Naming

WWID uniquely identifies the device

WWID-based device name

– SCSI mode page 83 or 80

– $2$MGAn, where n is assigned sequentially

Remembered in SYS$DEVICES.DAT

Coordinated cluster-wide

– Multiple system disks and SYS$DEVICES.DAT allowed
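A quick way to inspect the resulting names on a running system (standard DCL; the exact fields shown for FC tapes are an assumption, so check your version's display):

$ SHOW DEVICE MG                 ! list MG (FC tape) devices and their status
$ SHOW DEVICE/FULL $2$MGA0:      ! full display; the WWID-derived identity appears here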

Page 24

SCSI/Fibre Channel "Fast Path" (V7.3)

Improves I/O scaling on SMP platforms

– Moves I/O processing off the primary CPU

– Reduces “hold time” of IOLOCK8

– Streamlines the normal I/O path

– Pre-allocated "resource bundles"

Round-robin CPU assignment of fast-path ports

– CI, Fibre (KGPSA), parallel SCSI (KZPBA)

Explicit controls available

– SET DEVICE/PREFERRED_CPU

– SYSGEN parameters FAST_PATH and FAST_PATH_PORTS
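A brief DCL sketch of these controls (the CPU number and the FGA0: port name are illustrative; the /PREFERRED_CPU spelling follows the slide, so check HELP SET DEVICE on your version for the exact qualifier):

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW FAST_PATH               ! 1 = fast path enabled
SYSGEN> SHOW FAST_PATH_PORTS         ! bitmask selecting which port types use fast path
SYSGEN> EXIT
$ SET DEVICE/PREFERRED_CPU=2 FGA0:   ! steer this KGPSA port's I/O work to CPU 2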

Page 25

Fibre Channel 2001 Plans

Multipath Failover to Served Paths

– Current implementation supports failover amongst direct paths

– High-availability FC clusters want to be able to fail over to a served path when FC fails

– Served path failover planned for V7.3-1 in late 2001
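For comparison, the direct-path failover that ships today can already be inspected and steered from DCL; a sketch under the assumption that the standard multipath qualifiers apply (device and path names are illustrative):

$ SHOW DEVICE/FULL $1$DGA101:    ! lists the current and alternate paths to the unit
$ SET DEVICE $1$DGA101: /SWITCH/PATH=PGB0.5000-1FE1-0000-0D04   ! manually move I/O to another direct path

Served-path failover extends this picture: when every direct FC path fails, I/O falls back to an MSCP-served path through another cluster member.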

Page 26

Fibre Channel 2001 Plans

Expanded configurations

– Greater than 20 switches per fabric

– ATM Links

– Larger DRM configurations

Page 27

Fibre Channel 2001 Plans

HSG write logging

– Mid/Late 2001

– Requires ACS 8.7

Page 28

Fibre Channel 2001 Plans

2Gb Links

– End to end upgrade during 2001

– LP9002 (2Gb PCI adapter)

– Pleiades 4 switch (16 2Gb ports)

– HSVxxx (2Gb storage controller)
  2Gb links to FC drives

Page 29

Fibre Channel 2001 Plans

HSV Storage Controller

– Follow-on to HSG80/60

– Creates virtual volumes from physical storage

– ~2x HSG80 performance

– 248 physical FC drives (9 TB)
  Dual-ported 15k rpm drives

– 2Gb interface to the fabric

– 2Gb interface to drives

– Early Ship program Q3 2001

Page 30

Fibre Channel 2001 Plans

SAN Management Appliance

– NT-based web server

– Browser interface to SAN switches, HSG60/80, HSV, and all future SAN-based storage

– Host based CLI interface also planned

Page 31

Fibre Channel Futures????

Low Cost Clusters

– Low cost FC adapter

– FC-AL switches

– Low-end storage arrays

Native FC tapes

Cluster traffic over FC

Dynamic path balancing

Dynamic volume expansion

SMP distributed interrupts

Multipath Tape Support

IP over FC

Page 32

Potential Backports

Fibre Channel Tapes

MSCP Multipath Failover

No plans to backport SCSI or FC Fast Path

Page 33

Fibre is good for you!