HFS to zFS Migration


Page 1: HFS to zFS Migration

ibm.com

International Technical Support Organization

zFS Enhancements, z/OS V1R7

zFS - 1

Page 2: HFS to zFS Migration

Trademarks

The following terms are trademarks of the IBM Corporation:

AD/Cycle, ADSTAR, AFP, APL2, APPN, BookManager, BookMaster, C/370, CallPath, CICS, CICS/ESA, CICS/MVS, CICSPlex, COBOL/370, DataPropagator, DB2, DB2 Universal Database, DFSMS, DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm, DFSMStvs, DFSORT, DisplayWrite, eNetwork, Enterprise System 3090, Enterprise System 4381, Enterprise System 9000, ES/3090, ES/4381, ES/9000, ESA/390, ESCON, FFST, First Failure Support Technology, FlowMark, GDDM, geoManager, IBM, ImagePlus, IMS, IMS/ESA, Intelligent Miner, IP PrintWay, IPDS, Language Environment, MQSeries, Multiprise, MVS/ESA, NetSpool, Network Station, OfficeVision/MVS, Open Class, OpenEdition, OS/2, OS/390, Parallel Sysplex, PR/SM, Print Services Facility, PrintWay, ProductPac, QMF, RACF, RMF, RS/6000, S/390, S/390 Parallel Enterprise Server, SecureWay, SOM, SOMobjects, SP, StorWatch, Sysplex Timer, System/390, SystemView, VisualAge, VisualGen, VisualLift, VTAM, WebSphere, z/OS, z/OS.e, 3090, 3890/XP

The following terms are trademarks of other companies:

Domino, Lotus (Lotus Development Corporation); DFS (Transarc Corporation); Java (Sun Microsystems, Inc.); Tivoli, Tivoli Management Framework, Tivoli Manager (Tivoli Systems Inc.); UNIX (X/Open Company Limited); Windows, Windows NT (Microsoft Corporation)

zFS - 2

Page 3: HFS to zFS Migration

UNIX System Services

[Diagram: z/OS UNIX programs (ASM/C/C++), the POSIX API / C RTL, REXX, and shell commands through the interactive interface call the z/OS UNIX Callable Services Interfaces. Requests pass through the Logical File System and the z/OS UNIX PFS interface to the physical file systems (for example, the HFS PFS), all managed by the kernel.]

zFS - 3

Page 4: HFS to zFS Migration

Physical File Systems

[Diagram: beneath the z/OS UNIX Callable Services Interfaces, the Logical File System and the z/OS UNIX PFS interface connect the kernel to the physical file systems, which handle read, write, open, and close requests. The physical file systems include automount, TFS, local sockets, IP sockets, the NFS client, the Hierarchical File System (data sets on HFSVOL volumes), and zFS (data sets on ZFSVOL volumes). Migration copies the file hierarchy from HFS to zFS.]

zFS - 4

Page 5: HFS to zFS Migration

zSeries File System (zFS)

zFS is the strategic UNIX System Services file system for z/OS. The Hierarchical File System (HFS) functionality has been stabilized.

HFS is expected to continue shipping as part of the operating system and will be supported in accordance with the terms of a customer's applicable support agreement.

IBM intends to continue enhancing zFS functionality, including RAS and performance capabilities, in future z/OS releases.

All requirements for UNIX file services are expected to be addressed in the context of zFS only.

zFS - 5

Page 6: HFS to zFS Migration

zFS Migration Considerations - z/OS V1R7

Migrating from HFS file systems to zFS file systems is recommended, because zFS is the strategic file system and is planned to become a requirement in a future release.

HFS may no longer be supported in future releases, and you will then have to migrate the remaining HFS file systems to zFS.

A migration tool is available with z/OS V1R7. The HFS and zFS file system types in mount statements and command operands are now generic file system types that can mean either HFS or zFS.

Based on the data set type, the system determines which is appropriate.

zFS - 6

Page 7: HFS to zFS Migration

HFS/zFS as a Generic File System Type

TYPE(HFS) and TYPE(ZFS) can be used for either HFS or zFS.

Mount processing first searches for a data set matching the file system name.

If the data set is found but is not an HFS data set, and zFS has been started, the file system type is changed to ZFS and the mount proceeds to zFS.

If the data set was not found, the mount proceeds to zFS (assuming zFS is started).

If zFS is not started, the type is not changed and the mount proceeds to HFS.
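The decision rules above can be modeled in a few lines. This is an illustrative sketch of the documented behavior for TYPE(HFS), not IBM code; the function and parameter names are invented:

```python
def resolve_mount_type(requested, dataset_found, dataset_is_hfs, zfs_started):
    """Model of generic HFS/zFS mount-type resolution (illustrative sketch).

    requested: the TYPE() value coded on the MOUNT statement.
    """
    if requested.upper() != "HFS":
        return requested.upper()
    if dataset_found:
        if not dataset_is_hfs and zfs_started:
            return "ZFS"      # data set exists but is not HFS: switch the type to ZFS
        return "HFS"
    # data set not found: try zFS if it is started, otherwise stay with HFS
    return "ZFS" if zfs_started else "HFS"

# A non-HFS data set with zFS started resolves to ZFS even though TYPE(HFS) was coded:
print(resolve_mount_type("HFS", dataset_found=True, dataset_is_hfs=False, zfs_started=True))  # → ZFS
```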

zFS - 7

Page 8: HFS to zFS Migration

Migration Considerations

Often the file system type is used in a portion of the file system name.

With the change to a file system type of zFS, parmlib members, mount scripts, and policies are affected, as well as the file system name.

You may prefer to keep the current file system name, so that no changes to mount scripts are needed. The migration tool helps with handling these changes.

zFS file systems that span volumes must be preallocated prior to invoking the tool. zFS file systems that are greater than 4 GB must be defined with a data class that includes extended addressability.

zFS file systems cannot be striped.

zFS - 8

Page 9: HFS to zFS Migration

Migration Tool

z/OS V1R7 provides a migration tool that handles many details needed for the migration of HFS file systems to zFS, including the actual copy operation.

Migration tool - BPXWH2Z

An ISPF-based tool for migration from an HFS file system to zFS.

It has a panel interface that enables you to alter the space allocation, placement, SMS classes, and data set names.

A HELP panel is provided.

It uses the new pax utility for the copy.

zFS - 9

Page 10: HFS to zFS Migration

Migration Checks the File System Type

A new placeholder, ///, represents the string HFS or ZFS in a file system name.

If the type is HFS, as in the following mount statement or mount command, mount processing first substitutes ZFS to see if that data set exists. If it does not, HFS is substituted to see if the data set exists, and the mount is directed to zFS or HFS accordingly.

MOUNT FILESYSTEM('ZOSR17.MAN.///') TYPE(HFS) MODE(READ) MOUNTPOINT('/usr/man')

In a migration scenario, mount processing first checks for ZFS; if that data set exists, the migration has been done.

To revert to using HFS, that data set must be renamed or removed.
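The /// placeholder substitution amounts to trying the ZFS name first, then the HFS name. A small sketch (the helper name is hypothetical, not the actual mount code):

```python
def candidate_data_sets(fs_name):
    """Return the data set names to try, in order, for a /// placeholder name."""
    if "///" in fs_name:
        # ZFS is substituted first; HFS is the fallback
        return [fs_name.replace("///", "ZFS"), fs_name.replace("///", "HFS")]
    return [fs_name]

print(candidate_data_sets("ZOSR17.MAN.///"))  # → ['ZOSR17.MAN.ZFS', 'ZOSR17.MAN.HFS']
```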

zFS - 10

Page 11: HFS to zFS Migration

Automount Migration Considerations

name *
type ZFS
filesystem OMVS.<uc_name>.///
mode rdwr
duration 1440
delay 360

The file system name can contain substitution strings (V1R7). Data set status is determined after substitution is performed, substituting ZFS or HFS for any /// in the name.

zFS - 11

Page 12: HFS to zFS Migration

REXX EXEC - BPXWH2Z

The REXX exec is located in library SYS1.SBPXEXEC, which is concatenated to SYSPROC. From the ISPF Option 6 command line, enter:

bpxwh2z

Command-line flag options:

-v  Additional messages are issued at various points in processing
-c  Summary information goes to a file when background (bg) is selected once you are in the application. If this is not specified, summary information goes to the console

Examples: bpxwh2z -c, bpxwh2z -v, bpxwh2z -cv

zFS - 12

Page 13: HFS to zFS Migration

BPXWH2Z Panels

The panels allow customization of space allocation, placement, SMS classes, and data set names. The tool makes the zFS a little larger if it sees that the HFS is near capacity: by 10% if it is over 75% full and under 90%, and by 20% if it is over 90%. This is a precaution, because zFS is a logging file system and includes the log in the aggregate.

If the HFS is unmounted, the tool creates a mount point in /tmp, mounts the HFS read-only for the migration, and then deletes the mount point. If there are file systems mounted below the migrating file system, they are unmounted for the migration.

The mount points are created in the new file system, and when the migration is complete the file systems are re-mounted.
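The over-allocation rule can be written down directly. This is a sketch of the stated 10%/20% heuristic only; the exact rounding and the behavior at exactly 90% full are assumptions:

```python
def initial_zfs_size(hfs_blocks, pct_full):
    """Grow the target zFS relative to the source HFS when the HFS is near capacity."""
    if pct_full > 90:
        return int(hfs_blocks * 1.20)   # over 90% full: add 20%
    if pct_full > 75:
        return int(hfs_blocks * 1.10)   # over 75% and under 90% full: add 10%
    return hfs_blocks                   # plenty of free space: same size

print(initial_zfs_size(2832, 62))  # the slide's example HFS is 62% utilized → 2832 (unchanged)
```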

zFS - 13

Page 14: HFS to zFS Migration

BPXWH2Z Panels

By default, the panels are primed such that your HFS data set is renamed with a .SAV suffix and the zFS data set is renamed to the original HFS name. You can preallocate the zFS file system, or specify a character substitution string so that the data sets are not renamed. The substitution string is specified as the first argument on the command line as follows:

/fromstring/tostring/ datasetname

For example, if your data set is OMVS.DB2.HFS and you want your zFS data set to be named OMVS.DB2.ZFS, you can specify the following:

bpxwh2z -cv /hfs/zfs/ omvs.db2.hfs

zFS - 14
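The /fromstring/tostring/ renaming can be sketched as a simple substitution. The helper name is hypothetical, and the real tool's exact matching rules (case handling, multiple matches) are assumptions:

```python
def apply_rename_spec(spec, dsname):
    """Apply a bpxwh2z-style /fromstring/tostring/ substitution to a data set name."""
    parts = spec.split("/")                   # "/hfs/zfs/" → ["", "hfs", "zfs", ""]
    frm, to = parts[1].upper(), parts[2].upper()
    return dsname.upper().replace(frm, to)    # data set names are case-insensitive

print(apply_rename_spec("/hfs/zfs/", "omvs.db2.hfs"))  # → OMVS.DB2.ZFS
```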

Page 15: HFS to zFS Migration

Space Allocations - HFS versus zFS

It is difficult to determine the exact size necessary for a zFS file system to contain all of the data within an HFS file system.

Depending on the number of files, directories, ACLs, symlinks, and file sizes, zFS can consume either more or less space than HFS.

zFS also has a log in the aggregate.

For general-purpose file systems, they appear to consume about the same amount of space.

Two data sets will exist during the migration, so plan for space for two file systems of about the same size.

zFS - 15

Page 16: HFS to zFS Migration

Migration Steps

If the HFS file system is mounted, either unmount it or switch it to read-only mode; the migration tool switches it to R/O.

If the file system is not mounted, make a directory for it and mount it in R/O mode; the migration tool creates a temporary directory in /tmp for the mount point and deletes it when the migration is complete.

Create a temporary directory for the zFS file system; here too, the migration tool creates a temporary directory in /tmp for the mount point and deletes it when the migration is complete.

Check the allocation attributes of the HFS file system.

zFS - 16

Page 17: HFS to zFS Migration

Migration Steps

Define a new zFS file system with the appropriate allocation attributes. Mount the zFS file system on its temporary mount point (a pre-existing zFS cannot already be mounted).

Use the pax utility to copy the HFS contents to the zFS: enter the shell, change directory to the HFS mount point, and run the pax utility. The pax command you can use follows:

pax -rw -peW -XCM . /tmp/zfsmountpoint

If the HFS contains active mount points, do a mkdir for each of these in the zFS file system and set directory attributes as required.

zFS - 17

Page 18: HFS to zFS Migration

Migration Steps

Unmount the zFS file system, then unmount the HFS file system. Remove any temporary directories used as mount points. Rename the data sets as appropriate, and ensure all mount scripts and policies have been updated as needed; the migration tool does not modify or search for any type of mount scripts or policies. If the HFS file system was originally mounted, mount the zFS file system in that same location, along with any hierarchy that also had to be unmounted to unmount the HFS.

zFS - 18

Page 19: HFS to zFS Migration

Using the Migration Tool

From ISPF Option 6, enter:

bpxwh2z

When the first panel is displayed, enter the HFS data set name to migrate, then press Enter.

zFS - 19

Page 20: HFS to zFS Migration

Using SMS if Required

Alter the SMS classes and volume if required, then press Enter to see the allocations.

zFS - 20

Page 21: HFS to zFS Migration

Migrate in Foreground / A - Alter Allocation

Migrating OMVS.ROGERS.TEST
creating zFS OMVS.ROGERS.TEST.TMP
copying OMVS.ROGERS.TEST to OMVS.ROGERS.TEST.TMP
Blocks to copy: 2832
IGD01009I MC ACS GETS CONTROL &ACSENVIR=RENAME
IGD01009I MC ACS GETS CONTROL &ACSENVIR=RENAME
IDC0531I ENTRY OMVS.ROGERS.TEST.TMP ALTERED
IDC0531I ENTRY OMVS.ROGERS.TEST.TMP.DATA ALTERED
mount /u/rogers/tmp OMVS.ROGERS.TEST ZFS 1
***

zFS - 21

Page 22: HFS to zFS Migration

Alter Allocation Parameters

This panel is used to alter the allocations of the zFS data set. It is required if your zFS is going to be multi-volume: the zFS must be preallocated (Y).

zFS - 22

Page 23: HFS to zFS Migration

Migrating a List of Data Sets

List of data sets to migrate:

OMVS.ROGERS.TEST1
OMVS.ROGERS.TEST2
OMVS.ROGERS.TEST3
OMVS.ROGERS.TEST4
OMVS.ROGERS.TEST5
OMVS.ROGERS.TEST6
OMVS.ROGERS.TEST7

zFS - 23

Page 24: HFS to zFS Migration

Data Set List Displayed

--------------------------- DATA SET LIST --------------------------- Row 1 to 3 of 8
Command ===> fg

Use the HELP command for full usage information on this tool
Select items with D to delete items from this migration list
Select items with A to alter allocation parameters for the items
Enter command FG or BG to begin migration in foreground or UNIX background
------------------------------------------------------------------------------
_ HFS data set ..: OMVS.ROGERS.TEST        Utilized: 62%
  Save HFS as ..: OMVS.ROGERS.TEST.SAV
  Initial zFS ..: OMVS.ROGERS.TEST.TMP     Allocated: N
  HFS space Primary : 25   Secondary: 5   Units ..: CYL
  zFS space Primary : 25   Secondary: 5   Units ..: CYL
  Dataclas : HFS   Mgmtclas : HFS   Storclas: OPENMVS   MOUNTED
  Volume : SBOX1F   Vol count: 1
------------------------------------------------------------------------------
_ HFS data set ..: OMVS.ROGERS.TEST1       Utilized: 62%
  Save HFS as ..: OMVS.ROGERS.TEST1.SAV
  Initial zFS ..: OMVS.ROGERS.TEST1.TMP    Allocated: N
  HFS space Primary : 25   Secondary: 5   Units ..: CYL
  zFS space Primary : 25   Secondary: 5   Units ..: CYL
  Dataclas : HFS   Mgmtclas : HFS   Storclas: OPENMVS
  Volume : SBOX1E   Vol count: 1
------------------------------------------------------------------------------
............................. more data sets ..............

zFS - 24

Page 25: HFS to zFS Migration

New pax Functions in z/OS V1R7

All functions and options from previous levels of pax are still valid at this level.

All automated scripts that use pax at the previous level still work without errors.

Any archives created with previous levels of pax can be extracted by the new version without problems.

New option flags for pax: -C, -M, -D

These new options fail with a usage message on an older level of pax.

zFS - 25

Page 26: HFS to zFS Migration

pax Enhancements

Sparse files are files that do not use real disk storage for pages of file data that contain only zeros. Copying files as sparse saves disk space. (Lotus Domino is an example of an application exploiting sparse files to save disk space.)

The ability to skip read errors may allow installations to salvage some files from corrupted file systems.

Preserving all file attributes of a copied file saves installations from having to manually set desired attributes that may not have been preserved previously.

pax can also create mount point directories, and copy all attributes from the source root to the target of the copy after the copy is complete.

zFS - 26

Page 27: HFS to zFS Migration

zfsadm Command Changes for Sysplex

Previously, zfsadm commands (and the pfsctl APIs) could only display or change information that is located or owned on the current member of the sysplex; they could only be issued from the owning system.

When DFSMS is used to back up a zFS aggregate, the aggregate is quiesced by DFSMS before the backup. In order for this quiesce to be successful, it must be issued on the system that owns the aggregate.

zfsadm commands cannot be routed to systems not running at z/OS V1R7.

zFS - 27

Page 28: HFS to zFS Migration

zfsadm config Command Options

zfsadm config [-admin_threads number] [-user_cache_size number[,fixed]]
  [-meta_cache_size number[,fixed]] [-log_cache_size number[,fixed]]
  [-sync_interval number] [-vnode_cache_size number] [-nbs {on|off}]
  [-fsfull threshold,increment] [-aggrfull threshold,increment]
  [-trace_dsn PDSE_dataset_name] [-tran_cache_size number]
  [-msg_output_dsn Seq_dataset_name] [-user_cache_readahead {on|off}]
  [-metaback_cache_size number[,fixed]] [-fsgrow increment,times]
  [-aggrgrow {on|off}] [-allow_dup_fs {on|off}] [-vnode_cache_limit number]
  [-system system_name] [-level] [-help]

New options in z/OS V1R4: -metaback_cache_size, -fsgrow, -aggrgrow, -allow_dup_fs

New option in z/OS V1R7: -system system_name

These options correspond to IOEFSPRM or IOEPRMxx parameters.

zFS - 28

Page 29: HFS to zFS Migration

zfsadm configquery Command Options

zfsadm configquery [-system system_name] [-adm_threads] [-aggrfull] [-aggrgrow]
  [-all] [-allow_dup_fs] [-auto_attach] [-cmd_trace] [-code_page] [-debug_dsn]
  [-fsfull] [-fsgrow] [-group] [-log_cache_size] [-meta_cache_size]
  [-metaback_cache_size] [-msg_input_dsn] [-msg_output_dsn] [-nbs]
  [-storage_details] [-sync_interval] [-sysplex_state] [-trace_dsn]
  [-trace_table_size] [-tran_cache_size] [-user_cache_readahead]
  [-user_cache_size] [-usercancel] [-vnode_cache_size] [-level] [-help]

New in z/OS V1R7: -system, -group, -sysplex_state

$> zfsadm configquery -sysplex_state
IOEZ00317I The value for configuration option -sysplex_state is 1.

zFS - 29

Page 30: HFS to zFS Migration

zfsadm Command Changes for Sysplex

In z/OS V1R7, if you are in a shared file system environment, zfsadm commands and zFS pfsctl APIs generally work across the sysplex. There are two main changes:

The zfsadm commands act globally and report or modify all zFS objects across the sysplex.

They support a new -system option (-system system_name) that limits output to zFS objects on a single system or sends a request to a single system.

zfsadm commands can now be routed to systems running z/OS V1R7.

zFS - 30

Page 31: HFS to zFS Migration

z/OS V1R7 zfsadm Command Changes

New zfsadm configquery command options: -group, -system system_name, -sysplex_state

-sysplex_state displays the sysplex state of zFS. Zero (0) indicates that zFS is not in a shared file system environment; one (1) indicates that zFS is in a shared file system environment.

$> zfsadm configquery -sysplex_state
IOEZ00317I The value for configuration option -sysplex_state is 1.
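An automation script can pick the option name and value out of the IOEZ00317I message text. A sketch, assuming the message format stays as shown above:

```python
import re

_IOEZ00317I = re.compile(
    r"IOEZ00317I The value for configuration option (\S+) is (\S+)\.")

def parse_configquery(line):
    """Return (option, value) from an IOEZ00317I message, or None if it doesn't match."""
    m = _IOEZ00317I.match(line)
    return (m.group(1), m.group(2)) if m else None

opt, val = parse_configquery(
    "IOEZ00317I The value for configuration option -sysplex_state is 1.")
print(opt, val)  # → -sysplex_state 1
```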

zFS - 31

Page 32: HFS to zFS Migration

Configuration Options - z/OS V1R7

-system system_name specifies the name of the system the report request is sent to, to retrieve the data requested.

-group displays the XCF group used for communication between members. Default value: IOEZFS. Expected value: 1 to 8 characters. Example group: IOEZFS1.

PAUL @ SC65:/>zfsadm configquery -group
IOEZ00317I The value for configuration option -group is IOEZFS.

PAUL @ SC65:/>zfsadm aggrinfo -system sc70
IOEZ00368I A total of 1 aggregates are attached to system SC70.
ROGERS.HARRY.ZFS (R/W COMP): 3438 K free out of total 3600

zFS - 32

Page 33: HFS to zFS Migration

zFS Command Forwarding Support

z/OS V1R7 adds the ability to issue zFS file system commands from anywhere within a sysplex using -system system_name. This allows, for example, a quiesce of zFS from any LPAR in the sysplex.

zfsadm commands that go to, and display information on, all members: aggrinfo, clonesys, lsaggr, lsfs

zfsadm commands that go to the correct system automatically: aggrinfo, clone, create, delete, grow, lsfs, lsquota, quiesce, rename, setquota, unquiesce

zfsadm commands that can be limited or directed to a system: aggrinfo, attach, clonesys, config, configquery, define, detach, format, lsaggr, lsfs, query

zFS - 33

Page 34: HFS to zFS Migration

Command Forwarding Support

zfsadm commands now display and work with zFS aggregates and file systems across the sysplex.

This enhancement is exploited by:

DFSMS backup (ADRDSSU) - backups can now be issued from any member of the sysplex when all members are at z/OS V1R7, regardless of which member owns the file system

ISHELL

zFS administrators

zFS - 34

Page 35: HFS to zFS Migration

Special Characters in zFS Aggregates

Previously, you could not have a zFS aggregate or file system name that contained special characters. z/OS V1R7 adds support for the special characters @ # $.

APAR OA08611 provides this support for prior releases:

UA14531 for z/OS V1R4
UA14529 for z/OS V1R5
UA14530 for z/OS V1R6

# zfsadm define -aggr [email protected] -volumes CFC000 -cyl 10
IOEZ00248E VSAM linear data set PLEX.JMS.AGGR#06.LDS0006 successfully created.
# zfsadm format -aggr [email protected] -compat
IOEZ00077I HFS-compatibility aggregate [email protected] has been successfully created

zFS - 35

Page 36: HFS to zFS Migration

zFS Fast Mount - Performance Improvement

Implemented with APAR OA12519.

Coexistence APAR OA11573 for prior releases; apply it to z/OS V1R4, V1R5, V1R6, and V1R7. (Note: apply it to V1R7 before OA12519.)

Resolves the following problem:

Dead system recovery caused zFS file systems to move to another system. A zFS file system had tens of millions of files, and the mount during the move took 30 minutes to complete. z/OS UNIX System Services was unavailable during this time.

zFS - 36

Page 37: HFS to zFS Migration

zFS Fast Mount - Performance Improvement

To support this improvement, the first time a zFS aggregate is mounted on z/OS V1R7 after APAR OA12519 is applied, it is converted to a new on-disk format (called version 1.4), in which additional information for mount processing can be recorded in the aggregate. All existing aggregates are version 1.3 aggregates.

After conversion, subsequent mount processing occurs more quickly.

APAR OA11573 supplies the code to process the new format (version 1.4) for all releases (V1R4 to V1R7).

This only works for compatibility mode aggregates.

zFS - 37

Page 38: HFS to zFS Migration

Fast Mount Considerations

The salvager has a new option, -converttov3, to convert a zFS aggregate that is in version 1.4 format back to version 1.3.

Use it in case the 1.4 aggregate is to be used on a system that does not have APAR OA11573 applied.

ioeagslv -aggregate <name> -recoveronly {-converttov3 | -verifyonly | -salvageonly} -verbose -level -help

IOEAGSLV can be executed from any supported release by STEPLIBing to MIGLIB.

zFS - 38

Page 39: HFS to zFS Migration

New UNQUIESCE Command

Previously, if a zFS aggregate was quiesced and the job failed to unquiesce it, you needed to use the zfsadm unquiesce command from OMVS; it was not possible to unquiesce from the operator console.

z/OS V1R7 provides a new operator command to issue the unquiesce:

F ZFS,UNQUIESCE,[email protected]

This command only works on z/OS V1R7. It must be issued from the owning system; it does not forward requests to other members of the sysplex.

zFS - 39

Page 40: HFS to zFS Migration

zFS EOM Support

Previously, an End of Memory (EOM) condition for an application while executing in zFS could cause zFS to go down, and when zFS goes down, all zFS file systems are unmounted.

Now, End of Memory conditions are recovered by zFS: zFS does not go down, and no zFS file systems are unmounted.

z/OS V1R7 zFS recovers from EOM in all situations; earlier releases do not.

zFS - 40

Page 41: HFS to zFS Migration

zFS EOM Support

zFS End of Memory (EOM) support is invoked by a calling application going to End of Memory while executing in zFS, or by operator action when a job is forced (did not respond to cancel).

Changed external output: zFS does not go down, and zFS file systems do not need to be remounted.

Important: With this EOM support in z/OS V1R7, IBM no longer recommends against using zFS for a root, sysplex root, or version root structure. zFS is now the preferred file system.

zFS - 41

Page 42: HFS to zFS Migration

Mounting File Systems with SET OMVS

The SET OMVS command enables you to modify BPXPRMxx parmlib settings without re-IPLing (V1R7).

Previously, when the SETOMVS RESET=xx command was issued from the console to activate a BPXPRMxx parmlib member, MOUNT statements in the BPXPRMxx member were not executed.

With z/OS V1R7, when the SET OMVS=xx operator command is used from the console, MOUNT commands in the BPXPRMxx parmlib member are executed, making it easier to dynamically mount file systems.

zFS - 42

Page 43: HFS to zFS Migration

Mounting with the SET OMVS Command

z/OS V1R7 changes for mounting file systems:

You can now change the mount configuration from the console using a BPXPRMxx parmlib member.

Information about prior mount or file system move failures can be displayed with the D OMVS command.

You can execute mount commands, including mounting the root, and see console messages indicating the success or failure of the requested mount.

Statements that can be changed using SET OMVS=xx: MOUNT, FILESYSTYPE, SUBFILESYSTYPE, NETWORK, and ROOT.

zFS - 43

Page 44: HFS to zFS Migration

BPXPRMSS Parmlib Member

FILESYSTYPE TYPE(ZFS)          /* Type of file system to start */
  ENTRYPOINT(IOEFSCM)          /* Entry point of load module   */
  ASNAME(ZFS)                  /* Procedure name               */
ROOT FILESYSTEM('OMVS.&SYSNAME..&SYSR1..ROOT.ZFS')  /* z/OS root file system */
  TYPE(ZFS)                    /* File system type ZFS   */
  MODE(RDWR)                   /* Mounted for read/write */
MOUNT FILESYSTEM('OMVS.&SYSNAME..ETC.ZFS')  /* zFS for /etc directory */
  MOUNTPOINT('/etc')
  TYPE(ZFS)                    /* File system type ZFS   */
  MODE(RDWR)                   /* Mounted for read/write */
MOUNT FILESYSTEM('OMVS.&SYSNAME..VAR.ZFS')  /* zFS for /var directory */
  MOUNTPOINT('/var')
  TYPE(ZFS)                    /* File system type ZFS   */
  MODE(RDWR)                   /* Mounted for read/write */

zFS - 44

Page 45: HFS to zFS Migration

Messages from a Shutdown of zFS

P ZFS
IOEZ00541I 729 zFS filesystems owned on this system should be unmounted or moved before stopping zFS. If you do not applications may fail.
*005 IOEZ00542D Are you sure you want to stop zFS? Reply Y or N
IEE600I REPLY TO 005 IS;Y
IOEZ00050I zFS kernel: Stop command received.
IOEZ00048I Detaching aggregate OMVS.TC7.VAR.ZFS
IOEZ00048I Detaching aggregate OMVS.TC7.ETC.ZFS
IOEZ00048I Detaching aggregate OMVS.TC7.TRNRS1.ROOT.ZFS
IOEZ00057I zFS kernel program IOEFSCM is ending
*006 BPXF032D FILESYSTYPE ZFS TERMINATED. REPLY 'R' WHEN READY TO RESTART. REPLY 'I' TO IGNORE.
IEF352I ADDRESS SPACE UNAVAILABLE
$HASP395 ZFS ENDED
$HASP250 ZFS PURGED -- (JOB KEY WAS BE68CA18)
R 6,I
IEE600I REPLY TO 006 IS;I
SET OMVS=SS
IEE252I MEMBER BPXPRMSS FOUND IN SYS1.PARMLIB
BPXO032I THE SET OMVS COMMAND WAS SUCCESSFUL.

zFS - 45

Page 46: HFS to zFS Migration

Messages from a Restart of zFS

$HASP100 ZFS ON STCINRDR
$HASP373 ZFS STARTED
IOEZ00052I zFS kernel: Initializing z/OS zSeries File System Version 01.07.00 Service Level OA12872 - HZFS370. Created on Mon Sep 26 16:19:25 EDT 2005.
IOEZ00178I SYS1.TC7.IOEFSZFS(IOEFSPRM) is the configuration dataset currently in use.
IOEZ00055I zFS kernel: initialization complete.
BPXF013I FILE SYSTEM OMVS.TC7.TRNRS1.ROOT.ZFS 768 WAS SUCCESSFULLY MOUNTED.
BPXF013I FILE SYSTEM OMVS.TC7.ETC.ZFS 769 WAS SUCCESSFULLY MOUNTED.
BPXF013I FILE SYSTEM OMVS.TC7.VAR.ZFS 770 WAS SUCCESSFULLY MOUNTED.

zFS - 46

Page 47: HFS to zFS Migration

New Command Options

D OMVS,MF - displays the last 10 (or fewer) mount and move failures

D OMVS,MF=A (or D OMVS,MF=ALL) - displays up to 50 failures

D OMVS,MF=P (or D OMVS,MF=PURGE) - deletes the failure information log

zFS - 47

Page 48: HFS to zFS Migration

Mount Error Command and Messages

D OMVS,MF
BPXO058I 16.05.10 DISPLAY OMVS 356
OMVS     0011 ACTIVE     OMVS=(7B)
SHORT LIST OF FAILURES:
TIME=16.02.09  DATE=2005/08/16
MOUNT  RC=007A  RSN=052C018F
  NAME=HAIMO.ZFS  TYPE=ZFS
  PATH=/SC70/tmp/bpxwh2z.HAIMO.16:01:56.zfs
  PARM=AGGRGROW
TIME=10.57.24  DATE=2005/08/15
MOUNT  RC=0079  RSN=5B220125
  NAME=OMVS.TCPIPMVS.HFS  TYPE=HFS
  PATH=/u/tcpipmvs

zFS - 48

Page 49: HFS to zFS Migration

Mount Error Commands and Messages

D OMVS,MF=A
BPXO058I 13.26.37 DISPLAY OMVS 974
OMVS     0011 ACTIVE     OMVS=(TT)
LAST PURGE:  TIME=08.49.43  DATE=2005/05/11
ENTIRE LIST OF FAILURES:
TIME=09.36.41  DATE=2005/05/13
MOUNT  RC=0081  RSN=EF096055
  NAME=HFS.TEST.MOUNT4  TYPE=ZFS
  PATH=/TEST/TSTMOUNTZ
  PLIB=BPXPRMTT
(...up to 50 failures)

D OMVS,MF=P
BPXO058I 13.30.25 DISPLAY OMVS 010
OMVS     0011 ACTIVE     OMVS=(TT)
PURGE COMPLETE:  TIME=13.30.25  DATE=2005/05/13

D OMVS,MF
BPXO058I 13.42.45 DISPLAY OMVS 041
OMVS     0011 ACTIVE     OMVS=(TT)
LAST PURGE:  TIME=13.30.25  DATE=2005/05/13
NO MOUNT OR MOVE FAILURES TO DISPLAY

zFS - 49

Page 50: HFS to zFS Migration

zFS Performance Tuning

zFS performance features:

The IOEFSPRM file provides tuning options.

Output of the operator modify query commands provides feedback about the operation of zFS:

f zfs,query,all  or  zfsadm query ...

zFS performance is optimized through cache sizes, which reduce I/O rates and path length.

Cache hit ratios should be 80% or better; over 90% gives good performance. Also monitor DASD performance. (Lotus Domino seems to perform well even if the read hit ratio is below 80%.)

zFS - 50
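The hit-ratio thresholds are easy to script against the counters that f zfs,query,all or zfsadm query report. A sketch; the display appears to truncate (not round) the ratio to one decimal, which is an assumption:

```python
def hit_ratio_pct(requests, hits):
    """Cache hit ratio as a percentage, truncated to one decimal place."""
    if requests == 0:
        return 0.0
    return (hits * 1000 // requests) / 10

def cache_health(ratio_pct):
    """Apply the tuning guidance: 80% or better is needed, over 90% is good."""
    if ratio_pct > 90:
        return "good"
    if ratio_pct >= 80:
        return "acceptable"
    return "consider increasing the cache size"

r = hit_ratio_pct(1216679, 1216628)   # counters from the -dircache example on page 56
print(r, cache_health(r))  # → 99.9 good
```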

Page 51: HFS to zFS Migration

zFS Cache

With V1R4, the log file cache is in a dataspace (default 64 MB). This frees up space for caches in the zFS address space.

Cache        Dataspace?  How many?
User file    Yes         32
Log file     Yes         1
Metaback     Yes         1
Metadata     No          -
Vnode        No          -
Transaction  No          -

zFS - 51

Page 52: HFS to zFS Migration

zFS Cache Locations

[Diagram, default sizes shown: the zFS address space contains the metadata cache (32 MB), transaction cache (2000 transactions), vnode cache (64K vnodes), and directory cache (2M-512M); dataspaces contain the user file cache (256 MB), log file cache (64 MB, 8K buffers), and metaback cache (1M-2048M), in front of the zFS aggregate (for example, OMVS.CMP01.ZFS on volume ZFSVL1). The metadata cache holds all directory contents, file status information, and file system structures, and also caches data for files smaller than 7K. Cache activity can be displayed with f zfs,query,all.]

zFS - 52

Page 53: HFS to zFS Migration

Metadata Backing Cache

With V1R4, a new backing cache contains an extension to the metadata cache and resides in a dataspace. Specify it in the IOEFSPRM file:

metaback_cache_size=64M,fixed

Values allowed: 1 MB to 2048 MB.

It is used as a "paging" area for metadata, and allows a larger metadata cache for workloads that need large amounts of metadata. It is only needed if the metadata cache is constrained.

zFS - 53

Page 54: HFS to zFS Migration

Performance APIs

Previously, certain zFS performance counters could only be retrieved via an operator console command; the data could not easily be retrieved by an application. An API is now provided to obtain statistics on zFS activity.

Now all zFS performance counters can be retrieved by an application through the pfsctl API (BPX1PCT).

The zfsadm command can also be used to display or reset performance counters, for example:

zfsadm query -metacache

Administrative application programs can use this function; RMF exploits it in z/OS V1R7.

zFS - 54

Page 55: HFS to zFS Migration

Performance Monitoring APIs

zFS provides six new pfsctl APIs to retrieve performance counters: locks, storage, user data cache, iocounts, iobyaggr, and iobydasd.

zFS provides a new zfsadm command (query) to query or reset the performance counters:

zfsadm query [-locking] [-storage] [-usercache] [-iocounts] [-iobyaggregate] [-iobydasd] [-level] [-help] [-reset]

f zfs,query,all - z/OS V1R6

zFS - 55

Page 56: HFS to zFS Migration

zfsadm query Command

zfsadm query command options with z/OS V1R7: -knpfs, -metacache, -dircache, -vnodecache, -logcache, -trancache, -system system_name

ROGERS @ SC65:/u/rogers>zfsadm query -dircache
Directory Backing Caching Statistics

Buffers    (K bytes)  Requests    Hits        Ratio  Discards
---------- ---------  ----------  ----------  -----  --------
       256      2048     1216679     1216628  99.9%        50

ROGERS @ SC65:/u/rogers>zfsadm query -system sc70 -dircache
Directory Backing Caching Statistics

Buffers    (K bytes)  Requests    Hits        Ratio  Discards
---------- ---------  ----------  ----------  -----  --------
       256      2048           0           0   0.0%         0

zFS - 56

Page 57: HFS to zFS Migration

IOEZADM Utility from TSO for Commands

Menu  List  Mode  Functions  Utilities  Help
-------------------------------------------------------------------------
ISPF Command Shell
Enter TSO or Workstation commands below:
===> ioezadm query -metacache

Place cursor on choice and press enter to Retrieve command
=> ioezadm query -metacache
=> ioezadm aggrinfo
=> omvs
=> ish
=> ishell
=> bpxwh2z
=>

Metadata Caching Statistics

Buffers    (K bytes)  Requests    Hits        Ratio  Updates
---------- ---------  ----------  ----------  -----  -------
       512      4096       15779       15727  99.6%        4

zFS - 57

Page 58: HFS to zFS Migration

Directory Cache

In z/OS V1R7 a new zFS IOEPRMxx parameter file statement is available to set the directory cache size:

dir_cache_size - size of the directory buffer cache. Example: dir_cache_size=4M

The minimum and default size is 2M; the maximum value that can be set is 512M. Changing this cache size may be useful if you expect situations with many file creates or deletes.

This is also available via APAR OA10136 and PTFs:

UA16125 for z/OS V1R4
UA16123 for z/OS V1R5
UA16124 for z/OS V1R6

zFS - 58
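A quick way to validate a dir_cache_size value before putting it in IOEPRMxx. The K/M/G suffix handling mirrors the example syntax above; everything beyond that is an assumption about the parser:

```python
_UNITS = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_size(spec):
    """Parse a size specification such as '4M' into bytes."""
    spec = spec.strip().upper()
    if spec and spec[-1] in _UNITS:
        return int(spec[:-1]) * _UNITS[spec[-1]]
    return int(spec)

def valid_dir_cache_size(spec):
    """Check the documented bounds: minimum (and default) 2M, maximum 512M."""
    return 2 * 1024 ** 2 <= parse_size(spec) <= 512 * 1024 ** 2

print(valid_dir_cache_size("4M"), valid_dir_cache_size("1M"))  # → True False
```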

Page 59: HFS to zFS Migration

zfsadm query -iobyaggr Command

ROGERS @ SC65:/u/rogers>zfsadm query -iobyaggr
zFS I/O by Currently Attached Aggregate

DASD   PAV
VOLSER IOs Mode Reads      K bytes    Writes     K bytes    Dataset Name
------ --- ---- ---------- ---------- ---------- ---------- ------------
SBOX42 1   R/O          10         68          0          0 OMVS.TEST.MULTIFS.ZFS
MHL1A0 3   R/O       93432     373760          0          0 ZFSFR.ZFSA.ZFS
MHL1A1 2   R/O       93440     373824          0          0 ZFSFR.ZFSB.ZFS
SBOX32 1   R/W          22        164      82775     331100 OMVS.HERING.TEST.ZFS
SBOX1A 3   R/W           9         60      82747     330988 OMVS.HERING.TEST.DIRCACHE
------ --- ---- ---------- ---------- ---------- ---------- ------------
       5            186913     747876     165522     662088 *TOTALS*

Total number of waits for I/O: 186910
Average I/O wait time:         0.703 (msecs)
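The per-volume rows in the report can be cross-checked against the *TOTALS* line. A sketch using the numbers shown above:

```python
# (volser, reads, read_kb, writes, write_kb) taken from the report above
rows = [
    ("SBOX42",    10,     68,     0,      0),
    ("MHL1A0", 93432, 373760,     0,      0),
    ("MHL1A1", 93440, 373824,     0,      0),
    ("SBOX32",    22,    164, 82775, 331100),
    ("SBOX1A",     9,     60, 82747, 330988),
]

# Sum each numeric column across all attached aggregates
totals = [sum(col) for col in zip(*(r[1:] for r in rows))]
print(totals)  # → [186913, 747876, 165522, 662088], matching the *TOTALS* line
```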

zFS - 59

Page 60: HFS to zFS Migration

Batch Job to Issue zfsadm Commands

//ZFSJOBB  JOB ,'IOEBATCH',NOTIFY=&SYSUID.,REGION=0M
//* ------------------------------------------------------------------
//* Run zFS related IOEZADM commands in batch mode
//* ------------------------------------------------------------------
//   SET REXXLIB=ROGERS.ZFS.REXX.EXEC        <=== SYSEXEC library
//* ------------------------------------------------------------------
//IOEZADM  EXEC PGM=IKJEFT01,PARM=RXIOE
//SYSEXEC  DD DSNAME=&REXXLIB.,DISP=SHR
//SYSIN    DD DATA,DLM=##
# --------------------------------------------------------------------
ioezadm query -metacache
ioezadm query -usercache
ioezadm query -logcache
ioezadm query -vnodecache
ioezadm query -dircache
ioezadm query -trancache
ioezadm query -knpfs
ioezadm query -metacache -system sc70
ioezadm aggrinfo
# --------------------------------------------------------------------
##
//SYSTSIN  DD DUMMY
//SYSPRINT DD SYSOUT=*,LRECL=260,RECFM=V
//SYSTSPRT DD SYSOUT=*,LRECL=136,RECFM=V
//*

zFS - 60


The REXX procedure RXIOE allows a user to execute the zFS command ioezadm in TSO batch jobs. It switches to superuser mode by setting the UID of the submitting user to 0 and then executes the command.

Command lines ending with a minus sign ( - ) continue on the next line
Leading blanks of a continuation line are removed
Commands ending with a plus sign ( + ) are not displayed within the command output data
Lines starting with a pound sign ( # ) in column 1 are treated as comments
Blank lines are displayed as blank lines in the output
Text entered in lines following slash asterisk ( /* ) is displayed in the output, allowing comments in the output
All the command output data is available with DD name SYSPRINT

REXX Procedure RXIOE Advantages

zFS - 61
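The input conventions above can be modeled in a few lines. This Python sketch (the real exec is REXX) joins '-' continuations, drops column-1 '#' comments, and marks '+' commands as suppressed from the output:

```python
def preprocess(lines):
    """Model the RXIOE input conventions (sketch; the real exec is REXX).
    Returns (command, echo) pairs. Blank-line echoing and /* output
    comments are omitted for brevity."""
    commands, pending = [], ""
    for raw in lines:
        if raw.startswith("#"):            # '#' in column 1: comment
            continue
        text = raw.lstrip() if pending else raw
        line = (pending + text).rstrip()
        if line.endswith("-"):             # '-' : continued on next line
            pending = line[:-1].rstrip() + " "
            continue
        pending = ""
        echo = not line.endswith("+")      # '+' : suppress in output
        if not echo:
            line = line[:-1].rstrip()
        if line:
            commands.append((line, echo))
    return commands
```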


zFS Redbooks

z/OS Distributed File Service zSeries File System Implementation, SG24-6580-00 (z/OS V1R3) - Published September 2002

z/OS Distributed File Service zSeries File System Implementation, SG24-6580-01 (z/OS V1R6) - Published April 2005

z/OS Distributed File Service zSeries File System Implementation, SG24-6580-02 (z/OS V1R7) - Published February 2006

zFS - 62


Monitor III zFS reports data in z/OS V1R7 on:
zFS response time / wait times
zFS cache activity
zFS activity / capacity by aggregate
zFS activity / capacity by filesystem

Data helps to control the zFS environment for:
Cache sizes
I/O balancing
Capacity control for zFS aggregates

RMF Monitor III Support for zFS

zFS - 63
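As a toy example of how these report fields might feed tuning decisions (this is not an RMF or zFS algorithm), one could flag caches whose hit ratio falls below a floor as candidates for a larger size in IOEFSPRM:

```python
def caches_to_grow(hit_ratios, floor=95.0):
    """Return names of caches whose hit ratio (%) is below `floor`.
    Purely illustrative; real tuning also weighs request rates,
    delay percentages, and available storage."""
    return [name for name, hit in hit_ratios.items() if hit < floor]
```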


[Figure: zFS Access to File Systems. The zFS address space and its dataspaces contain the user cache, metadata cache, vnode cache, transaction cache, and log file cache. Applications perform synchronous I/O against these caches, while zFS drives continuous asynchronous I/O between the caches and the zFS VSAM data set (aggregate) that holds the zFS filesystem. Cache and file statistics can be displayed with f zfs,query,all.]

zFS Access to File Systems

zFS - 64


                 RMF Overview Report Selection Menu
Selection ===>

Enter selection number or command for desired report.

Basic Reports
1 WFEX      Workflow/Exceptions (WE)
2 SYSINFO   System information (SI)
3 CPC       CPC capacity

Detail Reports
4 DELAY     Delays (DLY)
5 GROUP     Group response time breakdown (RT)
6 ENCLAVE   Enclave resource consumption and delays (ENCL)
7 OPD       OMVS process data
8 ZFSSUM    zFS Summary (ZFSS)
9 ZFSACT    zFS File system activity (ZFSA)

RMF Overview Report Selection Menu (Detail reports)

Command interface:

ZFSSUM or ZFSS: zFS summary report
ZFSACT or ZFSA: zFS activity report

Gatherer option: NOZFS | ZFS

RMF Overview Report Selection Menu

zFS - 65


Response time section:
Average response time for zFS request
Wait percentages

Cache activity:
Request rates
Hit ratios
% read requests
% requests delayed

Aggregate section:
Capacity data
Mount mode
# filesystems in the aggregate
Read / Write rates (Bytes per second)

                    RMF V1R7        zFS Summary Report        Line 1 of 3
Command ===>                                          Scroll ===> CSR
Samples: 120    System: TRX2  Date: 09/07/04  Time: 16.30.00  Range: 120 Sec

---- Response Time ----  ---------------- Cache Activity ----------------------
     ----- Wait% -----   --------- User ------  -- Vnode --  - Metadata - -Trx-
 Avg  I/O  Lock  Sleep    Rate  Hit% Read% Dly%  Rate  Hit%   Rate  Hit%  Rate
0.27  0.4  80.7   0.0    405.5  100   0.0  0.0    0.0   0.0   10.9  99.9   0.2

------------Aggregate Name------------------  Size Use% --Mode- FS Read Write (B/Sec)
OMVS.TRX2.LOCAL.COMPAT.A.ZFS                  528K 29.2 R/W CP   1    0   137
OMVS.TRX2.LOCAL.MULTI.A.ZFS                  2160K  8.0 R/W MS   3    0   137
RMF.TEST.ZFS.HFSMF1                          7200K 16.1 R/W MS   3  137  1058

zFS Activity report for Aggregate name

zFS Summary Report

zFS - 66


                    RMF V1R7        zFS Summary Report        Line 1 of 3
Command ===>                                          Scroll ===> CSR
Samples: 120    System: TRX2  Date: 09/07/04  Time: 16.30.00  Range: 120 Sec

---- Response Time ----  ---------------- Cache Activity ----------------------
     ----- Wait% -----   --------- User ------  -- Vnode --  - Metadata - -Trx-
 Avg  I/O  Lock  Sleep    Rate  Hit% Read% Dly%  Rate  Hit%   Rate  Hit%  Rate
0.27  0.4  80.7   0.0    405.5  100   0.0  0.0    0.0   0.0   10.9  99.9   0.2

------------Aggregate Name------------------  Size Use% --Mode- FS Read Write (B/Sec)
OMVS.TRX2.LOCAL.COMPAT.A.ZFS                  528K 29.2 R/W CP   1    0   137
OMVS.TRX2.LOCAL.MULTI.A.ZFS                  2160K  8.0 R/W MS   3    0   137
RMF.TEST.ZFS.HFSMF1                          7200K 16.1 R/W MS   3  137  1058

zFS Summary - I/O Details by Type

Count  Waits  Cancl  Merge  Type
  815    326      0      0  FILE SYSTEM METADATA
  346     23      0     29  LOG FILE
 1447    175      0      2  USER FILE DATA

Press Enter to return to the Report panel.

zFS Summary I/O Details by Type

zFS - 67


Vnode cache details: request rate, hit ratio, vnode statistics

                    RMF V1R7        zFS Summary Report        Line 1 of 3
Command ===>                                          Scroll ===> CSR
Samples: 120    System: TRX2  Date: 09/07/04  Time: 16.30.00  Range: 120 Sec

---- Response Time ----  ---------------- Cache Activity ----------------------
     ----- Wait% -----   --------- User ------  -- Vnode --  - Metadata - -Trx-
 Avg  I/O  Lock  Sleep    Rate  Hit% Read% Dly%  Rate  Hit%   Rate  Hit%  Rate
0.27  0.4  80.7   0.0    405.5  100   0.0  0.0    0.0   0.0   10.9  99.9   0.2

------------Aggregate Name------------------  Size Use% --Mode- FS Read Write (B/Sec)
OMVS.TRX2.LOCAL.COMPAT.A.ZFS                  528K 29.2 R/W CP   1    0   137
OMVS.TRX2.LOCAL.MULTI.A.ZFS                  2160K  8.0 R/W MS   3    0   137
RMF.TEST.ZFS.HFSMF1                          7200K 16.1 R/W MS   3  137  1058

zFS Summary - User Cache Details

Read Rate      :  27.4    Size        :   256M
Write Rate     :  16.9    Total Pages :  65536
Read Hit (%)   :  59.4    Free Pages  :  12703
Write Hit (%)  : 100.0    Segments    :   8192
Read Delay (%) :   1.3
Write Delay (%):   0.0    User Cache readahead: ON
Async Read Rate      : 10.1    Storage fixed : NO
Scheduled Write Rate : 83.8
Page Reclaim Writes  :    0    Fsyncs : 0

Press Enter to return to the Report panel.

zFS Summary - Vnode Cache Details

Request Rate : 14.2    vnodes          : 65536
Hit%         : 99.9    vnode size      :   168
                       ext. vnodes     : 65536
                       ext. vnode size :   668
                       open vnodes     :    12
                       held vnodes     :    44

Press Enter to return to the Report panel.

User cache details: request rates, hit ratios, delays, storage statistics

User and Vnode Cache Detail

zFS - 68


ibm.com

International Technical Support Organization

Appendix for Migration prior to install of z/OS V1R7

zFS - 69


Using pax in copy mode - (COPYPAX)

Using pax in copy mode, without an intermediate archive, is a more efficient way of copying an HFS file system to a zFS file system.

The REXX procedure COPYPAX allows the user to run the pax command in copy mode. It can be used from the TSO foreground, but it is best run in a TSO batch job.

Benefits of COPYPAX:
A test is done to verify that the source and the target structures are located in different file systems
A test is done to verify that the target directory is empty
Option -X is used to avoid crossing mount point boundaries
Invalid file descriptor files are corrected after pax has finished its copy processing

zFS - 70
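The first two safety tests COPYPAX performs can be illustrated in Python (the real procedure is REXX on z/OS; `st_dev` is used here as a stand-in for comparing the owning file systems of the two directories):

```python
import os

def copypax_prechecks(source, target):
    """Return a list of problems that would stop the copy (sketch of
    the COPYPAX pre-checks; function name is hypothetical)."""
    problems = []
    # Source and target must live in different file systems
    if os.stat(source).st_dev == os.stat(target).st_dev:
        problems.append("source and target are in the same file system")
    # The target directory must be empty
    if os.listdir(target):
        problems.append("target directory is not empty")
    return problems
```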


//ZFSJOB JOB ,'COPYPAX',NOTIFY=&SYSUID.,REGION=0M
//* ------------------------------------------------------------------
//* Use pax to copy source directory structure to target directory
//* Property of IBM (C) Copyright IBM Corp. 2002
//* ------------------------------------------------------------------
// SET SOURCED='/u/hering'              <=== Source directory
// SET TARGETD='/u/zfs/m01'             <=== Target directory
//* ------------------------------------------------------------------
// SET TIMEOUT=0              <=== Timeout value in seconds, 0=no timeout
// SET REXXLIB=HERING.ZFS.REXX.EXEC     <=== SYSEXEC library
//* ------------------------------------------------------------------
//ZFSADM   EXEC PGM=IKJEFT01,PARM='COPYPAX &TIMEOUT &SOURCED &TARGETD'
//SYSEXEC  DD DSNAME=&REXXLIB.,DISP=SHR
//SYSTSIN  DD DUMMY
//SYSTSPRT DD SYSOUT=*,LRECL=136,RECFM=V
//*

Batch Job for COPYPAX

zFS - 71


//ZFSJOB JOB ,'COPYPAX',NOTIFY=&SYSUID.,REGION=0M
//* ------------------------------------------------------------------
//* Use pax to copy source directory structure to target directory
//* Property of IBM (C) Copyright IBM Corp. 2002
//* ------------------------------------------------------------------
// SET SOURCED='/'                      <=== Source directory
// SET TARGETD='/u/zfs/zfsroot'         <=== Target directory
//* ------------------------------------------------------------------
// SET TIMEOUT=0              <=== Timeout value in seconds, 0=no timeout
// SET REXXLIB=HERING.ZFS.REXX.EXEC     <=== SYSEXEC library
//* ------------------------------------------------------------------
//ZFSADM   EXEC PGM=IKJEFT01,PARM='COPYPAX &TIMEOUT &SOURCED &TARGETD'
//SYSEXEC  DD DSNAME=&REXXLIB.,DISP=SHR
//SYSTSIN  DD DUMMY
//SYSTSPRT DD SYSOUT=*,LRECL=136,RECFM=V
//*

Use COPYPAX for Copy of Root

zFS - 72


To recreate missing directories, use the REXX exec ADDMNTPS

//ZFSJOB JOB ,'ADDMNTPS',NOTIFY=&SYSUID.,REGION=0M
//* ------------------------------------------------------------------
//* Add missing mount point directories after clone process
//* Property of IBM (C) Copyright IBM Corp. 2002
//* ------------------------------------------------------------------
// SET SOURCED='/'                      <=== Source directory
// SET TARGETD='/u/zfs/zfsroot'         <=== Target directory
//* ------------------------------------------------------------------
// SET TIMEOUT=0              <=== Timeout value in seconds, 0=no timeout
// SET REXXLIB=HERING.ZFS.REXX.EXEC     <=== SYSEXEC library
//* ------------------------------------------------------------------
//ZFSADM   EXEC PGM=IKJEFT01,PARM='ADDMNTPS &TIMEOUT &SOURCED &TARGETD'
//SYSEXEC  DD DSNAME=&REXXLIB.,DISP=SHR
//SYSTSIN  DD DUMMY
//SYSTSPRT DD SYSOUT=*,LRECL=136,RECFM=V
//*

REXX Procedure ADDMNTPS

zFS - 73


ADDMNTPS processing:

It looks for all mount points of all the file systems mounted in the system
If the file system is the source file system or the mount point is the root ( / ), no action is taken
If the parent directory of the mount point belongs to the source file system, then this mount point is one of the mount point directories that needs to exist in the target structure
A test is done to determine whether the corresponding directory in the target file system already exists (it could have been defined in the meantime); if it does not exist, it is created with owner UID 0 and permission bits set to 700

zFS - 74
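The selection rule above — take each mount point whose parent directory belongs to the source file system — can be sketched as follows. All names are hypothetical; the real exec is REXX and queries the live mount table.

```python
def owning_fs(path, mounts):
    """File system owning `path`: the one mounted at the longest
    mount-point prefix of `path`. `mounts` maps mount point -> fs name."""
    best = max((mp for mp in mounts
                if path == mp or path.startswith(mp.rstrip("/") + "/")),
               key=len)
    return mounts[best]

def mount_points_to_recreate(mounts, source_fs):
    """Mount points that must exist as directories in the target
    structure (sketch of the ADDMNTPS rules above)."""
    needed = []
    for mp, fs in mounts.items():
        if fs == source_fs or mp == "/":
            continue  # no action for the source fs itself or the root
        parent = mp.rsplit("/", 1)[0] or "/"
        if owning_fs(parent, mounts) == source_fs:
            needed.append(mp)  # would be created with UID 0, mode 700
    return needed
```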


Attention: The sample REXX procedures are available in softcopy on the Internet from the Redbooks Web server

Point your browser to: ftp://www.redbooks.ibm.com/redbooks/SG246580/

Note: SG246580 must be uppercase (the SG part)

Alternatively, you can go to http://www.redbooks.ibm.com and search for SG24-6580. Select the book and then Additional Materials.

Download of REXX execs and JCL

zFS - 75


You can choose whether each filesystem is to be an HFS or a zFS:

On the data set attributes panels
With the new CHange DSNTYPE command

Exception: The root file system must be an HFS
New data set types in the installation dialog:

HFS, zFS, PDS, PDSE, SEQ, and VSAM

HFS or zFS Data Sets

>---+--- CHANGE ---+---+--- DSNTYPE ---+---+--- PDS PDSE ---+---<
    |              |   |               |   |                |
    +--- CH -------+   +--- TYPE ------+   +--- PDSE PDS ---+
                       |               |   |                |
                       +--- T ---------+   +--- HFS ZFS ----+
                                           |                |
                                           +--- ZFS HFS ----+

zFS - 76
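The syntax diagram reads as: a verb (CHANGE or CH), the keyword DSNTYPE (or TYPE or T), and one of four from/to type pairs. A hypothetical validator in Python, not the dialog's actual parser:

```python
def parse_change_dsntype(cmd):
    """Validate a CHange DSNTYPE dialog command per the diagram above
    (sketch). Returns the (from, to) data set type pair."""
    verbs = {"CHANGE", "CH"}
    keywords = {"DSNTYPE", "TYPE", "T"}
    pairs = {("PDS", "PDSE"), ("PDSE", "PDS"),
             ("HFS", "ZFS"), ("ZFS", "HFS")}
    parts = cmd.upper().split()
    if (len(parts) != 4 or parts[0] not in verbs
            or parts[1] not in keywords
            or (parts[2], parts[3]) not in pairs):
        raise ValueError("invalid CHange DSNTYPE command")
    return parts[2], parts[3]
```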


Set Data Set Type

zFS - 77


HFS data set vs. VSAM LDS (aggregate) for zFS

zFS aggregate must be preformatted

Different FILESYSTYPEs used in BPXPRMxx

FILESYSTYPE TYPE(ZFS) ENTRYPOINT(IOEFSCM) ASNAME(ZFS)

Different operands on MOUNT command

Different security system setup

CICS, DB2, and IMS ServerPacs assume that any necessary zFS setup has been done

HFS versus zFS

zFS - 78


ALLOCDS job formats zFS data sets you define

RACFDRV and RACFTGT Jobs in z/OS orders:

Add group for DFS setup

Add user ID for ZFS address space

RESTFS Job:

Uses appropriate TYPE on MOUNT commands depending on filesystem type

Puts appropriate TYPE on MOUNT parameters in BPXPRMFS

Adds FILESYSTYPE for zFS to BPXPRMFS if needed

ServerPac Changes if using zFS

zFS - 79