Celerra Network Server

Using SnapSure on Celerra

Version 5.5
P/N 300-002-720 Rev A02
March 2007

Contents

Introduction to SnapSure ............................................. 3
    Terminology ...................................................... 3
    Cautions ......................................................... 4
SnapSure concepts .................................................... 6
    Two views of the PFS ............................................. 6
    Copying PFS data blocks before they change ....................... 7
    Providing the checkpoint view for the client ..................... 7
    Managing multiple checkpoints .................................... 9
System requirements for SnapSure .................................... 10
    E-Lab Interoperability Navigator ................................ 10
Upgrading SnapSure to Celerra Network Server version 5.5 ............ 11
    About pageable checkpoint blockmaps (new in version 5.4) ........ 11
    Preparing to upgrade SnapSure checkpoints to version 5.5
    from a pre-5.4 system ........................................... 12
Planning considerations for SnapSure ................................ 14
    Performance considerations ...................................... 14
    Checkpoint persistence .......................................... 14
    Guidelines ...................................................... 15
    Restrictions .................................................... 15
    SnapSure resource requirements .................................. 17
    Using SnapSure’s options and features ........................... 20
    NDMP backup-created checkpoints ................................. 22
    Automated checkpoint scheduling ................................. 23
User interface choices for SnapSure ................................. 26
    Using SnapSure from the CLI ..................................... 26
SnapSure roadmap .................................................... 27
Creating checkpoints ................................................ 28
    Create a checkpoint ............................................. 28
Listing checkpoints ................................................. 31
    List all PFS checkpoints ........................................ 31
    List an individual checkpoint ................................... 34
Refreshing checkpoints .............................................. 36
    Refresh a checkpoint ............................................ 36
Deleting checkpoints ................................................ 38
    Delete a checkpoint ............................................. 38
Automating checkpoint schedules ..................................... 39
    Create a one-time automated checkpoint schedule ................. 39


    Create an automated daily checkpoint schedule ................... 40
    Create an automated weekly checkpoint schedule .................. 41
    Create an automated monthly checkpoint schedule ................. 42
    List automated checkpoint schedules ............................. 43
    Display automated checkpoint schedule information ............... 44
    Modify an automated checkpoint schedule ......................... 45
    Pause an automated checkpoint schedule .......................... 45
    Resume a paused automated checkpoint schedule ................... 46
    Delete an automated checkpoint schedule ......................... 46
Restoring PFS from a checkpoint ..................................... 47
    Restore a PFS from a checkpoint ................................. 48
Accessing checkpoints ............................................... 50
    Accessing checkpoints using CVFS ................................ 50
    Accessing checkpoints using SCSF ................................ 59
Troubleshooting SnapSure ............................................ 61
    Technical support ............................................... 61
    Telephone ....................................................... 61
    Troubleshooting steps ........................................... 61
Related information ................................................. 66
    Customer training programs ...................................... 66
Appendix: Facility and event ID numbers for SNMP/email .............. 67
Index ............................................................... 69


Introduction to SnapSure

This technical module describes the Celerra® Network Server’s SnapSure™ feature and how to use it to create and manage checkpoints, which are read-only, logical images of a production file system.

This document is part of the Celerra Network Server User Information set and is intended for system administrators responsible for maintaining business continuance in their organization.

Terminology

This section defines the terms important to understanding SnapSure. The Celerra Glossary provides a complete list of Celerra terminology.

bitmap: A software register tracking whether a change has been made to a PFS block since the newest checkpoint was created.

blockmap: A mapping data structure, pageable from Data Mover memory to the SavVol, that for each checkpoint maps each saved point-in-time PFS block to the SavVol block that holds it.

blockmap index file: Stored with each checkpoint on the SavVol, it is the on-disk representation of the checkpoint’s blockmap.

checkpoint: A read-only, logical point-in-time image of a production file system (PFS). A checkpoint is sometimes referred to as a checkpoint file system or a SnapSure file system.

checkpoint application: A business application that reads a checkpoint for point-in-time information about the PFS. Checkpoint applications require point-in-time (not realtime) data and cannot write to the checkpoint.

checkpoint audit: The process of monitoring a checkpoint’s SavVol to determine how much free space remains for capturing checkpoint data. By default, SnapSure audits the SavVol automatically and writes a message to the system log when the SavVol’s high water mark (HWM) is reached.

checkpoint extend: Enlarging the checkpoint SavVol to ensure it does not become full and inactivate the checkpoint(s). By default, and as system resources permit, SnapSure automatically extends the SavVol each time the HWM (set when the newest checkpoint is created) is reached.

checkpoint refresh: A SnapSure feature recycling SavVol space by deleting the data in the checkpoint of a PFS and creating a new checkpoint using the same checkpoint filename, file system identification number, and mount state.

checkpoint restore: A SnapSure feature restoring a PFS to a point in time using checkpoint information. As a precaution, SnapSure automatically creates a new checkpoint of the PFS before it performs the restore operation.

Checkpoint Virtual File System (CVFS): A file system that allows any NFS or CIFS client read-only access to online snapshots of a file system’s directory by accessing a virtual .ckpt directory entry. CVFS and SVFS are synonymous terms.

checkpoint window: The time the checkpoint is active (readable). A checkpoint is active when you enter the checkpoint creation command. It remains active unless it is deleted, or runs out of space to store point-in-time information (at which point it becomes inactive and unreadable).

PFS application: A business application that accesses the production file system and performs transactions, such as write/delete activity, on the PFS blocks.

Shadow Copy for Shared Folders (SCSF): A Microsoft Windows feature, supported by SnapSure for Windows XP and Windows Server 2003 clients, that provides these clients with direct access to point-in-time versions of their files and folders for the purpose of recovering point-in-time data.

Snapshot Virtual File System (SVFS): A file system that allows any NFS or CIFS client read-only access to online snapshots of a file system’s directory by accessing a virtual .ckpt directory entry. CVFS and SVFS are synonymous terms.

SnapSure SavVol: A Celerra volume to which SnapSure copies original point-in-time data blocks from the PFS before the blocks are altered by a PFS transaction.

Cautions

This section lists the cautions for using this feature on the Celerra Network Server. If any of this information is unclear, contact an EMC Customer Support Representative for assistance.

CAUTION

◆ A checkpoint is not intended to be a mirror, disaster recovery, or high-availability tool. Because it is partially derived from realtime PFS data, a checkpoint could become inaccessible (unreadable) if the PFS is inaccessible. Only checkpoints and a PFS saved to tape or other alternate storage location can be used to provide disaster recovery.

◆ Do not perform data migration tasks (using a feature such as the Celerra Data Migration Service) while SnapSure is active. Since migration activity produces significant changes in the data environment, be sure to complete the migration tasks before using SnapSure. This avoids consuming SnapSure resources to capture information unnecessary to checkpoint.

◆ Do not create or refresh a checkpoint during a Unicode conversion process. Doing so causes the create/refresh operation to fail. If this occurs, delete the checkpoint associated with the command failure. Retry creating or refreshing the checkpoint after the conversion process completes.

◆ Do not perform a full restore of a PFS directly from a checkpoint until you ensure sufficient system resources are available for SnapSure to complete the procedure. Without resources for the restore operation, data in all PFS checkpoints could be lost. Read "SnapSure resource requirements" on page 17 to ensure sufficient resources are available before using SnapSure’s restore feature.

◆ Use caution when restoring a file system from one of its checkpoints. If you create a checkpoint of any file system and then write a file (for example, file1) into the file system, and later restore the file system using the checkpoint, then file1 will not remain because it did not exist when the checkpoint was created. The Celerra Network Server provides a warning when a restore is attempted.

◆ Creating new file systems, and extending or mounting file systems, may fail until the SnapSure checkpoint conversion process (of the Celerra Network Server version pre-5.5 to 5.5 upgrade process) completes. The action may fail because the pinned blockmaps of pre-5.5 checkpoints consume too much physical memory.

◆ On a per-Data Mover basis, the total size of all file systems, the size of all SavVols used by SnapSure, and the size of all SavVols used by the Celerra Replicator™ feature, must be less than the total supported capacity of the Data Mover. The Celerra Network Server Release Notes, available on Powerlink®, include a list of Data Mover capacities.

◆ All parts of a PFS including all parts of its associated checkpoint (SavVol) must reside on the same storage array.


SnapSure concepts

The Celerra Network Server’s SnapSure feature creates a read-only, logical image (checkpoint) of a production file system (PFS) reflecting the PFS state at the point in time the checkpoint is created. SnapSure can maintain up to 96 PFS checkpoints while allowing PFS applications continued access to the realtime data.

Checkpoints can serve as a direct data source for applications that require point-in-time data but do not demand realtime data. Such applications include simulation testing, data warehouse population, and automated backup engines performing backup to tape or disk, such as NDMP. Configuring NDMP Backups to Disk on Celerra provides more details. You can also use a checkpoint to restore a PFS, or part of a file system (for example, a file or directory), to the state in which it existed when the checkpoint was created.

The principle of SnapSure is copy old on modify. A production file system is made up of blocks. When a block within the PFS is modified, a copy containing the block’s original contents is saved to a separate volume called the SavVol. Subsequent changes made to the same block in the PFS are not copied into the SavVol. The original blocks from the PFS (in the SavVol) and the unchanged PFS blocks (remaining in the PFS) are read by SnapSure according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time file system image called a checkpoint.

Two views of the PFS

During a checkpoint window, the system provides two different views of PFS information, depending on which file system an application is accessing:

◆ Production file system view: PFS applications view realtime PFS data.

◆ Point-in-time view: Checkpoint applications view point-in-time PFS data.

While PFS applications read and write on PFS data blocks, applications using checkpoints read a point-in-time state of the original data blocks. In this model, PFS data and checkpoint data are kept separate, as shown in Figure 1.

[Figure 1: Two views of a file system during a checkpoint window. The figure shows a production file system at time 10:00+. A new block is seen only by PFS applications (the live view), while the old block, copied to the SavVol, is seen only by checkpoint applications (the checkpoint view as of 10:00).]


Copying PFS data blocks before they change

The first time the system detects a change instruction from a PFS application, it copies the original PFS block into the SavVol (unless the block is unused by a checkpoint), and then allows the instruction to execute on the PFS block. Figure 2 shows the original PFS data blocks copied into the SavVol before being overwritten by the initial write requests from the PFS application, and how the system performs bitmap tracking to show the modified data blocks. Only the latest checkpoint has a bitmap, but all checkpoints (including the newest) have their own blockmap, as discussed in "Providing the checkpoint view for the client".

[Figure 2: Using the SavVol to save PFS blocks before they change. The figure shows transactions bound for PFS data blocks DB0 through DB11 at point-in-time 10:00 a.m. Write instructions targeting DB2, DB5, DB7, DB8, and DB10 cause SnapSure to save each original block to the SavVol before the write executes, setting the corresponding bitmap flag to 1 (changed). Blocks that are only read, such as DB0, DB1, and DB3, keep a bitmap flag of 0 (unchanged). The checkpoint PFS image (CKPT_1) is thereby maintained; it remains active unless deleted or the SavVol becomes full, and it persists upon Celerra or Data Mover reboot.]

Providing the checkpoint view for the client

When NFS/CIFS clients access the latest checkpoint, SnapSure queries its bitmap for the existence of the needed point-in-time block. If the bitmap flag is on, the data is read from the SavVol, using the checkpoint’s blockmap to determine the location (SavVol block number) to read. If the bitmap flag is off, the data is read from the PFS.

When NFS/CIFS clients access a checkpoint that is not the latest, SnapSure directly queries the checkpoint's blockmap for the SavVol block number to read. If the block number is in the blockmap, the data is read from the SavVol space for the checkpoint. If the block number is not in the blockmap, SnapSure queries the next newer checkpoint's blockmap for the block number, and reads the data block from that checkpoint's SavVol space if it is found. This search is repeated through successively newer checkpoints until, if no blockmap contains the block, the read is directed to the PFS itself.


The bitmap and blockmaps play similar roles, but a bitmap is used for the latest checkpoint because processing data with it is faster than with the blockmap, and typically the latest checkpoint is accessed most frequently.

As shown in Figure 3, if the bitmap indicates (as shown with a 1 in this example) the PFS data block changed since the creation of the latest checkpoint, the system consults the checkpoint's blockmap to determine which SavVol block contains the original PFS block, and the read is directed to that SavVol block. If the PFS data block is unchanged since checkpoint creation, as indicated with a 0 in the bitmap, the application reads the data block directly from the PFS. The bitmap is only present for the latest checkpoint. In its absence, the blockmap for the checkpoint is consulted immediately to determine if a mapping is present.

[Figure 3: Constructing the checkpoint image using the bitmap/blockmap. The figure shows checkpoint applications issuing point-in-time read instructions for PFS data blocks DB0 through DB11. For blocks whose bitmap flag is 0 (unchanged: DB0, DB1, DB3, DB4, DB6, DB9, and DB11), the read is directed to the PFS block. For blocks whose bitmap flag is 1 (changed: DB2, DB5, DB7, DB8, and DB10), the pageable blockmap supplies the SavVol index of the saved original block (DB2=0, DB5=1, DB7=2, DB8=3, DB10=4), and the read is directed to the SavVol.]

Consider a PFS with blocks a, b, c, d, e, f, g, and h. When the checkpoint is created, they have the a0, b0, c0, d0, e0, f0, g0, and h0 values. Thereafter, PFS applications modify blocks a and b, writing the values a1 and b1 into them. At this point, the following contents are in the volumes:

    PFS:                     a1, b1, c0, d0, e0, f0, g0, h0
    SavVol:                  a0, b0
    Checkpoint view of PFS:  a0, b0, c0, d0, e0, f0, g0, h0

As shown in this example, when you read the checkpoint, you view the complete point-in-time picture of the PFS.


SnapSure adds efficiency to this method by also keeping track of the blocks used by the PFS when the checkpoint is created. It does not copy original PFS data when changes are made to blocks not used by the PFS (blocks that were free) at the time of checkpoint creation. Consider a PFS with blocks a, b, c, d, e, f, g, and h, but among these only a, b, c, and e are used by the PFS at the time of checkpoint creation. Thereafter, if block d, which was free at the time of checkpoint creation, is modified by the PFS application, its original data is not copied into the SavVol. However, if block c is modified, its original data is copied into the SavVol. Checkpoints are automatically mounted (and should remain so) to ensure SnapSure efficiency.

Managing multiple checkpoints

You can create up to 96 checkpoints (as system space allows) on the same PFS, each representing the state of the PFS at a given point in time, as shown in Figure 4.

[Figure 4: SnapSure supports up to 96 checkpoints of the same PFS. The figure shows one production file system with three checkpoints: Checkpoint 1 (Monday), Checkpoint 2 (Tuesday), and Checkpoint 3 (Wednesday).]

Multiple checkpoints share the same SavVol, but are logically separate point-in-time file systems. When multiple checkpoints are active, their blockmaps are linked in chronological order. For any checkpoint, blockmap entries needed by the system, but not resident in main memory, are paged in from the SavVol. The entries stay in main memory until system memory consumption requires them to be purged. If a PFS data block remains unchanged since the checkpoints were created, the system directs the read request to the PFS data block, where the original point-in-time data resides.


System requirements for SnapSure

This section describes the Celerra Network Server software, hardware, network, resource, and environment requirements for using SnapSure as described in this technical module.

E-Lab Interoperability Navigator

The E-Lab Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available at http://Powerlink.EMC.com. After logging in to Powerlink®, go to Support > Interoperability > E-Lab Interoperability Navigator.

Table 1 SnapSure system requirements

Software: EMC Celerra Network Server version 5.5.x. The Celerra Network Server Release Notes include effectivity details.

Hardware: Celerra Network Server with 510 (3 GB RAM) and later model Data Movers.

Network: No specific network requirements.

System Resources: Sufficient Data Mover blockmap memory, system space, and disk storage must be available to support SnapSure operations. "SnapSure resource requirements" on page 17 offers details.

System Environment: While using SnapSure, ensure the following conditions are true to enable proper functionality:
• All parts of a PFS, including all parts of its checkpoint SavVol, must be in the same Celerra cabinet.
• All volumes (PFS and SavVol) are accessible to the same Data Mover.
• The PFS is mounted (on a single Data Mover) to enable SnapSure to access data and create checkpoints.
• All checkpoints are mounted to conserve SavVol space usage and enable checkpoint application access to the data.
• The checkpoints are mounted on the same Data Mover as the PFS.
Read "SnapSure resource requirements" on page 17 for SnapSure’s blockmap memory, disk space, and system space requirements.


Upgrading SnapSure to Celerra Network Server version 5.5

This section identifies the preparation required to ensure a successful SnapSure upgrade to Celerra Network Server version 5.5. EMC recommends contacting EMC Customer Service to define the best upgrade strategy (online or offline, and so on) for your business environment.

Note: Special upgrade considerations apply only to pre-5.4 release systems.

About pageable checkpoint blockmaps (new in version 5.4)

Upon successful upgrade to Celerra Network Server version 5.5, the system allows up to 1 GB of physical RAM per Data Mover to store the blockmaps for all checkpoints of all PFSs on the Data Mover. (The 1 GB also ensures sufficient Data Mover memory is available for Celerra Replicator sessions.) Each blockmap contains the specific point-in-time information required by a checkpoint; a string of blockmaps typically completes a checkpoint. A blockmap uses between 8 and 16 bytes of memory to map each block of point-in-time data used by the checkpoint.

Note: For Celerra systems with less than 4 GB of Data Mover memory, a total of 512 MB of physical RAM per Data Mover is allocated for the blockmap storage.
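As an illustrative calculation (not a stated specification): at the worst-case 16 bytes per mapped block, the 1 GB quota can hold roughly 67 million blockmap entries in Data Mover memory at once (2^30 / 16 bytes), and the 512 MB quota roughly half that. Entries beyond what fits in memory are paged to the SavVol, as described next.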

Each time a checkpoint is read, the system queries it to find the required data block’s location. For any checkpoint, blockmap entries needed by the system, but not resident in main memory, are paged in from the SavVol. The entries stay in main memory until system memory consumption requires them to be purged. This design increases the number of checkpoints supported and improves checkpoint recovery time upon system reboot. To support this design, the system automatically converts all existing, supported checkpoints to the version 5.5 data structure format upon upgrade. The conversion requires no user intervention and causes no access disruption to either the checkpoint or the PFS.

When the conversion completes, all the checkpoints’ blockmaps become pageable from Data Mover memory to the SavVol and SnapSure enters high-capacity mode. Next, the system waits for both sides of the Celerra Replicator (that is, the sessions on the primary and secondary Data Movers) to also enter high-capacity mode, as applicable. It then enables the 1 GB blockmap memory-quota capacity of the Data Mover. The 1 GB is allocated to storing checkpoint blockmaps; the internal blockmap memory quotas go into effect, and Data Mover blockmap memory-usage tracking immediately begins. (The memory-usage tracking is for EMC support purposes.) It is only now that file systems greater than 2 TB are allowed on the Celerra system. Thus, creating new file systems, and extending or mounting file systems may fail until the SnapSure checkpoint conversion process (of the Celerra Network Server version pre-5.5 to 5.5 upgrade) completes.

Note: If SnapSure and Celerra Replicator are both on the Data Mover when upgrading, the upgrade may take longer. Replicator must also reach high-capacity mode for the SnapSure high-capacity mode to be reached (and vice-versa). Read the Celerra Network Server Release Notes on Powerlink for more details.


Preparing to upgrade SnapSure checkpoints to version 5.5 from a pre-5.4 system

Before upgrading to Celerra Network Server version 5.5 from a pre-5.4 system only, perform these steps.

Note: The fs_ckpt command was introduced in Celerra Network Server version 4.0. The SnapSure command for version 2.2 releases (nas_fs -Checkpoint) does not work with version 4.0 and later. If upgrading from Celerra Network Server version 2.2 software, update all scripts containing the former SnapSure command with the new command.
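For example, a refresh script written for version 2.2 might be updated as follows. The file system name ufs1 is hypothetical, and the commands are shown only in the forms this module names; check the command reference for the full syntax:

Former version 2.2 syntax (no longer works):
$ nas_fs -Checkpoint ufs1

Version 4.0 and later syntax:
$ fs_ckpt ufs1 -C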


1. Log in to the Control Station.

2. If you use a Linux cron job script to automate checkpoint refreshes, manually stop the script before the upgrade, and restart the script when the upgrade is complete.

3. Delete all checkpoints created with version 4.1 (or earlier) software. These checkpoints are unsupported in version 5.5 and if they are detected by the system, the Celerra software-upgrade script fails. Also delete any version 4.2 automated-checkpoint schedules (checkpoint schedules created with the Celerra Native Manager scheduler).

Note: To see what software version a checkpoint was created by, list the checkpoints using nas_fs -i and look for the volume = field. If vpnnn appears as the default volume used in this field, then the checkpoint was created by version 4.2 or greater software. If the default volume is not vpnnn, then the checkpoint was created by software earlier than version 4.2, and should be deleted before upgrading to version 5.5.

Checkpoints created with software lower than version 4.2 cannot be reliably detected as “old” by the system (to provide notification), but you still must delete them or the upgrade script fails.

Checkpoints created using versions 4.2 through 5.3 are automatically converted to the version 5.5 format upon the system reboot step in the upgrade process; do not delete these checkpoints if you want to keep them.
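To check a checkpoint’s origin as described in the note in step 3, list it and inspect the volume = field. A sketch, using a hypothetical checkpoint name and abridged output:

$ nas_fs -i ufs1_ckpt1
...
volume = vp137
...

A vpnnn-style default volume (here, the hypothetical vp137) indicates a checkpoint created by version 4.2 or later software, which converts automatically; any other default volume marks a pre-4.2 checkpoint that must be deleted before the upgrade.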

4. Delete all version 4.2 checkpoint schedules created with the Celerra Native Manager. You can re-create the schedules in version 5.5 using the Celerra Manager application, and the new checkpoints will be automatically mounted upon creation.

5. Ensure the checkpoint SavVols associated with each PFS have enough space to store the blockmap index file required for each checkpoint. Each SavVol requires about 1 percent available space to accommodate the index files.

Note: If insufficient SavVol space is available for the indices (an unusual case), the conversion pauses until space is available, and then automatically resumes. Typically, space is made available through automatic SavVol extension or routine de-activation of the oldest checkpoint resulting from refreshes to the newest checkpoint.

When the checkpoints successfully convert to the version 5.5 format and blockmaps are in pageable mode, a message is written to the Celerra Network Server’s server log.


Note: When creating an automated checkpoint schedule using the Celerra Manager > Checkpoints > Schedule page, the online help states you can change the End On date after the schedule completes, and the schedule begins to run again as scheduled. The ability to change the End On date after the schedule completes is unavailable in version 5.5.

6. To view the blockmap memory activity, use this command syntax:

$ server_sysstat <server_name> -blockmap

Where:
<server_name> = the name of the Data Mover for which you want to list blockmap memory status

Example:
To list the blockmap memory status for server_2, type:

$ server_sysstat server_2 -blockmap

Example output:

server_2 :
 total paged in = 331                    (1)
 total paged out = 373                   (2)
 page in rate = 0                        (3)
 page out rate = 0                       (4)
 block map memory quota = 1048576(KB)    (5)
 block map memory consumed = 15280(KB)   (6)

Output description:
(1) The total number of blockmap pages paged in since the system booted.
(2) The total number of blockmap pages paged out since the system booted.
(3) The number of blockmap pages paged in per second (over the last 180 seconds).
(4) The number of blockmap pages paged out per second (over the last 180 seconds).
(5) The current value of the blockmap memory quota.
(6) The amount of memory consumed for blockmaps.

7. Starting in version 5.2, all checkpoints (by default) are automatically mounted upon creation. However, checkpoints created (but not mounted) in other versions of SnapSure are not automatically mounted upon upgrade. Be sure to mount these checkpoints after upgrade to version 5.5 to ensure SnapSure efficiency.

8. If you plan to use the automated checkpoint (task) scheduling feature in Celerra Network Server version 5.5, and previously created a checkpoint schedule in version 5.1 or 5.2, the old schedule format is automatically translated into the new (released with version 5.3) schedule format during the upgrade. However, if the version 5.1 schedule (created with the Celerra Native Manager) has irregular monthly refreshes scheduled (for example, January, March, and August), or if you created schedules using version 4.2, these schedules do not automatically translate upon upgrade. Re-create them by selecting Checkpoints > Schedules in the version 5.5 Celerra Manager interface.



Planning considerations for SnapSure

Using SnapSure, you can perform administrative tasks such as creating checkpoints, managing them (listing, refreshing, or deleting checkpoints), and employing them for tasks such as restoring a PFS or copying to a backup device. Clients can use checkpoint data to independently restore files and directories or for business modeling. The following section provides checkpoint planning and management information that is helpful to understand before using SnapSure.

Performance considerations

SnapSure processes affect the system as follows:

◆ Creating a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends (but read activity continues) while the system creates the checkpoint. The pause time depends on the amount of data in the cache but is typically a few seconds. EMC recommends a 10-minute minimum interval between the creation or refresh of checkpoints of the same PFS.

◆ Restoring a PFS from a checkpoint (checkpoint restore) requires the PFS to be frozen. Therefore, all PFS activities are suspended while the system restores the PFS from the selected checkpoint. The pause time depends on the amount of data in the cache, but is typically a few seconds.

Note: When read activity is suspended during a freeze, connection to CIFS users is broken. However, this is not the case when write activity is suspended.

◆ Refreshing a checkpoint (checkpoint refresh) requires the checkpoint to be frozen. Therefore, checkpoint read activity suspends while the system refreshes the checkpoint. Clients attempting to access the checkpoint during a refresh process experience the following:

• NFS (UNIX): The system retries the connection indefinitely. When the system thaws, the file system automatically remounts.

• CIFS (Windows): Depending on the application running on Windows, or if the system freezes for more than 45 seconds, the Windows application may drop the link. The share may need to be refreshed (remounted and remapped).

◆ Deleting a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends (but read activity continues) momentarily while the system deletes the checkpoint.

◆ If a checkpoint becomes inactive for any reason, read/write activity on the PFS continues uninterrupted.

Checkpoint persistence

Checkpoints persist in the following manner:

◆ If the Data Mover on which a given PFS is mounted is rebooted or fails over to another Data Mover, the associated checkpoints remain active. The checkpoints are recovered before the PFS is available, to allow any accumulated PFS modifications to be captured by the checkpoints.

Note: When a checkpoint accumulates a large amount of data due to a heavy volume of PFS modifications (for example, if the checkpoint approaches 500 GB), it should be refreshed regularly. When large checkpoints are current, system performance is optimized during a reboot or failover to another Data Mover.

◆ The PFS can be temporarily unmounted while its checkpoints are mounted, but it cannot be permanently unmounted until its checkpoints are unmounted.

◆ When a PFS with a checkpoint attached to it is unmounted and mounted on another Data Mover, the checkpoints are recovered, but must be remounted.

◆ In version 5.2 and later, all checkpoints (by default) are automatically mounted upon creation. However, checkpoints created in other versions of SnapSure are not mounted, and are not automatically mounted upon upgrade to version 5.2 and later. Mount these checkpoints (on the same Data Movers as the PFS) after upgrade to ensure SnapSure efficiency.

Guidelines

Follow these guidelines when using SnapSure in your business environment:

◆ Configure the Celerra Network Server to provide email or SNMP trap notification of the following events:

• A SavVol or PFS reaches its percent-full threshold (also known as high water mark, or HWM), or becomes full.

• A scheduled checkpoint-refresh attempt fails or the schedule fails to run.

"Appendix: Facility and event ID numbers for SNMP/email" on page 67 provides the facility and event numbers to use. Then, read Configuring Celerra Events and Notifications for the procedures.

◆ Review the Celerra Network Server Release Notes for the most recent technical information on using SnapSure on Celerra with this software release.

Restrictions

Review the following restrictions before using SnapSure with other Celerra features.

SnapSure and ATA drives

SnapSure allows the PFS and the SavVol to be on an Advanced Technology Attached (ATA) drive. It also allows the PFS to be on a non-ATA drive and the SavVol to be on an ATA drive. You can specify an ATA drive for the SnapSure checkpoint SavVol by using the -pool option when creating the first PFS checkpoint. EMC recommends the PFS and SavVol be on the same type of backend system.


SnapSure and Celerra File-Level Retention Capability

You can create SnapSure checkpoints of a Celerra File-Level Retention Capability file system. However, as with any file system, use caution when restoring a File-Level Retention Capability file system from one of its checkpoints. Suppose you create a SnapSure checkpoint of a file system, and then write a file (for example, file1) into the file system. Later, if you restore the file system using the checkpoint, then file1 will not remain because it did not exist when the checkpoint was created. The Celerra Network Server provides a warning when the restore of a File-Level Retention Capability file system is attempted, and performing the action requires fs_ckpt -Restore -Force (through the CLI) or clicking OK at the confirmation pop-up window (through Celerra Manager). Using File-Level Retention on Celerra offers more information.

SnapSure and data migration

Do not perform data migration tasks (using a feature such as the Celerra Data Migration Service) while SnapSure is active. Since migration activity produces significant changes in the data environment, be sure to complete migration tasks before using SnapSure to create and manage checkpoints. This avoids consuming SnapSure resources to capture information unnecessary to checkpoint.

SnapSure and Celerra FileMover

Celerra's FileMover feature supports SnapSure checkpoints. Using Celerra FileMover provides more information.

SnapSure and SRDF

If your Celerra Network Server environment is configured with Symmetrix Remote Data Facility (SRDF®) protection, and you plan to create checkpoints of a production file system residing on an RDF volume, ensure the entire SnapSure volume (the SavVol, which stores the checkpoints) resides in the same pool of SRDF volumes used to create the PFS. Otherwise, if any part of the SavVol is stored on a local standard (STD) volume rather than on an RDF volume, the checkpoints are not failed over and thus are not recoverable in the SRDF-failback process. Using SRDF/S with Celerra for Disaster Recovery offers more information.

SnapSure and TimeFinder/FS

If you use the TimeFinder®/FS feature to create and mirror-on a snapshot of the PFS while using SnapSure to create a checkpoint of the PFS, the information in the snapshot may differ from that in the checkpoint. This is because as the snapped TimeFinder/FS copy completes, changes can occur to the PFS that are captured only by the checkpoint.

SnapSure and VDMs

If a PFS is mounted on a Virtual Data Mover (VDM), the checkpoint must be mounted on the same VDM.

If a checkpoint is mounted on a VDM and you move the VDM to a different Data Mover, active checkpoints will persist. Checkpoint schedules resume at the next scheduled runtime after the move occurs. The CVFS name parameter will be set to that of the new Data Mover (where the VDM is loaded). The Checkpoint Virtual File System (CVFS) timestamp will be adjusted to the new Data Mover’s time and time zone. The SCSF timestamp will be adjusted to the new Data Mover’s time, but not the new time zone.

SnapSure resource requirements

To perform properly, SnapSure requires that sufficient system resources be available, including:

◆ Disk space for creating and extending SavVols

◆ System space for the disks used by SavVols

The following sections discuss how these system resources are designed to support SnapSure operations, and options for changing them if needed. They also discuss options for creating and managing checkpoints (and related SavVol activity) using SnapSure’s features.

Disk space required for creating and extending SavVols

SnapSure requires a SavVol, a checkpoint-specific volume in which to hold data, when you create the first PFS checkpoint. When you create more than one checkpoint of the same PFS, SnapSure uses the same SavVol, but logically separates the point-in-time data using unique checkpoint names. A SavVol is not a Symmetrix business continuance volume (BCV) as required by the Celerra TimeFinder/FS feature.

When you create the first PFS checkpoint, you can choose to let SnapSure automatically create and manage a SavVol using its default settings, or you can create and manage SavVol activity yourself using SnapSure’s volume and high water mark (HWM) options.

When you create the first PFS checkpoint using SnapSure’s default values (for example, if you use the CLI command fs_ckpt <fsname> -C and specify no volume_name, size=, or %full= option), SnapSure automatically:

◆ Creates a SavVol using this criteria:

If PFS > 10 GB, then SavVol = 10 GB.

If (PFS < 10 GB) and (PFS > 64 MB), then SavVol = PFS size.

If PFS < 64 MB, then SavVol = 64 MB (minimum SavVol size).

◆ Monitors the SavVol and extends it (by 10 GB) if a checkpoint reaches an HWM of 90 percent full, to keep the checkpoint(s) active (readable).

Note: By default, EMC Automatic Volume Manager (AVM) algorithms determine the selection of disks used for a SavVol. AVM tries to match the storage pool for the SavVol with that of the PFS whenever possible. However, SnapSure provides the option of specifying any available pool for the SavVol when you create the first PFS checkpoint if you do not want to use the AVM selection method. The SavVol can be autoextended as long as space is available on the same disk type. For example, if the SavVol is built on a CLSTD (CLARiiON® standard) disk, then autoextend can occur only if space is available on another CLSTD disk; the autoextend feature cannot use a CLATA (CLARiiON ATA) disk for the autoextend in this example. (Use nas_disk -l to display the disk types.)

An event message is sent to the Celerra Network Server system log each time an automatic SavVol extension succeeds (or fails).
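A minimal sketch of a default first-checkpoint creation, using the CLI form cited above with a hypothetical PFS named ufs1; SnapSure then sizes and extends the SavVol automatically according to the criteria above:

$ fs_ckpt ufs1 -C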


A large and busy PFS requires a bigger SavVol than a smaller, less active PFS. SnapSure’s options allow you to make the SavVol size more proportional to the PFS size and the activity level you anticipate the checkpoints will need to capture. In general, start by specifying a SavVol size of 10 percent of the PFS. As you become more familiar with PFS activity, adjust the SavVol size as follows:

1. To keep any checkpoint data existing for the PFS, back up the data to tape or available disks before proceeding.

2. Delete all PFS checkpoints. This causes SnapSure to automatically delete the associated (wrong-size) SavVol.

3. Create a new checkpoint of the PFS, specifying a custom-size SavVol.

Note: If you use only the size option (for example, size = 100M) when creating the first checkpoint of a PFS, the Celerra system first selects the same pool and storage system as the PFS when creating the SavVol. If the PFS spans multiple storage systems, the system picks the same set of storage systems to find space for the SavVol after it ensures there is at least the specified (100M) amount of space in the pool on the storage system(s). If not, the system extends the storage pool by adding unused disk volumes, and then creates the SavVol. If the pool cannot be extended, the QoS method of disk selection is used.
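A minimal sketch of step 3, assuming a hypothetical PFS named ufs1 and using the size option described in the note above:

$ fs_ckpt ufs1 -C size=100M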

Read "Using SnapSure’s options and features" on page 20 for more tips on SnapSure administration tasks, such as conserving SavVol space.

System space required for disks that can be used for SavVols

The system allows SavVols to be created and extended until the sum of the space consumed by all SavVols on the system exceeds 20 percent (default) of the total space available on the system. This calculation includes SavVols used by SnapSure and other applications, such as the Celerra Replicator. Before using SnapSure, ensure your system has enough space for the SavVols for each PFS of which you plan to create checkpoints.

If the 20 percent value is reached, the following message appears in the command output, and also in the /nas/log/cmd_log.err and /nas/log/nas_log.al.err files:

Error 2238: Disk space: quota exceeded
Feb 1 15:20:07 2005 NASDB:6:101 CKPT volume allocation quota exceeded

To manually gauge system space, use nas_fs -size <checkpoint name> to calculate the SavVol size for each PFS with a checkpoint. Additional checkpoints on a PFS need not be calculated since checkpoints share the same SavVol per PFS. Then compare the sum of all SavVols to the entire volume size as calculated by totaling the size of all system disks using nas_disk -list:

$ nas_fs -size pfs001_ckpt57
volume: total = 1181024 avail = 407241 used = 773783 ( 69% ) (sizes in MB)

$ nas_disk -l
id   inuse  sizeMB  storageID-devID      type   name        servers
1    y      11263   APM00043200225-0000  CLSTD  root_disk   1,2
2    y      11263   APM00043200225-0001  CLSTD  root_ldisk  1,2
3    y      2047    APM00043200225-0002  CLSTD  d3          1,2
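As an illustrative calculation (the figures here are hypothetical, not taken from the output above): if the sizeMB values from nas_disk -l total 1,000,000 MB across all system disks, then all SavVols combined (for SnapSure, Celerra Replicator, and other features) may grow to 200,000 MB under the default 20 percent quota before Error 2238 is returned, so the per-PFS SavVol sizes reported by nas_fs -size should sum comfortably below that figure.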

To make space available, delete unwanted data or change the system space parameter using the following procedure. These error messages could indicate you are using a volume the Data Mover cannot see. Check the location of the volume and ensure the Data Mover can see it.

Use this procedure to change the percentage of system space allotted to SavVols.


1. Log in to the Control Station.

2. Open /nas/site/nas_param with a text editor. A short list of configuration lines appears.

3. Locate this SnapSure configuration line in the file:

ckpt:10:100:20:

Where:
10 = Control Station polling interval rate (in seconds)
100 = maximum rate at which a file system is written (in MB/second)
20 = percentage of the entire system’s volume allotted to the creation and extension of all the SavVols used by Celerra software features

Note: If this line does not exist, it means the SavVol-space-allotment parameter is currently set to its default value (20), which means 20 percent of the system space can be used for SavVols. To change this setting, you must first add the line: ckpt:10:100:20:

4. Change the third parameter (percent of entire system’s volume allotted to SavVols) as needed. Values are 20 (default) through 99.

Note: To ensure proper SnapSure functionality, do not use a value below 20 percent for the third value.

Do not change the Control Station event polling interval rate (default = 10), or the maximum rate at which a file system is written (default = 100). Doing so will have a negative impact on system performance. Do not change any other lines in this file without a thorough knowledge of the potential effects on the system. Contact your EMC Customer Support Representative for more information.

5. Save and close the file.

Note: Changing this value does not require a Data Mover or Control Station reboot.
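For example (a sketch; only the third field changes), to allot 30 percent of the system’s volume to SavVols, the edited line would read:

ckpt:10:100:30: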

The Celerra Network Server Parameters Guide provides more information on changing system parameters.

If the system space quota for SavVols is exceeded, and autoextension fails, SnapSure tries to keep the checkpoint active using other means. First, it uses up the remaining SavVol space (for example, the 10 percent left over when an HWM of 90 percent is reached) for the checkpoint. Next, SnapSure deletes the oldest checkpoint of the PFS and recycles the space for the new checkpoint. It repeats this process until it runs out of old checkpoints. (You can view checkpoint activity in the system log file.) If SnapSure uses up all possible space it can find to keep a checkpoint active, all PFS checkpoints (including the newest checkpoint and the checkpoint from which the PFS is being restored, if applicable) become inactive and the data is lost.

Using SnapSure’s options and features

This section provides administrative tips helpful when using SnapSure’s options and features to support your business environment. It discusses several ways to conserve space, important facts about restoring from a checkpoint, and enabling client access to the online checkpoints.

Options for conserving space

You can conserve the space SnapSure uses for the checkpoint(s) as follows:

◆ Specify an HWM of 0 percent when you create the checkpoint (using the %full option). This saves system-wide space by preventing the SavVol from extending and by automatically deleting the oldest checkpoint as more SavVol space is needed.

◆ Use SnapSure’s refresh feature anytime after creating the checkpoint. This conserves SavVol space by recycling used space.

◆ Delete any unwanted checkpoint(s) to conserve SavVol space. Deleting the oldest checkpoint(s) typically frees more blockmap space than does deleting newer checkpoints. Deleting and refreshing checkpoints frees up the same amount of SavVol space.

If you set the HWM to 0 percent when you create a checkpoint, this tells SnapSure not to extend the SavVol when a checkpoint reaches full capacity. Instead, SnapSure deletes the data in the oldest checkpoint and recycles the space to keep the most recent checkpoint active. It repeats this behavior each time a checkpoint needs space. If you use this setting and have a critical need for the checkpoint information, periodically check the SavVol space used (using the fs_ckpt <fsname> -list command), and before it becomes full, copy the checkpoint to tape, read it with checkpoint applications, or extend it to keep it active. In summary, if you plan to use the 0 percent HWM option at creation time, auditing and extending the SavVol are important checkpoint management tasks to consider.

Note: Using an HWM of 100 percent does not prevent SavVol autoextension; only a 0 percent HWM prevents the SavVol from autoextending. An HWM of 100 percent allows the checkpoint to use all of its SavVol space before autoextending, but provides no notification that would let you ensure resources are available.

You can recycle SavVol space by using SnapSure's refresh feature. Rather than consuming new SavVol space by creating another checkpoint of the PFS, refresh an existing one anytime after you create it: SnapSure deletes the data in the checkpoint you specify and creates a new one in its place. You can refresh any active checkpoint of a PFS, in any order. If the checkpoint contains important data, back it up or use it before you refresh it; the refresh operation is irreversible. When you refresh a checkpoint, SnapSure keeps the file system name, ID, and mount state of the old checkpoint for the new one. The PFS must remain mounted during a refresh. With the Celerra Manager, you can schedule checkpoint refreshes to occur regularly.


Note: When a checkpoint accumulates a large amount of data due to a heavy volume of PFS modifications (for example, if the checkpoint approaches 500 GB), it should be refreshed regularly. When large checkpoints are up to date, system performance is optimized during a reboot or failover to another Data Mover.

To reuse SavVol space, periodically list all PFS checkpoints, and delete (or refresh) those no longer needed. Checkpoints must be unmounted to be deleted. The Celerra Manager unmounts the checkpoints as part of the deletion process. You cannot delete a checkpoint in use or pending (scheduled, but not yet created).
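For example, a periodic cleanup using the commands detailed later in this module might look like this sketch (the checkpoint name is illustrative):

$ fs_ckpt ufs1 -list                        # identify checkpoints that are no longer needed
$ nas_fs -delete ufs1_ckpt2 -o umount=yes   # unmount and delete an unneeded checkpoint in one step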

Restoring from a checkpoint

From an active checkpoint, you can restore an online PFS to its state at the point in time when the checkpoint was created. Before it begins the restore operation, SnapSure creates a new checkpoint of the PFS in case you do not want to keep the restored image. It uses a 75 percent HWM for the SavVol when creating the new checkpoint. Before you attempt a restore, ensure SnapSure has enough SavVol space available to create the new checkpoint and to keep the checkpoint you are restoring from active during the operation; otherwise, all data in the PFS checkpoints could be lost, including the new checkpoint.

When restoring, SnapSure prohibits a 0 percent HWM, or one greater than 75 percent, for the SavVol (by automatically overriding such values with 75 percent). The PFS must remain mounted during a restore. However, if SavVol usage reaches the 75 percent HWM during a restore and the SavVol must extend, the Celerra Network Server remounts the PFS as read-only until the extension is done, at which point the PFS is remounted as read/write.

Use caution when restoring a file system from one of its checkpoints. If you create a SnapSure checkpoint of any file system and then write a file (for example, file1) into the file system, and later restore the file system using the checkpoint, then file1 will not remain because it did not exist when the checkpoint was created. The Celerra Network Server provides a warning when a restore is attempted.

The restore operation may fail the first time if SnapSure determines there is insufficient SavVol space and extends the SavVol. The next restore attempt succeeds because needed space is available.

If you create a checkpoint of a PFS, extend the PFS, and then restore the PFS using its smaller checkpoint, this action does not "shrink" the PFS, nor does it free the volume space used for the extension. Rather, the system reserves the space used for the extension and allows you to reclaim it the next time you extend the PFS. You only need to extend the PFS by a small amount (for example, 2 MB) to reclaim the original extension space for the new extension request. The 2 MB is added to the original extension size to create the new extension.
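As a hedged illustration, assuming the nas_fs size= extension syntax on your release (verify with the nas_fs man page), reclaiming the reserved space after such a restore might look like:

$ /nas/sbin/rootfs_ckpt ufs1_ckpt1 -Restore   # restore the PFS from its (smaller) checkpoint
$ nas_fs -xtend ufs1 size=2M                  # a small extension reclaims the reserved extension space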

Accessing online checkpoints

Clients can use either the EMC CVFS or the Microsoft SCSF functionality to access online, mounted checkpoints in the PFS namespace. This capability eliminates system administrator involvement when a client needs to list, view, or copy a point-in-time file or directory in a checkpoint, or use it to restore information back to a point in time.


CVFS version 2 (available in NAS version 4.2 and later) supports both CIFS and NFS clients. It simulates virtual directory links to previous versions of file system objects on the PFS, and clients can navigate these virtual links as if they were read-only directories on the PFS. The virtual checkpoints require no storage.

CVFS also enables clients to change the name of the virtual checkpoint directory (from its default name, .ckpt), and to change the name of the checkpoints listed in the directory, as desired. The CVFS functionality is enabled on the Celerra Network Server by default. Read "Accessing checkpoints using CVFS" on page 50 for CVFS usage procedures.

SnapSure supports Microsoft Windows SCSF, a feature enabled by Windows Server 2003. SCSF provides Windows Server 2003 and Windows XP clients with direct access to point-in-time versions of their files (in checkpoints created with SnapSure) by providing a Previous versions tab listing all folders available in the checkpoint (shadow-copy) directory. The SCSF software is preloaded on Windows Server 2003 clients. Windows XP clients must install the SCSF software, which you can download from the Microsoft website.

To download the Shadow Copy Client, visit the Microsoft website, and download the ShadowCopyClient.msi file.

As of this release, SnapSure does not support the SCSF feature for Windows 2000 clients. Contact your EMC Customer Support Representative for the latest SnapSure support information.

Read "Accessing checkpoints using SCSF" on page 59 for the usage procedures to employ SCSF.

NDMP backup-created checkpoints

When listing checkpoints created on a PFS, a temporary checkpoint created by the NDMP backup facility appears in the list. This type of checkpoint, named automaticTempNDMPCkpt<id>-<srcFsid>-<timestamp>, is created by Celerra at the request of NDMP client (DMA) software, such as NetWorker, and is meant for backups only.

To have a checkpoint used for backup automatically, set the NDMP environment variable SNAPSURE=y in the NDMP client software. A checkpoint is then created automatically and used when the NDMP backup is initiated, and deleted automatically after the backup completes. Be aware that if you manually delete this checkpoint, no warning message appears and the NDMP backup fails.


If the NDMP variable SNAPSURE is not configured or not supported by the DMA software, you can set the NDMP.snapsure parameter to make backups use SnapSure. The NDMP variable SNAPSURE always overrides the NDMP.snapsure parameter. Table 2 shows how the NDMP variable SNAPSURE and the NDMP.snapsure parameter work together.

Table 2 Creating a checkpoint with the NDMP SNAPSURE variable and NDMP.snapsure parameter

NDMP variable SNAPSURE   NDMP.snapsure parameter   Checkpoint
(set in DMA)             (set in Celerra)          created
-----------------------  ------------------------  ----------
Y                        1                         Yes
Y                        0                         Yes
N                        1                         No
N                        0                         No
(not set)                1                         Yes
(not set)                0                         No

!CAUTION! Do not unmount, refresh, or delete a checkpoint in use by an NDMP backup; otherwise, the backup will fail.

Configuring NDMP Backups on Celerra provides more detail.
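For example, if your DMA cannot set the SNAPSURE variable, you might enable the parameter on the Data Mover with server_param. This is a sketch: the facility and parameter names are assumptions based on the NDMP.snapsure naming above, so verify them in the Celerra Network Server Parameters Guide.

$ server_param server_2 -facility NDMP -modify snapsure -value 1
$ server_param server_2 -facility NDMP -info snapsure    # confirm the new setting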

Automated checkpoint scheduling

An automated checkpoint-refresh solution can be configured using the CLI nas_ckpt_schedule command, the Celerra Manager, or a Linux cron job script. Schedules created with the Celerra Manager are not visible from, and cannot be modified with, the UNIX crontab command. All checkpoint schedules created using the Celerra Manager can be managed using the CLI, and all checkpoint schedules created using the nas_ckpt_schedule command can be managed through the Celerra Manager.

This document explains how to schedule and manage checkpoints using the CLI.

Using the Celerra Manager to automate checkpoints

This section summarizes the schedule attributes of the Celerra Manager. The Celerra Manager Online Help system provides more detailed usage information.

Creating and managing automated checkpoint schedules is simplified by enhancements to the Celerra Manager.



Using the Checkpoints > Schedules tab of the Celerra Manager, you can schedule checkpoint creation and refreshes on any combination of hours of the day and days of the week or month, which further simplifies administrative tasks. More than one schedule per PFS is supported, as is the ability to name scheduled checkpoints, name and describe each schedule, and query the schedule associated with a checkpoint. You can also create a schedule for a PFS that already has a checkpoint on it, and modify existing schedules.

If you prefer to quickly create a basic checkpoint schedule without some of these customization options, click any mounted production file system listed in the Celerra Manager interface to display the File System Properties page, and then click the Schedules tab. This tab enables hourly, daily, weekly, or monthly checkpoints to be created using default checkpoint and schedule names and no ending date for the schedule.

Creating multiple schedules

SnapSure supports up to 96 checkpoints (scheduled or otherwise) per PFS, as system resources permit. To avoid system resource conflicts, schedules should not have competing refresh times. For example, if you plan to create three schedules, they should not each have a refresh time occurring at 2:10 P.M. on Monday. Instead, the times on the schedules should be staggered, similar to the following:

Schedule on ufs1: Monday 2:10 P.M.

Schedule on ufs2: Monday 2:20 P.M.

Schedule on ufs3: Monday 2:30 P.M.

Note: Do not schedule a checkpoint-refresh from one to five minutes past the hour because this conflicts with an internal NAS database backup process. EMC recommends you allow at least 15 minutes between the creation/refresh of SnapSure checkpoints of the same PFS. This includes checkpoints in the same schedule or between schedules running on the same PFS.
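Using the nas_ckpt_schedule syntax described in "Automating checkpoint schedules" on page 39, the staggered schedules above might be created as follows (schedule names and -keep counts are illustrative):

$ nas_ckpt_schedule -create ufs1_sch -filesystem ufs1 -recurrence weekly -days_of_week Mon -runtimes 14:10 -keep 4
$ nas_ckpt_schedule -create ufs2_sch -filesystem ufs2 -recurrence weekly -days_of_week Mon -runtimes 14:20 -keep 4
$ nas_ckpt_schedule -create ufs3_sch -filesystem ufs3 -recurrence weekly -days_of_week Mon -runtimes 14:30 -keep 4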

Minimizing scheduled checkpoint-creation/refresh failures

SnapSure checkpoint creation and refresh failures can occur if a schedule conflicts with other background processes, such as the internal NAS database backup that runs from one to five minutes past the hour. If a refresh failure occurs due to a schedule or resource conflict, you can manually refresh the affected checkpoint, or let it refresh automatically in the next schedule cycle.

When scheduled tasks are missed because resources are temporarily unavailable, they are automatically retried up to 15 times, with a 15-second wait before each retry. Retries do not occur for conditions such as network outages or insufficient disk space.

Refresh-failure events are sent to the /nas/log/sys_log file. You can configure the Celerra server to provide email or SNMP trap notification of refresh failures, too. Read "Appendix: Facility and event ID numbers for SNMP/email" on page 67 for the numbers to use, and then consult Configuring Celerra Events and Notifications for the procedure.


Note: EMC recommends allowing at least 15 minutes between the creation/refresh of SnapSure checkpoints of the same PFS. This includes checkpoints in the same schedule, or between schedules running on the same PFS. The Celerra Manager helps to check for and prevent the 15-minute checkpoint creation/refresh overlap during schedule creation and modification. This restriction does not apply when using the CLI.


User interface choices for SnapSure

The Celerra Network Server offers flexibility in managing networked storage based on your support environment and interface preferences. This technical module describes how to configure and manage SnapSure using the command line interface (CLI). You can also perform SnapSure tasks, including checkpoint scheduling, using the Celerra Manager.

Getting Started with Celerra provides more information on user interface choices.

Using SnapSure from the CLI

If you plan to use SnapSure from the CLI, consult the Celerra Network Server's man page for the fs_ckpt command, or the Celerra Network Server Command Reference Manual, to understand the command structure, options, and switches used to create and manage checkpoints. Next, consult the specific CLI examples of these tasks in this technical module, beginning with "Creating checkpoints" on page 28, to guide you through the process.


SnapSure roadmap This section lists the tasks for creating and managing checkpoints.

Checkpoint required task:

◆ "Creating checkpoints" on page 28

Checkpoint managing tasks:

◆ "Listing checkpoints" on page 31

◆ "Refreshing checkpoints" on page 36

◆ "Deleting checkpoints" on page 38

◆ "Automating checkpoint schedules" on page 39

◆ "Restoring PFS from a checkpoint" on page 47

◆ "Accessing checkpoints" on page 50


Creating checkpoints

By default, when you create the first checkpoint of a PFS, SnapSure creates a SavVol, which is a checkpoint-specific volume. It uses this SavVol for all additional checkpoints you create of the same PFS. Consult "SnapSure resource requirements" on page 17 for more SavVol information.

Create a checkpoint

Action

To create a checkpoint using SnapSure's default storage and checkpoint name assignments, use this command syntax:
$ fs_ckpt <fs_name> -Create

Where:
<fs_name> = file system on which you want to create a checkpoint

Note: EMC recommends you allow at least 15 minutes between the creation/refresh of SnapSure checkpoints of the same PFS. This includes checkpoints created via the CLI, and those created/refreshed in an automated schedule, or between schedules running on the same PFS.

Example:
To create a checkpoint of production file system ufs1, type:
$ fs_ckpt ufs1 -Create

CAUTION! When creating checkpoints, be careful not to exceed your system's limit. Celerra permits 96 checkpoints per PFS, whether or not the PFS is replicated, for all systems except the Model 510 Data Mover (which permits 32 checkpoints with PFS replication and 64 without). This limit counts existing checkpoints and those already created in a schedule, and may also count two restartable checkpoints as well as a third checkpoint created by certain replication operations on either the PFS or SFS.

If you are at the limit, delete existing checkpoints to create space for newer ones, or decide not to create new checkpoints at all if the existing ones are more important. Be aware that when you start to replicate a file system, the replication facility must be able to create two checkpoints; otherwise, replication will not start. For example, if you have 95 checkpoints and start a replication, the 96th checkpoint is created, but replication fails when the system tries to create the 97th checkpoint, which would exceed the limit.

Also, when scheduling, do not keep more checkpoints than the limit allows; otherwise, you cannot start a replication. In other words, even if all the checkpoints a schedule keeps are created, the total must remain within the limit.


Output

operation in progress (not interruptible)...
id        = 1965
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1065
pool      = clar_r5_economy
member_of = root_avm_fs_group_4
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
ckpts     = ufs1_ckpt1
stor_devs = APM00042000817-0028,APM00042000817-001B
disks     = d43,d12
  disk=d43 stor_dev=APM00042000817-0028 addr=c16t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c48t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c0t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c32t2l8 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c0t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c32t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c16t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c48t1l11 server=server_2
id        = 1967
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp1068
pool      = clar_r5_economy
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Jan 19 07:45:34 EST 2005
used      = 6%
full(mark)= 90%
stor_devs = APM00042000817-002A,APM00042000817-001D
disks     = d44,d13
  disk=d44 stor_dev=APM00042000817-002A addr=c16t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c48t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c0t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c32t2l10 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c0t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c32t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c16t1l13 server=server_2


Notes

• In this example, a checkpoint name was not specified in the command line, so SnapSure assigned a default name (ufs1_ckpt1) based on the PFS name.

• A metavolume or volume pool for the SavVol was not specified in the command line, so SnapSure (through AVM) assigned a volume from the same pool on which the PFS is built, named vp<n>, where <n> is a volume ID. (A sketch showing how to override these defaults follows these notes.)

Note: SnapSure allows the PFS and the SavVol to be on an ATA drive. It also allows the PFS to be on a non-ATA drive and the SavVol to be on an ATA drive. You can specify an ATA drive for the SnapSure checkpoint SavVol by using the -pool option when you create the first PFS checkpoint. EMC recommends that the PFS and the SavVol be on the same type of backend system.

• The HWM for the SavVol specified when the latest checkpoint is created (90 percent by default) overrides the HWM currently set for the SavVol (if different). In other words, to change the HWM for the SavVol, create a new checkpoint and specify the new HWM.

• All checkpoints created in version 5.2 and later are automatically mounted by default to enable sharing or exporting. Mounted checkpoints are always considered "in use" (in_use = True).

• The Celerra Network Server’s man page or the Celerra Network Server Command Reference Manual provides the complete syntax of the fs_ckpt command.
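For comparison, a sketch of a creation command that overrides these defaults by naming the checkpoint and specifying the SavVol pool and HWM (the pool name is illustrative; confirm option placement in the fs_ckpt man page):

$ fs_ckpt ufs1 -name ufs1_snap -Create pool=clar_r5_performance %full=75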


Listing checkpoints

You can list all PFS checkpoints or query individual checkpoints, depending on the information you seek. This section describes the various ways of listing checkpoints.

List all PFS checkpoints

There are two methods to list all PFS checkpoints. The first method provides checkpoint creation dates and SavVol information. The second method provides storage details about the related PFS.


List all PFS checkpoints with checkpoint creation dates and SavVol information

Action

To list all PFS checkpoints (in order of creation, with checkpoint creation date and SavVol information), use this command syntax:
$ fs_ckpt <fs_name> -list

Where:
<fs_name> = name of the PFS of which the checkpoints were created

Note: The file system name may be truncated if it is longer than 19 characters. To display the full file system name, list it by file system ID, replacing <fs_name> with id=<fs_ID> in the command syntax above.

Example:
To list all checkpoints created of a PFS named ufs1, type:
$ fs_ckpt ufs1 -list

Output

id   ckpt_name  creation_time           inuse full(mark) used
1967 ufs1_ckpt1 01/19/2005-07:45:34-EST y     90%        38%
1968 ufs1_ckpt2 01/19/2005-08:45:55-EST y     90%        38%
1969 ufs1_ckpt3 01/19/2005-09:45:46-EST y     90%        38%
1970 ufs1_ckpt4 01/19/2005-10:45:54-EST y     90%        38%
1971 ufs1_ckpt5 01/19/2005-11:45:01-EST y     90%        38%
1972 ufs1_ckpt6 01/19/2005-12:45:09-EST y     90%        38%

Note

• When the system creates checkpoints using default names (that is, the PFS name followed by ckpt and a numeric suffix), it uses the first available suffix it finds. Thus, when the oldest checkpoints become inactive, and a new one is created, its numeric suffix may not indicate the order of its creation. The order of checkpoint creation is the order in which they appear using the fs_ckpt <fsname> -list command. In this example, ufs1_ckpt6 is the newest checkpoint of the PFS, and ufs1_ckpt1 is the oldest. (The last checkpoint in the list is the newest checkpoint.)

• When you refresh any checkpoint of a PFS, SnapSure automatically changes its order on the list to reflect it is the most recent point-in-time file system.

• Mounted checkpoints are considered "in use" (y).
• The value in the full(mark) column is the high water mark currently set for the SavVol.
• The value in the used field is the cumulative total of the SavVol used by all PFS checkpoints (not each individual checkpoint in the SavVol).
• If a Data Mover is inaccessible, NOT ACCESSIBLE appears in the used field.
• The checkpoint creation_time timestamp is the time and time zone of the local Control Station.


List all PFS checkpoints with PFS-related details

Action

To list all PFS checkpoints in the order of creation, including the mount state, storage pool, and volume used for the PFS, and the Data Mover on which the checkpoints are mounted, use this command syntax:
$ nas_fs -i <fs_name>

Where:
<fs_name> = name of the PFS of which the checkpoints were created

Note: The file system name may be truncated if it is longer than 19 characters. To display the full file system name, replace <fs_name> with id=<fs_ID> in the -i (info) option above.

Example:
To list all checkpoints created of a PFS named ufs1, type:
$ nas_fs -i ufs1

To list all PFS checkpoints in their creation order (but without a creation date), filtered by a common checkpoint-name component (such as ckpt), type:
$ nas_fs -l | grep ckpt

Note: The following example of output results from using the first method explained above.

Output

id        = 1965
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v1065
pool      = clar_r5_economy
member_of = root_avm_fs_group_4
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
ckpts     = ufs1_ckpt1,ufs1_ckpt2,ufs1_ckpt3,ufs1_ckpt4,ufs1_ckpt5,ufs1_ckpt6
stor_devs = APM00042000817-0028,APM00042000817-001B
disks     = d43,d12
  disk=d43 stor_dev=APM00042000817-0028 addr=c16t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c48t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c0t2l8 server=server_2
  disk=d43 stor_dev=APM00042000817-0028 addr=c32t2l8 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c0t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c32t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c16t1l11 server=server_2
  disk=d12 stor_dev=APM00042000817-001B addr=c48t1l11 server=server_2



Note

• When the system creates checkpoints using default names (that is, the PFS name followed by ckpt and a numeric suffix), it uses the first available suffix it finds. Thus, when the oldest checkpoints become inactive, and a new one is created, its numeric suffix may not indicate the creation order. The order of checkpoint creation is the order in which they appear using the nas_fs -i command.

• In this example, ufs1_ckpt6 is the newest PFS checkpoint, and ufs1_ckpt1 is the oldest. (The last checkpoint in the list is the newest checkpoint.)

• When you refresh any PFS checkpoint, SnapSure automatically changes its order on the list to reflect that it is the most recent point-in-time file system.

List an individual checkpoint

Listing an individual checkpoint shows you the size, volume number, and pool type of the SavVol on which the checkpoint is created, the percentage of the SavVol currently used (by all PFS checkpoints), and the HWM set for autoextension of the SavVol. It also identifies the storage devices and the disks used by the Automatic Volume Manager (AVM) to create the SavVol.

Action

To list information about an individual checkpoint, use this command syntax:
$ nas_fs -i -s <fs_name>

Where:
<fs_name> = name of the checkpoint for which you want information

Note: The file system name may be truncated if it is longer than 19 characters. To display the full file system name, use the file system ID by replacing <fs_name> with id=<fs_ID> in the -i (info) option above.

Example:
To list information about the checkpoint ufs1_ckpt1 of the PFS ufs1, type:
$ nas_fs -i -s ufs1_ckpt1

Output

id        = 1967
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp1068
pool      = clar_r5_economy
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Jan 19 07:45:34 EST 2005
size      = volume: total = 2000 avail = 1277 used = 723 ( 38% ) (sizes in MB)
            ckptfs: total = 1968 avail = 1968 used = 0 ( 0% ) (sizes in MB)
            ( blockcount = 4096000 )
used      = 38%
full(mark)= 90%


stor_devs = APM00042000817-002A,APM00042000817-001D
disks     = d44,d13
  disk=d44 stor_dev=APM00042000817-002A addr=c16t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c48t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c0t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c32t2l10 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c0t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c32t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c16t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c48t1l13 server=server_2

Note

• In this example, ufs1_ckpt1 uses volume pool 1068 (vp1068) for its SavVol. Additional checkpoints created of ufs1 share this SavVol.

• The SavVol is now at 38 percent capacity. It autoextends by 10 GB when it reaches 90 percent capacity.

Note: The HWM specified for the SavVol when the latest checkpoint is created (90 percent by default) overrides the HWM currently set for the SavVol (if different). In other words, you can change the HWM for the SavVol when you create a new checkpoint.


Refreshing checkpoints

When you refresh a checkpoint, SnapSure deletes the checkpoint and creates a new one, recycling SavVol space while maintaining the old file system name, ID, and mount state. If a checkpoint contains important data, back it up or use it before you refresh it.

Refresh a checkpoint

Action

To refresh a checkpoint, use this command syntax:
$ fs_ckpt <fs_name> -refresh

Where:
<fs_name> = name of the checkpoint

Note: EMC recommends allowing at least 15 minutes between the creation/refresh of SnapSure checkpoints of the same PFS. This includes checkpoints created through the CLI and those created/refreshed in an automated schedule or between schedules running on the same PFS.

If a checkpoint contains important data, back it up or use it before you refresh it.

Example:
To refresh a checkpoint named ufs1_ckpt1, type:
$ fs_ckpt ufs1_ckpt1 -refresh


Output

operation in progress (not interruptible)...
id        = 1967
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp1068
pool      = clar_r5_economy
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Jan 19 17:07:30 EST 2005
used      = 38%
full(mark)= 90%
stor_devs = APM00042000817-002A,APM00042000817-001D
disks     = d44,d13
  disk=d44 stor_dev=APM00042000817-002A addr=c16t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c48t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c0t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c32t2l10 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c0t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c32t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c16t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c48t1l13 server=server_2

Note

• If multiple PFS checkpoints exist, and you refresh a checkpoint making it the newest, SnapSure automatically changes its order in the checkpoint list (placing it last) to reflect it is the most recent point-in-time file system. Go to "Listing checkpoints" on page 31 to generate a list of checkpoints.

• If you do not specify an HWM for the SavVol in the refresh command line, SnapSure uses a default value of 90 percent.

• When a checkpoint accumulates a large amount of data due to a heavy volume of PFS modifications (for example, if the checkpoint approaches 500 GB), it should be refreshed regularly. When large checkpoints are up to date, system performance is optimized during a reboot or failover to another Data Mover.


Deleting checkpoints

You must first unmount the checkpoint(s) you want to delete (a CLI-only restriction) because all checkpoints are automatically mounted upon creation.

Delete a checkpoint

Action

To unmount and delete a checkpoint in one step, use this command syntax:
$ nas_fs -delete <fs_name> -o umount=yes

Where:
<fs_name> = name of the checkpoint

Example:
To unmount and delete the checkpoint ufs1_ckpt2, type:
$ nas_fs -delete ufs1_ckpt2 -o umount=yes

Note: There is no fs_ckpt option for deleting checkpoints; use the nas_fs command for this purpose. Deleting a checkpoint does not affect the point-in-time view of the other checkpoints. (Read "Listing checkpoints" on page 31 to view all the checkpoints.) Always back up or use important checkpoint data before deleting it.

Output

id        = 1968
name      = ufs1_ckpt2
acl       = 0
in_use    = False
type      = ckpt
worm      = off
volume    =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =

Note

• You cannot delete a checkpoint if it is mounted, in use, part of a group, or if it is scheduled, but not yet created.

• The SavVol space from deleted checkpoints is recycled for new checkpoints.
• Periodically delete or refresh inactive checkpoints to free more SavVol space. (SnapSure does not automatically delete inactive checkpoints.)
• When you delete all PFS checkpoints, SnapSure automatically deletes the corresponding SavVol.


Automating checkpoint schedules

Before automating checkpoint schedules

◆ EMC recommends you allow at least 15 minutes between the creation/refresh of SnapSure checkpoints of the same PFS. This includes checkpoints created through the CLI and those created/refreshed in an automated schedule, or between schedules running on the same PFS.

◆ SnapSure allows the same checkpoint name to be used when creating different checkpoint schedules of the same PFS, and of different PFSs. No alert or error message is provided. Avoid using the same checkpoint names when creating multiple schedules. Using SnapSure's default checkpoint names avoids this problem.

◆ All checkpoint schedules created using the Celerra Manager can be managed using the CLI, and all checkpoint schedules created using the nas_ckpt_schedule command can be managed through the Celerra Manager.

Create a one-time automated checkpoint schedule

Action

To schedule a one-time checkpoint creation at a future date, use this command syntax:
$ nas_ckpt_schedule -create <name> -filesystem <name> -description <description> -recurrence once -start_on <YYYY-MM-DD> -runtimes <HH:MM> -ckpt_name <ckpt_name>

Where:
<name> = checkpoint schedule name, which must be a system-wide unique name. Names can contain up to 128 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen.
<name> = file system from which to create the checkpoint; you can also use the file system ID. The syntax for the file system ID is -filesystem id=<n>.
<description> = optional text describing the schedule being created.
<YYYY-MM-DD> = date on which the checkpoint creation occurs.
<HH:MM> = time at which the checkpoint creation occurs. Specify this value in 24-hour clock format (for example, 23:59).
<ckpt_name> = optional checkpoint name.

Example:
To create a checkpoint schedule that runs once on October 5, 2007 at 10:22, type:
$ nas_ckpt_schedule -create ufs01_ckpt_sch -filesystem ufs01 -recurrence once -start_on 2007-10-05 -runtimes 10:22


Create an automated daily checkpoint schedule

Action

To schedule the creation of checkpoints on a daily basis, use this command syntax:
$ nas_ckpt_schedule -create <name> -filesystem <name> -description <description> -recurrence daily -every <number_of_days> -start_on <YYYY-MM-DD> -end_on <YYYY-MM-DD> -runtimes <HH:MM> {-keep <number_of_ckpts> | -ckpt_names <ckpt_name>}

Where:
<name> = checkpoint schedule name, which must be a system-wide unique name. Names can contain up to 128 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen.
<name> = file system from which to create the checkpoint; you can also use the file system ID. The syntax for the file system ID is -filesystem id=<n>.
<description> = optional text describing the schedule being created.
<number_of_days> = every number of days to create the checkpoints (for example, 2 generates checkpoints every 2 days); the default is 1.
<YYYY-MM-DD> = dates on which to start and end the checkpoint schedule. The schedule takes effect immediately unless -start_on is specified and runs indefinitely unless -end_on is specified. The start date cannot be in the past, and the end date cannot be before the start date.
<HH:MM> = time(s) at which the checkpoint creation occurs on the specified days. Specify this time in 24-hour clock format (for example, 23:59). Use a comma to separate runtimes.
<number_of_ckpts> = number of checkpoints to keep (create) before the checkpoints are refreshed.
<ckpt_name> = optional checkpoint name(s) created in the checkpoint schedule. Use a comma to separate each checkpoint name. The number of names specified indicates the number of checkpoints to keep before refreshing them. If you do not enter any checkpoint names, the system provides default names for the scheduled checkpoints using the format ckpt_<schedule_name>_<nnn>, where <schedule_name> is the name of the associated checkpoint schedule and <nnn> is an incremental number, starting at 001. The checkpoint name must be a system-wide unique name. Names can contain up to 255 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen or include a blank character. Also, do not use the .ckpt extension to name a checkpoint file system; this reserved keyword is used as a virtual directory entry to enable NFS or CIFS client access to online checkpoints in the PFS namespace and should be used for that purpose only.

Example:
To create an automated daily checkpoint schedule occurring every day at 10:22, starting on October 5, 2007 and ending on November 5, 2007, and keeping the latest three checkpoints before refreshing them, type:
$ nas_ckpt_schedule -create ufs01_ckpt_daily -filesystem ufs01 -recurrence daily -every 1 -start_on 2007-10-05 -end_on 2007-11-05 -runtimes 10:22 -keep 3


Create an automated weekly checkpoint schedule

Action

To schedule the creation of checkpoints on a weekly basis, use this command syntax:
$ nas_ckpt_schedule -create <name> -filesystem <name> -description <description> -recurrence weekly -every <number_of_weeks> -days_of_week <days> -start_on <YYYY-MM-DD> -end_on <YYYY-MM-DD> -runtimes <HH:MM> {-keep <number_of_ckpts> | -ckpt_names <ckpt_name>}

Where:
<name> = checkpoint schedule name, which must be a system-wide unique name. Names can contain up to 128 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen.
<name> = file system from which to create the checkpoint; you can also use the file system ID. The syntax for the file system ID is -filesystem id=<n>.
<number_of_weeks> = every number of weeks to create the checkpoints (for example, 2 runs the checkpoint schedule biweekly, that is, every other week); the default is 1.
<description> = optional text describing the schedule being created.
<days> = days (Mon, Tue, Wed, Thu, Fri, Sat, Sun) of the week on which to run the checkpoint schedule; use a comma to separate each day.
<YYYY-MM-DD> = dates on which to start and end the checkpoint schedule. The schedule takes effect immediately unless -start_on is specified and runs indefinitely unless -end_on is specified. The start date cannot be in the past, and the end date cannot be before the start date.
<HH:MM> = time(s) at which the checkpoint creation occurs on the specified days. Specify this time in 24-hour clock format (for example, 23:59). Use a comma to separate runtimes.
<number_of_ckpts> = number of checkpoints to keep (create) before the checkpoints are refreshed. This is not used with the -ckpt_names option.
<ckpt_name> = optional checkpoint name(s) created in the checkpoint schedule. Separate each checkpoint name with a comma. The number of names specified indicates the number of checkpoints to keep before refreshing them. If you do not enter any checkpoint names, the system provides default names for the scheduled checkpoints using the format ckpt_<schedule_name>_<nnn>, where <schedule_name> is the name of the associated checkpoint schedule and <nnn> is an incremental number, starting at 001. The checkpoint name must be a system-wide unique name. Names can contain up to 255 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen or include a blank character. Also, do not use the .ckpt extension to name a checkpoint file system; this reserved keyword is used as a virtual directory entry to enable NFS or CIFS client access to online checkpoints in the PFS namespace and should be used for that purpose only.

Example:
To create an automated checkpoint schedule occurring every week on Wednesday at 10:22, starting on October 5, 2007 and ending on November 5, 2007, and keeping the latest four checkpoints before refreshing them, type:
$ nas_ckpt_schedule -create ufs01_ckpt_weekly -filesystem ufs01 -recurrence weekly -every 1 -days_of_week Wed -start_on 2007-10-05 -end_on 2007-11-05 -runtimes 10:22 -keep 4


Create an automated monthly checkpoint schedule

Action

To schedule the creation of checkpoints on a monthly basis, use this command syntax:
$ nas_ckpt_schedule -create <name> -filesystem <name> -description <description> -recurrence monthly -every <number_of_months> -days_of_month <days> -start_on <YYYY-MM-DD> -end_on <YYYY-MM-DD> -runtimes <HH:MM> {-keep <number_of_ckpts> | -ckpt_names <ckpt_name>}

Where:
<name> = checkpoint schedule name, which must be a system-wide unique name. Names can contain up to 128 ASCII characters, including a-z, A-Z, 0-9, period (.), hyphen (-), and underscore (_). Names cannot start with a period or hyphen, or contain all numbers.
<name> = file system from which to create the checkpoint; you can also use the file system ID. The syntax for the file system ID is -filesystem id=<n>.
<number_of_months> = every number of months to create the checkpoints (for example, 3 runs the checkpoint schedule every 3 months); the default is 1.
<description> = optional text describing the schedule being created.
<days> = one or more days of the month on which to run the automated checkpoint schedule; specify an integer between 1 and 31 and use a comma to separate days.
<YYYY-MM-DD> = date(s) on which to start and end the checkpoint schedule; the default is the current date. The schedule takes effect immediately unless -start_on is specified and runs indefinitely unless -end_on is specified.
<HH:MM> = time(s) at which the checkpoint creation occurs on the specified days; the default is the current time. Specify this time in 24-hour clock format (for example, 23:59) and use a comma to separate runtimes.
<number_of_ckpts> = number of checkpoints to keep (create) before the checkpoints are refreshed.
<ckpt_name> = optional checkpoint name(s) created in the checkpoint schedule. Separate each checkpoint name with a comma. The number of names specified indicates the number of checkpoints to keep before refreshing them. If you do not enter any checkpoint names, the system provides default names for the scheduled checkpoints using the format ckpt_<schedule_name>_<nnn>, where <schedule_name> is the name of the associated checkpoint schedule and <nnn> is an incremental number, starting at 001. The checkpoint name must be a system-wide unique name. Names can contain up to 255 ASCII characters, including a-z, A-Z, 0-9, hyphen (-), and underscore (_). Names cannot start with a hyphen or include a blank character. Also, do not use the .ckpt extension to name a checkpoint file system; this reserved keyword is used as a virtual directory entry to enable NFS or CIFS client access to online checkpoints in the PFS namespace and should be used for that purpose only.

Example:
To create an automated checkpoint schedule occurring every month on the 30th (February, which has no 30th, is skipped) at 10:22, starting on October 5, 2007 and ending on November 5, 2007, and keeping the latest four checkpoints before refreshing them, type:
$ nas_ckpt_schedule -create ufs01_ckpt_monthly -filesystem ufs01 -recurrence monthly -every 1 -days_of_month 30 -start_on 2007-10-05 -end_on 2007-11-05 -runtimes 10:22 -keep 4


List automated checkpoint schedules

Action

To list all the automated checkpoint schedules, use this command syntax:
$ nas_ckpt_schedule -list

Example:
$ nas_ckpt_schedule -list

Output

Id = 4
Name = ufs01_ckpt_daily
Description =
Next Run = Thu Oct 05 10:22:00 EDT 2007

Id = 6
Name = ufs01_ckpt_monthly
Description =
Next Run = Mon Oct 30 10:22:00 EST 2007

Id = 3
Name = ufs01_ckpt_sch
Description =
Next Run = Thu Oct 05 10:22:00 EDT 2007

Id = 5
Name = ufs01_ckpt_weekly
Description =
Next Run = Wed Oct 11 10:22:00 EDT 2007


Display automated checkpoint schedule information

Action

To display detailed property information for a checkpoint schedule, use this command syntax:
$ nas_ckpt_schedule -info <name>

Where:
<name> = checkpoint schedule name or checkpoint schedule ID

Example:
To display information about the schedule with ID 5, type:
$ nas_ckpt_schedule -info id=5

Output

Id = 5
Name = ufs01_ckpt_weekly
Description =
Tasks = Checkpoint ckpt_ufs01_ckpt_weekly_001 on file system id=281,
        Checkpoint ckpt_ufs01_ckpt_weekly_002 on file system id=281,
        Checkpoint ckpt_ufs01_ckpt_weekly_003 on file system id=281,
        Checkpoint ckpt_ufs01_ckpt_weekly_004 on file system id=281
Next Run = Wed Oct 11 10:22:00 EDT 2006
State = Pending
Recurrence = every 1 weeks
Start On = Thu Oct 05 00:00:00 EDT 2006
End On = Sun Nov 05 23:59:59 EST 2006
At Which Times = 10:22
On Which Days of Week = Wed
On Which Days of Month =


Modify an automated checkpoint schedule


Action

To change the properties of an automated checkpoint schedule, use this command syntax:
$ nas_ckpt_schedule -modify <name> -name <new_name> -description <description> -recurrence {daily | weekly | monthly} -every {<number_of_days> | <number_of_weeks> | <number_of_months>} -days_of_week {Mon|Tue|Wed|Thu|Fri|Sat|Sun} -days_of_month <1-31> -start_on <YYYY-MM-DD> -end_on <YYYY-MM-DD> -runtimes <HH:MM>

Where:
<name> = name of the checkpoint schedule to modify; you can also use the checkpoint schedule ID.
<new_name> = new name of the checkpoint schedule.
<description> = description of the checkpoint schedule.
{daily|weekly|monthly} = interval of the checkpoint schedule. If you change the recurrence of a checkpoint schedule, you must also change the associated <number_of_days>, <number_of_weeks>, or <number_of_months>. For example, <days_of_week> is rejected if monthly is set for the checkpoint schedule interval.
<number_of_days> = every number of days to create the checkpoints (for example, 2 generates checkpoints every 2 days).
<number_of_weeks> = every number of weeks to create the checkpoints (for example, 2 runs the checkpoint schedule biweekly).
<number_of_months> = every number of months to create the checkpoints (for example, 3 runs the checkpoint schedule every 3 months).
<1-31> = numeric days of the month on which to run the checkpoint schedule; use a comma to separate days.
<YYYY-MM-DD> = date(s) on which to start and end the checkpoint schedule. You can change the start date only if the checkpoint schedule is pending.
<HH:MM> = time(s) at which the checkpoint creation occurs on the specified days. Specify this time in 24-hour clock format (for example, 23:59) and use a comma to separate runtimes.

Example:
To change the name of the automated checkpoint schedule from ufs01_ckpt_sch to run_once_checkpoint, type:
$ nas_ckpt_schedule -modify ufs01_ckpt_sch -name run_once_checkpoint

Pause an automated checkpoint schedule

Action

To pause an active checkpoint schedule, stopping all associated tasks, including checkpoint creations, use this command syntax:
$ nas_ckpt_schedule -pause <name>

Where:
<name> = checkpoint schedule name; you can also use the checkpoint schedule ID.

Example:
To pause ufs01_ckpt_sch, type:
$ nas_ckpt_schedule -pause ufs01_ckpt_sch


Resume a paused automated checkpoint schedule


Action

To resume a paused automated checkpoint schedule, restarting all associated tasks, use this command syntax:
$ nas_ckpt_schedule -resume <name>

Where:
<name> = checkpoint schedule name; you can also use the checkpoint schedule ID.

Example:
To resume ufs01_ckpt_sch, type:
$ nas_ckpt_schedule -resume ufs01_ckpt_sch

Delete an automated checkpoint schedule

This procedure does not remove the checkpoints in the schedule, but removes them from the automatic-refresh cycle. You can refresh these checkpoints manually, or delete them, as desired.

Action

To delete an automated checkpoint schedule, use this command syntax:
$ nas_ckpt_schedule -delete <name>

Where:
<name> = name of the checkpoint schedule to delete; you can also use the checkpoint schedule ID.

Example:
To delete ufs01_ckpt_sch, type:
$ nas_ckpt_schedule -delete ufs01_ckpt_sch


Restoring PFS from a checkpoint

You can restore a PFS back to a point-in-time state using checkpoint data.

Before you restore PFS from a checkpoint

Before restoring a PFS, ensure enough space exists on your system to support the operation. Read "SnapSure resource requirements" on page 17 for more information.

Note: EMC recommends that you do not create checkpoints under nested mount points. If you do so using the Volume Shadow Copy Service (VSS), you cannot restore the PFS from a checkpoint of the PFS without first permanently unmounting the checkpoints and the PFS from the nested mount file system (NMFS). Then you must mount the PFS and its checkpoint on another mount point, perform the restore with VSS, and then reverse the process to mount the PFS and its checkpoints on the NMFS again.

!CAUTION! If you create a SnapSure checkpoint of any file system, and then write a file (for example, file1) into the file system and later restore the file system using the checkpoint, file1 will not remain because it did not exist when the checkpoint was created.


Restore a PFS from a checkpoint

Action

To restore a PFS from a checkpoint, use this command syntax:
$ /nas/sbin/rootfs_ckpt <fs_name> -Restore

Where:
<fs_name> = the checkpoint from which you want to restore the PFS

Example:
To restore the PFS from checkpoint ufs1_ckpt1, type:
$ /nas/sbin/rootfs_ckpt ufs1_ckpt1 -Restore

Output

operation in progress (not interruptible)...
id        = 1967
name      = ufs1_ckpt1
acl       = 0
in_use    = True
type      = ckpt
worm      = off
volume    = vp1068
pool      = clar_r5_economy
member_of =
rw_servers=
ro_servers= server_2
rw_vdms   =
ro_vdms   =
checkpt_of= ufs1 Wed Jan 19 17:07:30 EST 2005
used      = 38%
full(mark)= 75%
stor_devs = APM00042000817-002A,APM00042000817-001D
disks     = d44,d13
  disk=d44 stor_dev=APM00042000817-002A addr=c16t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c48t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c0t2l10 server=server_2
  disk=d44 stor_dev=APM00042000817-002A addr=c32t2l10 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c0t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c32t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c16t1l13 server=server_2
  disk=d13 stor_dev=APM00042000817-001D addr=c48t1l13 server=server_2


Note

• When using the -Restore option, <fs_name> is the name of the checkpoint from which you want to restore the PFS. (The PFS name is unnecessary since SnapSure inherently associates a checkpoint with its PFS. SnapSure prohibits restoring a PFS from a checkpoint of another PFS).

• If SnapSure needs to extend the SavVol during the restore process, the restore may fail upon first attempt. Retry the restore after SnapSure extends the SavVol.

• An HWM of 75 percent is set for the SavVol when the new checkpoint is created in the restore process. This value cannot change.

• In this example, the checkpoint has used 38 percent of the SavVol space and when it reaches 75 percent, the SavVol automatically extends by 10 GB (as space permits) to keep the checkpoints active.

• If you are restoring a PFS that was extended after the checkpoint you are restoring from was created, read "Restoring from a checkpoint" on page 21.

Note: When the restore operation completes, the message Restore completed successfully appears in the /nas/log/sys_log file.


Accessing checkpoints

You can directly access point-in-time copies of client files and directories in SnapSure checkpoints using either of the following features:

◆ "Accessing checkpoints using CVFS"

◆ "Accessing checkpoints using SCSF" on page 59

Accessing checkpoints using CVFS

You can access SnapSure checkpoints using CVFS.

Before accessing checkpoints using CVFS

CVFS version 2 supports NFS and CIFS clients, provides access to checkpoint directories from any directory or subdirectory of the PFS, and hides the checkpoint directory name when the PFS directories are listed (using ls -a from UNIX NFS clients, or dir from Windows CIFS clients). Hiding the checkpoint directory from the list provides a measure of access control, requiring clients to know the directory name to access the checkpoint items.

Read "Accessing online checkpoints" on page 21 for more CVFS details and other checkpoint file access features.

You can perform the following tasks using CVFS version 2:

◆ "Rename the virtual checkpoint directory"

◆ "Access checkpoints from NFS clients using CVFS version 2" on page 51

◆ "Access checkpoints from CIFS clients using CVFS version 2" on page 53

◆ "Disable checkpoint access with CVFS version 2" on page 56

◆ "Rename checkpoints with CVFS version 2" on page 57

Rename the virtual checkpoint directory
You can change the virtual checkpoint directory name used by CVFS from the default name (.ckpt) by using a parameter in the slot_<x>/param file. The Celerra Network Server Parameters Guide has more information on modifying parameter files.


1. Log in to the Control Station.

2. Open /nas/server/slot_<x>/param with a text editor.
Where:
<x> = Data Mover number



3. Type param cvfs virtualDirName=<.name>
If this line already appears in the file, it ends with .ckpt or the current name of the CVFS virtual directory (in the format .name).

CAUTION!
Do not specify names containing .ckpt for files, directories, and links, because .ckpt is reserved as a virtual directory entry for NFS/CIFS client access to online checkpoints in the PFS namespace.

4. Change the current name of the checkpoint directory to a name you choose. Do not type a dot (.) before the name you choose. The system assumes the dot for you. (For example, if you name the directory snapshot, then the system creates the name as .snapshot.)

Note: The directory name can contain up to 64 alphanumeric characters and include dashes and underscores.
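For example, assuming the parameter format shown in step 3 and the no-leading-dot rule in step 4, presenting the virtual checkpoint directory to clients as .snapshot would take a line similar to:

param cvfs virtualDirName=snapshot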

5. Save and close the file.

6. Reboot the Data Mover, using this command syntax:
$ server_cpu <movername> -reboot -monitor now

Where:
<movername> = name of Data Mover controlled by the slot_<x>/param file. For example, slot_2/param affects server_2.

Access checkpoints from NFS clients using CVFS version 2
NFS (UNIX) clients can access checkpoints to recover point-in-time data with CVFS version 2.

1. List a client directory in the PFS to view the existing files and directories using this command syntax:
$ ls -l <mount point>/<client_directory>

Where:
<mount point> = the name of the mount point on which the PFS is mounted
<client_directory> = the name of the client directory in the PFS
Example:
To list the files and directories contained in the client directory mpl in the PFS mounted on mount point /EMC, enter:
$ ls -l /EMC/mpl

Output:
drwxr-xr-x 2 32771 32772 80 Nov 21 8:05 2003 dir1
drwxr-xr-x 2 root other 80 Nov 14 10:25 2003 resources
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 2003 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 2003 A2.dat



2. Access a virtual checkpoint directory entry named .ckpt (the default name) from the client directory on the production file system. The checkpoints displayed are relative to the directory in which the .ckpt entry resides. Only mounted checkpoints are displayed.
Example:
To list the virtual .ckpt entries for the client directory mpl on the PFS mounted on /EMC, type:
$ ls -la /EMC/mpl/.ckpt

Output:
drwxr-xr-x 5 root root 1024 Nov 19 08:02 2003_11_19_16.15.43_GMT
drwxr-xr-x 6 root root 1024 Nov 19 11:36 2003_11_19_16.39.39_GMT
drwxr-xr-x 7 root root 1024 Nov 19 11:42 2003_11_20_12.27.29_GMT

Note: In the example above, CVFS found three virtual checkpoint entries. These entries were created on a Control Station whose clock is set to Greenwich Mean Time (GMT) by default.

In Celerra Network Server version 5.3 and later, use the server_date command to instruct the Celerra system to use the local time zone of the Data Mover when constructing the entry name appearing in the virtual checkpoint directory whenever the checkpoint is created or refreshed. For example, EST is used for Eastern Standard Time if the Data Mover time zone is so configured using server_date. The server_date man page or the Celerra Network Server System Operations provide more details.
If a checkpoint in this list was created before server_date was used to set the time zone, unmount and remount the checkpoint to view the latest timestamp.
If you change the default (.ckpt) directory name using the parameter shown in "Rename the virtual checkpoint directory" on page 50, replace .ckpt with .<name> in steps 2 and 3 of this procedure.
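As an illustrative sketch only (the exact time zone argument format may vary by release; the server_date man page is authoritative), configuring Data Mover server_2 for Eastern time might look like:

$ server_date server_2 timezone America/New_York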

Note: If you have a linked file system and you display checkpoints, two results are possible: (1) If the linked file system has checkpoints, only the checkpoints of the linked file system appear, not the checkpoints of the source file system. (You can list the checkpoints of the source file system from any location except under the linked file system.) (2) If the linked file system has no checkpoints, the following error message appears: Cannot find file or item <‘pathname’>.




3. Access a virtual checkpoint from the list of entries to view the point-in-time information available for recovery for that directory.
Example:
To view the contents of checkpoint entry 2003_11_19_16.39.39_GMT found on the client directory mpl on the PFS mounted on mount point /EMC, type:
$ ls -l /EMC/mpl/.ckpt/2003_11_19_16.39.39_GMT

Output:
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 A2.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:45 A3.dat
drwxr-xr-x 2 root other 80 Nov 14 10:25 resources

Note: These read-only items represent a specific point-in-time view of the mpl directory. When this checkpoint was created, the directory dir1 did not exist on the mpl directory, which is why it does not appear in this list but does appear when you list the directory’s current contents. Conversely, A3.dat appears in this list but not in the directory’s current contents because it was deleted after the checkpoint was created. Use these point-in-time items to restore files that were accidentally deleted, to read point-in-time data, and so on.

Access checkpoints from CIFS clients using CVFS version 2
CIFS (Windows) clients can access checkpoints and recover point-in-time data with CVFS version 2.

1. In the address field of an Explorer window, enter the name of a client directory on the PFS to view the existing files and directories.
Example:
Z:\



2. Type .ckpt (or the name given to the virtual checkpoint directory) after the directory name to access the virtual directory entry from the client directory on the production file system. The checkpoints displayed are relative to the directory in which the .ckpt entry resides. Only mounted checkpoints are displayed.
Example: [screenshot not reproduced]

Note: In the example above, CVFS found four virtual checkpoint entries. These entries were created on a Control Station whose clock is set to Greenwich Mean Time (GMT) by default.

In Celerra Network Server version 5.3 and later, you can use the server_date command to direct the Celerra system to use the local time zone of the Data Mover when constructing the entry name appearing in the virtual checkpoint directory whenever the checkpoint is created or refreshed. For example, EST is used for Eastern Standard Time if the Data Mover time zone is so configured using server_date. The server_date man page or the Celerra Network Server System Operations offer more details.
If a checkpoint in this list was created before server_date was used to set the time zone, unmount and remount the checkpoint to view the latest timestamp.
The checkpoint timestamp may indicate a different checkpoint creation/refresh time depending on the interface used to view the checkpoint. Read the problem description for "Differing checkpoint timestamps" on page 63 for more details.
If you have changed the default .ckpt directory name using the parameter shown in "Rename the virtual checkpoint directory" on page 50, replace .ckpt with .<name> in steps 2 and 4 of this procedure.

Note: If you have a linked file system and you display checkpoints, two results are possible: (1) If the linked file system has checkpoints, only the checkpoints of the linked file system appear, not the checkpoints of the source file system. (You can list the checkpoints of the source file system from any location except under the linked file system.) (2) If the linked file system has no checkpoints, the following error message appears: Cannot find file or item <‘pathname’>.



3. Access (select) a virtual checkpoint from the list of entries to view the point-in-time information available for recovery of that directory.
Example: [screenshot not reproduced]

Note: The checkpoint directory contents shown in the example represent a specific point-in-time view of directory Z:\. Clients can use these point-in-time items (for example, they can copy files 38.dat and 39.dat to the top level of directory Z:\) to restore files that were accidentally deleted.




4. Clients can also search for virtual checkpoint directories from the subdirectory level, as this feature is not restricted to root directories. To do so, clients type \.ckpt after the subdirectory name in the Explorer address field to display the virtual checkpoint directories it contains, as shown in the following example. [screenshot not reproduced]

Note: In this example, only one virtual checkpoint directory is displayed because the other data did not exist when this checkpoint was created.

Disable checkpoint access with CVFS version 2
NFS and CIFS client access to CVFS version 2 checkpoint directories is enabled by default; you disable it by changing the showHiddenCkpt parameter to zero on the Celerra Network Server. Disabling CVFS version 2 also disables SCSF.

1. Log in to the Control Station.

2. Open /nas/server/slot_<x>/param with a text editor.
Where:
<x> = Data Mover number

3. Type param cfs showHiddenCkpt=0
If the line already appears in the file, set its value to 0 to disable CVFS.
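To re-enable client access later, the same parameter presumably takes its default nonzero value (an inference from the default-enabled behavior described above, not a step documented here):

param cfs showHiddenCkpt=1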

CAUTION!
Do not change any other lines in this file without a thorough knowledge of the potential impact on the system. Contact EMC Customer Service for more information.

4. Save and close the file.

5. Reboot the Data Mover using this command syntax:
$ server_cpu <movername> -reboot -monitor now

Where:
<movername> = name of Data Mover controlled by the slot_<x>/param file. For example, slot_2/param affects server_2.



Rename checkpoints with CVFS version 2
You can change the checkpoint name presented to NFS/CIFS clients when they list the CVFS virtual checkpoint directory. The default format of checkpoint names is yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>.


1. Log in to the Control Station.

2. List the checkpoints on the server using this command syntax:
$ server_mount server_<x>

Where:
<x> = the Data Mover on which you want to list checkpoints
Example:
To list checkpoints on Data Mover 2, type:
$ server_mount server_2

A list of checkpoints similar to the following appears:
ufs1_ckpt3 on /ufs1_ckpt3 ckpt,perm,ro
ufs1_ckpt2 on /ufs1_ckpt2 ckpt,perm,ro
ufs1_ckpt1 on /ufs1_ckpt1 ckpt,perm,ro

3. Unmount the checkpoint you want to rename.
Example:
To unmount checkpoint ufs1_ckpt1, type:
$ server_umount server_2 /ufs1_ckpt1

Note: To determine a checkpoint’s creation date to identify which one to rename, type: nas_fs -i or nas_fs -l.

Page 58: Celerra Network Server Using SnapSure on Celerra...Using SnapSure on Celerra Version 5.5 3 of 72Introduction to SnapSure This technical module describes the Celerra® Network Server’s

Using SnapSure on Celerra58 of 72 Version 5.5

4. Rename the checkpoint when you mount it using this command syntax:
$ server_mount server_<x> -o cvfsname=<newname> <checkpointname> <mount point>

Where:
<x> = the Data Mover on which you want to mount the checkpoint
<newname> = the new name you want NFS/CIFS clients to see
<checkpointname> = the current checkpoint name displayed by server_mount
<mount point> = the mount point of the checkpoint

Note: The custom checkpoint name can contain up to 128 characters and must be a system-wide unique name. Names can include a-z, A-Z, 0-9, and period (.), hyphen (-), or underscore ( _ ). Names cannot start with a period or hyphen.

Example:
To change the name of checkpoint ufs1_ckpt1 of ufs1 to Monday while mounting the checkpoint on Data Mover 2 on mount point /ufs1_ckpt1, type:
$ server_mount server_2 -o cvfsname=Monday ufs1_ckpt1 /ufs1_ckpt1

Output:
The renamed checkpoint appears in the list as follows:
ufs1_ckpt3 on /ufs1_ckpt3 ckpt,perm,ro
ufs1_ckpt2 on /ufs1_ckpt2 ckpt,perm,ro
ufs1_ckpt1 on /ufs1_ckpt1 ckpt,perm,ro,cvfsname=Monday

When NFS and CIFS clients next list the checkpoints, ufs1_ckpt1 appears as Monday. This example shows the NFS client view when clients type ls -l .ckpt from the virtual checkpoint directory:
drwxr-xr-x 6 root root 1024 Nov 19 11:36 2003_11_19_16.39.39_GMT
drwxr-xr-x 7 root root 1024 Nov 19 11:42 2003_11_20_12.27.29_GMT
drwxr-xr-x 5 root root 1024 Nov 19 08:02 Monday

Note: Because of filename restrictions associated with DOS version 6.0 and earlier, clients of DOS-based applications continue to see checkpoint names in the standard DOS filename convention: 01050400.011 (ddmmyyfile.IDx). This filename cannot be changed.



Accessing checkpoints using SCSF
You can use SCSF to enable Microsoft Windows Server 2003 and Windows XP clients to list, view, copy, and restore files in checkpoints created with SnapSure.


1. Right-click a PFS file or directory and select Properties from the menu that appears. A Properties window with a Previous versions tab enabled appears. [screenshot not reproduced]

Note: At least one checkpoint for the file or directory must exist before the Previous versions tab is visible. If the file is unchanged since the checkpoint was created or refreshed, it will not appear on the Previous versions tab. Each checkpoint is timestamped with the date it was created or last refreshed.

2. From the list of folders, select the version you want to view, copy, or use to restore a file or directory.



3. Click View, Copy, or Restore, as applicable to the operation you want to perform. A summary of each button’s behavior follows:
View — Displays the file or directory but does not modify it.
Copy — Lets you save a copy of the file or folder to a different location. A dialog box appears in which you can specify a new name and location for the copy.
Restore — Lets you use the selected file or directory to replace the corresponding PFS file or directory. A dialog box appears to confirm or cancel the operation.
Microsoft online help for Shadow Copy offers detailed usage information.

Enable or disable SCSF
The SCSF feature is enabled by default on the Celerra Network Server; you can disable it by modifying the allowSnapSureVss parameter. Disabling SCSF does not disable access to the .ckpt directory.

1. To disable or to enable CIFS client access to checkpoint directories through SCSF, use this command syntax:
$ server_param <movername> -facility <facility_name> -modify <param_name> -value <new_value>

Where:
<movername> = name of the specified Data Mover
<facility_name> = name of the facility to which the parameter belongs
<param_name> = name of the parameter
<new_value> = value you want to set for the specified parameter
Examples:
To disable CIFS client access to checkpoint directories through SCSF, type:
$ server_param server_2 -facility cifs -modify allowSnapSureVss -value 0

To enable CIFS client access to checkpoint directories through SCSF, type:
$ server_param server_2 -facility cifs -modify allowSnapSureVss -value 1

Note: Parameter and facility names are case-sensitive.

Output:server_2 : done

2. Reboot the Data Mover using this command syntax:
$ server_cpu <movername> -reboot -monitor now

Where:<movername> = name of the specified Data Mover



Troubleshooting SnapSure
As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Technical support
For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Please contact your EMC Sales Representative for details about obtaining a valid support agreement or to answer any questions about your account.

Telephone
U.S.: 800.782.4362 (SVC.4EMC)

Canada: 800.543.4782 (543.4SVC)

Worldwide: 508.497.7901

Note: Please do not request a specific support representative unless one has already been assigned to your particular system problem.

The Problem Resolution Roadmap for Celerra contains additional information about using Powerlink and resolving problems.

Troubleshooting steps
Table 3 lists the symptoms, error messages, and fixes or workarounds for problems you may encounter using SnapSure. The Celerra Network Server Error Messages Guide includes a complete list of error messages. For assistance in diagnosing problems, check the Celerra Network Server system log (/nas/log/sys_log), which records SnapSure event information.

Table 3 Troubleshooting SnapSure

Problem description or error message Meaning or action

Checkpoint creation fails
• The PFS is not mounted. Ensure the PFS is mounted (and mounted on a single Data Mover only) before using SnapSure. A PFS must be mounted on a VDM if its checkpoints are mounted on a VDM (and vice versa).

• The PFS is being restored and SnapSure cannot access it to create a checkpoint. Retry checkpoint creation when the PFS restoration is complete.

• SnapSure checkpoint creation was attempted during the Unicode conversion process, producing the message: Error 5005: Input/output error. The checkpoint creation may actually succeed, but the checkpoint is not automatically mounted. You must manually mount any checkpoint(s) created during Unicode conversion after the process completes and stops.

• There is insufficient system or disk space available to create or extend the SavVol. Read "Planning considerations for SnapSure" on page 14 for details.

• A maximum of 96 PFS checkpoints already exists. Delete unwanted checkpoints and retry.

Checkpoint refresh fails
• The PFS is not mounted. Ensure the PFS is mounted before using SnapSure.

• The PFS is being restored and SnapSure cannot access it to refresh a checkpoint. Retry the checkpoint refresh when the PFS restoration is complete.

• An automated checkpoint schedule (set up using the Celerra Manager, or Linux cron job script) is running while the PFS is unmounted. Mount the PFS and retry the refresh.

• There is insufficient system or disk space available to create or extend the SavVol. Consult "Planning considerations for SnapSure" on page 14 for details.

• Refresh failures are logged in the /nas/log/sys_log file. You can configure the Celerra Network Server to also provide email or SNMP trap notification of refresh failures. Configuring Celerra Events and Notifications provides more information.

• Read "Minimizing scheduled checkpoint-creation/refresh failures" on page 24 for more detailed information.

Checkpoint restore fails
• If insufficient space exists in the SavVol to complete the restore process upon first attempt, the restore may fail as SnapSure extends the SavVol. Retry the restore.

Note: Use the nas_fs -i command to list a PFS checkpoint to view SavVol usage and the current HWM.
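For example, using the checkpoint name from earlier examples in this module:

$ nas_fs -i ufs1_ckpt1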

• A maximum of 96 PFS checkpoints already exists. The system cannot create the checkpoint required for the restore process. Delete any unwanted checkpoint(s) and retry the restore.

• There is insufficient system or memory space available to create or extend the SavVol. Read "Planning considerations for SnapSure" on page 14 for details.


Deleting a checkpoint fails
You cannot delete a checkpoint that is mounted, in use (such as one being restored from, or a new checkpoint created in the restore process), or part of a group. Make sure these conditions do not exist, and then retry the delete.

Unmounting a PFS fails
Once a checkpoint is mounted, the PFS can be temporarily unmounted, but it cannot be permanently unmounted until the checkpoint is unmounted.

Checkpoint access (using CVFS) fails

• The checkpoint is not mounted and cannot be seen by CVFS. Mount the checkpoint and retry.

Note: If the PFS is mounted read-only on multiple Data Movers, you cannot mount one of its checkpoints on the same multiple Data Movers.

• The checkpoint-access parameter is changed to disable checkpoint access through CVFS. Reset the parameter to enable checkpoint access, if desired. "Accessing checkpoints" on page 50 has the procedure to enable/disable CVFS checkpoint access.

Differing checkpoint timestamps

A checkpoint’s creation/refresh timestamp may indicate conflicting times if the checkpoint is viewed from different interfaces, as follows:
• CLI (fs_ckpt) — uses the Control Station time and time zone
• Celerra Manager — uses the Control Station time and time zone
• SCSF — reports server time using the Windows client’s time zone
• CVFS (.ckpt) — uses the Data Mover’s time and time zone

Possible states of a checkpoint:
• Active
• Inactive
• Restoring

• Active: The checkpoint is readable and maintaining an accurate point-in-time image of the PFS. A checkpoint becomes active when created and remains active unless it is deleted or runs out of space to store point-in-time information.

• Inactive: The checkpoint is not readable or accurate due to insufficient resources needed to store point-in-time information. "SnapSure resource requirements" on page 17 offers more information.

• Restoring: The checkpoint is used to restore a PFS to a point in time.

Note: You can use the nas_fs -i -all command to check the current state of a checkpoint. For example, if used = INACTIVE appears in this command’s output, the checkpoint is inactive.



Possible states of a checkpoint schedule:
• Active
• Complete
• Paused
• Pending

• Active: The schedule started to run and its automated checkpoint-creation/refresh cycle is in progress.

• Complete: The schedule reached its specified end-on date. It will not run again unless the end-on date is changed.

• Paused: The schedule is manually paused. It will not run again until it is manually resumed, at which point its state becomes Active.

• Pending: The schedule is created, but its start-on date has not been reached. When the start date is reached, the schedule becomes Active.

Note: Schedule states can only be checked using the Celerra Manager Checkpoints > Schedules tab.

Error 5005: Input/output error

SnapSure checkpoint creation was attempted during the Unicode conversion process. The checkpoint creation may succeed, but the checkpoint is not automatically mounted. You must manually mount any checkpoint(s) created during Unicode conversion after the process completes and stops.

Error 2238: Disk space: quota exceeded

CKPT volume allocation quota exceeded

• The Celerra Network Server detects insufficient space available on the system to create, extend, or refresh a checkpoint. Read "SnapSure resource requirements" on page 17 for more details.

• The volume (PFS or checkpoint SavVol) cannot be seen by the Data Mover. Check the volume’s location and ensure the Data Mover can see it, and then retry the SnapSure action you are attempting.

Cannot find file or item <‘pathname’>

If you have a linked file system and you display checkpoints, two results are possible:
• If the linked file system has checkpoints, only the checkpoints of the linked file system appear, not the checkpoints of the source file system. (You can list the checkpoints of the source file system from any location except under the linked file system.)
• If the linked file system has no checkpoints, this error message appears.

Conversion paused
The Celerra Network Server paused the conversion process due to insufficient space in the SavVol to create the checkpoint indices, or insufficient space on the system to extend the SavVol. The conversion automatically resumes once you create more space. "Upgrading SnapSure to Celerra Network Server version 5.5" on page 11 has more details.

File systems greater than 2 TB = disallowed

By default, support of file systems greater than 2 TB is enabled on the Celerra Network Server. However, if the checkpoint-conversion process pauses, fails, or is in progress (upon SnapSure upgrade to version 5.5) and the enabled state is not reached, this warning message appears in the output of the server_sysstat -blockmap command.



This is a checkpoint for a C-WORM file system!

If you create a SnapSure checkpoint of a Celerra File-Level Retention Capability file system, write a file (for example, file1) into the file system, and later restore the file system using the checkpoint, file1 will not remain because it did not exist when the checkpoint was created. The Celerra Network Server provides a warning if any restore is attempted. Completing a File-Level Retention Capability file system restore operation requires fs_ckpt -Restore -Force (via the CLI) or clicking OK at the confirmation pop-up window (via the Celerra Manager).
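A minimal CLI sketch of the forced restore named above, with <checkpoint_name> standing for the checkpoint of the File-Level Retention file system:

$ fs_ckpt <checkpoint_name> -Restore -Force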

The storage system you specify for the SavVol is not used for the SavVol

When you create a checkpoint that is not the first one of the PFS and specify a storage system for the SavVol differing from the array on which the PFS is built, the Celerra system ignores the request and uses the same array(s) as the PFS for the SavVol.

Memory quota exceeded
The 40% limit for Data Mover memory allotted for SavVols (and Celerra Replicator ConfigVols) has been exceeded. After you resolve the Data Mover memory problem, refresh any checkpoints that became “Inactive” (as shown in the status field when you list checkpoints) or were not refreshed by the schedule when the memory quota event occurred. Check the /nas/log/sys_log file for all messages relating to this event to determine checkpoint restoration action.

Volume is unreachable or Volumes are not available

The Celerra detects insufficient space available on the system to create, extend, or refresh a checkpoint.
The volume you are using (PFS or checkpoint SavVol) cannot be seen by the Data Mover.
Check the location of the volume, make sure the Data Mover can see it, and then retry the SnapSure action you are attempting.

Memory quota HWM reached

The system detects that 32% of the 40% maximum limit for Data Mover memory allotted for SavVols (and Celerra Replicator ConfigVols) has been reached.



Related information
For specific information related to the features and functionality described in this technical module but beyond its scope, consult the following publications:

◆ Celerra Network Server Command Reference Manual

◆ Online Celerra man pages

◆ Celerra Network Server Release Notes

◆ Celerra Glossary

◆ Celerra Network Server Parameters Guide

◆ Configuring Celerra Events and Notifications

◆ Celerra Network Server Error Messages Guide

◆ Celerra Network Server System Operations

◆ Problem Resolution Roadmap for Celerra

◆ Configuring NDMP Backups to Disk on Celerra

◆ Using SRDF/S with Celerra for Disaster Recovery

◆ Using Celerra FileMover

◆ Configuring NDMP Backups on Celerra

The Celerra Network Server Documentation CD, supplied with your Celerra Network Server and available on Powerlink, provides general information on other EMC Celerra publications.

Customer training programs
EMC customer training programs are designed to help you learn how EMC storage products work together and integrate within your environment to maximize your entire infrastructure investment. The programs feature online and hands-on training in state-of-the-art labs conveniently located throughout the world, and are developed and delivered by EMC experts. For program information and registration, refer to Powerlink, our customer and partner website.


Appendix: Facility and event ID numbers for SNMP/email
To configure the Celerra Network Server to send SNMP and email notification of SnapSure events, note the facility number and event ID numbers shown in Table 4. Then consult Configuring Celerra Events and Notifications for more information about the procedure.

Table 4 SnapSure facility and event ID numbers

Facility number Event ID

70
1 = High water mark of SavVol reached
2 = Checkpoint inactive
3 = Source file system restore done
4 = Restore is in process
5 = Checkpoint conversion is paused due to a full SavVol

91
1 = Scheduled SnapSure checkpoint creation failed
2 = Scheduled SnapSure checkpoint refresh failed

137
10 = Checkpoint autoextension failed


Index

A
accessing checkpoints 21
active checkpoint 63
ATA drives 15
automated checkpoint schedules
  creating one 39
  daily 40
  deleting 46
  modifying 45
  monthly 42
  pausing 45
  weekly 41
automated checkpoint scheduling 23
Automatic Volume Manager (AVM) 17
automating, checkpoint schedules 39

B
bitmap definition 3
blockmap 3
  index file 3

C
calculating Data Mover memory usage 17
cautions 4
  system 4
Celerra File Mover 16
Celerra Replicator
  cautions 4
checkpoint
  accessing 50
  creating 28
  definition 3
  deleting 21, 38
  listing 31
  managing more than one 9
  refresh failures 24
  refreshing 20, 36
  renaming 57
  restore 21
  restore PFS from 21, 47, 63
  restoring client files from 50
  scheduling 23, 24
  states 63
  storage volume 17
checkpoint schedules
  listing 43
checkpoint schedules, automating 39
CLI 26
creating
  checkpoints 28
  schedules 24
CVFS 21

D
daily checkpoint schedule 40
data migration 16
default
  HWM, checkpoint restore 21
  SavVol size 17
deleting checkpoints 21, 38
disk selection 17

E
email event ID number 67
error messages 18
events, configuring 67
extension of SavVol 18

F
failure
  of refresh 24
  of restore 21
File Mover 16
formula for Data Mover memory calculation 17

G
guidelines 15

H
HWM 30, 35
  SavVol default 17
  specifying a 0 percent 20

I
inactive checkpoint 63
index file, blockmap 3

L
listing checkpoint schedules 43
listing checkpoints 31

M
management interfaces 26
modifying checkpoint schedules 45
monthly checkpoint schedule 42
multiple
  checkpoint schedules 24
  checkpoints 9

O
options
  SavVol creation 18

P
parameter
  for changing system space for SavVols 19
  for renaming a virtual checkpoint directory 51
pending checkpoint 63
performance 14
persistence 14
planning to use SnapSure 14
pool, storage 17
procedures to use SnapSure via CLI 27

R
refreshing checkpoints 20, 24, 36
renaming
  checkpoints 57
  virtual checkpoint directories 50
resource requirements, system 17
restore
  fails 21
  PFS from a checkpoint 63
  PFS from checkpoint 21, 47
restoring PFS from checkpoint 21
restrictions 15
roadmap 27

S
SavVol 17
  default size, creation 17
  definition 4
  disk selection 17
  extension 18
SavVols
  changing system space for 19
scheduling checkpoints 23, 24
SCSF (Shadow Copy for Shared Folders) 22, 59
SNMP event facility number 67
SRDF 16
states 63
system requirements 10
system resource requirements 17

T
telephone
  using to contact EMC Customer Service 61
terminology 3
TimeFinder/FS 16
troubleshooting SnapSure 61

U
upgrading to NAS V5.4 11
user interface choices 26

V
virtual
  checkpoint directory renaming 51
  checkpoints, accessing 21
  Data Movers (VDMs) 16

W
weekly checkpoint schedule 41

About this technical module
As part of its effort to continuously improve and enhance the performance and capabilities of the Celerra Network Server product line, EMC from time to time releases new revisions of Celerra hardware and software. Therefore, some functions described in this document may not be supported by all revisions of Celerra software or hardware presently in use. For the most up-to-date information on product features, see your product release notes. If your Celerra system does not offer a function described in this document, contact your EMC Customer Support Representative for a hardware upgrade or software update.

Comments and suggestions about documentation
Your suggestions will help us improve the accuracy, organization, and overall quality of the user documentation. Send a message to [email protected] with your opinions of this document.

Copyright © 1998-2007 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.
