
EMC® Celerra® Network Server Release 5.6.48

Managing Celerra Volumes and File Systems with AVM

P/N 300-004-148

REV A08

EMC Corporation
Corporate Headquarters:

Hopkinton, MA 01748-9103
1-508-435-1000

www.EMC.com

Copyright © 1998 - 2010 EMC Corporation. All rights reserved.

Published March 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Corporate Headquarters: Hopkinton, MA 01748-9103


Contents

Preface....................7

Chapter 1: Introduction....................9
    System requirements....................10
    Restrictions....................10
        AVM restrictions....................10
        Automatic file system extension restrictions....................11
        Virtual Provisioning restrictions....................12
        CLARiiON restrictions....................13
    Cautions....................13
    User interface choices....................14
    Related information....................18

Chapter 2: Concepts....................19
    AVM overview....................20
    System-defined storage pools....................20
    System-defined virtual storage pools....................21
    User-defined storage pools....................21
    File system and automatic file system extension....................22
    AVM and automatic file system extension options....................22
        AVM storage pools....................22
        Disk types....................23
        System-defined storage pools....................24
        RAID groups and storage characteristics....................27
        User-defined storage pools....................28
    Storage pool attributes....................29
    System-defined storage pool volume and storage profiles....................31
        CLARiiON system-defined storage pool algorithms....................32
        CLARiiON system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support....................35
        CLARiiON system-defined storage pools for EFD support....................37
        Symmetrix system-defined storage pools algorithm....................38
        Virtual pools....................40
    File system and storage pool relationship....................41
    Automatic file system extension....................43
    Virtual Provisioning....................46
    Planning considerations....................46

Chapter 3: Configuring....................51
    Configure disk volumes....................52
        Add CLARiiON user LUNs to a gateway system....................53
        Add disk volumes to an integrated system....................54
    Create file systems with AVM....................55
        Create file systems with system-defined storage pools....................56
        Create file systems with user-defined storage pools....................58
        Create the file system....................62
        Create file systems with automatic file system extension....................65
        Create automatic file system extension-enabled file systems....................66
    Extend file systems with AVM....................69
        Extend file systems by size using storage pools....................70
        Extend file systems by volume using storage pools....................73
        Extend file systems by using a different storage pool....................75
        Enable automatic file system extension and options....................77
        Enable Virtual Provisioning....................81
        Enable automatic extension, Virtual Provisioning, and all options simultaneously....................83
    Create file system checkpoints with AVM....................85

Chapter 4: Managing....................87
    List existing storage pools....................88
    Display storage pool details....................89
    Display storage pool size information....................90
        Display Symmetrix storage pool size information....................92
    Modify system-defined and user-defined storage pool attributes....................93
        Modify system-defined storage pool attributes....................96
        Modify user-defined storage pool attributes....................99
    Extend a user-defined storage pool by volume....................103
    Extend a user-defined storage pool by size....................104
    Extend a system-defined storage pool....................105
        Extend a system-defined storage pool by size....................106
    Remove volumes from storage pools....................107
    Delete user-defined storage pools....................108
        Delete a user-defined storage pool and its volumes....................109

Chapter 5: Troubleshooting....................111
    AVM troubleshooting considerations....................112
    EMC E-Lab Interoperability Navigator....................112
    Known problems and limitations....................112
    Error messages....................113
    EMC Training and Professional Services....................114

Glossary....................115

Index....................119


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.


Special notice conventions

EMC uses the following conventions for special notices:

A caution contains information essential to avoid data loss or damage to the system or equipment.

Important: An important note contains information essential to operation of the software.

Note: A note presents information that is important, but not hazard-related.

Hint: A hint provides suggested advice to users, often involving follow-on activity for a particular action.

Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at http://Powerlink.EMC.com.

Troubleshooting — Go to Powerlink, search for Celerra Tools, and select Celerra Troubleshooting from the navigation panel on the left.

Technical support — For technical support, go to EMC Customer Service on Powerlink. After logging in to the Powerlink website, go to Support ➤ Request Support. To open a service request through Powerlink, you must have a valid support agreement. Contact your EMC Customer Support Representative for details about obtaining a valid support agreement or to answer any questions about your account.

Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications.

Please send your opinion of this document to:

[email protected]


1

Introduction

Automatic Volume Management (AVM) is an EMC Celerra Network Server feature that automates volume creation and management. By using the Celerra command options and interfaces that support AVM, system administrators can create and expand file systems without creating and managing the underlying volumes.

The Celerra automatic file system extension feature automatically extends file systems created with AVM when the file systems reach their specified high water mark (HWM). EMC Virtual Provisioning, also known as thin provisioning, works with automatic file system extension and allows the file system to grow on demand. With Virtual Provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.

This document is part of the Celerra Network Server documentation set and is intended for system administrators responsible for creating and managing Celerra volumes and file systems by using AVM.

Topics included are:
◆ System requirements on page 10
◆ Restrictions on page 10
◆ Cautions on page 13
◆ User interface choices on page 14
◆ Related information on page 18


System requirements

Table 1 on page 10 describes the EMC® Celerra® Network Server software, hardware, network, and storage configurations.

Table 1. System requirements

Software: Celerra Network Server version 5.6.48
Hardware: No specific hardware requirements
Network: No specific network requirements
Storage: Any Celerra-qualified storage system

Restrictions

The restrictions listed in this section are applicable to AVM, automatic file system extension, EMC Virtual Provisioning™, and EMC CLARiiON®.

AVM restrictions

The restrictions applicable to AVM are:

◆ Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or by using another compatible storage pool. Do not extend a file system across storage systems unless it is absolutely necessary.

◆ File systems might reside on multiple disk volumes. Ensure that all disk volumes used by a file system reside on the same storage system for file system creation and extension. This is to protect against storage-system and data unavailability.

◆ RAID 3 is only supported with EMC CLARiiON Advanced Technology-Attached (ATA) disk volumes.

◆ When building volumes on a Celerra Network Server attached to an EMC Symmetrix® storage system, use standard Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.

◆ Use AVM to create the primary EMC TimeFinder®/FS (NearCopy or FarCopy) file system, if the storage pool attributes indicate that no sliced volumes are used in that storage pool. AVM does not support business continuance volumes (BCVs) in a storage pool with other disk types.


◆ AVM storage pools must contain only one disk type. Disk types cannot be mixed. Table 4 on page 23 provides a complete list of disk types. Table 5 on page 24 provides a list of storage pools and the description of the associated disk types.

Automatic file system extension restrictions

The restrictions applicable to automatic file system extension are:

◆ Automatic file system extension does not work on MGFS, the EMC file system type used while performing data migration from either CIFS or NFS to the Celerra Network Server by using CDMS.

◆ Automatic file system extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on the file system only if it is created or extended by using an AVM storage pool.

◆ Automatic file system extension is not supported on file systems used with TimeFinder NearCopy or FarCopy.

◆ While automatic file system extension is running, the Control Station blocks all other commands that apply to this file system. When the extension is complete, the Control Station allows the commands to run.

◆ The Control Station must be running and operating properly for automatic file system extension, or any other Celerra feature, to work correctly.

◆ Automatic file system extension cannot be used for any file system that is part of a remote data facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF®.

◆ Automatic file system extension cannot be used on MirrorView/Synchronous file systems. Do not use the nas_fs command with the -auto_extend option for Celerra MirrorView™/Synchronous file systems. Doing so generates the error message: operation not permitted. Automatic File System Extension on <fs_name> requires local storage.

◆ The options associated with automatic file system extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the automatic file system extension, HWM, or maximum size options.

◆ Enabling automatic file system extension and Virtual Provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, automatic file system extension extends the file system to use all the available storage. For example, if automatic file system extension requires 6 GB but only 3 GB is available, the file system automatically extends to 3 GB. Although the file system was partially extended, an error message appears indicating there was not enough storage space available to perform automatic extension. When there is no available storage, automatic file system extension fails. You must manually extend the file system to recover from this issue.

◆ Automatic file system extension is supported with EMC Celerra Replicator™. Enable automatic file system extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic file system extension on the destination file system.

◆ You cannot create iSCSI dense LUNs on file systems with automatic file system extension enabled. You cannot enable automatic file system extension on a file system if there is a storage mode iSCSI LUN present on the file system. You will receive an error, "Error 2216: <fs_name>: item is currently in use by iSCSI." However, iSCSI virtually provisioned LUNs are supported on file systems with automatic file system extension enabled.

◆ Automatic file system extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).

Virtual Provisioning restrictions

The restrictions applicable to Virtual Provisioning are:

◆ Celerra supports Virtual Provisioning on Symmetrix DMX-4 and CLARiiON CX4 disk volumes.

◆ Virtually provisioned devices cannot be included in an EMC Celerra MirrorView™ setup.

◆ The options associated with Virtual Provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the Virtual Provisioning, HWM, or maximum size options.

◆ Celerra virtually provisioned objects (either iSCSI LUNs or file systems) should not be used with Symmetrix or CLARiiON virtually provisioned devices. A single file system should not span virtual and standard Symmetrix or CLARiiON volumes.

◆ Virtual Provisioning is supported with EMC Celerra Replicator. Enable Virtual Provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable Virtual Provisioning on the destination file system.

◆ With Virtual Provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size of the source file system. Interoperability considerations on page 46 provides more information on using automatic file system extension with Celerra Replicator.

◆ Virtual Provisioning is supported on the primary file system, but not supported with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure™ checkpoint file system.


◆ If a file system is created using a virtual storage pool, the -vp option of the nas_fs command cannot be enabled because Celerra Virtual Provisioning and CLARiiON Virtual Provisioning cannot coexist on a file system.

◆ Closely monitor Symmetrix Thin Pool space that contains virtually provisioned devices. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage.

CLARiiON restrictions

The restrictions applicable to CLARiiON are:

◆ EMC does not recommend creating system RAID group and control LUNs on CLARiiON virtual (thin) pools and virtual LUNs.

◆ CLARiiON virtual pools only support RAID 5 and RAID 6:

• RAID 5 is the default, with a minimum of 3 drives (2+1). EMC recommends using multiples of 5 drives.

• RAID 6 has a minimum of 4 drives (2+2). EMC recommends using multiples of 8 drives.

◆ CLARiiON virtual pools do not support Enterprise Flash Drives (EFD).

◆ The EMC Navisphere® Manager is required to provision virtual devices on the CLARiiON. Any platforms that do not provide Navisphere access cannot use this feature.

◆ Closely monitor CLARiiON Thin Pool space that contains virtually provisioned devices. Use the command nas_pool -size <AVM virtual pool name> and look for the physical usage information. An alert is generated when a CLARiiON Thin Pool runs out of space.
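For example, a minimal monitoring sketch from the Control Station follows; the pool name clarata_archive_vp is hypothetical, and command output is omitted:

    # Report the size and physical usage of a virtual (thin) AVM pool; compare the
    # physical usage figures against the underlying CLARiiON Thin Pool capacity.
    $ nas_pool -size clarata_archive_vp

Running this check on a regular schedule gives early warning before the Thin Pool itself runs out of space.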

Cautions

If any of this information is unclear, contact your EMC Customer Support Representative for assistance:

◆ All parts of a file system must use the same type of disk storage and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss or data unavailability or both.

◆ If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Turning on quotas later, when the file system is in use, can cause temporary file system disruption, including slow file system access. Using Quotas on Celerra contains instructions on turning on quotas and general quotas information.

◆ If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the Celerra Network Server to support this feature before creating file systems. Using International Character Sets with Celerra contains instructions to support and configure international character support on a Celerra Network Server.

◆ If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the Celerra Network Server. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.

◆ Automatic file system extension is interrupted during Celerra software upgrades. If automatic file system extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the Celerra upgrade process completes.

◆ Celerra Virtual Provisioning allows you to specify a value above the maximum supported storage capacity for the system. If you receive an alert message that you are running out of space, or if you reach the system's storage capacity limits and have virtually provisioned Celerra resources that are not fully allocated, you may need to either delete unnecessary data, enable Celerra Data Deduplication to try to reduce file system storage usage, or migrate data to a different system that has space.

◆ Insufficient space on a Symmetrix Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the Thin Pool. If you do not use 100 percent pre-allocation, there is the possibility of overallocation; therefore, you must closely monitor the pool usage.

◆ Insufficient space on a CLARiiON Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a CLARiiON Thin Pool, so you must closely monitor the thin pool usage to avoid running out of space.

User interface choices

The Celerra Network Server offers flexibility in managing networked storage that is based on your support environment and interface preferences. This document describes how to use AVM by using the command line interface (CLI). You can also perform many of these tasks by using one of the Celerra management applications:

◆ Celerra Manager — Basic Edition

◆ Celerra Manager — Advanced Edition

◆ Celerra Monitor

◆ Microsoft Management Console (MMC) snap-ins

◆ Active Directory Users and Computers (ADUC) extensions

For additional information about managing your Celerra:

◆ Learning about EMC Celerra on the EMC Celerra Network Server Documentation CD


◆ Celerra Manager online help

◆ Application’s online help system on the EMC Celerra Network Server Documentation CD

Installing Celerra Management Applications includes instructions on launching Celerra Manager, and on installing the MMC snap-ins and the ADUC extensions.

Table 2 on page 15 identifies the storage pool tasks you can perform in each interface, and the command syntax or the path to the Celerra Manager page to use to perform the task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The Celerra Network Server Command Reference Manual contains information on the commands described in Table 2 on page 15.

Table 2. Storage pool tasks supported by platform

Task: Create a new user-defined storage pool by volumes. (Note: Applies only to user-defined storage pools.)
Celerra Control Station CLI: nas_pool -create -name <name> -volumes <volumes>
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, and click New.

Task: Create a new user-defined storage pool by size. (Note: Applies only to user-defined storage pools.)
Celerra Control Station CLI: nas_pool -create -name <name> -size <integer>[M|G|T] -template <system_pool_name> -num_stripe_members <num> -stripe_size <num>
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, and click New.

Task: List existing storage pools.
Celerra Control Station CLI: nas_pool -list
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools.

Task: Display storage pool details. (Note: When you perform this operation from Celerra Manager, the total_potential_mb represents the total available storage, including the storage pool.)
Celerra Control Station CLI: nas_pool -info <name> (Note: When you perform this operation in the CLI, the total_potential_mb does not include the space in the storage pool in the output.)
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, and double-click the storage pool name.

Task: Display storage pool size information.
Celerra Control Station CLI: nas_pool -size <name>
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, and view the Storage Capacity and Storage Used (%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or expand a file system.
Celerra Control Station CLI: nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, double-click the storage pool name to open its properties page, and select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space. (Note: Applies only to system-defined storage pools.)
Celerra Control Station CLI: nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, double-click the storage pool name to open its properties page, and select or clear Automatic Extension Enabled as required.

Task: Specify whether the storage pool is greedy. Specifying y tells AVM to allocate new, unused disk volumes to the storage pool when creating or expanding, even if there is available space in the pool. Specifying n tells AVM to allocate all available storage pool space to create or expand a file system before adding volumes to the pool. (Note: Applies only to system-defined storage pools.)
Celerra Control Station CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, double-click the storage pool name to open its properties page, and select or clear Obtain Unused Disk Volumes as required.

Task: Add volumes to a user-defined storage pool. (Note: Applies only to user-defined storage pools.)
Celerra Control Station CLI: nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, select the storage pool you want to extend, click Extend, and select one or more volumes to add to the pool.

Task: Extend a storage pool by size and specify a storage system from which to allocate storage. (Note: Applies to system-defined storage pools only when the is_dynamic attribute for the storage pool is set to n.)
Celerra Control Station CLI: nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G|T] -storage <system_name>
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, select the storage pool you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB, GB, or TB. (Note: The drop-down list shows all the available storage systems, and the volumes shown are only those created on the storage system that is highlighted.)

Task: Remove volumes from a storage pool.
Celerra Control Station CLI: nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...] [-deep] (The -deep setting is optional, and is used to recursively remove all members.)
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, select the storage pool you want to shrink, click Shrink, and select one or more volumes not in use, to be removed from the pool.

Task: Delete a storage pool. (Note: Applies only to user-defined storage pools.)
Celerra Control Station CLI: nas_pool -delete {<name>|id=<id>} [-deep] (The -deep setting is optional, and is used to recursively remove all members.)
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, select the storage pool you want to delete, and click Delete.

Task: Change the name of a storage pool. (Note: Applies only to user-defined storage pools.)
Celerra Control Station CLI: nas_pool -modify {<name>|id=<id>} -name <name>
Celerra Manager: Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Pools, double-click the storage pool name to open its properties page, and type the new name in the Name text box.

Task: Create a file system with automatic file system extension enabled.
Celerra Control Station CLI: $ nas_fs -name <name> -type <type> -create pool=<pool> storage=<system_name> {size=<integer>[T|G|M]} -auto_extend {no|yes}
Celerra Manager: Select Celerras ➤ File Systems ➤ New, and select Automatic Extension Enabled.
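The following Control Station session strings several of these tasks together. It is a sketch only: the pool name marketing_pool and the volume names d7 and d8 are placeholders, command output is omitted, and the actual volume names on a given system come from nas_disk -list.

    # Create a user-defined storage pool from two unused disk volumes
    $ nas_pool -create -name marketing_pool -volumes d7,d8

    # Verify the pool and inspect its members and capacity
    $ nas_pool -list
    $ nas_pool -info marketing_pool
    $ nas_pool -size marketing_pool

    # Later, remove a member volume that is no longer needed, then delete the pool
    $ nas_pool -shrink marketing_pool -volumes d8
    $ nas_pool -delete marketing_pool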


Related information

Specific information related to the features and functionality described in this guide is included in:

◆ Celerra Network Server Command Reference Manual

◆ Online Celerra man pages

◆ Celerra Network Server Parameters Guide

◆ Configuring NDMP Backups to Disk on Celerra

◆ Controlling Access to Celerra System Objects

◆ Managing Celerra Volumes and File Systems Manually

EMC Celerra Network Server Documentation CD

The EMC Celerra Network Server Documentation CD, supplied with Celerra and also available on the EMC Powerlink® website, provides the complete set of EMC Celerra customer publications. After logging in to Powerlink, go to Support ➤ Technical Documentation and Advisories ➤ Hardware/Platforms Documentation ➤ Celerra Network Server. On this page, click Add to Favorites. The Favorites section on your Powerlink home page provides a link that takes you directly to this page.

To request an EMC Celerra Network Server Documentation CD, send an email request to:

[email protected]

Celerra Support Demos

Celerra Support Demos are available on Powerlink. Use these instructional videos to learn how to perform a variety of Celerra configuration and management tasks. After logging in to Powerlink, go to Support ➤ Product and Diagnostic Tools ➤ Celerra Tools ➤ Celerra Support Demos.

Celerra wizards

Celerra wizards can be used to perform set up and configuration tasks. Using Wizards to Configure Celerra provides you with an overview of the steps required to configure a Celerra Network Server by using the Set Up Celerra wizard.


2

Concepts

The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication).

Topics included are:
◆ AVM overview on page 20
◆ System-defined storage pools on page 20
◆ System-defined virtual storage pools on page 21
◆ User-defined storage pools on page 21
◆ File system and automatic file system extension on page 22
◆ AVM and automatic file system extension options on page 22
◆ Storage pool attributes on page 29
◆ System-defined storage pool volume and storage profiles on page 31
◆ File system and storage pool relationship on page 41
◆ Automatic file system extension on page 43
◆ Virtual Provisioning on page 46
◆ Planning considerations on page 46


AVM overview

You can configure file systems created with AVM to automatically extend. The automatic file system extension feature allows you to configure a file system to extend automatically, without system administrator intervention, to support file system operations. Automatic file system extension causes the file system to extend when it reaches the specified usage point, the HWM. You set the size for the file system you create, and also the maximum size to which you want the file system to grow. The Virtual Provisioning option lets you present the maximum size of the file system to the user or application, of which only a portion is actually allocated. Virtual Provisioning allows the file system to slowly grow on demand as the data is written.

Note: Enabling Virtual Provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, then automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free storage space in the file system.

To create file systems, use one or more types of AVM storage pools:

◆ System-defined storage pools
◆ System-defined virtual storage pools
◆ User-defined storage pools

System-defined storage pools

System-defined storage pools are predefined and available with the Celerra Network Server. You cannot create or delete these predefined storage pools because they are set up to make managing volumes and file systems easier than manually managing them. You can modify some of the attributes of the system-defined storage pools, but this is unnecessary.

AVM system-defined storage pools do not preclude the use of user-defined storage pools or manual volume and file system management, but instead give system administrators a simple volume and file system management tool. With Celerra command options and interfaces that support AVM, you can use system-defined storage pools to create and expand file systems without manually creating and managing stripe volumes, slice volumes, or metavolumes. If your applications do not require precise placement of file systems on particular disks or on particular locations on specific disks, using AVM is an easy way for you to create file systems.

EFD drives behave differently than FC or ATA drives, and AVM therefore uses different logic to configure file systems on EFDs. In order to configure EFDs for maximum performance, AVM may select more disk volumes than are needed to satisfy the requested capacity. While the individual disk volumes are no longer available for manual volume management, the unused EFD space is still available for creating additional file systems or extending existing file systems. CLARiiON system-defined storage pools for EFD support on page 37 contains additional information about using EFD drives.

AVM system-defined storage pools are adequate for most high availability and performance considerations. Each system-defined storage pool manages the details of allocating storage to file systems. When you create a file system by using AVM system-defined storage pools, storage is automatically allocated from the pool to the new file system. After the storage is allocated to that pool, the storage pool can dynamically grow and shrink to meet the file system needs.
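For example, a single command is usually enough to let AVM do the volume work. This is a sketch that assumes a CLARiiON-backed Celerra where the clar_r5_performance pool has available disk volumes; the file system name and size are arbitrary, and the optional storage= qualifier is omitted:

    # AVM builds the underlying volume structure from the system-defined pool
    $ nas_fs -name fs01 -create pool=clar_r5_performance size=10G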

System-defined virtual storage pools

System-defined virtual storage pools are automatically created during the normal storage discovery (diskmark) process. A system-defined virtual storage pool contains a set of disks on which thin LUNs can be created for use by the Virtual Provisioning capability. When the last virtual disk volume from a specific virtual CLARiiON storage pool is deleted, the system-defined virtual AVM storage pool and its profiles are automatically removed.

User-defined storage pools

User-defined storage pools allow you to create containers or pools of storage, filled with manually created volumes. If the applications require precise placement of file systems on particular disks or locations on specific disks, AVM user-defined storage pools give you more control. They also allow you to reserve disk volumes so that the system-defined storage pools cannot use them.

User-defined storage pools provide a better option for those who want more control over their storage allocation while still using the more automated management tool. User-defined storage pools are not as automated as the system-defined storage pools. You must specify some attributes of the storage pool and the storage system from which the space is allocated to create file systems. While somewhat less involved than creating volumes and file systems manually, using these storage pools requires more manual involvement on your part than the system-defined storage pools. When you create a file system by using a user-defined storage pool, you must create the storage pool, choose and add volumes to it either by manually selecting and building the volume structure or by auto-selection, expand it with new volumes when required, and remove volumes you no longer require in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool which describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. The auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. System-defined storage pool volume and storage profiles on page 31 describes the AVM algorithms used.
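A sketch of creating a user-defined pool through auto-selection follows; the pool name, size, and stripe values are illustrative only, and the -template, -num_stripe_members, and -stripe_size options are the ones listed for nas_pool -create in Table 2 on page 15:

    # Build a user-defined pool by size, letting AVM pick whole disk volumes whose
    # attributes match the clar_r5_performance system pool, striped four ways
    $ nas_pool -create -name engineering_pool -size 200G \
      -template clar_r5_performance -num_stripe_members 4 -stripe_size 256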


File system and automatic file system extension

You can create or extend file systems with AVM storage pools and configure the file system to automatically extend as needed. You can enable automatic file system extension on a file system when it is created, or you can enable and disable it at any later time by modifying the file system. The options that work with automatic file system extension are:

◆ HWM
◆ Maximum size
◆ Virtual Provisioning

The HWM is the point at which the file system must be extended to meet the usage demand. The default HWM is 90 percent.

The default supported maximum size for any file system is 16 TB.

With automatic file system extension, the maximum size is the size to which the file system could grow, up to the supported 16 TB. Setting the maximum size is optional with automatic file system extension, but mandatory with Virtual Provisioning. With Virtual Provisioning enabled, users and applications see the maximum size, while only a portion of that size is actually allocated to the file system.

Automatic file system extension allows the file system to grow as needed without system administrator intervention, making it easier to meet system operations requirements continuously, without interruptions.
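As an illustration of how these options fit together, the sketch below creates a 10 GB file system that can automatically extend up to a 100 GB maximum size with Virtual Provisioning enabled. The -auto_extend and -vp options appear elsewhere in this document; the -hwm and -max_size spellings are assumptions to be verified against the nas_fs man page, and all names and sizes are placeholders.

    # Start at 10 GB, extend at the 90 percent HWM, present 100 GB to clients
    $ nas_fs -name fs02 -create pool=clar_r5_performance size=10G \
      -auto_extend yes -vp yes -hwm 90% -max_size 100G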

AVM and automatic file system extension options

AVM provides a range of options for configuring your storage. The Celerra Network Server can choose the configuration and placement of the file systems by using system-defined storage pools, or you can create a user-defined storage pool and define its attributes.

AVM storage pools

An AVM storage pool is a container or pool of volumes. Table 3 on page 22 lists the major difference between system-defined and user-defined storage pools.

Table 3. System-defined and user-defined storage pool difference

Functionality: Ability to grow and shrink
System-defined storage pools: Automatic, but the dynamic behavior can be disabled
User-defined storage pools: Manual only — Administrators must manage the volume configuration, addition, and removal of storage from these storage pools


Chapter 4 provides more detailed information.

Disk types

A storage pool must contain volumes from only one disk type.

Table 4 on page 23 lists the available disk types associated with the storage pools and the disk type descriptions.

Table 4. Disk types

CLSTD: Standard CLARiiON disk volumes.
CLATA: CLARiiON Advanced Technology-Attached (ATA) disk volumes.
CLSAS: CLARiiON Serial Attached SCSI (SAS) disk volumes.
CLEFD: CLARiiON Fibre Channel (FC) Enterprise Flash Drives (EFD) disk volumes.
STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.
R1STD: Symmetrix FC disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.
EFD: High performance Symmetrix disk volumes built on Enterprise Flash Drives, typically RAID 5 configuration.
ATA: Standard Symmetrix disk volumes built on SATA drives, typically RAID 1 configuration.
R1ATA: Symmetrix SATA disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2ATA: Symmetrix SATA disk volumes, set up as target for mirrored storage using SRDF functionality.
CMATA: CLARiiON ATA disk volumes for use with MirrorView/Synchronous.
CMSTD: Standard CLARiiON disk volumes for use with MirrorView/Synchronous.
CMEFD: CLARiiON CLEFD disk volumes used with MirrorView/Synchronous.
CMSAS: CLARiiON SAS disk volumes used with MirrorView/Synchronous.
BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.
BCVA: BCV built from SATA disks for use by TimeFinder/FS operations.
R1BCA: BCV built from SATA disks that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.
R2BCA: BCV built from SATA disks that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.
R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.
R2BCV: BCV that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.

System-defined storage pools

Choosing system-defined storage pools to build the file system is the easiest way to manage volumes and file systems. They are associated with the type of attached storage system you have. If you have a CLARiiON storage system attached, the CLARiiON storage pools are available to you through the Celerra Network Server. If you have a Symmetrix storage system attached, the Symmetrix storage pools are available to you through the Celerra Network Server.

System-defined storage pools are dynamic by default. The AVM feature adds and removes volumes automatically from the storage pool as needed. Table 5 on page 24 lists the system-defined storage pools supported on the Celerra Network Server. Table 6 on page 27 contains additional information about RAID group combinations for system-defined storage pools.

Note: A storage pool can include disk volumes of only one type.

Table 5. System-defined storage pools

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).
symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).
symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote Celerra Network Server that uses SRDF, or to a local Celerra Network Server that uses TimeFinder/FS. Using SRDF/S with Celerra for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy with Celerra provide more information about the SRDF feature.
symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_efd: Designed for very high performance and availability at high cost. This storage pool uses EFD disk volumes (typically RAID 5).
clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.
clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.
clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.
clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.
clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.
clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.
clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.
clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLATA disk volumes in a RAID 1/0 configuration.
clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CLARiiON Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.
clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.
clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLSAS disk volumes in a RAID 1/0 configuration.
clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLARiiON CLEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.
clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLEFD disk volumes in a RAID 1/0 configuration.
cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.
cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.
cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.
cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.
cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.
cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cmefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMEFD disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

RAID groups and storage characteristics

Table 6 on page 27 correlates the storage array to the RAID groups for system-defined storage pools.

Table 6. RAID group combinations

Storage                          RAID 5                  RAID 6                  RAID 1
NX4 SAS or SATA                  2+1, 3+1, 4+1, 5+1      4+2                     1+1 RAID 1/0
NS20 / NS40 / NS80 FC            4+1, 8+1                4+2, 6+2, 12+2          1+1 RAID 1
NS20 / NS40 / NS80 ATA           4+1, 6+1, 8+1           4+2, 6+2, 12+2          Not supported
NS-120 / NS-480 / NS-960 FC      4+1, 8+1                4+2, 6+2, 12+2          1+1 RAID 1/0
NS-120 / NS-480 / NS-960 ATA     4+1, 6+1, 8+1           4+2, 6+2, 12+2          1+1 RAID 1/0
NS-120 / NS-480 / NS-960 EFD     4+1, 8+1                Not supported           1+1 RAID 1/0

User-defined storage pools

For some customer environments, more user control is required than the system-defined storage pools offer. One way for administrators to have more control is to create their own storage pools and define the attributes of the storage pool.

AVM user-defined storage pools allow you to have more control over how the storage is allocated to file systems. Administrators can create a storage pool, and choose and add volumes to it either by manually selecting and building the volume structure or by auto-selection, expand it with new volumes when required, and remove volumes you no longer require in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool which describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. The auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. When extending a user-defined storage pool, AVM references the last pool member's volume structure and makes the best effort to keep the underlying volume structures consistent. System-defined storage pool volume and storage profiles on page 31 contains additional information.

While user-defined storage pools have attributes similar to system-defined storage pools, user-defined storage pools are not dynamic. They require administrators to explicitly add and remove volumes manually.


If you define the storage pool, you must also explicitly add and remove storage from the storage pool and define the attributes for that storage pool. Use the nas_pool command to list, create, delete, extend, shrink, and view storage pools, and to modify the attributes of storage pools. Create file systems with AVM on page 55 and Chapter 4 provide more information.

Understanding how AVM storage pools work enables you to determine whether system-defined storage pools or user-defined storage pools, or both, are appropriate for the environment. It is also important to understand the ways in which you can modify the storage-pool behavior to suit your file system requirements. Modify system-defined and user-defined storage pool attributes on page 93 provides a list of all the attributes and the procedures to modify them.

Storage pool attributes

System-defined and user-defined storage pools have attributes that control how they create volumes and file systems. Table 7 on page 29 lists the storage pool attributes, the type of entry, the value, whether the attribute is modifiable and for which storage pools, and the description of the attribute. The system-defined storage pools are shipped with the Celerra Network Server. They are designed to optimize performance based on the hardware configuration. Each of the system-defined storage pools has associated profiles that define the kind of storage used, and how new storage is added to, or deleted from, the storage pool.

Table 7. Storage pool attributes

name
    Values: Quoted string
    Modifiable: Yes (user-defined storage pools)
    Description: Unique name. If a name is not specified during creation, one is automatically generated.

description
    Values: Quoted string
    Modifiable: Yes (user-defined storage pools)
    Description: A text description. Default is "" (blank string).

acl
    Values: Integer. For example, 0.
    Modifiable: Yes (user-defined storage pools)
    Description: Access control level. Controlling Access to Celerra System Objects contains instructions to manage access control levels.

default_slice_flag
    Values: "y" | "n"
    Modifiable: Yes (system-defined and user-defined storage pools)
    Description: Answers the question, can AVM slice member volumes to meet the file system request? A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes. Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

is_dynamic
    Values: "y" | "n"
    Modifiable: Yes (system-defined storage pools)
    Description: Answers the question, is this storage pool allowed to automatically add or remove member volumes? The default answer is n. Note: Only applicable if volume_profile is not blank.

is_greedy
    Values: "y" | "n"
    Modifiable: Yes (system-defined storage pools)
    Description: This field answers the question, is this storage pool greedy? When a storage pool receives a request for space, a greedy storage pool attempts to create a new member volume before searching for free space in existing member volumes. The attribute value for this storage pool is y. A storage pool that is not greedy uses all available space in the storage pool before creating a new member volume. The attribute value for this storage pool is n. Note: Only applicable if volume_profile is not blank.

The system-defined storage pools are designed for use with the Symmetrix and CLARiiON storage systems. The structure of volumes created by AVM might differ greatly depending on the type of storage system used by the various storage pools. This difference allows AVM to exploit the architecture of current and future block storage devices that are attached to the Celerra Network Server.

Figure 1 on page 31 shows how the different storage pools are associated with the disk volumes for each storage-system type attached. The nas_disk -list command lists the disk volumes. These are the Celerra Network Server's representation of the LUNs exported from the attached storage system.

Note: Any given disk volume must be a member of only one storage pool.

Figure 1. AVM system-defined storage pools

System-defined storage pool volume and storage profiles

Volume profiles are the set of rules and parameters that define how new storage is added to a system-defined storage pool. A volume profile defines a standard method of building a large section of storage from a set of disk volumes. This large section of storage can be added to a storage pool that might contain similar large sections of storage. The system-defined storage pool is responsible for satisfying requests for any amount of storage.

Users cannot create or delete system-defined storage pools and their associated profiles. Users can list, view, and extend the system-defined storage pools, and also modify storage pool attributes.

Volume profiles have an attribute named storage_profile. A volume profile's storage profile defines the rules and attributes that are used to aggregate some number of disk volumes (listed by the nas_disk -list command) into a volume that can be added to a system-defined storage pool. A volume profile uses its storage profile to determine the set of disk volumes to select (or match existing Celerra disk volumes), where a given disk volume might match the rules and attributes of a storage profile.

CLARiiON system-defined storage pool algorithms on page 32, CLARiiON system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support on page 35, CLARiiON system-defined storage pools for EFD support on page 37, and Symmetrix system-defined storage pools algorithm on page 38 explain how these profiles help system-defined storage pools aggregate the disk volumes into storage pool members, place the members into storage pools, and then build file systems for each storage-system type. When using the system-defined storage pools without modifications, through Celerra Manager or the command line interface (CLI), this activity is transparent to users.

CLARiiON system-defined storage pool algorithms

When you request a new file system that requires new storage, AVM attempts to create the optimal stripe volume for a CLARiiON storage system. System-defined storage pools for CLARiiON storage systems work with LUNs of a specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage pool.

Integrated-model CLARiiON storage systems use CLARiiON storage templates to create the LUNs that the Celerra Network Server recognizes as disk volumes. CLARiiON storage templates are a combination of template definition files and scripts (you see just the scripts) that create RAID groups and bind LUNs on CLARiiON storage systems. These CLARiiON storage templates are invoked through the CLARiiON setup script (root only) or through Celerra Manager. Celerra NS600/NS600S/NS700/NS700S with Integrated Array Setup Guide contains more information on using CLARiiON storage templates with Celerra.

Disk volumes exported from a CLARiiON storage system are relatively large and might vary in size from approximately 18 GB to 136 GB, depending on physical disk size. A CLARiiON system also has two storage processors (SPs). Most CLARiiON storage templates create two LUNs per RAID group, one owned by SP A, and the other by SP B. Only the CLARiiON RAID 3 storage templates create both LUNs owned by one of the SPs.

If no disk volumes are found when a request for space is made, AVM considers the storage pool attributes, and initiates the next step based on these settings (a simplified sketch of this decision logic follows the list):

◆ The is_greedy setting indicates if the storage pool must add a new member volume to meet the request, or if it must use all the available space in the storage pool before adding a new member volume. AVM then checks the is_dynamic setting.

◆ The is_dynamic setting indicates if the storage pool can dynamically grow and shrink. If set to yes, it allows AVM to automatically add a member volume to meet the request. If set to no, and a member volume must be added to meet the request, then the user must manually add the member volume to the storage pool.

◆ The file-system request slice flag indicates if the file system can be built on a slice volume from a member volume.

◆ The default_slice_flag setting indicates if AVM can slice storage pool member volumes to meet the request.
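The following Python sketch is a simplified, hypothetical illustration of how these settings could interact; the class, field, and function names are assumptions made for illustration and do not represent the actual AVM implementation.

# Hypothetical sketch of the decision settings described above (not AVM code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pool:
    is_greedy: bool           # add a new member before reusing free space?
    is_dynamic: bool          # may AVM add member volumes automatically?
    default_slice_flag: bool  # may member volumes be sliced to the exact size?
    free_gb: int              # free space on existing member volumes
    unused_disk_gb: int       # unused disk volumes available to the pool

def allocate(pool: Pool, request_gb: int) -> str:
    def use_existing() -> Optional[str]:
        if pool.free_gb >= request_gb:
            return ("sliced from existing members" if pool.default_slice_flag
                    else "whole existing member volumes used")
        return None

    def add_member() -> Optional[str]:
        if pool.is_dynamic and pool.unused_disk_gb >= request_gb:
            return "new member volume created from unused disk volumes"
        return None

    # A greedy pool prefers creating a new member volume; a pool that is not
    # greedy uses all available space in the pool before adding a member.
    steps = (add_member, use_existing) if pool.is_greedy else (use_existing, add_member)
    for step in steps:
        result = step()
        if result:
            return result
    return "request fails: add member volumes to the pool manually"

print(allocate(Pool(True, True, True, free_gb=50, unused_disk_gb=500), 100))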


Most of the system-defined storage pools for CLARiiON storage systems first search for four same-size disk volumes, from different buses, different SPs, and different RAID groups.

The absolute criteria that the volumes must meet are:

◆ A disk volume cannot exceed 2 TB.
◆ Disk volumes must match the type specified in the storage pool storage profile.
◆ Disk volumes must be of the same size.
◆ No two disk volumes can come from the same RAID group.
◆ Disk volumes must be on a single storage system.

If found, AVM stripes the LUNs together and inserts the stripe into the storage pool.

If AVM cannot find the four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups, and if not found, AVM then searches for four same-size disk volumes from different RAID groups.

Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume. The criteria that the one disk volume must meet are:

◆ A disk volume cannot exceed 2 TB.
◆ A disk volume must match the type specified in the storage pool storage profile.
◆ If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.

A simplified sketch of this selection order follows.
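The sketch below is a hypothetical Python illustration of the fallback order described above: same-size disk volumes from different RAID groups, preferring bus-balanced, then SP-balanced sets, for set sizes of four, three, and two, before falling back to a single disk volume. Applying every balance rule at each set size is a simplification, and the Disk fields are assumptions for illustration only; this is not the actual AVM code.

# Hypothetical sketch of the CLARiiON disk-selection fallback order.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Disk:
    name: str
    bus: int
    sp: str           # "A" or "B"
    raid_group: int
    size_gb: int

def candidate_sets(disks, count):
    # Same-size disk volumes, no two from the same RAID group.
    for combo in combinations(disks, count):
        if (len({d.size_gb for d in combo}) == 1
                and len({d.raid_group for d in combo}) == count):
            yield combo

def bus_balanced(combo):
    return len({d.bus for d in combo}) == len(combo)

def sp_balanced(combo):
    sps = [d.sp for d in combo]
    return abs(sps.count("A") - sps.count("B")) <= 1

def select(disks):
    for count in (4, 3, 2):
        for rule in (bus_balanced, sp_balanced, lambda combo: True):
            for combo in candidate_sets(disks, count):
                if rule(combo):
                    return list(combo)   # these disk volumes are striped together
    # Fall back to one disk volume; the real selection prefers a volume
    # from the least-used RAID group.
    return disks[:1]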


Figure 2 on page 34 shows the algorithm used to create a file system by adding a pool member to the AVM CLARiiON system-defined storage pools clar_r1, clar_r5_performance, and clar_r5_economy.

Figure 2. clar_r1, clar_r5_performance, and clar_r5_economy storage pool algorithm

Figure 3 on page 34 shows the structure of a clar_r5_performance storage pool. The volumes in the storage pools are balanced between SP A and SP B.

Figure 3. clar_r5_performance storage pool structure


CLARiiON system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support

The three CLARiiON system-defined storage pools that provide support for the ATA environment are clarata_r3, clarata_archive, and clarata_r10.

The clarata_r3 storage pool follows the basic CLARiiON algorithm explained in System-defined storage pool volume and storage profiles on page 31, but uses only one disk volume and does not allow striping of volumes. One of the applications for this pool is backup to disk. Users can manage the RAID 3 disk volumes manually in a user-defined storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only CLARiiON ATA drives, not FC drives.

The criteria that the one disk volume must meet are:

◆ Disk volume cannot exceed 2 TB.
◆ Disk volume must match the type specified in the storage pool storage profile.
◆ If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.


Figure 4 on page 36 shows the storage pool clarata_r3 algorithm.

Figure 4. clarata_r3 storage pool algorithm

The storage pools clarata_archive and clarata_r10 differ from the basic CLARiiON algorithm. These storage pools use two disk volumes, or a single disk volume, and all ATA drives are the same.


The following figure shows the profile algorithm used to create a file system with the clarata_archive and clarata_r10 storage pools.

Figure 5. clarata_archive and clarata_r10 storage pools algorithm

CLARiiON system-defined storage pools for EFD support

Celerra provides the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools for EFD drive support on the CLARiiON storage system. AVM uses the same disk selection algorithm and volume structure for each EFD pool. However, the algorithm differs from the standard CLARiiON algorithm explained in System-defined storage pool volume and storage profiles on page 31 and is outlined next. The algorithm adheres to EMC best practices to achieve the overall best performance and use of EFD drives. Users can also manually manage EFD drives in user-defined pools.

The AVM algorithm used for disk selection and volume structure for all EFD system-defined pools is as follows (a simplified sketch appears after the steps):

1. The LUN creation process is responsible for storage processor balancing. By default, running the setup_clariion command on integrated systems sets up storage processor balancing, which is recommended.

2. Use a default stripe width of 256K (provided in the profile). The stripe member count in the profile is ignored and should be left at 1.

3. When two or more LUNs of the same size are available, always stripe LUNs; otherwise, concatenate LUNs.

4. No RAID group balancing or RAID group usage is considered.

5. No order is applied to the LUNs being striped together except that all LUNs from the same RAID group in the stripe will be next to each other. For example, storage processor balanced order is not applied.


6. Use a maximum of two RAID groups from which to take LUNs.

7. If only one RAID group is available, use every same-size LUN in the RAID group. This maximizes the LUN count and meets the size requested.

8. If only two RAID groups are available, use every same-size LUN in each RAID group. This maximizes the LUN count and meets the size requested.
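The following Python sketch is a simplified, hypothetical rendering of the EFD rules listed above; the LUN fields and function name are assumptions for illustration only and do not represent the actual AVM implementation.

# Hypothetical sketch of the EFD pool selection rules listed above.
from collections import defaultdict

def select_efd_luns(luns, requested_gb):
    # Use LUNs from at most two RAID groups (rule 6).
    by_group = defaultdict(list)
    for lun in luns:
        by_group[lun["raid_group"]].append(lun)
    groups = list(by_group.values())[:2]

    selected = []
    for group in groups:
        # Within a RAID group, take every LUN of the same size (rules 7 and 8).
        sizes = [lun["size_gb"] for lun in group]
        common = max(set(sizes), key=sizes.count)
        selected.extend(lun for lun in group if lun["size_gb"] == common)

    if sum(lun["size_gb"] for lun in selected) < requested_gb:
        return None   # not enough EFD capacity in (at most) two RAID groups

    # Stripe when two or more same-size LUNs are available, using the 256K
    # stripe width from the profile; otherwise concatenate (rules 2 and 3).
    same_size = len({lun["size_gb"] for lun in selected}) == 1
    layout = "stripe" if same_size and len(selected) >= 2 else "concatenate"
    return layout, selected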

Figure 6 on page 38 shows the profile algorithm used to create a file system with the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools.

Figure 6. clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools algorithm

Symmetrix system-defined storage pools algorithm

AVM works differently with Symmetrix storage systems because of the size and uniformity of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix storage system are small and uniform in size. The aggregation strategy used by Symmetrix storage pools is primarily to combine many small disk volumes into larger volumes that can be used by file systems. AVM attempts to distribute the input/output (I/O) to as many Symmetrix directors as possible. The Symmetrix storage system can distribute I/O among the physical disks by using slicing and striping on the storage system, but this is less of a concern for the AVM aggregation strategy.

A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes, or creates a metavolume, as necessary to meet the request. The stripe or metavolume is added to the Symmetrix storage pool. When the administrator asks for n GB of space from the Symmetrix storage pool, the space is allocated from this system-defined storage pool. AVM adds to and takes from the system-defined storage pool as required. The stripe size is set in the system-defined profiles, and you cannot modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix storage pool is 32 KB. Multi-path file system (MPFS) requires a stripe depth of 32 KB or greater.

The algorithm that AVM uses looks for a set of eight disk volumes, and if not found, a set of four disk volumes, and if not found, then a set of two disk volumes, and finally one disk volume. AVM stripes the disk volumes together, if the disk volumes are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool.

If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that has space, takes a slice from that metavolume, and makes a metavolume over that slice.
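A simplified, hypothetical Python sketch of this Symmetrix selection order appears below; the function name and disk fields are assumptions for illustration and do not represent the actual AVM implementation.

# Hypothetical sketch of the Symmetrix selection order described above:
# try sets of 8, 4, 2, then 1 disk volume; stripe same-size sets, otherwise
# build a metavolume; fall back to slicing an existing metavolume.
def build_symmetrix_member(free_disks, pool_metavolumes, request_gb):
    for count in (8, 4, 2, 1):
        if len(free_disks) >= count:
            chosen = free_disks[:count]
            same_size = len({d["size_gb"] for d in chosen}) == 1
            if same_size and count > 1:
                return {"type": "stripe", "disks": chosen}
            return {"type": "metavolume", "disks": chosen}
    # No free disk volumes: slice a metavolume in the pool that has space.
    for meta in pool_metavolumes:
        if meta["free_gb"] >= request_gb:
            return {"type": "metavolume on slice", "source": meta["name"]}
    return None

# Illustrative data only.
disks = [{"name": "d%d" % i, "size_gb": 8} for i in range(4)]
print(build_symmetrix_member(disks, [], 16))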

Figure 7 on page 39 shows the AVM algorithm used to create the file system with a Symmetrix system-defined storage pool.

Figure 7. Symmetrix storage pool algorithm

Figure 8 on page 39 shows the structure of a Symmetrix storage pool.

Figure 8. Symmetrix storage pool structure


All this system-defined storage pool activity is transparent to users and provides an easy way to create and manage file systems. The system-defined storage pools do not allow users to have much control over how AVM aggregates storage to meet file system needs, but most users prefer ease-of-use over control.

When users make a request for a new file system that uses the system-defined storage pools, AVM:

◆ Determines if more volumes need to be added to the storage pool; if so, selects and adds volumes.

◆ Selects an existing, available storage pool volume to use for the file system and might slice it to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates the volumes to create the size required to meet the request.

◆ Places a metavolume on the resulting volume and builds the file system within the metavolume.

◆ Returns the file system information to the user.

All system-defined storage pools have specific, predictable rules for getting disk volumes into storage pools, provided by their associated profiles.

Virtual pools

AVM builds a virtual (thin) file system as follows:

1. AVM initially selects unused disks (LUNs) that are storage processor balanced.

2. AVM orders the disks for each storage processor based on the most unused space (the largest free disks are selected first).

3. AVM selects as many disks as necessary to satisfy the space request and places these disks in the AVM virtual pool.

Note: AVM decides which storage processor to first select a disk from based on the storage processor with the least amount of space used in the virtual pool. Then AVM alternates storage processors for selecting each subsequent disk.

4. AVM takes the selected pool members and concatenates whole disks or slices of disks (LUNs) into a single volume structure (a metavolume) on which the file system is built. Stripes are not used for virtual file systems. Slices of disks are used when there are no free whole disks available to take space from. A simplified sketch of this selection logic follows.
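The following Python sketch is a simplified, hypothetical illustration of the selection steps above (SP-balanced ordering, largest free disks first, alternating storage processors, then concatenation into a metavolume); the disk fields and function name are assumptions for illustration only.

# Hypothetical sketch of the virtual (thin) pool selection steps above.
# Disks are dicts with illustrative fields: name, sp ("A"/"B"), free_gb.
def select_virtual_pool_disks(unused_disks, used_gb_per_sp, request_gb):
    # Order each storage processor's disks by the most unused space first.
    per_sp = {
        sp: sorted((d for d in unused_disks if d["sp"] == sp),
                   key=lambda d: d["free_gb"], reverse=True)
        for sp in ("A", "B")
    }
    # Start with the SP that has the least space already used in the pool,
    # then alternate SPs for each subsequent disk.
    current = min(("A", "B"), key=lambda sp: used_gb_per_sp.get(sp, 0))
    selected, remaining = [], request_gb
    while remaining > 0 and (per_sp["A"] or per_sp["B"]):
        if not per_sp[current]:
            current = "B" if current == "A" else "A"
        disk = per_sp[current].pop(0)
        selected.append(disk)
        remaining -= disk["free_gb"]
        current = "B" if current == "A" else "A"
    # The selected whole disks (or slices of them) are concatenated into a
    # single metavolume; stripes are not used for virtual file systems.
    return selected if remaining <= 0 else None

disks = [{"name": "d7", "sp": "A", "free_gb": 100},
         {"name": "d8", "sp": "B", "free_gb": 100},
         {"name": "d9", "sp": "A", "free_gb": 50}]
print(select_virtual_pool_disks(disks, {"A": 0, "B": 0}, 150))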


Figure 9 on page 41 shows the virtual pool algorithm.

Figure 9. Virtual pool algorithm

File system and storage pool relationship

When you request a file system that uses a system-defined storage pool, AVM consumes disk volumes either by adding new members to the pool, or by using existing pool members. To create a file system by using a user-defined storage pool, either create the storage pool and add the volumes you want to use manually before creating the file system, or let AVM create the user pool by size.

Deleting a file system associated with either a system-defined or user-defined storage pool returns the unused space to the storage pool. But the storage pool might continue to reserve that space for future file system requests. Figure 10 on page 42 shows two file systems built from an AVM storage pool.

Figure 10. File systems built by AVM

As Figure 11 on page 42 shows, if FS2 is deleted, the storage used for that file system is returned to the storage pool, and AVM continues to reserve it, as well as any other member volumes available in the storage pool, for a future request. This is true of system-defined and user-defined storage pools.

Figure 11. FS2 deletion returns storage to the storage pool

If FS1 is also deleted, the storage that was used for the file systems is no longer required for file systems.

A system-defined storage pool removes the volumes from the storage pool and returns the disk volumes to the storage system for use with other features or storage pools. You can change the attributes of a system-defined storage pool so it is not dynamic, and does not grow and shrink dynamically. Doing that increases your direct involvement in managing the volume structure of the storage pool, including adding and removing volumes.

A user-defined storage pool cannot add or remove volumes automatically. To use volumes contained in a user-defined storage pool for another purpose, you must remove the volumes. Remove volumes from storage pools on page 107 provides more information on removing volumes. Otherwise, the user-defined storage pool continues to reserve the space for use by that pool.

Figure 12 on page 43 shows that the storage pool container continues to exist after the file systems are deleted, and AVM continues to reserve the volumes for future requests of that storage pool.

Figure 12. FS1 deletion leaves storage pool container with volumes

If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in Remove volumes from storage pools on page 107 to remove volumes from the system-defined storage pool.

For a user-defined storage pool, to reuse the volumes for other purposes, remove the volumes or delete the storage pool.

Automatic file system extension

Automatic file system extension works only when an AVM storage pool is associated with a file system. You can enable or disable automatic file system extension when you create a file system or modify the file system properties later.

Create file systems with AVM on page 55 provides the procedure to create file systems with AVM system-defined or user-defined storage pools and enable automatic file system extension on a newly created file system. Enable automatic file system extension and options on page 77 provides the procedure to modify an existing file system and enable automatic file system extension.

You can set the HWM and maximum size for automatic file system extension. The Control Station might attempt to extend the file system several times, depending on these settings.

HWM

The HWM identifies the threshold for initiating automatic file system extension. The HWM value must be between 50 percent and 99 percent. The default HWM is 90 percent of the file system size.

Automatic file system extension guarantees that the file system usage is at least 3 percent below the HWM. For example, a 100 GB file system reaches its 80 percent HWM at 80 GB. The file system then automatically extends to 120 GB and is now at 66.67 percent usage (80 GB), which is well below the 80 percent HWM for the 120 GB file system:

◆ If automatic file system extension is disabled, when the file system reaches the HWM, an HWM event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.

◆ If automatic file system extension is enabled on a file system, when the file system reaches the HWM, an automatic extension event notification is sent to sys_log and the file system automatically extends without any administrative action:

• A file system that is smaller than 20 GB extends by its size when it reaches the HWM. For example, a 3 GB file system, after reaching its HWM (for example, default of 90 percent), automatically extends to 6 GB.

• A file system that is larger than 20 GB extends by 5 percent of its size or 20 GB, whichever is larger, when it reaches the HWM. For example, a 100 GB file system extends to 120 GB, and a 500 GB file system extends to 525 GB.

Maximum size

The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension is from 3 MB up to 16 TB. If Virtual Provisioning is enabled and the selected storage pool is a traditional RAID group (non-virtual CLARiiON thin) storage pool, the maximum size is required; otherwise, this field is optional. The extension size is also dependent on having additional space in the storage pool associated with the file system.

Automatic file extension conditions

◆ If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.

◆ If the available space is less than the extend size, the file system extends by the maximum available space.

◆ If only the HWM is set with automatic file system extension enabled, the file system automatically extends when that HWM is reached, if there is space available and the file system size is less than 16 TB.

◆ If only the maximum size is specified with automatic file system extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, if there is space available and the maximum size has not been reached. If the file system reaches or exceeds the set maximum size, automatic extension is rejected.

◆ If the HWM or maximum file size is not set, but either automatic file system extension or Virtual Provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.


Calculating automatic file system extension size

During each automatic file system extension, fs_extend_handler, located on the Control Station (/nas/sbin/fs_extend_handler), calculates the extension size by using the algorithm shown in Figure 13 on page 45. A simplified sketch of this calculation follows the figure.

Figure 13. Automatic file system extension size calculation
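Because the figure itself is not reproduced here, the following Python sketch approximates the extension-size calculation from the rules stated in the HWM and conditions sections above: a file system smaller than 20 GB extends by its own size, a larger file system extends by the larger of 5 percent or 20 GB, the file system never grows beyond its maximum size, and the extension is limited by the space available in the storage pool. It is an illustrative assumption, not the actual fs_extend_handler code.

# Simplified sketch of the extension-size rules described above.
# This is not the actual /nas/sbin/fs_extend_handler logic.
def extension_size_gb(fs_size_gb, max_size_gb, pool_avail_gb):
    if fs_size_gb >= max_size_gb:
        return 0                                    # at maximum size: reject
    if fs_size_gb < 20:
        extend = fs_size_gb                         # small file systems double
    else:
        extend = max(fs_size_gb * 0.05, 20)         # larger of 5 percent or 20 GB
    extend = min(extend, max_size_gb - fs_size_gb)  # never exceed the maximum size
    return min(extend, pool_avail_gb)               # limited by available pool space

# Examples from the text: a 100 GB file system extends by 20 GB (to 120 GB),
# and a 500 GB file system extends by 25 GB (to 525 GB).
print(extension_size_gb(100, 16 * 1024, 1000))
print(extension_size_gb(500, 16 * 1024, 1000))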


Virtual Provisioning

The Virtual Provisioning option allows you to allocate storage capacity based on anticipated needs, while you dedicate only the resources you currently need. Combining automatic file system extension and Virtual Provisioning lets you grow the file system gradually as needed.

When Virtual Provisioning is enabled and a virtual storage pool is not being used, the virtual maximum file system size is reported to NFS and CIFS clients; if a virtual storage pool is being used, the actual file system size is reported to NFS and CIFS clients.

Note: Enabling Virtual Provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system.

Planning considerations

This section covers important volume and file system planning information and guidelines, interoperability considerations, storage pool characteristics, and Celerra upgrade considerations that you need to know when implementing AVM and automatic file system extension.

Review these topics:

◆ Celerra Network Server file system management and the nas_fs command
◆ The Celerra SnapSure feature (checkpoints) and the fs_ckpt command
◆ Celerra Network Server volume management concepts (metavolumes, slice volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands
◆ RAID technology
◆ Symmetrix storage systems
◆ CLARiiON storage systems

Interoperability considerations

Consider these when using Celerra automatic file system extension with replication:

◆ Enable automatic extension and Virtual Provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically.

◆ When the source file system reaches its HWM, the destination file system automatically extends first and then the source file system automatically extends. Set up the source replication file system with automatic extension enabled, as explained in Create file systems with automatic file system extension on page 65, or modify an existing source file system to automatically extend, by using the procedure Enable automatic file system extension and options on page 77.

◆ If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message indicating that the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size, by using the nas_fs -xtend <fs_name> -option src_only command. Using Celerra Replicator (V1) and Using Celerra Replicator (V2) contain instructions to recover from this situation.

Other interoperability considerations are:

◆ The automatic extension and Virtual Provisioning configuration is not moved over to the destination file system during Replicator failover. If you intend to reverse the replication and the destination file system becomes the source, you must enable automatic extension on the new source file system.

◆ With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size on the source file system. Table 8 on page 47 describes this client view.

Table 8. Client view of Replicator source and destination file systems

Clients see:
◆ Destination file system: Actual size
◆ Source file system without Virtual Provisioning: Actual size
◆ Source file system with Virtual Provisioning: Maximum size

Using Celerra Replicator (V1) and Using Celerra Replicator (V2) contain more information on using automatic file system extension with Celerra Replicator.

AVM storage pool considerations

Consider these AVM storage pool characteristics:

◆ System-defined storage pools have a set of rules governing how the Celerra Network Server manages storage. User-defined storage pools have attributes that you define for each storage pool.

◆ All system-defined storage pools (virtual and non-virtual) are dynamic; they acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.

◆ User-defined storage pools are not dynamic; they require administrators to explicitly add and remove volumes manually. You are allowed to choose disk volume storage from only one of the attached storage systems when creating a user-defined storage pool.

◆ Striping never occurs above the storage-pool level.


◆ The system-defined CLARiiON storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered to be a “greedy” attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. Modify system-defined and user-defined storage pool attributes on page 93 describes the procedure.

Another option is to create user-defined storage pools to group disk volumes to keep system-defined storage pools from using them. Create file systems with user-defined storage pools on page 58 provides more information on creating user-defined storage pools. You can create a storage pool to reserve disk volumes, but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes.

◆ The system-defined Symmetrix storage pools maximize the use of disk volumes acquired by the storage pool before consuming more. This behavior is considered to be a “not greedy” attribute.

◆ AVM does not perform storage system operations necessary to create new disk volumes, but consumes only existing disk volumes. You might have to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.

◆ A file system might use many or all the disk volumes that are members of a system-defined storage pool.

◆ You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on ATA drives to the pool, add only other ATA-based disk volumes to the pool to extend it.

◆ SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made.

◆ AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.

◆ Some AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a CLARiiON storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than was intended.

Create file systems with AVM on page 55 provides more information on creating file systems by using the different pool types. Managing Celerra Volumes and File Systems Manually contains instructions to recover from this situation.

Related information on page 18 provides a list of related documentation.


Upgrading Celerra software

When you upgrade to Celerra Network Server version 5.6 software, all system-defined storage pools are available.

The system-defined storage pools for the currently attached storage systems with available space appear in the output when you list storage pools, even if AVM is not used on the Celerra Network Server. If you have not used AVM in the past, these storage pools are containers and do not consume storage until you request a file system by using AVM.

If you have used AVM in the past, in addition to the system-defined storage pools, any user-defined storage pools you have created also appear when you list the storage pools.

Automatic file system extension is interrupted during Celerra software upgrades. If automatic file system extension is enabled, the Control Station continues to capture HWM events, but actual file system extension does not start until the Celerra upgrade process completes.

File system and automatic file system extension considerations

Consider your environment, most important file systems, file system sizes, and expected growth, before implementing AVM. Follow these general guidelines when planning to use AVM in your environment:

◆ Create the most important and most used file systems first to access them quickly and easily. AVM system-defined storage pools use free disk volumes to create a new file system. For example, there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, metavolume1, and then creates the file system ufs1:

• Assuming the default behavior of the system-defined storage pool, AVM uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1.

• File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, making it easier to locate and access them.

◆ If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.

◆ For file systems with sequential I/O, two LUNs per file system are generally sufficient.

◆ If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing.

◆ Automatic file system extension does not alleviate the need for appropriate file system usage planning. Create the file systems with adequate space to accommodate the estimated file system usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails. Known problems and limitations on page 112 provides more information on how to identify and recover from this issue.

Note: When planning file system size and usage, consider setting the HWM, so that the free space above the HWM setting is a certain percentage above the largest average file for that file system.

◆ Use of AVM with a single-enclosure CLARiiON storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.

◆ If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool. Create file systems with user-defined storage pools on page 58 provides more information.

◆ Take disk contention into account when creating a user-defined pool.

◆ If you have disk volumes you would like to reserve, so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it.


Chapter 3: Configuring

The tasks to configure volumes and file systems with AVM are:

◆ Configure disk volumes on page 52
◆ Create file systems with AVM on page 55
◆ Extend file systems with AVM on page 69
◆ Create file system checkpoints with AVM on page 85


Configure disk volumes

The EMC Celerra network servers that are gateway network-attached storage (NAS) systems and that connect to EMC Symmetrix and CLARiiON arrays are:

◆ Celerra NS500G
◆ Celerra NS500GS
◆ Celerra NS600G
◆ Celerra NS600GS
◆ Celerra NS700G
◆ Celerra NS700GS
◆ Celerra NS704G
◆ Celerra NS40G
◆ Celerra NS-G8

A Celerra gateway system stores data on CLARiiON user LUNs or Symmetrix hypervolumes. If the user LUNs or hypervolumes are not configured correctly on the array, Celerra AVM and Celerra Manager cannot be used to manage the storage.

Typically, EMC support personnel perform the initial setup of disk volumes on these gateway storage systems.

However, if your Celerra gateway system is attached to a CLARiiON array and you want to add disk volumes to the configuration, use the procedure outlined in this section:

1. Use EMC Navisphere Manager or the EMC Navisphere CLI to create CLARiiON user LUNs.

2. Use either Celerra Manager or the CLI to make the new user LUNs available to the Celerra Network Server as disk volumes.

The user LUNs must be created before you create Celerra file systems.

To add CLARiiON user LUNs, you must be familiar with EMC Navisphere Manager or the EMC Navisphere CLI and the process of creating RAID groups and CLARiiON user LUNs for the Celerra volumes. The documentation for EMC Navisphere Manager and EMC Navisphere CLI, available on Powerlink, describes how to create RAID groups and user LUNs.

If the disk volumes are configured by EMC experts, go to Create file systems with AVM on page 55.


Add CLARiiON user LUNs to a gateway system

1. Create RAID groups and CLARiiON user LUNs (as needed for Celerra volumes) by using EMC Navisphere Manager or EMC Navisphere CLI. Ensure that you add the LUNs to the Celerra gateway system's storage group:

• Always create the user LUNs in balanced pairs, one owned by SP A and one owned by SP B. The paired LUNs must be the same size.

• FC disks must be configured as RAID 1, RAID 5, or RAID 6. The paired LUNs do not have to be in the same RAID group. Table 6 on page 27 lists the valid RAID group and storage array combinations; Gateway models use the same combinations as the NS-80 (for CX3 backends) or the NS-960 (for CX4 backends).

• ATA disks must be configured as RAID 3, RAID 5, or RAID 6. All LUNs in a RAID group must belong to the same SP; create pairs by using LUNs from two RAID groups. Table 6 on page 27 lists the valid RAID group and storage array combinations; Gateway models use the same combinations as the NS-80 (for CX3 backends) or the NS-960 (for CX4 backends).

• The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.

Use these settings when creating user LUNs:

• RAID Type: RAID 1, RAID 5, or RAID 6 for FC disks and RAID 3, RAID 5, or RAID 6 for ATA disks
• LUN ID: Select the first available value
• Element Size: 128
• Rebuild Priority: ASAP
• Verify Priority: ASAP
• Enable Read Cache: Selected
• Enable Write Cache: Selected
• Enable Auto Assign: Cleared (off)
• Number of LUNs to Bind: 2
• Alignment Offset: 0
• LUN size: Must not exceed 2 TB

Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.

• When you add the LUN to the storage group for a gateway system, set the HLU to 16 or greater.

2. Perform one of these steps to make the new user LUNs available to the Celerra system:


• Using Celerra Manager:

a. Open the Storage System page for the Celerra system (Storage ➤ Systems).

b. Click Rescan.

• Using the CLI, run the nas_diskmark -mark -all command.

Note: Do not change the host LUN identifier of the Celerra LUNs after rescanning. This might cause data loss or unavailability.

Add disk volumes to an integrated system

EMC suggests configuring unused or new disk devices on a CLARiiON storage system using the Provisioning wizard. The Provisioning wizard is only available for integrated Celerra models (NX4 and NS non-gateway systems excluding NS80), including Fibre Channel enabled models, attached to a single CLARiiON backend.

To open the Provisioning wizard:

1. Select Celerras ➤ [Celerra_name] ➤ Storage ➤ Systems and right-click the CLARiiON system.

2. From the menu, select Provision Storage.

Note: To use the Provisioning wizard, you must log in to Celerra Manager by using the default administrative user accounts root or nasadmin or by using an administrative user account that belongs to the predefined group storage (which is associated with the role storage_admin).

You can also use the setup_clariion script to provision the disks on integrated (non-FC) systems. This tool binds the CLARiiON storage system data LUNs on the xPEs and DAEs, and makes them accessible to the Data Movers.

After the installation completes, the Celerra /nas/sbin/setup_clariion script can be run to ensure your RAID groups and LUN settings are appropriate for your Celerra server configuration. The script provides array templates for pre-defined setup and a User_Defined configuration option that prompts you for a shelf template for each enclosure.

You can run the setup_clariion script stand-alone or by using Celerra Manager (Celerras ➤ [Celerra_name] ➤ Storage ➤ Systems ➤ [System_name], then click Configure) on integrated CLARiiON storage when there are unbound disks to configure. The Celerra Manager version only supports the array templates for CX and CX3 backends. CX4 backends must use the User_Defined mode with the /nas/sbin/setup_clariion CLI script or the Provisioning wizard.

The setup_clariion script allows you to configure CLARiiON arrays on a shelf-by-shelf basis using predefined configuration templates. For each enclosure (xPE or DAE), the script examines your specific hardware configuration and gives you a choice of appropriate templates. From the choices available, you can select the shelf template you want. In other words, you can mix combinations of RAID configurations on the same storage system. The script then combines the shelf templates into a custom, User_Defined array template for each CLARiiON system, and then configures your array.

Create file systems with AVM

This section describes the procedures to create a Celerra file system by using AVM storage pools and explains how to create file systems by using the automatic file system extension feature.

You can enable automatic file system extension on new or existing file systems if the file system has an associated AVM storage pool. When you enable automatic file system extension, use the nas_fs command options to adjust the HWM value, set a maximum file size to which the file system can be extended, and enable Virtual Provisioning. Create file systems with automatic file system extension on page 65 provides more information.

You can create file systems by using system-defined, system-defined virtual, or user-defined storage pools with automatic file system extension enabled or disabled. Specify the storage system from which to allocate space for the type of storage pool being created.

Choose one or more of these procedures to create file systems:

◆ Create file systems with system-defined storage pools on page 56

The simplest way to create file systems without having to create the underlying volume structure.

◆ Create file systems with user-defined storage pools on page 58

Allows more administrative control of the underlying volumes and placement of the file system. Use these storage pools to prevent the system-defined storage pools from using certain volumes.

◆ Create file systems with automatic file system extension on page 65

Allows you to create a file system that automatically extends when it reaches a certain threshold by using space from either a system-defined or a user-defined storage pool.


Create file systems with system-defined storage pools

When you create a Celerra file system by using the system-defined storage pools, it is not necessary to create volumes before setting up the file system. AVM allocates space to the file system from the storage pool you specify, residing on the storage system associated with that storage pool, and automatically creates any required volumes when it creates the file system. This ensures that the file system and its extensions are created from the same type of storage, with the same cost, performance, and availability characteristics.

The storage system name appears as a number associated with the storage system, and depends on the type of attached storage system. A CLARiiON storage system appears as a set of integers, prefixed with APM, for example, APM00033900124-0019. A Symmetrix storage system appears as a set of integers, for example, 002804000190-003C.

To create a file system with system-defined storage pools:

1. Obtain the list of available system-defined and system-defined virtual storage pools by typing:

$ nas_pool -list

Output:

id inuse acl name
1  y     0   symm_std
2  n     0   clar_r1
3  y     0   clar_r5_performance
4  y     0   clar_r5_economy
5  n     0   clarata_r3
6  n     0   clarata_archive
7  n     0   symm_std_rdf_src
8  n     0   clar_r10
40 y     0   engineer_APM00844016664
41 y     0   tp1_FCNTR074200038

2. Display the size of a specific storage pool by using this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size of the clar_r5_performance storage pool, type:

$ nas_pool -size clar_r5_performance

Output:

id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985


Note: To display the size of all storage pools, use the -all option instead of the <name> option.

3. Obtain the system name of an attached Symmetrix storage system by typing:

$ nas_storage -list

Output:

id  acl  name          serial number
1   0    000183501491  000183501491

4. Obtain information about a specific Symmetrix storage system in the list by using this command syntax:

$ nas_storage -info <system_name>

where:

<system_name> = name of the storage system

Example:

To obtain information about the Symmetrix storage system 000183501491, type:

$ nas_storage -info 000183501491

Output:

type num slot ident   stat scsi  vols ports p0_stat p1_stat p2_stat p3_stat
R1   1   1    RA-1A   Off  NA    0    1     Off     NA      NA      NA
DA   2   2    DA-2A   On   WIDE  25   2     On      Off     NA      NA
DA   3   3    DA-3A   On   WIDE  25   2     On      Off     NA      NA
SA   5   5    SA-5A   On   ULTRA 0    2     On      On      NA      NA
SA   12  12   SA-12A  On   ULTRA 0    2     Off     On      NA      NA
DA   14  14   DA-14A  On   WIDE  27   2     On      Off     NA      NA
DA   15  15   DA-15A  On   WIDE  26   2     On      Off     NA      NA
R1   16  16   RA-16A  On   NA    0    1     On      NA      NA      NA
R2   17  1    RA-1B   Off  NA    0    1     Off     NA      NA      NA
DA   18  2    DA-2B   On   WIDE  26   2     On      Off     NA      NA
DA   19  3    DA-3B   On   WIDE  27   2     On      Off     NA      NA
SA   21  5    SA-5B   On   ULTRA 0    2     On      On      NA      NA
SA   28  13   SA-12B  On   ULTRA 0    2     On      On      NA      NA
DA   30  14   DA-14B  On   WIDE  25   2     On      Off     NA      NA
DA   31  15   DA-15B  On   WIDE  25   2     On      Off     NA      NA
R2   32  16   RA-16B  On   NA    0    1     On      NA      NA      NA

5. Create a file system by size with a system-defined storage pool by using this command syntax:

$ nas_fs -name <fs_name> -create size=<size> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system

<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)

<pool> = name of the storage pool


<system_name> = name of the storage system from which space for the file system is allocated

Example:

To create a file system ufs1 of size 10G with a system-defined storage pool, type:

$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=00018350149

Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src storage pool. This directs AVM to allocate space from volumes configured when installing for remote mirroring by using SRDF. Using SRDF/S with Celerra for Disaster Recovery contains more information.

Output:

id            = 1
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = avm1
pool          = symm_std
member_of     =
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,virtual_provision=no
deduplication = off
stor_devs     = 00018350149
disks         = d20,d12,d18,d10

Note: The Celerra Network Server Command Reference Manual contains information on the options available for creating a file system with the nas_fs command.

Create file systems with user-defined storage pools

The AVM system-defined storage pools are available for use with the Celerra Network Server. If you require more manual control than the system-defined storage pools allow, create a user-defined storage pool and then create the file system by using that pool.

Note: Create a user-defined storage pool and define its attributes to reserve disk volumes so that your system-defined storage pools cannot use them.

Before you begin

Prerequisites include:

◆ A user-defined storage pool can be created either by using manual volume management or by letting AVM create the storage pool by using a specified size. If you use manual volume management, you must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing Celerra Volumes and File Systems Manually describes the steps to create and manage volumes.

◆ You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to Celerra System Objects contains more information on access control levels.

◆ AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor balanced disk volumes that have the same RAID type, disk count, and size. Modify system-defined and user-defined storage pool attributes on page 93 provides more information.

◆ When creating a user-defined storage pool to reserve disk volumes from a CLARiiON storage system, use storage-processor balanced disk volumes with these same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.

To create a file system with a user-defined storage pool:

◆ Create a user-defined storage pool by volumes on page 60
◆ Create a user-defined storage pool by size on page 60
◆ Create the file system on page 62
◆ Create file systems with automatic file system extension on page 65
◆ Create automatic file system extension-enabled file systems on page 66


Create a user-defined storage pool by volumes

To create a user-defined storage pool (from which space for the file system is allocated) by volumes, add volumes to the storage pool and define the storage pool attributes.

Action

To create a user-defined storage pool by volumes, use this command syntax:

$ nas_pool -create -name <name> -acl <acl> -description <desc> -volumes <volume_name>[,<volume_name>,...] -default_slice_flag {y|n}

where:

<name> = name of the storage pool

<acl> = designates an access control level for the new storage pool; default value is 0

<desc> = assigns a comment to the storage pool; type the comment within quotes

<volume_name> = names of the volumes to add to the storage pool; can be a metavolume, slice volume, stripe volume, or disk volume; use a comma to separate each volume name

-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool; if set to y, then members might be sliced and if set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.

Example:

To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and d129 specified, and allow the volumes to be built on a slice, type:

$ nas_pool -create -name marketing -description “storage pool for marketing” -volumes d126,d127,d128,d129 -default_slice_flag y

Output

id                    = 5
name                  = marketing
description           = storage pool for marketing
acl                   = 0
in_use                = False
clients               =
members               = d126,d127,d128,d129
default_slice_flag    = True
is_user_defined       = True
virtually_provisioned = False
disk_type             = CLSTD
server_visibility     = server_2,server_3,server_4
template_pool         = N/A
num_stripe_members    = N/A
stripe_size           = N/A

Create a user-defined storage pool by size

To create a user-defined storage pool (from which space for the file system is allocated) by size, specify a template pool, size of the pool, minimum stripe size, and number of stripe members.


Action

To create a user-defined storage pool by size, use this command syntax:

$ nas_pool -create -name <name> -acl <acl> -description <desc>

-default_slice_flag {y|n} -size <integer>[M|G|T] -storage <system_name>

-template <system_pool_name> -num_stripe_members <num_stripe_mem>

-stripe_size <num>

where:

<name> = name of the storage pool

<acl> = designates an access control level for the new storage pool; default value is 0

<desc> = assigns a comment to the storage pool; type the comment within quotes

-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool; if set to y, then members might be sliced and if set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.

<integer> = size of the storage pool, an integer between 1 and 1024; specify the size in GB (default) by typing <integer>G (for example, 250G), in MB by typing <integer>M (for example, 500M), or in TB by typing <integer>T (for example, 1T)

<system_name> = storage system on which one or more volumes will be created, to be added to the storage pool

<system_pool_name> = system pool template used to create the user pool; required when the -size option is specified; the user pool will be created using the profile attributes of the specified system pool template

<num_stripe_mem> = number of stripe members used to create the user pool; only works when both the -size and -template options are also specified; it overrides the number of stripe members attribute of the specified system pool template

<num> = stripe size used to create the user pool; only works when both the -size and -template options are also specified; it overrides the stripe size attribute of the specified system pool template

Example:

To create a 20 GB user-defined storage pool named marketing with a description, using the clar_r5_performance pool, and that contains 4 stripe members with a stripe size of 32768 KB, and allow the volumes to be built on a slice, type:

$ nas_pool -create -name marketing -description “storage pool for marketing” -default_slice_flag y -size 20G -template clar_r5_performance -num_stripe_members 4 -stripe_size 32768


Output

id                    = 5
name                  = marketing
description           = storage pool for marketing
acl                   = 0
in_use                = False
clients               =
members               = v213
default_slice_flag    = True
is_user_defined       = True
virtually_provisioned = False
disk_type             = CLSTD
server_visibility     = server_2,server_3
template_pool         = clar_r5_performance
num_stripe_members    = 4
stripe_size           = 32768

Create the file system

To create a file system, you must first create a user-defined storage pool. Create a user-defined storage pool by volumes on page 60 and Create a user-defined storage pool by size on page 60 provide more information.

Use this procedure to create a file system by specifying a user-defined storage pool and an associated storage system.

1. List the storage system by typing:

$ nas_storage -list

Output:

id  acl  name            serial number
1   0    APM00033900125  APM00033900125

2. Get detailed information about a specific attached storage system in the list by using this command syntax:

$ nas_storage -info <system_name>

where:

<system_name> = name of the storage system

Example:

To get detailed information of the storage system APM00033900125, type:

$ nas_storage -info APM00033900125

Output:


id               = 1
arrayname        = APM00033900125
name             = APM00033900125
model_type       = RACKMOUNT
model_num        = 630
db_sync_time     = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks        = 30
num_devs         = 21
num_pdevs        = 1
num_storage_grps = 0
num_raid_grps    = 10
cache_page_size  = 8
wr_cache_mirror  = True
low_watermark    = 70
high_watermark   = 90
unassigned_cache = 0
failed_over      = False
captive_storage  = True

Active Software
Navisphere       = 6.6.0.1.43
ManagementServer = 6.6.0.1.43
Base             = 02.06.630.4.001

Storage Processors
SP Identifier     = A
signature         = 926432
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500756
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
sys_buffer        = 749
read_cache        = 32
write_cache       = 3072
free_memory       = 115
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = spa
ip_address        = 128.221.252.200
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 11 - root_disk root_ldisk d3 d4 d5 d6 d8 d13 d14 d15 d16

SP Identifier     = B
signature         = 926493
microcode_version = 2.06.630.4.001
serial_num        = LKE00033500508
prom_rev          = 3.00.00
agent_rev         = 6.6.0 (1.43)
phys_memory       = 3968
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = OEM-XOO25IL9VL9
ip_address        = 128.221.252.201
subnet_mask       = 255.255.255.0
gateway_address   = 128.221.252.100
num_disk_volumes  = 4 - disk7 d9 d11 d12


Note: This is not a complete output.

3. Create the file system from a user-defined storage pool and designate the storage system on which you want the file system to reside by using this command syntax:

$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system

<type> = type of the file system: uxfs (default), mgfs, or rawfs

<volume_name> = name of the volume

<pool> = name of the storage pool

<system_name> = name of the storage system on which the file system resides

Example:

To create the file system ufs1 from a user-defined storage pool and designate the APM00033900125 storage system on which you want the file system ufs1 to reside, type:

$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=APM00033900125

Output:

id = 2
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = MTV1
pool = marketing
member_of = root_avm_fs_group_2
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = no,virtual_provision=no
deduplication= off
stor_devs = APM00033900125-0111
disks = d6,d8,d11,d12


Create file systems with automatic file system extension

Use the -auto_extend option of the nas_fs command to enable automatic file system extension on a new file system created with AVM; the option is disabled by default.

Note: Automatic file system extension does not alleviate the need for appropriate file system usage planning. Create the file systems with adequate space to accommodate the estimated file system usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails.

If automatic file system extension is disabled and the file system reaches 90 percent full, a warning message is written to the sys_log. Any action necessary is at the administrator’s discretion.

Note: You do not have to set the maximum size for a newly created file system when you enable automatic file system extension. The default maximum size is 16 TB. With automatic file system extension enabled, even if the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available in the storage pool.

Use this procedure to create a file system with a system-defined storage pool and a CLARiiON storage system, and enable automatic file system extension.

Action

To create a file system with automatic file system extension enabled, use this command syntax:

$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool> storage=<system_name> -auto_extend {no|yes}

where:

<fs_name> = name of the file system

<type> = type of the file system

<size> = size of the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)

<pool> = name of the storage pool from which to allocate space to the file system

<system_name> = name of the storage system associated with the storage pool

Example:

To enable automatic file system extension as you create a new 10 GB file system from a system-defined storage pool and a CLARiiON storage system, type:

$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance storage=APM00042000814 -auto_extend yes


Output

id = 434
name = ufs1
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1634
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,virtual_provision=no
deduplication= off
stor_devs = APM00042000814-001D,APM00042000814-001A,
            APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10

Create automatic file system extension-enabled file systems

When you create a file system with automatic file system extension enabled, you can set the point at which you want the file system to automatically extend (the HWM) and the maximum size to which the file system can grow. You can also enable Virtual Provisioning at the same time that you create or extend a file system. Enable automatic file system extension and options on page 77 provides information on modifying the automatic file system extension options.

If you set the slice=no option on the file system, the actual file system size might be bigger than the size that you specify for the file system, and could exceed the maximum size. In this case, a warning indicates the file system size might exceed the maximum size, and the automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, the file system defaults to the setting of the storage pool. Modify system-defined and user-defined storage pool attributes on page 93 provides more information.

Note: If the actual file system size is above the HWM when Virtual Provisioning is enabled, the client sees the actual file system size instead of the specified maximum size.

Enabling automatic file system extension and Virtual Provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.
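For example, before enabling automatic extension you can confirm that the backing storage pool has room to grow, using the nas_pool -size command described in Display storage pool size information on page 90 (the pool name marketing is illustrative):

$ nas_pool -size marketing

Compare the avail_mb and potential_mb values in the output against the maximum size you plan to set for the file system.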

Use this procedure to simultaneously set the automatic file system extension options when you are creating the file system:


1. Create a file system of a specified size, enable automatic file system extension and Virtual Provisioning, and set the HWM and the maximum file system size simultaneously by using this command syntax:

$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M] pool=<pool> storage=<system_name> -auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system

<type> = type of the file system

<integer> = size requested in MB, GB, or TB; the maximum size is 16 TB

<pool> = name of the storage pool

<system_name> = attached storage system on which the file system and storage pool reside

<50-99> = percentage between 50 and 99, at which you want the file system to automatically extend

Example:

To create a 10 MB UxFS from an AVM storage pool, with automatic file system extension enabled, a maximum file system size of 200 MB, an HWM of 90 percent, and Virtual Provisioning enabled, type:

$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance -auto_extend yes -vp yes -hwm 90% -max_size 200M

Output:

id = 435
name = ufs2
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1637
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=200M,virtual_provision=yes
deduplication= off
stor_devs = APM00042000814-001D,APM00042000814-001A,
            APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10

Note: When you enable Virtual Provisioning on a new or existing file system, you must also specify the maximum size to which the file system can automatically extend.

2. Verify the settings for the specific file system after enabling automatic file system extension by using this command syntax:


$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To verify the settings for file system ufs2 after enabling automatic file system extension, type:

$ nas_fs -info ufs2

Output:

id = 2
name = ufs2
acl = 0
in_use = False
type = uxfs
worm = off
volume = v1637
pool = clar_r5_performance
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
backups = ufs2_snap1,ufs2_snap2
auto_ext = hwm=66%,max_size=16769024M,virtual_provision=yes
deduplication= off
stor_devs = APM00042000814-001D,APM00042000814-001A,
            APM00042000814-0019,APM00042000814-0016
disks = d20,d12,d18,d10

You can also set the options -hwm and -max_size on each file system with automatic file system extension enabled. When enabling Virtual Provisioning on a file system, you must set the maximum size, but setting the high water mark is optional.


Extend file systems with AVM

Increase the size of a Celerra file system nearing its maximum capacity by extending the file system. You can:

◆ Extend a file system by size to add space if the file system has an associated system-defined or user-defined storage pool. You can also specify the storage system from which to allocate space. Extend file systems by size using storage pools on page 70 provides instructions.

◆ Extend a file system by volume if the file system has an associated system-defined or user-defined storage pool. Extend file systems by volume using storage pools on page 73 provides instructions.

◆ Extend a file system by using a storage pool other than the one used to create the file system. Extend file systems by using a different storage pool on page 75 provides instructions.

◆ Extend an existing file system by enabling automatic file system extension on that file system. Enable automatic file system extension and options on page 77 provides instructions.

◆ Extend an existing file system by enabling Virtual Provisioning on that file system. Enable Virtual Provisioning on page 81 provides instructions.

Managing Celerra Volumes and File Systems Manually contains the instructions to extend file systems manually.


Extend file systems by size using storage pools

All file systems created by using the AVM feature have an associated storage pool.

Extend a file system created with either a system-defined storage pool (either virtual or non-virtual) or a user-defined storage pool by specifying the size and the name of the file system. AVM allocates storage from the storage pool to the file system. You can also specify the storage system you want to use. If you do not, the last storage system associated with the storage pool is used.

Note: A file system created using a system-defined virtual storage pool can be extended on its existing pool or by using a compatible system-defined virtual storage pool that contains the same disk type.

Use this procedure to extend a file system by size:

1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Note: If you see a storage pool defined in the output, the file system was created with AVM and has an associated storage pool.

Example:

To check the file system configuration to confirm that file system ufs1 has an associated storage pool, type:

$ nas_fs -info ufs1

Output:

id = 8
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d13

2. Extend the file system by size by using this command syntax:


$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system

<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)

<pool> = name of the storage pool

<system_name> = name of the storage system; if you do not specify a storage system, the default storage system is the one on which the file system resides; if the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides

Note: The first time you extend the file system without specifying a storage pool, the default storage pool is the one used to create the file system. If you specify a storage pool that is different from the one used to create the file system, the next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

Example:

To extend the size of file system ufs1 by 10M, type:

$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=APM00023700165

Output:

id = 8
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d13,d19,d25,d30,d31,d32,d33

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs1 after extending it to confirm that the size increased, type:


$ nas_fs -size ufs1

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)


Extend file systems by volume using storage pools

You can extend a file system manually by specifying the volumes to add.

Note: With user-defined storage pools, you can manually create the underlying volumes, including striping, before adding the volume to the storage pool. Managing Celerra Volumes and File Systems Manually describes the detailed procedures needed to perform these tasks before creating or extending the file system.

If you do not specify a storage system when extending the file system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides.

Use this procedure to extend the file system by volume, using the same user-defined storage pool that was used to create the file system:

1. Check the configuration of the file system to confirm the associated user-defined storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the configuration of file system ufs3 to confirm the associated user-defined storage pool, type:

$ nas_fs -info ufs3

Output:

id = 10
name = ufs3
acl = 0
in_use = False
type = uxfs
volume = V121
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00033900165-0111
disks = d7,d8

Note: The user-defined storage pool used to create the file system is defined in the output as pool=marketing.

2. Extend the file system by volume by using this command syntax:


$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system

<volume_name> = name of the volume to add to the file system

<pool> = storage pool associated with the file system; it can be user-defined or system-defined

<system_name> = name of the storage system on which the file system resides

Example:

To extend file system ufs3, type:

$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165

Output:

id = 10
name = ufs3
acl = 0
in_use = False
type = uxfs
volume = v121
pool = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d8,d13,d14

Note: The next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs3 after extending it to confirm that the size increased, type:

$ nas_fs -size ufs3

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)


Extend file systems by using a different storage pool

You can use more than one storage pool to extend a file system. Ensure that the storage pools have space allocated from the same storage system to prevent the file system from spanning more than one storage system.
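For example, before extending with a second pool, you can confirm which storage system backs the file system (the stor_devs field) and how much space the other pool can provide, using commands shown elsewhere in this document (the file system and pool names match the example that follows):

$ nas_fs -info ufs2
$ nas_pool -size clar_r5_economy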

Note: A file system created using a system-defined virtual storage pool can be extended on its existingpool or by using a compatible system-defined virtual storage pool that contains the same disk type.

Use this procedure to extend the file system by using a different storage pool than the one used to create the file system:

1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:

$ nas_fs -info <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the file system configuration to confirm that file system ufs2 has an associated storage pool, type:

$ nas_fs -info ufs2

Output:

id = 9
name = ufs2
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00033900165-0111
disks = d7,d13

Note: The storage pool used earlier to create or extend the file system is shown in the output as associated with this file system.

2. Extend the file system by using a storage pool other than the one used to create the file system, by using this command syntax:

$ nas_fs -xtend <fs_name> size=<size> pool=<pool>

where:


<fs_name> = name of the file system

<size> = amount of space you want to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)

<pool> = name of the storage pool

Example:

To extend file system ufs2 by using a storage pool other than the one used to create the file system, type:

$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy

Output:

id = 9
name = ufs2
acl = 0
in_use = False
type = uxfs
volume = v123
pool = clar_r5_performance,clar_r5_economy
member_of = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00033900165-0112
disks = d7,d13,d19,d25

Note: The storage pools used to create and extend the file system now appear in the output. There is only one storage system from which space for these storage pools is allocated.

3. Check the file system size after extending it to confirm the increase in size by using this command syntax:

$ nas_fs -size <fs_name>

where:

<fs_name> = name of the file system

Example:

To check the size of file system ufs2 after extending it to confirm the increase in size, type:

$ nas_fs -size ufs2

Output:

total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)


Enable automatic file system extension and options

You can automatically extend an existing file system created with AVM system-defined or user-defined storage pools. The file system automatically extends by using space from the storage system and storage pool with which the file system is associated.

If you set the (slice=no) option on the file system, the actual file system size might be bigger than the size you specify for the file system, and could exceed the maximum size. In this case, you receive a warning indicating that the file system size might exceed the maximum size, and automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, the file system defaults to the setting of the storage pool.

Modify system-defined and user-defined storage pool attributes on page 93 describes the procedure to modify the default_slice_flag attribute on the storage pool.

Use the -modify option to enable automatic extension on an existing file system. You can also set the HWM and maximum size.

To enable automatic file system extension and options:

◆ Enable automatic file system extension on page 78
◆ Set the HWM on page 79
◆ Set the maximum file system size on page 80

You can also enable Virtual Provisioning at the same time that you create or extend a file system. Enable Virtual Provisioning on page 81 describes the procedure to enable Virtual Provisioning on an existing file system.

Enable automatic extension, Virtual Provisioning, and all options simultaneously on page 83 describes the procedure to simultaneously enable automatic extension, Virtual Provisioning, and all options on an existing file system.


Enable automatic file system extension

If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available.

An error message appears if you try to enable automatic file system extension on a file system created manually.

Note: The HWM is 90 percent by default when you enable automatic file system extension.

Action

To enable automatic extension on an existing file system, use this command syntax:

$ nas_fs -modify <fs_name> -auto_extend {no|yes}

where:

<fs_name> = name of the file system

Example:

To enable automatic extension on the existing file system ufs3, type:

$ nas_fs -modify ufs3 -auto_extend yes

Output

id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,
            APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Set the HWM

With automatic file system extension enabled on an existing file system, use the -hwm option to set a threshold. To specify a threshold, type an integer between 50 and 99 percent; the default is 90 percent.

If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available. The value for the maximum size, if specified, has an upper limit of 16 TB.

Action

To set the HWM on an existing file system, with automatic file system extension enabled, use this command syntax:

$ nas_fs -modify <fs_name> -hwm <50-99>%

where:

<fs_name> = name of the file system

<50-99> = an integer representing the file system usage point at which you want it to automatically extend

Example:

To set the HWM to 85 percent on the existing file system ufs3, with automatic extension already enabled, type:

$ nas_fs -modify ufs3 -hwm 85%

Output

id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,
            APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Set the maximum file system size

Use the -max_size option to specify a maximum size to which a file system can grow. To specify the maximum size, type an integer and specify T for TB, G for GB (default), or M for MB.

When you enable automatic file system extension, the file system automatically extends up to the default maximum size of 16 TB. Set the HWM at which you want the file system to automatically extend. If the HWM is not set, the file system automatically extends up to 16 TB when the file system reaches the default HWM of 90 percent, if the space is available.

Action

To set the maximum file system size with automatic file system extension already enabled, use this command syntax:

$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system

<integer> = maximum size requested in MB, GB, or TB

Example:

To set the maximum file system size on the existing file system, type:

$ nas_fs -modify ufs3 -max_size 16T

Output

id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size=16769024M,virtual_provision=no
stor_devs = APM00042000818-001F,APM00042000818-001D,
            APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Enable Virtual Provisioning

You can enable Virtual Provisioning at the same time that you create or extend a file system. Use the -vp option to enable Virtual Provisioning. You must also specifically set the maximum size to which you want the file system to automatically extend. An error message appears if you attempt to enable Virtual Provisioning and do not set the maximum size. Set the maximum file system size on page 80 describes the procedure to set the maximum file system size.

The upper limit for the maximum size is 16 TB. The maximum size you set is the file system size that is presented to users, if the maximum size is larger than the actual file system size.

Note: Enabling automatic file system extension and Virtual Provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.

Enable Virtual Provisioning on the source file system when the feature is used in a replication situation. With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size of the Replicator source file system. Interoperability considerations on page 46 provides additional information.

Action

To enable Virtual Provisioning with automatic extension enabled on the file system, use this command syntax:

$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -vp {yes|no}

where:

<fs_name> = name of the file system

<integer> = size requested in MB, GB, or TB

Example:

To enable Virtual Provisioning, type:

$ nas_fs -modify ufs1 -max_size 16T -vp yes


Output

id = 27
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size=16769024M,virtual_provision=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,
            APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Enable automatic extension, Virtual Provisioning, and all options simultaneously

Note: An error message appears if you try to enable automatic file system extension on a file system that was created without using a storage pool.

Action

To simultaneously enable automatic file system extension and Virtual Provisioning on an existing file system, and to set the HWM and the maximum size, use this command syntax:

$ nas_fs -modify <fs_name> -auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]

where:

<fs_name> = name of the file system

<50-99> = an integer representing the file system usage point at which you want it to automatically extend

<integer> = size requested in MB, GB, or TB

Example:

To modify a UxFS to enable automatic extension, enable Virtual Provisioning, set a maximum file system size of 16 TB, and set an HWM of 90 percent, type:

$ nas_fs -modify ufs4 -auto_extend yes -vp yes -hwm 90% -max_size 16T

Output

id = 29
name = ufs4
acl = 0
in_use = False
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=16769024M,virtual_provision=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,
            APM00042000818-0019,APM00042000818-0016
disks = d20,d18,d14,d11


Verify the maximum size of the file system

Automatic file system extension fails when the file system reaches the maximum size.

Action

To force an extension to determine whether the maximum size has been reached, use this command syntax:

$ nas_fs -xtend <fs_name> size=<size>

where:

<fs_name> = name of the file system

<size> = size to extend the file system by, in GB, MB, or TB

Example:

To force an extension to determine whether the maximum size has been reached, type:

$ nas_fs -xtend ufs1 size=4M

Output

id = 759
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v2459
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_4
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size=16769024M (reached),virtual_provision=yes <<<
stor_devs = APM00041700549-0018
disks = d10
disk=d10 stor_dev=APM00041700549-0018 addr=c16t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c32t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c0t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c48t1l8 server=server_4


Create file system checkpoints with AVM

Use either AVM system-defined or user-defined storage pools to create file system checkpoints. Specify the storage system on which you want the file system checkpoint to reside.

Use this procedure to create the checkpoint, specifying a storage pool and storage system:

Note: You can only specify the storage pool for the checkpoint SavVol when there are no existing checkpoints of the PFS.
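Before starting, you can confirm that the PFS has no existing checkpoints by inspecting the backups field of the file system information (a minimal check, assuming the PFS name ufs1 used in the example that follows):

$ nas_fs -info ufs1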

1. Obtain the list of available storage systems by typing:

$ nas_storage -list

Note: To obtain more detailed information on the storage system and associated names, use the -info option instead.

2. Create the checkpoint by using this command syntax:

$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool> storage=<system_name>

where:

<fs_name> = name of the file system for which you want to create a checkpoint

<name> = name of the checkpoint

<integer> = amount of space to allocate to the checkpoint; type the size in TB, GB, or MB

<pool> = name of the storage pool

<system_name> = storage system on which the file system checkpoint resides

Note: Virtual Provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of a SnapSure checkpoint file system.

Example:

To create the checkpoint ckpt1, type:

$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance storage=APM00023700165

Output:


id = 1
name = ckpt1
acl = 0
in_use = False
type = uxfs
volume = V126
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM00023700165-0111
disks = d7,d8


4

Managing

The tasks to manage AVM storage pools are:
◆ List existing storage pools on page 88
◆ Display storage pool details on page 89
◆ Display storage pool size information on page 90
◆ Modify system-defined and user-defined storage pool attributes on page 93
◆ Extend a user-defined storage pool by volume on page 103
◆ Extend a user-defined storage pool by size on page 104
◆ Extend a system-defined storage pool on page 105
◆ Remove volumes from storage pools on page 107
◆ Delete user-defined storage pools on page 108


List existing storage pools

When the existing storage pools are listed, all the system-defined storage pools and three user-defined storage pools (marketing, engineering, and sales) appear in the output. All existing storage pools are listed, regardless of whether they are in use.

Action

To list all existing system-defined and user-defined storage pools, type:

$ nas_pool -list

Output

id  inuse  acl  name
1   y      0    symm_std
2   n      0    clar_r1
3   y      0    clar_r5_performance
4   y      0    clar_r5_economy
5   y      0    marketing
6   y      0    engineering
7   y      0    sales
8   n      0    clarata_r3
9   n      0    clarata_archive
10  n      0    symm_std_rdf_src
11  n      0    clar_r1
40  y      0    engineer_APM0084401664
41  y      0    tp1_FCNTR074200038


Display storage pool details

Action

To display detailed information for a specified system-defined, system-defined virtual, or user-defined storage pool, use this command syntax:

$ nas_pool -info <name>

where:

<name> = name of the storage pool

Example:

To display detailed information for the storage pool marketing, type:

$ nas_pool -info marketing

Output

id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = True
clients = fs24,fs26
members = d320,d319
default_slice_flag = True
is_user_defined = True
virtually_provisioned= True
disk_type = CLSTD
server_visibility = server_2,server_3
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A


Display storage pool size information

The size information of the storage pool appears in the output. If there is more than one storage pool, the output shows the size information for all the storage pools.

The storage pool size information includes:

◆ The total used space in the storage pool in MB (used_mb).
◆ The total unused space in the storage pool in MB (avail_mb).
◆ The total used and unused space in the storage pool in MB (total_mb).
◆ The total space available from all sources in MB that could potentially be added to the storage pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available.

Note: If either non–MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different than the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.

In Celerra Manager, the potential MB in the output represents the total available storage, including the storage pool. In the CLI, the output for potential_mb does not include the space in the storage pool.

Note: Use the -size -all option to display the size information for all storage pools.

Action

To display the size information for a specific storage pool, use this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size information for the clar_r5_performance storage pool, type:

$ nas_pool -size clar_r5_performance


Output

id = 3
name = clar_r5_performance
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985

Action

To display the size information for a specific virtual storage pool, use this command syntax:

$ nas_pool -size <name>

where:

<name> = name of the storage pool

Example:

To display the size information for the ThinPool0 storage pool, type:

$ nas_pool -size ThinPool0_APM00084401664

Output

id = 49
name = ThinPool0_APM00084401664
used_mb = 0
avail_mb = 0
total_mb = 0
potential_mb = 1023
Physical storage usage in Thin Pool Thin Pool 0 on APM00084401664
used_mb = 2048
avail_mb = 1093698
total_mb = 1095746


Display Symmetrix storage pool size information

Sliced volumes do not appear in the output because the Symmetrix storage pools’ default_slice_flag value is set to no.

Use the -size -all option to display the size information for all storage pools.

Action

To display the size information of Symmetrix-related storage pools, use this command syntax:

$ nas_pool -size <name> -slice y

where:

<name> = name of the storage pool

Example:

To request size information for the Symmetrix symm_std storage pool, type:

$ nas_pool -size symm_std -slice y

Output

id = 5
name = symm_std
used_mb = 128000
avail_mb = 0
total_mb = 260985
potential_mb = 260985

Note

◆ Use the -slice y option to include any space from sliced volumes in the available result.

◆ The size information for the system-defined storage pool named symm_std appears in the output. If you have more storage pools, the output shows the size information for all the storage pools.

◆ used_mb is the used space in the specified storage pool in MB.

◆ avail_mb is the amount of unused available space in the storage pool in MB.

◆ total_mb is the total of used and unused space in the storage pool in MB.

◆ potential_mb is the potential amount of storage that can be added to the storage pool available from all sources in MB. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available.

◆ If either non–MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different than the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.


Modify system-defined and user-defined storage pool attributes

System-defined and user-defined storage pools have attributes that control how they manage the volumes and file systems. The following table lists the modifiable storage pool attributes, their values, and a description of each.

Table 9. Storage pool attributes

Attribute: name
Values: Quoted string
Modifiable: Yes (user-defined storage pools)
Description: Unique name. If a name is not specified during creation, one is automatically generated.

Attribute: description
Values: Quoted string
Modifiable: Yes (user-defined storage pools)
Description: A text description. Default is “” (blank string).

Attribute: acl
Values: Integer. For example, 0.
Modifiable: Yes (user-defined storage pools)
Description: Access control level. Controlling Access to Celerra System Objects contains instructions to manage access control levels.

Attribute: default_slice_flag
Values: “y” | “n”
Modifiable: Yes (system-defined and user-defined storage pools)
Description: Answers the question, can AVM slice member volumes to meet the file system request? A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

Attribute: is_dynamic
Values: “y” | “n”
Modifiable: Yes (system-defined storage pools)
Description: Only applicable if volume_profile is not blank. Answers the question, is this storage pool allowed to automatically add or remove member volumes? The default answer is n.

Attribute: is_greedy
Values: “y” | “n”
Modifiable: Yes (system-defined storage pools)
Description: Only applicable if volume_profile is not blank. This field answers the question, is this storage pool greedy? When a storage pool receives a request for space, a greedy storage pool attempts to create a new member volume before searching for free space in existing member volumes. The attribute value for this storage pool is y. A storage pool that is not greedy uses all available space in the storage pool before creating a new member volume. The attribute value for this storage pool is n.

You can change the attribute default_slice_flag for system-defined and user-defined storage pools. It indicates whether member volumes can be sliced. If the storage pool has member volumes built on one or more slices, you cannot set this value to n.

Action

To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being sliced when space is dispensed, type:

$ nas_pool -modify marketing -default_slice_flag n


Output

id = 5
name = marketing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag= False
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members= N/A
stripe_size = N/A

Note

◆ When the default_slice_flag is set to y, it appears as True in the output.

◆ If using automatic file system extension, the default_slice_flag should be set to n.

Modify system-defined and user-defined storage pool attributes 95

Managing

Modify system-defined storage pool attributes

The system-defined storage pool’s attributes that can be modified are:

◆ -is_dynamic: Indicates whether the system-defined storage pool is allowed to automatically add or remove member volumes.

◆ -is_greedy: If this is set to y, the system-defined storage pool attempts to create new member volumes before using space from existing member volumes. A system-defined storage pool that is not greedy (set to n) consumes all the existing space in the storage pool before trying to add additional member volumes.

The tasks to modify the attributes of a system-defined storage pool are:

◆ Modify the -is_greedy attribute of a system-defined storage pool on page 97
◆ Modify the -is_dynamic attribute of a system-defined storage pool on page 98


Modify the -is_greedy attribute of a system-defined storage pool

Action

To modify the -is_greedy attribute of a specific system-defined storage pool to allow the storage pool to use new volumes rather than existing volumes, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To change the attribute -is_greedy to false, for the storage pool named clar_r5_performance, type:

$ nas_pool -modify clar_r5_performance -is_greedy n

Output

id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = False
virtually_provisioned= False
volume_profile = clar_r5_performance_vp
is_dynamic = True
is_greedy = False
num_stripe_members = 4
stripe_size = 32768

Note

The n entered in the example appears as False for the is_greedy attribute in the output.


Modify the -is_dynamic attribute of a system-defined storage pool

Action

To modify the -is_dynamic attribute of a specific system-defined storage pool so that the storage pool cannot automatically add or remove member volumes, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To change the attribute -is_dynamic to false, so that the storage pool named clar_r5_performance cannot automatically add or remove member volumes, type:

$ nas_pool -modify clar_r5_performance -is_dynamic n

Output

id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = False
virtually_provisioned= False
volume_profile = clar_r5_performance_vp
is_dynamic = False
is_greedy = False
num_stripe_members = 4
stripe_size = 32768

Note

The n entered in the example appears as False for the is_dynamic attribute in the output.


Modify user-defined storage pool attributes

The user-defined storage pool’s attributes that can be modified are:

◆ -name: Changes the name of the specified user-defined storage pool to the new name.
◆ -acl: Designates an access control level for a user-defined storage pool. The default value is 0.
◆ -description: Changes the description comment for the user-defined storage pool.

The tasks to modify the attributes of a user-defined storage pool are:

◆ Modify the name of a user-defined storage pool on page 100
◆ Modify the access control of a user-defined storage pool on page 101
◆ Modify the description of a user-defined storage pool on page 102


Modify the name of a user-defined storage pool

Action

To modify the name of a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify <name> -name <new_name>

where:

<name> = old name of the storage pool

<new_name> = new name of the storage pool

Example:

To change the name of the storage pool named marketing to purchasing, type:

$ nas_pool -modify marketing -name purchasing

Output

id = 5
name = purchasing
description = storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A

Note

The name change to purchasing appears in the output. The description does not change unless the administrator changes it.


Modify the access control of a user-defined storage pool

Controlling Access to Celerra System Objects contains instructions to manage access control levels.

Note: The access control level change to 1000 appears in the output. The description does not change unless the administrator modifies it.

Action

To modify the access control level for a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -acl <acl>

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<acl> = designates an access control level for the new storage pool; default value is 0

Example:

To change the access control level for the storage pool named purchasing, type:

$ nas_pool -modify purchasing -acl 1000

Output

id = 5
name = purchasing
description = storage pool for marketing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A


Modify the description of a user-defined storage pool

Action

To modify the description of a specific user-defined storage pool, use this command syntax:

$ nas_pool -modify {<name>|id=<id>} -description <description>

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<description> = descriptive comment about the pool or its purpose; type the comment within quotes

Example:

To change the descriptive comment for the storage pool named purchasing, type:

$ nas_pool -modify purchasing -description “storage pool for purchasing”

Output

id = 15
name = purchasing
description = storage pool for purchasing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A


Extend a user-defined storage pool by volume

You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a user-defined storage pool.

Action

To extend an existing user-defined storage pool by volumes, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} -volumes [<volume_name>,...]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<volume_name> = names of the volumes separated by commas

Example:

To extend the volumes for the storage pool named engineering, with volumes d130, d131, d132, and d133, type:

$ nas_pool -xtend engineering -volumes d130,d131,d132,d133

Output

id = 6
name = engineering
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A

Note

The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes added in the example.


Extend a user-defined storage pool by size

Action

To extend the volumes for an existing user-defined storage pool by size, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G|T] [-storage <system_name>]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<system_name> = storage system on which one or more volumes will be created, to be added to the storage pool

Example:

To extend the volumes for the storage pool named engineering, by a size of 1 GB, type:

$ nas_pool -xtend engineering -size 1G

Output

id = 6
name = engineering
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A


Extend a system-defined storage pool

Specifying a size by which you want AVM to expand a system-defined pool and turning off the dynamic behavior of the system pool keeps it from consuming additional disk volumes. Doing this:

◆ Uses the disk selection algorithms that AVM uses to create system-defined storage pool members.

◆ Prevents system-defined storage pools from rapidly consuming a large number of disk volumes.

You can specify the storage system from which to allocate space to the pool. The dynamic behavior of the system-defined storage pool must be turned off by using the nas_pool -modify command before extending the pool.

On successful completion, the system-defined storage pool expands by at least the specified size. The storage pool might expand more than the requested size. The behavior is the same as when the storage pool is expanded during a file system creation.

If a storage system is not specified and the pool has members from a single storage system, then the default is the existing storage system. If a storage system is not specified and the pool has members from multiple storage systems, the existing set of storage systems is used to extend the storage pool.
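For example, a minimal sketch of the full sequence, checking the pool, turning off its dynamic behavior, and then extending it (the pool name and the 128 MB size are illustrative and taken from examples elsewhere in this chapter):

$ nas_pool -info clar_r5_performance
$ nas_pool -modify clar_r5_performance -is_dynamic n
$ nas_pool -xtend clar_r5_performance -size 128M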

If a storage system is specified, space is allocated from the specified storage system:

◆ The specified pool must be a system-defined pool.

◆ The specified pool must have the is_dynamic attribute set to n, or false. Modify system-defined storage pool attributes on page 96 provides instructions to change the attribute.

◆ There must be enough disk volumes to satisfy the size requested.


Extend a system-defined storage pool by size

Action

To extend a system-defined storage pool by size and specify a storage system from which to allocate space, use this command syntax:

$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>

where:

<name> = name of the system-defined storage pool

<id> = ID of the storage pool

<integer> = size requested in MB or GB; default size unit is MB

<system_name> = name of the storage system from which to allocate the storage

Example:

To extend the system-defined clar_r5_performance storage pool by size and designate the storage system from which to allocate space, type:

$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165-0011

Output

id = 3
name = clar_r5_performance
description = CLARiiON RAID5 4plus1
acl = 0
in_use = False
clients =
members = v216
default_slice_flag = False
is_user_defined = False
virtually_provisioned= False
disk_type = CLSTD
server_visibility = server_2,server_3
volume_profile = clar_r5_performance_vp
is_dynamic = False
is_greedy = False
num_stripe_members = 4
stripe_size = 32768


Remove volumes from storage pools

Action

To remove volumes from a system-defined or user-defined storage pool, use this command syntax:

$ nas_pool -shrink {<name>|id=<id>} -volumes [<volume_name>,...]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

<volume_name> = names of the volumes separated by commas

Example:

To remove volumes d130 and d133 from the storage pool named marketing, type:

$ nas_pool -shrink marketing -volumes d130,d133

Output

id = 5
name = marketing
description = storage pool for marketing
acl = 1000
in_use = False
clients =
members = d126,d127,d128,d129,d131,d132
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
template_pool = N/A
num_stripe_members = N/A
stripe_size = N/A


Delete user-defined storage pools

You can delete only a user-defined storage pool that is not in use. Remove all storage pool member volumes before deleting a user-defined storage pool. This delete action removes the volumes from the specified storage pool and deletes the storage pool, but it does not delete the volumes themselves. System-defined storage pools cannot be deleted.
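As a pre-check, you can display the pool details to confirm that it is eligible for deletion. This is a hedged sketch using the nas_pool -info command that this document uses elsewhere to display pool details, shown here with the sales pool from the example below:

$ nas_pool -info sales

The in_use field should report False and the members list should be empty; if the pool is still in use, the delete is rejected until it is no longer in use.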

Action

To delete a user-defined storage pool, use this command syntax:

$ nas_pool -delete <name>

where:

<name> = name of the storage pool

Example:

To delete the user-defined storage pool named sales, type:

$ nas_pool -delete sales

Output

id                    = 7
name                  = sales
description           =
acl                   = 0
in_use                = False
clients               =
members               =
default_slice_flag    = True
is_user_defined       = True
template_pool         = N/A
num_stripe_members    = N/A
stripe_size           = N/A


Delete a user-defined storage pool and its volumes

The -deep option deletes the storage pool and also recursively deletes each member of the storage pool unless it is in use or is a disk volume.

Action

To delete a user-defined storage pool and the volumes in it, use this command syntax:

$ nas_pool -delete {<name>|id=<id>} [-deep]

where:

<name> = name of the storage pool

<id> = ID of the storage pool

Example:

To delete the storage pool named sales and the volumes in it, type:

$ nas_pool -delete sales -deep

Output

id                    = 7
name                  = sales
description           =
acl                   = 0
in_use                = False
clients               =
members               =
default_slice_flag    = True
is_user_defined       = True
virtually_provisioned = False
template_pool         = N/A
num_stripe_members    = N/A
stripe_size           = N/A


Chapter 5: Troubleshooting

As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC Customer Support Representative.

Problem Resolution Roadmap for Celerra contains additional information about using Powerlink and resolving problems.

Topics included are:

◆ AVM troubleshooting considerations on page 112
◆ EMC E-Lab Interoperability Navigator on page 112
◆ Known problems and limitations on page 112
◆ Error messages on page 113
◆ EMC Training and Professional Services on page 114


AVM troubleshooting considerations

Consider these steps when troubleshooting AVM:

◆ Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station before reporting problems, which helps to diagnose the problem faster. Additionally, save any files in /nas/tasks when problems are seen from Celerra Manager. The support material script collects information related to Celerra Manager and APL.

◆ Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information in /nas/log/nas_log.al.tran (see the example after this list).

◆ Report any useful SYR data.
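The debug variable mentioned in the second item can be set as follows. This is a minimal sketch that assumes a bash-style shell session on the Control Station; how long you leave the variable set is a site decision:

$ export NAS_REPLICATE_DEBUG=1

Commands run from that session then log the additional detail to /nas/log/nas_log.al.tran.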

EMC E-Lab Interoperability Navigator

The EMC E-Lab™ Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available at http://Powerlink.EMC.com. After logging in to Powerlink, go to Support ➤ Interoperability and Product LifeCycle Information ➤ E-Lab Interoperability Navigator.

Known problems and limitations

Table 10 on page 112 describes known problems that might occur when using AVM and automatic file system extension, and presents workarounds.

Table 10. Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions recognize temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools or checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool. This protects them from being used by system-defined storage pools (and manual volume management).

Known problem: In an NFS environment, the write activity to the file system starts immediately when a file changes. When the file system reaches the HWM, it begins to automatically extend but might not finish before the Control Station issues a file system full error. This causes an automatic extension failure. In a CIFS environment, the CIFS/Windows Microsoft client does Persistent Block Reservation (PBR) to reserve the space before the writes begin. As a result, the file system full error occurs before the HWM is reached and before automatic extension is initiated.
Symptom: An error message indicating the failure of automatic extension to start, and a full file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file system to ensure automatic extension can accommodate normal file system activity. Set the HWM to allow enough free space in the file system to accommodate write operations to the largest average file in that file system. For example, if you have a file system that is 100 GB, and the largest average file in that file system is 20 GB, set the HWM for automatic extension to 70%. Changes made to the 20 GB file might cause the file system to reach the HWM, or 70 GB. There is 30 GB of space left in the file system to handle the file changes, and to initiate and complete automatic extension without failure. (A command sketch follows this table.)
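The HWM workaround above can be applied from the Control Station CLI. This is a minimal sketch, not the definitive procedure: it assumes a hypothetical file system named ufs1 that was created with AVM, and it assumes the -auto_extend and -hwm options of the nas_fs command described in the automatic file system extension sections of this document; confirm the exact syntax for your software version before running it.

$ nas_fs -modify ufs1 -auto_extend yes -hwm 70%

With this setting, automatic extension is triggered when the file system is 70 percent full, leaving roughly 30 GB free on a 100 GB file system for in-flight writes while the extension completes.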

Error messages

As of version 5.6, all new event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation.

To view message details, use any of these methods:

◆ Celerra Manager:

• Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.

◆ CLI:

• Type nas_message -info <MessageID>, where <MessageID> is the message identification number (see the example after this list).

◆ Celerra Network Server Error Messages Guide:

• Use this guide to locate information about messages that are in the earlier-release message format.

◆ Powerlink:


• Use the text from the error message's brief description or the message's ID to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support ➤ Search Support.
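For the CLI method above, a minimal sketch with a hypothetical message identification number (12345 is a placeholder, not a real message ID; substitute the identifier from your event, alert, or status message):

$ nas_message -info 12345

The command prints the detailed information and recommended action for that message.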

EMC Training and Professional Services

EMC Customer Education courses help you learn how EMC storage products work together within your environment in order to maximize your entire infrastructure investment. EMC Customer Education features online and hands-on training in state-of-the-art labs conveniently located throughout the world. EMC customer training courses are developed and delivered by EMC experts. Go to EMC Powerlink at http://Powerlink.EMC.com for course and registration information.

EMC Professional Services can help you implement your Celerra Network Server efficiently. Consultants evaluate your business, IT processes, and technology, and recommend ways you can leverage your information for the most benefit. From business plan to implementation, you get the experience and expertise you need, without straining your IT staff or hiring and training new personnel. Contact your EMC representative for more information.


Glossary

A

automatic file system extension
Configurable Celerra file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached.

See also high water mark.

Automatic Volume Management (AVM)
Feature of the Celerra Network Server that creates and manages volumes automatically without manual volume management by an administrator. AVM organizes volumes into storage pools that can be allocated to file systems.

See also Virtual Provisioning.

C

Celerra Data Migration Service (CDMS)
Feature for migrating file systems from NFS and CIFS source file servers to a Celerra Network Server. The online migration is transparent to users once it starts.

D

disk volume
On Celerra systems, a physical storage unit as exported from the storage array. All other volume types are created from disk volumes.

See also metavolume, slice volume, stripe volume, and volume.

F

file system
Method of cataloging and managing the files and directories on a storage system.


H

high water mark (HWM)
Trigger point at which the Celerra Network Server performs one or more actions, such as sending a warning message, extending a volume, or updating a file system, as directed by the related feature's software/parameter settings.

L

logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

M

metavolume
On a Celerra system, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file system must be created on top of a unique metavolume.

See also disk volume, slice volume, stripe volume, and volume.

S

slice volume
On a Celerra system, a logical piece or specified area of a volume used to create smaller, more manageable units of storage.

See also disk volume, metavolume, stripe volume, and volume.

storage pool
Automatic Volume Management (AVM), a Celerra feature, organizes available disk volumes into groupings called storage pools. Storage pools are used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.

storage system
Array of physical disk devices and their supporting processors, power supplies, and cables.

stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible.

See also disk volume, metavolume, slice volume, and volume.

system-defined storage pool
Predefined AVM storage pools that are set up to help you easily manage both storage volume structures and file system provisioning using AVM.


T

thin LUN
A LUN whose storage capacity grows by using a shared virtual (thin) pool of storage when needed.

thin pool
A user-defined CLARiiON storage pool that contains a set of disks on which thin LUNs can be created.

U

Universal Extended File System (UxFS)
High-performance, Celerra Network Server default file system, based on traditional Berkeley UFS, enhanced with 64-bit support, metadata logging for high availability, and several performance enhancements.

user-defined storage pools
User-created storage pools containing volumes that are manually added. User-defined storage pools provide an appropriate option for users who want control over their storage volume structures while still using the automated file system provisioning functionality of AVM to provision file systems from the user-defined storage pools.

V

Virtual Provisioning
Configurable Celerra file system feature that lets you allocate storage based on long-term projections, while you dedicate only the file system resources you currently need. Users (NFS or CIFS clients) and applications see the virtual maximum size of the file system, of which only a portion is physically allocated. In addition, combining the automatic file system extension and Virtual Provisioning features lets you grow the file system gradually as needed.

See also Automatic Volume Management.

volume
On a Celerra system, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives.

See also disk volume, metavolume, slice volume, and stripe volume.


Index

A
algorithm
  automatic file system extension 45
  CLARiiON 34
  Symmetrix 39
  system-defined storage pools 31
attributes
  storage pool, modify 93, 96, 99
  storage pools 29
  system-defined storage pools 96
  user-defined storage pools 99
automatic file system extension
  algorithm 45
  and Celerra Replicator interoperability considerations 46
  considerations 49
  enabling 55
  how it works 22
  maximum size option 65
  maximum size, set 80
  options 22
  restrictions 11
  Virtual Provisioning 81
Automatic Volume Management (AVM)
  restrictions 10
  storage pool 22

C
cautions 13
  spanning storage systems 13
Celerra upgrade
  automatic file system extension issue 14
character support, international 14
checkpoint, create for file system 85
clar_r1 storage pool 25
clar_r5_economy storage pool 25
clar_r5_performance storage pool 25
clar_r6 storage pool 25
clarata_archive storage pool 25
clarata_r10 storage pool 26
clarata_r3 storage pool 25
clarata_r6 storage pool 25
clarefd_r10 storage pool 26
clarefd_r5 storage pool 26
CLARiiON thin pool, insufficient space 14
clarsas_archive storage pool 26
clarsas_r10 storage pool 26
clarsas_r6 storage pool 26
cm_r1 storage pool 26
cm_r5_economy storage pool 26
cm_r5_performance storage pool 26
cm_r6 storage pool 26
cmata_archive storage pool 26
cmata_r10 storage pool 27
cmata_r3 storage pool 26
cmata_r6 storage pool 26
cmefd_r10 storage pool 27
cmefd_r5 storage pool 27
cmsas_archive storage pool 27
cmsas_r10 storage pool 27
cmsas_r6 storage pool 27
considerations
  automatic file system extension 49
  interoperability 46
create a file system 55, 56, 58
  using system-defined pools 56
  using user-defined pools 58

D
delete user-defined storage pools 108
details, display 89
display
  details 89


display (continued)
  size information 90

E
EMC E-Lab Navigator 112
error messages 113
extend file systems
  by size 70
  by volume 73
  with different storage pool 75
extend storage pools
  system-defined by size 106
  user-defined by size 104
  user-defined by volume 103

F
file system
  create checkpoint 85
  extend by size 70
  extend by volume 73
  quotas 13
file system considerations 49

I
international character support 14

K
known problems and limitations 112

M
messages, error 113
modify system-defined storage pools 96

P
planning considerations 46
profiles, volume and storage 31

Q
quotas for file system 13

R
RAID group combinations 27
related information 18
restrictions 10, 11, 12, 13, 14
  automatic file system extension 11
  AVM 10
  Celerra file systems 13
  CLARiiON 13
  Symmetrix volumes 10
  TimeFinder/FS 14
  Virtual Provisioning 12

S
storage pools
  attributes 40
  clar_r1 25
  clar_r5_economy 25
  clar_r5_performance 25
  clar_r6 25
  clarata_archive 25
  clarata_r10 26
  clarata_r3 25
  clarata_r6 25
  clarefd_r10 26
  clarefd_r5 26
  clarsas_archive 26
  clarsas_r10 26
  clarsas_r6 26
  cm_r1 26
  cm_r5_economy 26
  cm_r5_performance 26
  cm_r6 26
  cmata_archive 26
  cmata_r10 27
  cmata_r3 26
  cmata_r6 26
  cmefd_r10 27
  cmefd_r5 27
  cmsas_archive 27
  cmsas_r10 27
  cmsas_r6 27
  delete user-defined 108
  display details 89
  display size information 90
  explanation 22
  extend system-defined by size 106
  extend user-defined by size 104
  extend user-defined by volume 103
  list 88
  modify attributes 93
  remove volumes from user-defined 107
  supported types 24
  symm_ata 24
  symm_ata_rdf_src 25
  symm_ata_rdf_tgt 25
  symm_efd 25


storage pools (continued)
  symm_std 24
  symm_std_rdf_src 25
  symm_std_rdf_tgt 25
  system-defined algorithms 31
  system-defined CLARiiON 32
  system-defined Symmetrix 39
symm_ata storage pool 24
symm_ata_rdf_src storage pool 25
symm_ata_rdf_tgt storage pool 25
symm_efd storage pool 25
symm_std storage pool 24
symm_std_rdf_src storage pool 25
symm_std_rdf_tgt storage pool 25
Symmetrix thin pool, insufficient space 14
system-defined storage pools 31, 56, 70, 73, 96
  algorithms 31
  create a file system with 56
  extend file systems by size 70
  extend file systems by volume 73

T
troubleshooting 111

U
Unicode characters 14
upgrade Celerra software 49
user-defined storage pools 58, 70, 73, 99, 107
  create a file system with 58
  extend file systems by size 70
  extend file systems by volume 73
  modify attributes 99
  remove volumes 107

V
Virtual Provisioning, out of space message 14
