
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

EMC® Celerra® MPFS over FC and iSCSI v6.0 Linux Clients
Product Guide
P/N 300-011-316
REV A01


Copyright © 2007-2010 EMC Corporation. All rights reserved.

Published September, 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Preface

Chapter 1  Introducing EMC Celerra MPFS over FC and iSCSI
  Overview of EMC Celerra MPFS over FC and iSCSI .... 18
  EMC Celerra MPFS architectures .... 19
    EMC Celerra MPFS over Fibre Channel .... 19
    EMC Celerra MPFS over iSCSI .... 21
    MPFS configuration summary .... 24
  How EMC Celerra MPFS works .... 28

Chapter 2  MPFS Environment Configuration
  Configuration roadmap .... 30
  Implementation guidelines .... 32
    Celerra with MPFS recommendations .... 32
    Storage configuration recommendations .... 34
    MPFS feature configurations .... 35
  MPFS installation and configuration process .... 40
    Configuration planning checklist .... 41
  Verifying system components .... 43
    Required hardware components .... 43
    Required software components .... 46
    Verifying configuration .... 46
    Verifying storage array requirements .... 47
    Verifying the Fibre Channel switch requirements (FC configuration) .... 49
    Verifying the IP-SAN switch requirements .... 50
    Verifying the IP-SAN CLARiiON CX3 or CX4 requirements .... 51
  Setting up the Celerra Network Server .... 53
  Celerra Startup Assistant (CSA) .... 54
  Setting up the file system .... 55
    File system prerequisites .... 55
    Creating a file system on a Celerra Network Server .... 56
  Enabling MPFS for the Celerra Network Server .... 66
  Configuring the CLARiiON by using CLI commands .... 67
    Best practices for CLARiiON and Celerra Gateway configurations .... 67
  Configuring the SAN and storage .... 68
    Installing the Fibre Channel switch (FC configuration) .... 68
    Zoning the SAN switch (FC configuration) .... 68
    Configuring the iSCSI-to-Fibre Channel bridge (iSCSI configuration) .... 69
    Creating a security file on the Celerra Network Server .... 73
    CLARiiON iSCSI port configuration .... 75
    Access Logix configuration .... 77
  Configuring and accessing storage .... 81
    Installing the Fibre Channel driver (FC configuration) .... 81
    Adding hosts to the storage group (FC configuration) .... 82
    Configuring the iSCSI driver for RHEL 4 (iSCSI configuration) .... 84
    Configuring the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 (iSCSI configuration) .... 88
    Adding initiators to the storage group (FC configuration) .... 93
    Adding initiators to the storage group (iSCSI configuration) .... 95
    Adding initiators to the storage group (iSCSI to FC bridge configuration) .... 97
  Mounting the MPFS file system .... 100
  Unmounting the MPFS file system .... 104

Chapter 3  Installing, Upgrading, or Uninstalling MPFS Software
  Installing the MPFS software .... 106
    Before installing .... 106
    Install the MPFS software from a tar file .... 106
    Installing the MPFS software from a CD .... 108
    Post-installation check .... 109
    Operating MPFS through a firewall .... 109
  Upgrading the MPFS software .... 110
    Upgrade the MPFS software .... 110
    Upgrade the MPFS software with the MPFS file system mounted .... 112
    Post-installation check .... 113
    Verifying the MPFS software upgrade .... 114
  Uninstalling the MPFS software .... 115

Chapter 4  MPFS Command Line Interface
  Using HighRoad disk protection .... 118
    Celerra Network Server and hrdp .... 118
    hrdp command syntax .... 119
    Viewing hrdp protected devices .... 121
  Using the mpfsctl utility .... 122
    mpfsctl help .... 123
    mpfsctl diskreset .... 123
    mpfsctl diskresetfreq .... 124
    mpfsctl max-readahead .... 125
    mpfsctl prefetch .... 127
    mpfsctl reset .... 128
    mpfsctl stats .... 128
    mpfsctl version .... 131
    mpfsctl volmgt .... 131
  Displaying statistics .... 132
    Using the mpfsstat command .... 132
  Displaying MPFS device information .... 134
    Listing devices with the mpfsinq command .... 134
    Listing devices with the /proc/mpfs/devices file .... 137
    Displaying mpfs disk quotas .... 137
    Validating a Linux server installation .... 139
  Setting MPFS parameters .... 141
  Kernel parameters .... 141
  Setting persistent parameter values .... 143
    mpfs.conf parameters .... 143
    DirectIO support .... 146
    EMCmpfs parameters .... 148

Appendix A  File Syntax Rules
  File syntax rules for creating a site .... 152
    Celerra with iSCSI ports .... 152
    Celerra with iSCSI-to-Fibre Channel bridge .... 153
  File syntax rules for adding hosts .... 154
    Linux host .... 154

Appendix B  Error Messages and Troubleshooting
  Linux server error messages .... 156
  Troubleshooting .... 157
    Installing MPFS software .... 157
    Mounting and unmounting a file system .... 158
    Miscellaneous issues .... 162
  Known problems and limitations .... 164
    Auto discovery .... 164
    automount limitation .... 164
    automount -t mpfs flag .... 164
    Character disk I/O .... 165
    Hardware initiators .... 165
    I/O fallthrough .... 165
    Informational messages .... 165
    iSCSI port failure .... 165
    MPFS data protection .... 165
    MPFS software hangs on reboot .... 167
    mpfsinfo command error .... 167
    Multiple mount points .... 167
    Multiple MFS RHEL 5 hosts using iSCSI .... 168
    NFS or MPFS mounting may fail .... 168
    PowerPath .... 168
    Server load averages .... 169
    Symmetrix microcode 5771/5772 .... 169
    Uninstalling HRDP .... 169
    Unmounting a file system .... 169

Appendix C  Connecting CLARiiON CX3-40C iSCSI Cables
  ISCSI cabling .... 172

Glossary

Index

Figures

Figure 1  Celerra unified storage with Fibre Channel .... 20
Figure 2  Celerra gateway with Fibre Channel .... 20
Figure 3  Celerra unified storage with iSCSI .... 21
Figure 4  Celerra unified storage with iSCSI (MDS-based) .... 22
Figure 5  Celerra gateway with iSCSI .... 22
Figure 6  Celerra gateway with iSCSI (MDS-based) .... 23
Figure 7  Configuration roadmap .... 31
Figure 8  CLARiiON CX3-40C storage processor ports .... 173

Tables

Table 1   MPFS configuration summary .... 24
Table 2   Data Mover capacity guidelines .... 33
Table 3   Prefetch and read cache requirements .... 34
Table 4   Arraycommpath and failovermode settings for storage groups .... 81
Table 5   iSCSI parameters for RHEL 4 using 2.6 kernels .... 85
Table 6   iSCSI parameters for RHEL 5, SLES 10, and CentOS 5 .... 89
Table 7   Linux server firewall ports .... 109
Table 8   Command line interface summary .... 122
Table 9   MPFS device information .... 136
Table 10  MPFS kernel parameters .... 142
Table 11  Linux server error messages .... 156
Table 12  CLARiiON SP iSCSI IP address .... 172


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Important: Check the EMC Powerlink® website, http://Powerlink.EMC.com, to ensure you have the latest versions of the MPFS software and documentation.

For software, open Support > Software Downloads and Licensing > Downloads C > Celerra MPFS Client for Linux and then select the necessary software for MPFS from the menu.

For documentation, open Support > Technical Documentation and Advisories > Software C Documentation > Celerra MPFS over Fibre Channel or Celerra MPFS over iSCSI.

Note: You must be a registered Powerlink user to download the MPFS software.


Audience

This document is part of the EMC Celerra MPFS documentation set, and is intended for use by Linux system administrators responsible for installing and maintaining EMC Celerra Linux servers.

Readers of this document are expected to be familiar with the following topics:

◆ EMC Symmetrix or CLARiiON storage system

◆ EMC Celerra Network Server

◆ NFS protocol

◆ Linux operating system

◆ Operating environments to install the Linux server include:

• Red Hat Enterprise Linux 4 and 5

• SuSE Linux Enterprise Server 10

• Community ENTerprise Operating System 5 (iSCSI only)

Related documentation

Related documents include:

◆ EMC Celerra MPFS for Linux Clients Release Notes

◆ EMC Host Connectivity Guide for Linux

◆ EMC Host Connectivity Guide for VMware ESX Server

◆ EMC documentation for HBAs

CLARiiON storage system

◆ Removing ATF or CDE Software before Installing other Failover Software

◆ EMC Navisphere Manager online help

Symmetrix storage system

◆ Symmetrix product manual

Celerra Network Server

◆ EMC Celerra Documentation

◆ Using MPFS on Celerra

All of these publications can be found on the Powerlink website.


Powerlink

The Powerlink website provides the most up-to-date information on documentation, downloads, interoperability, product lifecycle, target revisions, and bug fixes. As a registered Powerlink user, you can subscribe to receive notifications when updates occur.

E-Lab Interoperability Navigator

The EMC E-Lab™ Interoperability Navigator tool provides access to EMC interoperability support matrices. After logging in to Powerlink, go to Support > Interoperability and Product Lifecycle Information > E-Lab Interoperability Navigator.

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION! A caution contains information essential to avoid data loss or damage to the system or equipment.

IMPORTANT! An important notice contains information essential to operation of the software.

WARNING

A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.

DANGER

A danger notice contains information essential to avoid a hazard that will cause severe personal injury, death, or substantial property damage if you ignore the message.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal          Used in running (nonprocedural) text for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
                • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold            Used in running (nonprocedural) text for:
                • Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
                Used in procedures for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • What user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                • Full titles of publications referenced in text
                • Emphasis (for example a new term)
                • Variables

Courier         Used for:
                • System output, such as an error message or script
                • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for:
                • Specific user input (such as commands)

Courier italic  Used in procedures for:
                • Variables on command line
                • User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               Vertical bar indicates alternate selections - the bar means "or"

{ }             Braces indicate content that you must specify (that is, x or y or z)

...             Ellipses indicate nonessential information omitted from the example


Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Please contact your EMC Customer Support Representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:

[email protected]


Chapter 1  Introducing EMC Celerra MPFS over FC and iSCSI

This chapter provides an overview of EMC Celerra MPFS over FC and iSCSI and its architecture. This chapter includes the following topics:

◆ Overview of EMC Celerra MPFS over FC and iSCSI .... 18
◆ EMC Celerra MPFS architectures .... 19
◆ How EMC Celerra MPFS works .... 28

Overview of EMC Celerra MPFS over FC and iSCSI

EMC® Celerra® Multi-Path File System (MPFS) over Fibre Channel (FC) lets Linux, Windows, UNIX, AIX, or Solaris servers access shared data concurrently over FC connections, whereas EMC Celerra MPFS over Internet Small Computer System Interface (iSCSI) lets servers access shared data concurrently over an iSCSI connection.

EMC Celerra MPFS uses a common IP LAN topology to transport data and metadata.

Without the MPFS file system, servers can access shared data by using standard network file system (NFS) or Common Internet File System (CIFS) protocols; the MPFS file system accelerates data access by providing separate transports for file data (file content) and metadata (control data).

For an FC-enabled server, data is transferred directly between the Linux server and storage array over a Fibre Channel SAN.

For an iSCSI-enabled server, data is transferred over the IP LAN between the Linux server and storage array for a unified storage or gateway configuration or through an iSCSI-to-Fibre Channel bridge for a unified storage (MDS-based) or gateway (MDS-based) configuration.

Metadata passes through the Celerra Network Server (and the IP network), which includes the NAS portion of the configuration.


EMC Celerra MPFS architectures

There are two basic EMC Celerra MPFS architectures:

◆ EMC Celerra MPFS over Fibre Channel

◆ EMC Celerra MPFS over iSCSI

The FC architecture consists of two configurations:

• Figure 1 on page 20 shows the Celerra unified storage with Fibre Channel
• Figure 2 on page 20 shows the Celerra gateway with Fibre Channel

The iSCSI architecture consists of four configurations:

• Figure 3 on page 21 shows the Celerra unified storage with iSCSI
• Figure 4 on page 22 shows the Celerra unified storage with iSCSI (MDS-based)
• Figure 5 on page 22 shows the Celerra gateway with iSCSI
• Figure 6 on page 23 shows the Celerra gateway with iSCSI (MDS-based)

Each is briefly described in this section, and Table 1 on page 24 compares the MPFS configurations.

EMC Celerra MPFS over Fibre Channel

The EMC Celerra MPFS over Fibre Channel architecture consists of the following:

◆ Celerra Network Server with MPFS — A network-attached storage device that is configured with an EMC Celerra Network Server with MPFS software

◆ Symmetrix® or CLARiiON® storage array

◆ Linux server with MPFS software connected to the Celerra Network Server through the IP LAN, and to the Symmetrix or CLARiiON storage arrays by using Fibre Channel

The following figures show the common Fibre Channel configurations. Figure 1 on page 20 shows the Celerra unified storage with Fibre Channel configuration where the Linux servers are connected to a Celerra Network Server by using an IP switch and one or more FC switches. In a smaller configuration of one or two servers, the servers can be connected directly to the Celerra Network Server without the use of Fibre Channel switches.


Figure 1 Celerra unified storage with Fibre Channel

Figure 2 on page 20 shows the Celerra gateway with Fibre Channel configuration. In this diagram, the Linux servers are connected to a CLARiiON or a Symmetrix storage array by using a Celerra Network Server and IP switch or optional Fibre Channel switch.

Figure 2 Celerra gateway with Fibre Channel


EMC Celerra MPFS over iSCSI

The EMC Celerra MPFS over iSCSI architecture consists of the following:

◆ Celerra Network Server with MPFS — A network-attached storage device that is configured with an EMC Celerra Network Server with MPFS software

◆ Symmetrix or CLARiiON storage array

◆ Linux server with MPFS software connected to the Celerra Network Server through the IP LAN, and to the Symmetrix or CLARiiON storage arrays by using iSCSI

The following figures show the common iSCSI configurations. Figure 3 on page 21 shows the Celerra unified storage with iSCSI configuration where the Linux servers are connected to a Celerra Network Server by using one or more IP switches.

Figure 3 Celerra unified storage with iSCSI

Figure 4 on page 22 shows the Celerra unified storage with iSCSI MDS-based configuration where the Linux servers are connected to an iSCSI-to-Fibre Channel bridge (MDS-switch) and a Celerra Network Server by using an IP switch.


Figure 4 Celerra unified storage with iSCSI (MDS-based)

Figure 5 on page 22 shows the Celerra gateway with iSCSI configuration where the Linux servers are connected to a CLARiiON or a Symmetrix storage array with a Celerra Network Server by using one or more IP switches.

Figure 5 Celerra gateway with iSCSI


Figure 6 on page 23 shows the Celerra gateway with iSCSI MDS-based configuration where the Linux servers are connected to a CLARiiON or Symmetrix storage array with an iSCSI-to-Fibre Channel bridge (MDS-switch) and a Celerra Network Server by using an IP switch.

Figure 6 Celerra gateway with iSCSI (MDS-based)


MPFS configuration summary

Table 1 on page 24 compares the MPFS configurations.

Table 1  MPFS configuration summary

Figure 1: Celerra unified storage with Fibre Channel (maximum of 1 storage array)
  • NS20FC: entry-level; supports up to 60 servers; CLARiiON CX3-10F
  • NS-120: entry-level; supports up to 120 servers (a); CLARiiON CX4-120
  • NS40FC: midtier; supports up to 120 servers; CLARiiON CX3-40F
  • NS-480: midtier; supports up to 240 servers (b); CLARiiON CX4-480
  • NS-960: high-end; supports up to 500 servers (c); CLARiiON CX4-960

Figure 2: Celerra gateway with Fibre Channel (maximum of 4 storage arrays)
  • NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8: high-end; maximum servers dependent on CLARiiON and Symmetrix limits; CLARiiON CX300, CX500, CX700, CX3-20F, CX3-40F, CX3-80, CX4-120 (a), CX4-240, CX4-480 (b), CX4-960 (c), Symmetrix DMX™ series, Symmetrix VMAX™ series, or Symmetrix 8000 series

Figure 3: Celerra unified storage with iSCSI (maximum of 1 storage array)
  • NS-120 with iSCSI enabled for MPFS: entry-level; supports up to 120 servers (a); CLARiiON CX4-120
  • NS40 for MPFS: entry-level; supports up to 120 servers; CLARiiON CX4-120
  • NS-480 with iSCSI enabled for MPFS: midtier; supports up to 240 servers (b); CLARiiON CX4-480
  • NS-960 with iSCSI enabled for MPFS: high-end; supports up to 500 servers (c); CLARiiON CX4-960

Figure 4: Celerra unified storage with iSCSI (MDS-based) (maximum of 1 storage array)
  • NS20FC: entry-level; maximum servers dependent on MDS limit (a); CLARiiON CX4-120
  • NS-120 with FC enabled for MPFS: entry-level; maximum servers dependent on MDS limit (a); CLARiiON CX4-120
  • NS40FC: midtier; maximum servers dependent on MDS limit; CLARiiON CX3-40C
  • NS-480 with FC enabled for MPFS: midtier; maximum servers dependent on MDS limit (b); CLARiiON CX4-480
  • NS-960 with FC enabled for MPFS: high-end; maximum servers dependent on MDS limit (c); CLARiiON CX4-960

Figure 5: Celerra gateway with iSCSI (maximum of 4 storage arrays)
  • NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8: midtier; maximum servers dependent on CLARiiON limits; CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-120 (a), CX4-240, CX4-480 (b), or CX4-960 (c)
  • NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8: high-end; maximum servers dependent on CLARiiON limits; CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-120 (a), CX4-240, CX4-480 (b), CX4-960 (c), Symmetrix DMX series, Symmetrix VMAX, or Symmetrix 8000 series

Figure 6: Celerra gateway with iSCSI (MDS-based) (maximum of 4 storage arrays)
  • NS40G, NS80G, NS-G8, NSX, VG2, or VG8: high-end; maximum servers dependent on MDS limit; CLARiiON CX300, CX500, CX700, CX3-20C, CX3-40C, CX4-960 (c), Symmetrix DMX series, Symmetrix VMAX, or Symmetrix 8000 series

a. 240 Linux Servers are supported with EMC FLARE® release 29 or later.
b. 1020 Linux Servers are supported with FLARE release 29 or later.
c. 4080 Linux Servers are supported with FLARE release 29 or later.


How EMC Celerra MPFS works

Although called a file system, the EMC Celerra MPFS is neither a new nor a modified format for storing files. Instead, the MPFS file system interoperates with and uses the standard NFS and CIFS protocols to enforce access permissions. The MPFS file system uses a protocol called File Mapping Protocol (FMP) to exchange metadata between the Linux server and the Celerra Network Server.

All requests unrelated to file I/O pass directly to the NFS/CIFS layer. The MPFS layer intercepts only the open, close, read, and write system calls.

When a Linux server intercepts a file-read call, it sends a request to the Celerra Network Server asking for the file's location. The Celerra Network Server responds with a list of file extents, which the Linux server then uses to read the file data directly from the disk.

When a Linux server intercepts a file-write call, it asks the Celerra Network Server to allocate blocks on disk for the file. The Celerra Network Server allocates the space in contiguous extents and sends the extent list to the Linux server. The Linux server then writes data directly to disk, informing the Celerra Network Server when finished, so that the Celerra Network Server can permit other Linux servers to access the file.
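The effect of this split can be observed from the Linux server with the statistics commands documented in Chapter 4. As a hedged illustration only (output formats vary by release, and the exact counters are described under "mpfsctl stats" and "Using the mpfsstat command"):

    mpfsctl stats        # MPFS layer counters, including FMP metadata requests and direct reads/writes
    mpfsstat             # MPFS statistics for mounted file systems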

The remaining chapters describe how to install, manage, and tune EMC Celerra MPFS on Linux servers. The Using MPFS on Celerra technical module, available on EMC Powerlink® at http://Powerlink.EMC.com, provides information on the Celerra Network Server MPFS commands.

Chapter 2  MPFS Environment Configuration

This chapter presents a high-level overview of configuring and installing EMC Celerra MPFS.

Topics include:

◆ Configuration roadmap .... 30
◆ Implementation guidelines .... 32
◆ MPFS installation and configuration process .... 40
◆ Verifying system components .... 43
◆ Setting up the Celerra Network Server .... 53
◆ Celerra Startup Assistant (CSA) .... 54
◆ Setting up the file system .... 55
◆ Enabling MPFS for the Celerra Network Server .... 66
◆ Configuring the CLARiiON by using CLI commands .... 67
◆ Configuring the SAN and storage .... 68
◆ Configuring and accessing storage .... 81
◆ Mounting the MPFS file system .... 100
◆ Unmounting the MPFS file system .... 104

Configuration roadmap

Figure 7 on page 31 shows the roadmap for configuring and installing the EMC Celerra MPFS over FC and iSCSI architecture for both FC and iSCSI environments. The roadmap contains the topics representing sequential phases of the configuration and installation process. The descriptions of each phase, which follow, contain an overview of the tasks required to complete the process, and a list of related documents where you can find more information.

Figure 7  Configuration roadmap

[Flowchart showing the configuration and installation phases covered in this chapter, from the implementation guidelines through mounting the MPFS file system.]

Implementation guidelines

The following MPFS implementation guidelines are valid for all MPFS installations.

Celerra with MPFS recommendations

The following recommendations are described in detail in the Celerra MPFS over iSCSI Applied Best Practices Guide and the Celerra Network Server Best Practices for Performance, which can be found on EMC Powerlink at http://Powerlink.EMC.com:

◆ MPFS is optimized for large I/O transfers and may be useful for workloads with average I/O sizes as small as 16 KB. However, MPFS has been shown conclusively to improve performance for I/O sizes of 128 KB and greater.

◆ For best MPFS performance, in most cases, configure the Celerra volumes by using a volume stripe size of 256 KB.

◆ EMC PowerPath® is supported, but is not recommended. Path failover is built into the Linux server. When using PowerPath, the performance of the MPFS system is expected to be lower. Knowledgebase article emc 165953 contains details on using PowerPath and MPFS.

◆ When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128. If system performance is slow, gradually increase the number of threads allotted for the Data Mover to improve system performance. Add threads conservatively, as the Data Mover allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of Linux servers, and the workload.

Using MPFS on Celerra provides the procedures necessary to adjust the thread count and can be found with the EMC Celerra Documentation on Powerlink.
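As an illustration only, increasing the thread count on a Data Mover might look like the following from the Control Station. The Data Mover name and the value 64 are assumptions, and the option syntax should be verified against Using MPFS on Celerra before it is run on a production system:

    # Hedged sketch: start the MPFS service on Data Mover server_2 with
    # 64 threads instead of the default 16.
    $ server_setup server_2 -Protocol mpfs -option start=64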


Data Mover capacity

MPFS supports up to 256 TB total capacity per Data Mover. This larger Data Mover capacity allows a single 256 TB data access point through NMFS (Nested Mount File System). Table 2 on page 33 lists the conditions and restrictions of the Data Mover capacity.

Additional guidelines are listed below:

◆ EMC Celerra Replicator™ and Celerra SnapSure are not supported with the 128 and 256 TB Data Mover capacity. Existing snaps and replication sessions must be deleted to use MPFS with the 128 TB Data Mover capacity. If Celerra Replicator or Celerra SnapSure will be used, the Data Mover capacity will be the same as the non-MPFS capacity limits.

◆ The 256 TB Data Mover capacity is only supported with MPFS running in an NSG8, NS-960, NSX, VG2, or VG8 environment.

◆ The Celerra Network Server must be configured for MPFS. However, CIFS, MPFS, and NFS servers can all connect to share file systems on the same Data Mover.

◆ Single file system size is limited to 16 TB.

◆ Performance during data transfer and Data Mover boot time is no different with 256 TB Data Movers than with any other Data Mover capacity.

The EMC E-Lab™ Interoperability Navigator contains the latest Data Mover capacity information and MPFS-related restrictions.

Table 2  Data Mover capacity guidelines (MPFS with FC or ATA; limit per Data Mover/blade)

Conditions and restrictions                    MPFS 5.x           MPFS 6.x
Minimum NAS version required                   5.2.75.x           6.0.36.x
Data Mover capacity                            128 TB             256 TB
Existing snaps and replication sessions        Must be deleted    Must be deleted
New EMC SnapSure™ or replication sessions      Not supported      Not supported


Linux server configuration

All Linux servers using the MPFS software require:

◆ At least one Fibre Channel connection or an iSCSI initiator connected to a SAN switch, or directly to an EMC CLARiiON® or EMC Symmetrix® storage array

◆ Network connections to the Data Mover

Note: When deploying MPFS over iSCSI on an NS-120, NS40 for MPFS, NS-480, or NS-960 unified storage configuration or a gateway configuration based on the iSCSI-enabled CLARiiON CX3 or CX4 series storage arrays, the CLARiiON iSCSI target is used. In all other MPFS over Celerra gateway with iSCSI implementations, the iSCSI target on the iSCSI-to-Fibre Channel bridge is used.

Storage configuration recommendations

Linux servers read and write directly from a storage system. This has several implications:

◆ FLARE release 26 or later should be used for best performance in new MPFS configurations. The EMC CLARiiON Best Practices for Fibre Channel Storage: FLARE Release 26 Firmware Update provides more details.

◆ All mounted MPFS file systems should be unmounted from the Linux server before changing any storage device or switch configuration.

◆ Table 3 on page 34 lists the prefetch and read cache requirements.

Table 3  Prefetch and read cache requirements

Prefetch requirements   Read cache   Notes
Modest                  50–100 MB    80% of the systems fall under this category.
Heavy                   250 MB       Requests greater than 64 KB and sequential reads from many LUNs expected over 300 MB/s.
Extremely heavy         1 GB         120 or more drives reading in parallel.


MPFS feature configurations

The following sections describe the configurations for MPFS features.

iSCSI CHAP authentication

The Linux server with MPFS software and the CLARiiON storage array support the Challenge Handshake Authentication Protocol (CHAP) for iSCSI network security.

CHAP provides a method for the Linux server and CLARiiON storage array to authenticate each other through an exchange of a shared secret (a security key that is similar to a password), which is typically a string of 12 to 16 bytes.

CAUTION! If CHAP security is not configured for the CLARiiON storage array, any computer connected to the same IP network as the CLARiiON storage array iSCSI ports can read from or write to the CLARiiON storage array.

CHAP has two variants — one-way and reverse CHAP authentication:

◆ In one-way CHAP authentication, CHAP sets up the accounts that the Linux server uses to connect to the CLARiiON storage array. The CLARiiON storage array authenticates the Linux server.

◆ In reverse CHAP authentication, the CLARiiON storage array authenticates the Linux server and the Linux server also authenticates the CLARiiON storage array.

Because CHAP secrets are shared between the Linux server and CLARiiON storage array, the CHAP secrets must be configured the same on both the Linux server and CLARiiON storage array.

The CX-Series iSCSI Security Setup Guide contains detailed information regarding CHAP and can be found on the Powerlink website.
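For illustration, on a host that uses the open-iscsi initiator (RHEL 5, SLES 10, or CentOS 5), the one-way and reverse CHAP secrets are typically entered in /etc/iscsi/iscsid.conf before the CLARiiON iSCSI targets are discovered. The user names and secrets below are placeholders only and must match what is configured on the CLARiiON storage array; "Configuring the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 (iSCSI configuration)" on page 88 and the CX-Series iSCSI Security Setup Guide are the authoritative references:

    # /etc/iscsi/iscsid.conf (excerpt) - a hedged sketch; values are examples only
    node.session.auth.authmethod = CHAP
    # One-way CHAP: the CLARiiON storage array authenticates this Linux server
    node.session.auth.username = iqn.1994-05.com.redhat:linuxhost01
    node.session.auth.password = hostsecret1234
    # Reverse CHAP (optional): this Linux server also authenticates the array
    node.session.auth.username_in = clariion_spa_user
    node.session.auth.password_in = arraysecret123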

VMware ESX (optional)

VMware is a software suite for optimizing and managing IT environments through virtualization technology. MPFS supports the Linux guest operating systems running on a VMware ESX server.


The VMware ESX server is a robust, production-proven virtualization layer that abstracts processor, memory, storage, and networking resources into multiple virtual machines (VMs are software representations of a physical machine) running side-by-side on the same server.

VMware is not tied to any operating system, giving customers a bias-free choice of operating systems and software applications. All operating systems supported by VMware are supported with both Celerra iSCSI and NFS protocols for basic connectivity. This allows several instances of similar and different guest operating systems to run as virtual machines on one physical machine.

To run MPFS on a Linux guest operating system with a VMware ESX server, the configuration must meet the following requirements:

◆ Run a supported version of the Linux operating system.

◆ Have the CLARiiON supported hardware and driver installed.

◆ Be connected to each SP in each storage system directly or through a switch. Each SP must have an IP connection.

◆ Be on a TCP/IP network connected to both SPs in the storage system.

◆ Present LUNs as Raw Device Mapped (RDM) drives or software iSCSI initiators within the Linux guest operating system.

Currently, MPFS has the following limitations in a VMware ESX server environment:

◆ Booting the guest Linux server off iSCSI is not supported.

◆ PowerPath is not supported.

◆ Virtual machines that run the Linux guest operating system use iSCSI or Fibre Channel to access the CLARiiON storage arrays.

◆ The VMs can be stored on a VMware datastore, such as RDM (a CLARiiON or Symmetrix storage array accessed by the VMware ESX server by using either Fibre Channel or iSCSI), an NFS datastore, or local disks.

The EMC Host Connectivity Guide for VMware ESX Server contains information on how to configure iSCSI initiator ports or Fibre Channel adapters (VMware ESX servers support iSCSI and FC configurations) and how VMware operates in a Linux environment. The VMware website, http://www.vmware.com, also provides more information.


Rainfinity Global Namespace

The EMC Rainfinity® Global Namespace (GNS) Appliance complements the Celerra Nested Mount File System (NMFS) by providing a global namespace across Celerra Data Movers and simplifying mount point management of network shared files. A global namespace organizes file shares across servers into a coherent directory structure.

A global namespace is a virtual hierarchy of folders and links to shares or exports, designed to ease access to distributed data. End users no longer need to know the server names and shared folders where the physical data resides. Instead, they mount only to the namespace and navigate the structure of the namespace which appears as though they are navigating a directory structure on a physical server. The Rainfinity GNA application works behind the scenes to provide Linux servers with the data they need from multiple physical servers or shared folders.

The Rainfinity GNA has the following benefits:

◆ Leverages the MPFS architecture to provide a scalable NFS global namespace for Linux servers with an iSCSI interface.

◆ Creates a global view of file shares, simplifying the management of complex NAS and file server environments.

◆ Provides a single mount point for MPFS NAS shares, so as the file server environment grows and changes, users and applications do not have to experience the disruption of remounting.

◆ Supports 50,000 physical file shares in a single global namespace.

◆ Each Rainfinity GNA cluster supports 30,000 server connections with up to two clusters deployed to share a single global namespace.

The use of NAS devices and file servers increases storage management complexity. The Rainfinity GNS removes the dependency on physical storage location and makes it easier to consolidate, replace, and deploy NAS devices and file servers without disrupting server access.

MPFS is a Celerra feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, data stored on a CLARiiON or Symmetrix storage array. MPFS NFS is a referral-based protocol that does not require Rainfinity to be permanently in-band 100% of the time. As a result, the protocols are very scalable.


The EMC Rainfinity Global Namespace Appliance Getting Started Guide contains information on how the GNS solution works with MPFS, how to configure GNS when supporting Linux servers, and how to mount a Linux server to the GNS application.

Hierarchical volume management

Hierarchical volume management (HVM) allows the user to cache more information about the file to disk mapping. It is particularly useful when using large files with random access I/O patterns and with file systems built on a small stripe.

A hierarchical volume is a tree-based structure composed of File Mapping Protocol (FMP) volumes. Each volume is either an FMP_VOLUME_DISK, an FMP_VOLUME_SLICE, an FMP_STRIPE or an FMP_VOLUME_META. The root of the tree and the intermediate nodes are slices, stripes or metas. The leaves of the tree are disks.

Every volume in a hierarchical volume description has a definition that includes an ID. By convention, a volume must be defined before it can be referenced by an ID. One consequence of this convention is that the volumes in the tree must be listed in depth-first search order.

Because of limitations on the transport medium, the description of an especially dense volume tree may require more than one RPC packet. Therefore, a hierarchical volume description may be incomplete, in which case the Linux server with MPFS software must send subsequent requests to obtain descriptions of the remaining volumes. Because the volume structure could change, for example, owing to automatic file system extension, each response contains a “cookie” that changes when the volume tree changes. A Linux server issuing a request for volume information must return the latest cookie, and if the volume tree has changed, the server will return a status of FMP_VOLUME_CHANGED. In this case, the Linux server must get the whole hierarchical volume description from the beginning by reissuing its mount request.


MPFS changes the FMP protocol, which allows the FMP server to describe the volume slices, stripes and concatenations used to create a logical volume on which a file system is stored. Linux servers communicate with the FMP server to request maps that allow a Linux server to read a file directly from the disk. These maps are described as offsets and lengths on a physical disk. Because most file systems are created on striped volumes, from the standpoint of Linux server communication, the maps are broken up into many extents. Each time the file crosses a stripe boundary, the FMP server must send a different ID to represent the physical volume, and a new offset and length on that volume. With HVM, when the user mounts a file system, the Linux server requests a description of the logical volumes (the striping pattern). The Linux server now describes file maps as locations within the logical volume. The Linux server is now responsible for noticing when a file crosses a stripe boundary, and dispatching the I/Os to the proper physical disk. This change allows the protocol to be more efficient by using less space to represent the maps. Furthermore, it allows the Linux server to represent the extent map in a more compact form, thus conserving Linux server memory and CPU resources.
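As a purely illustrative sketch (not EMC code), and assuming the simplest possible layout of a single-level stripe, the client-side arithmetic that HVM makes possible looks like the following; the offset, stripe unit, and disk count are made-up values:

    # Hedged illustration: map a logical-volume byte offset onto a 4-disk
    # stripe with a 256 KB stripe unit, the kind of calculation the MPFS
    # client performs locally once HVM has described the volume layout.
    OFFSET=1310720        # byte offset within the logical volume
    UNIT=262144           # 256 KB stripe unit
    DISKS=4               # member disks in the stripe
    STRIPE_UNIT=$((OFFSET / UNIT))                    # which stripe unit
    DISK=$((STRIPE_UNIT % DISKS))                     # which member disk
    DISK_OFFSET=$(( (STRIPE_UNIT / DISKS) * UNIT + OFFSET % UNIT ))
    echo "logical offset $OFFSET -> disk $DISK, offset $DISK_OFFSET"
    # prints: logical offset 1310720 -> disk 1, offset 262144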


MPFS installation and configuration process

The MPFS configuration process involves performing tasks on various system components in a specific order. MPFS can be installed and configured manually or with the use of the Celerra Startup Assistant as described in "Celerra Startup Assistant (CSA)" on page 54.

Note: This document contains guidelines for installing and configuring MPFS with several options. Disregard steps that do not pertain to your environment.

To manually install and configure MPFS:

1. Run the Celerra Startup Assistant (CSA) to help set up the MPFS system (for MPFS-enabled systems only), which does the following:

a. Provisions unused disks.

b. Creates/extends the MPFS storage pool.

c. Configures CLARiiON iSCSI ports (only for iSCSI ports).

d. Starts MPFS service on the Celerra system.

e. Installs MPFS client software on multiple Linux hosts.

f. Configures Linux host parameters and sysctl parameters.

g. Mounts MPFS-enabled NFS exports.

2. Collect installation and configuration planning information and complete the checklist:

a. Collect the IP network addresses, Fibre Channel port addresses, iSCSI-to-Fibre Channel bridge information, and CLARiiON or Symmetrix storage array information.

b. Map the Ethernet and TCP/IP network topology.

c. Map the Fibre Channel zoning topology.

d. Map the virtual storage area network (VSAN) topology.


3. Install the MPFS software manually (on a native or VMware [1] hosted Linux operating system):

a. Install the HBA driver (for FC configuration).

b. Install and configure iSCSI (for iSCSI configuration). [2]

c. Start the iSCSI service (for iSCSI configuration).

d. Install the MPFS software.

e. Check the MPFS software configuration.
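As a hedged illustration of step 3 on an iSCSI-configured RHEL 5 host, the sequence might look like the following; the archive and package names are placeholders, and Chapter 3, "Installing, Upgrading, or Uninstalling MPFS Software," documents the supported procedure:

    # Hedged sketch only; exact file names come with the MPFS download.
    service iscsi start                    # c. start the iSCSI service
    tar -xzf EMCmpfs.linux.tar.gz          # d. unpack the MPFS distribution (name assumed)
    rpm -ivh EMCmpfs-*.rpm                 # d. install the MPFS package
    rpm -qa | grep -i mpfs                 # e. confirm the package is installed
    mpfsctl version                        # e. confirm the installed MPFS version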

Configuration planning checklist

Collect the following information before beginning the MPFS installation and configuration process.

For an FC and iSCSI configuration:

❑ SP A IP address .....................................................................................

❑ SP A login name....................................................................................

❑ SP A password ......................................................................................

❑ SP B IP address......................................................................................

❑ SP B login name ....................................................................................

❑ SP B password .......................................................................................

❑ Zoning for Data Movers ......................................................................

❑ First Data Mover LAN blade IP address or Data Mover IP address....................................................................................................

❑ Second Data Mover LAN blade IP address or Data Mover IP address....................................................................................................

❑ Control Station IP address or CS address .........................................

❑ LAN IP address (same as LAN Data Movers) ..................................

❑ Linux server IP address on LAN ........................................................

❑ VSAN name ...........................................................................................

❑ VSAN number (ensure it is not in use)..............................................

[1] "VMware ESX (optional)" on page 35 provides information.
[2] Installing Celerra iSCSI Host Components provides details.


For an FC configuration:

❑ SP A FC port assignment or FC ports ................................................

❑ SP B FC port assignment or FC ports .................................................

❑ FC switch name .....................................................................................

❑ FC switch password .............................................................................

❑ FC switch port IP address ....................................................................

❑ Zoning for each FC HBA port .............................................................

❑ Zoning for each FC director ................................................................

For an iSCSI configuration:

❑ Celerra Network Server with MPFS target IP address ....................

❑ CLARiiON or Symmetrix series storage array target IP address ...

❑ Linux server IP address for iSCSI Gigabit connection .....................

❑ MDS management port IP address ....................................................

❑ iSCSI-to-Fibre Channel bridge name .................................................

❑ iSCSI-to-Fibre Channel bridge password ..........................................

❑ MDS iSCSI port IP address ..................................................................

❑ MDS iSCSI blade/port numbers.........................................................

❑ First MDS Data Mover FC blade/port number................................

❑ Second MDS Data Mover FC blade/port number ...........................

❑ Initiator and Target Challenge Handshake Authentication Protocol (CHAP) password (optional) ...............................................


Verifying system components

MPFS environments require standard EMC Celerra hardware and software, with the addition of a few components that are specific to either FC or iSCSI configurations. Setting up an MPFS environment involves verifying that each of these components is in place and functioning normally. Each hardware and software component is discussed in the following sections.

Required hardware components

This section lists the MPFS configurations with the required hardware components.

MPFS Celerra unified storage with Fibre Channel configuration

The hardware components for an MPFS Celerra unified storage with Fibre Channel configuration are:

◆ A Celerra Network Server connected to an FC network and SAN

◆ An IP switch connecting the Celerra Network Server to the servers

◆ An FC switch with an HBA for each Linux server

“EMC Celerra MPFS over Fibre Channel” on page 19 contains more information.

MPFS Celerra gateway with Fibre Channel configuration

The hardware components for an MPFS Celerra gateway with Fibre Channel configuration are:

◆ A Celerra Network Server connected to an FC network and SAN

◆ A fabric-connected storage system, either CLARiiON or Symmetrix, with available LUNs

◆ An IP switch connecting the Celerra Network Server to the servers

◆ An FC switch with an HBA for each Linux server

“EMC Celerra MPFS over Fibre Channel” on page 19 contains more information.


MPFS Celerra unified storage with iSCSI configuration

The hardware components for an MPFS Celerra unified storage with iSCSI configuration are:

◆ A Celerra Network Server storage system

◆ One or two IP switches connecting the Celerra Network Server and the servers

“EMC Celerra MPFS over iSCSI” on page 21 contains more information.

MPFS Celerra unified storage with iSCSI (MDS-based) configuration

The hardware components for an MPFS Celerra unified storage with iSCSI (MDS-based) configuration are:

◆ A Celerra Network Server storage system

◆ An IP switch connecting the Celerra Network Server and the servers

◆ An iSCSI-to-Fibre Channel bridge with one or more IPS blades—the IPS (IP SAN) blade is the iSCSI-to-Fibre Channel bridge component required to make the connection between a Linux server’s IP connection and the Fibre Channel storage system disk arrays

“EMC Celerra MPFS over iSCSI” on page 21 contains more information.

MPFS Celerra gateway with iSCSI configuration

The hardware components for an MPFS Celerra gateway with iSCSI configuration are:

◆ A Celerra Network Server connected to an FC network and SAN

◆ A fabric-connected storage system, either CLARiiON or Symmetrix, with available LUNs

◆ One or two IP switches connecting the Celerra Network Server and the CLARiiON or Symmetrix storage arrays to the servers

“EMC Celerra MPFS over iSCSI” on page 21 contains more information.

MPFS Celerra gateway with iSCSI (MDS-based) configuration

The hardware components for an MPFS Celerra gateway with iSCSI (MDS-based) configuration are:

◆ A Celerra Network Server connected to an FC network and SAN

◆ A fabric-connected storage system, either CLARiiON or Symmetrix, with available LUNs


◆ One or two IP switches connecting the Celerra Network Server to the servers

◆ An iSCSI-to-Fibre Channel bridge with one or more IPS blades—the IP SAN (IPS) blade is an iSCSI-to-Fibre Channel bridge component required to make the connection between a Linux server’s IP connection and the Fibre Channel storage system disk arrays

“EMC Celerra MPFS over iSCSI” on page 21 contains more information.

Note: A minimal working configuration should have at least one Gigabit Ethernet port per server. However, Fast Ethernet (100BaseT) is also supported. By using Fast Ethernet, the total throughput of the Linux server has a theoretical limit of 12 MB/s.

A Linux server with MPFS software is required for all types of configurations.

Configuring Gigabit Ethernet ports

Two Gigabit Ethernet NICs, or a multiport NIC with two available ports, connected to isolated IP networks or subnets are recommended for each Linux server for iSCSI. For each Linux server for Fibre Channel, one NIC is required for NFS and FMP traffic. For maximum performance, use:

◆ One port for the connection between the Linux server and the Data Mover for MPFS metadata transfer and NFS traffic

◆ One port for the connection between the Linux server and the same subnet as the iSCSI discovery address dedicated to data transfer

Note: The second NIC for iSCSI must be on the same subnet as the discovery address.
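As an illustration only, the two interfaces might be addressed as follows on a RHEL-style Linux server. The interface names, file locations, and addresses below are hypothetical and must be replaced with site-specific values:

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- NFS and FMP (metadata) traffic
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.21
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- iSCSI data traffic, on the same
# subnet as the iSCSI discovery address
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.20.21
NETMASK=255.255.255.0
ONBOOT=yes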

Configuring and Managing EMC Celerra Networking provides detailed information for setting up network connections and is available on the Powerlink website.


Required software components

The following software components are required for an MPFS configuration:

Note: The EMC Celerra MPFS for Linux Clients Release Notes provide a complete list of EMC supported operating system versions.

◆ Celerra Network Server NAS software version that supports either FC or iSCSI on Linux platforms

◆ Linux operating system and kernel version that supports HBAs or an iSCSI initiator

Note: The EMC E-Lab Interoperability Matrix lists the latest versions of Red Hat Enterprise Linux server and SUSE Linux Enterprise Server operating systems.

◆ MPFS software version 6.0 or later

◆ iSCSI Initiator

Verifying configuration

The next step in setting up MPFS is to verify whether each of the previously mentioned components is in place and functioning normally. If each of these components is operational, “MPFS installation and configuration process” on page 40 provides more information.

Related documentation

The following documents, available on the Powerlink website, provide additional information:

◆ Configuring and Managing EMC Celerra Networking

◆ Managing Celerra Volumes and File Systems Manually

◆ Configuring Standbys on Celerra


Verifying storage array requirements

This section describes storage array requirements for an MPFS environment. The documents listed in “Related documentation” on page 48 detail storage array setup information.

CAUTION!
Ensure that the storage arrays used for MPFS do not contain both CLARiiON and Symmetrix LUNs. MPFS does not support a mixed storage environment.

Storage array requirements

All CLARiiON storage arrays used within an MPFS environment must meet these requirements:

◆ Use only CX series storage arrays designed for MPFS file systems. The following models are supported:

• CX300

• CX500

• CX700

• CX3-10F

• CX3-20C/F

• CX3-40C/F

• CX3-80

• CX4-120

• CX4-240

• CX4-480

• CX4-960

◆ Ensure that all MPFS system environments have file systems built on disks from only one type of storage system: either all Fibre Channel or all ATA drives.

◆ Ensure that MPFS does not use a file system spanning across two different storage array enclosures.

◆ Build Celerra LUNs by using RAID 1, RAID 3, RAID 5, or RAID 6 only.

◆ Build Celerra Management LUNs by using 4+1 RAID 5 only.

◆ Enable write cache.

◆ Use EMC Access Logix™.


◆ Run FLARE release 26 for high performance with NAS 5.5.31.x or later.

Note: As of the FLARE 24 release, the CX series (excluding the CX300i and CX500i) supports jumbo frames.

All Symmetrix storage arrays used within an MPFS environment must meet these requirements:

◆ Use only Symmetrix storage arrays designed for MPFS file systems. The following models are supported:

• Symmetrix DMX series Enterprise Storage Platform (ESP)

• Symmetrix VMAX series

• Symmetrix 8000 series

◆ Ensure the correct version of the microcode is used. Contact your EMC Customer Support Representative or check the EMC E-Lab Interoperability Navigator for microcode release updates.

◆ Ensure the Symmetrix Fibre Channel/SCSI port flags are properly configured for the MPFS file system. The Avoid_Reset_Broadcast (ARB) must be set for each port connected to a Linux server.

◆ Ensure MPFS does not use a file system spanning across two different storage array enclosures.

Related documentation

The following documentation, available on the Powerlink website, provides additional information:

◆ Storage system’s rails and enclosures documentation

◆ EMC Host Connectivity Guide for Linux


Verifying the Fibre Channel switch requirements (FC configuration)

Ensure the following to set up the Fibre Channel switch:

◆ Install the Fibre Channel switch.

◆ Check that the host bus adapter (HBA) driver is loaded.

To verify that the HBA driver is loaded, type lsmod.

This output shows a configuration with a QLogic 2200 HBA:

Module                  Size  Used by    Tainted: PF
microcode               5248   0  (autoclean)
qla2200               259580   1
ext3                   95784   2
jbd                    56856   2  [ext3]
aic7xxx               165616   3
sd_mod                 13744   8
scsi_mod              116904   3  [qla2200 aic7xxx sd_mod]
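To narrow the lsmod output to the HBA module, standard shell filtering can be used; the module name below assumes a QLogic driver and will differ for other HBAs:

$ lsmod | grep -i qla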

◆ Connect cables from each HBA Fibre Channel port to a switch port.

◆ Verify the HBA connection to the switch by checking LEDs for the switch port connected to the HBA port.

◆ Configure zoning for the switch as described in “Zoning the SAN switch (FC configuration)” on page 68.

Note: Configure zoning as single initiator, meaning that each HBA port will have its own zone in which it is the only HBA port.

Related documentation

The documentation that ships with the Fibre Channel switch provides more information about the switch.


Verifying the IP-SAN switch requirements

The MPFS environment requires a Fibre Channel switch capable of iSCSI-to-Fibre Channel bridging. The following switch characteristics are required:

◆ An iSCSI-to-Fibre Channel bridge that supports the MPFS architecture.

◆ MPFS IP-SAN switches require specific firmware revisions for proper operation.

◆ Verify the SAN OS version with the E-Lab Interoperability Navigator for supported versions.

◆ Install one of the following iSCSI IPS (IP-SAN) modules:

• 8GE IPS blade

• 14FC/2/GE multi protocol services module

• 18FC/4/GE multi protocol services module

Note: The E-Lab Interoperability Navigator contains definitive information on supported software and hardware for Celerra network-attached storage (NAS) products.

Related documentation

Specific IP-SAN switch documentation is a primary source of additional information.

The EMC Celerra MPFS for Linux Clients Release Notes provide a list of supported IP-SAN switches.


Verifying the IP-SAN CLARiiON CX3 or CX4 requirements

The EMC Celerra MPFS over FC and iSCSI environment with CLARiiON CX3 or CX4 series storage array configurations requires the following:

◆ For a unified storage configuration:

• Celerra Network Server NS20FC, NS-120, NS-120 with iSCSI or FC enabled for MPFS, NS40 for MPFS, NS40FC, NS-480, NS-480 with iSCSI or FC enabled for MPFS, NS-960, or NS-960 with iSCSI or FC enabled for MPFS.

• CLARiiON CX3 or CX4 series storage array.

◆ For a gateway configuration:

• Celerra Network Server NS40G, NS80G, NS-G2, NS-G8, NSX, VG2, or VG8.

• CLARiiON CX300, CX500, CX700, CX3, CX4, Symmetrix DMX, Symmetrix VMAX, or Symmetrix 8000 series storage arrays.

• Cabled as any shared storage system.

• Access Logix LUN masking by using iSCSI to present all Celerra managed LUNs to the Linux servers.

◆ Linux server configuration is the same as a standard Linux server connection to an iSCSI connection.

◆ Linux servers are load balanced across CLARiiON iSCSI ports for performance improvement and protection against single-port and Ethernet cable problems.

◆ iSCSI ports 0 through 3 on each storage processor are connected to the iSCSI network. If the cables are not connected, “ISCSI cabling” on page 172 contains instructions on how to connect the iSCSI cables.


Related documentation

The following Celerra installation guides provide more information about hardware installation:

◆ EMC Celerra NS20FC System (Single Blade) Installation Guide Addendum for MPFS

◆ EMC Celerra NS20FC System (Dual Blade) Installation Guide Addendum for MPFS

◆ EMC Celerra NS40 for MPFS System (Single Blade) Installation Guide Addendum for MPFS

◆ EMC Celerra NS40 for MPFS System (Dual Blade) Installation Guide Addendum for MPFS

◆ EMC Celerra NS40FC System (Single Blade) Installation Guide Addendum for MPFS

◆ EMC Celerra NS40FC System (Dual Blade) Installation Guide Addendum for MPFS


Setting up the Celerra Network Server

The Celerra Network Server System Software Installation Guide, available on the Powerlink website, contains information on how to set up the Celerra Network Server.


Celerra Startup Assistant (CSA)

The Celerra Startup Assistant (CSA) is a single instance, pre-configuration tool targeted for a factory installed (unconfigured) Celerra or to open EMC Unisphere software. The CSA helps in setting up an MPFS system (for MPFS-supported systems only) by doing the following:

◆ Provisions storage for MPFS use

◆ Creates an MPFS storage pool

◆ Configures CLARiiON iSCSI ports (only for iSCSI ports)

◆ Starts the MPFS service on the Celerra system

◆ Push-installs the MPFS client software on multiple Linux hosts

◆ Configures Linux host parameters

◆ Mounts MPFS-enabled NFS exports

The CSA ships on the Applications and Tools CD, and is also available from the Celerra Tools page on Powerlink. For CSA on Powerlink, open Support > Product and Diagnostic Tools > Celerra Tools > Celerra Startup Assistant and download the appropriate version of the CSA from Powerlink.


Setting up the file system

This section describes the prerequisites for file systems and the procedure for creating a file system.

File system prerequisites

File system prerequisites are guidelines that must be met before building a file system. A properly built file system must:

◆ Use disk volumes from the same storage system.

Note: Do not use a file system spanning across two storage array enclosures. A file system spanning multiple storage systems is not supported even if the multiple storage systems are of the same type, such as CLARiiON or Symmetrix.

◆ Use disk volumes from the same disk type, all FC or ATA, not a mixture of FC and ATA.

◆ For best MPFS performance, in most cases, configure the Celerra volumes by using a volume stripe size of 256 KB. Be sure to review the EMC Celerra MPFS over iSCSI Applied Best Practices Guide for detailed performance-related information.

◆ In a Symmetrix environment, ensure that the Symmetrix Fibre Channel/SCSI port flag settings are properly configured for the MPFS file system; in particular, the Avoid_Reset_Broadcast (ARB) flag must be set. The EMC Customer Support Representative configures these settings.


Creating a file system on a Celerra Network Server

This section describes how to configure, create, mount, and export file systems.

Make sure LUNs for the new file system are created optimally for MPFS. All LUNs must:

◆ Be of the same RAID type
◆ Have the same number of spindles in each RAID group
◆ Contain spindles of the same type and speed

In addition, ensure that all LUNs do not share spindles with:

◆ Other LUNs in the same file system
◆ Another file system heavily utilized by high-I/O applications

Before creating the LUNs, make sure that the total usable capacity of all the LUNs within a single file system does not exceed 16 TB. The maximum number of LUNs tested that are supported in MPFS configurations per file system is 256. Ensure that the LUNs are accessible by the Data Movers through LUN masking, switch zoning, and VSAN settings.
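For example, striping 12 of the roughly 540 GB (549623 MB) LUNs shown in the listing below yields approximately 6.4 TB of usable capacity in a single file system, well under the 16 TB limit.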

Use this procedure to build or mount the MPFS file system on the Celerra Network Server:

1. Log in to the Celerra Network Server Control Station as NAS administrator.

2. Before building the file system, type the nas_disk command to return a list of unused disks by using this command syntax:

$ nas_disk -list |grep n | more

For example, type:

$ nas_disk -list |grep n | more


The output shows all disks not in use:

id  inuse  sizeMB  storageID-devID       type   name  servers
7   n      466747  APM00065101342-0010   CLSTD  d7    1,2
8   n      466747  APM00065101342-0011   CLSTD  d8    1,2
9   n      549623  APM00065101342-0012   CLSTD  d9    1,2
10  n      549623  APM00065101342-0014   CLSTD  d10   1,2
11  n      549623  APM00065101342-0016   CLSTD  d11   1,2
12  n      549623  APM00065101342-0018   CLSTD  d12   1,2
13  n      549623  APM00065101342-0013   CLSTD  d13   1,2
14  n      549623  APM00065101342-0015   CLSTD  d14   1,2
15  n      549623  APM00065101342-0017   CLSTD  d15   1,2
16  n      549623  APM00065101342-0019   CLSTD  d16   1,2
17  n      549623  APM00065101342-001A   CLSTD  d17   1,2
18  n      549623  APM00065101342-001B   CLSTD  d18   1,2
19  n      549623  APM00065101342-001C   CLSTD  d19   1,2
20  n      549623  APM00065101342-001E   CLSTD  d20   1,2
21  n      549623  APM00065101342-0020   CLSTD  d21   1,2
22  n      549623  APM00065101342-001D   CLSTD  d22   1,2
23  n      549623  APM00065101342-001F   CLSTD  d23   1,2
24  n      549623  APM00065101342-0021   CLSTD  d24   1,2
25  n      549623  APM00065101342-0022   CLSTD  d25   1,2
26  n      549623  APM00065101342-0024   CLSTD  d26   1,2
27  n      549623  APM00065101342-0026   CLSTD  d27   1,2
28  n      549623  APM00065101342-0023   CLSTD  d28   1,2
29  n      549623  APM00065101342-0025   CLSTD  d29   1,2
30  n      549623  APM00065101342-0027   CLSTD  d30   1,2


3. Display all disks by using this command syntax:

$ nas_disk -list

For example, type:

$ nas_disk -list

Output:

In this output, the disks selected for the first stripe alternate SP ownership (A,B,A,B,A,B) and the disks selected for the second stripe alternate SP ownership (B,A,B,A,B,A). The two different stripes (A, B, A) and (B, A, B) are built from the same RAID groups (X, Y, and Z).

id  inuse  sizeMB  storageID-devID       type   name       servers
1   y      11263   APM00065101342-0000   CLSTD  root_disk  1,2
2   y      11263   APM00065101342-0001   CLSTD  root_disk  1,2
3   y      2047    APM00065101342-0002   CLSTD  d3         1,2
4   y      2047    APM00065101342-0003   CLSTD  d4         1,2
5   y      2047    APM00065101342-0004   CLSTD  d5         1,2
6   y      2047    APM00065101342-0005   CLSTD  d6         1,2
7   n      466747  APM00065101342-0010   CLSTD  d7         1,2
8   n      466747  APM00065101342-0011   CLSTD  d8         1,2
9   n      549623  APM00065101342-0012   CLSTD  d9         1,2
10  n      549623  APM00065101342-0014   CLSTD  d10        1,2
11  n      549623  APM00065101342-0016   CLSTD  d11        1,2
12  n      549623  APM00065101342-0018   CLSTD  d12        1,2
13  n      549623  APM00065101342-0013   CLSTD  d13        1,2
14  n      549623  APM00065101342-0015   CLSTD  d14        1,2
15  n      549623  APM00065101342-0017   CLSTD  d15        1,2
16  n      549623  APM00065101342-0019   CLSTD  d16        1,2
17  n      549623  APM00065101342-001A   CLSTD  d17        1,2
18  n      549623  APM00065101342-001B   CLSTD  d18        1,2
19  n      549623  APM00065101342-001C   CLSTD  d19        1,2
20  n      549623  APM00065101342-001E   CLSTD  d20        1,2
21  n      549623  APM00065101342-0020   CLSTD  d21        1,2
22  n      549623  APM00065101342-001D   CLSTD  d22        1,2
23  n      549623  APM00065101342-001F   CLSTD  d23        1,2
24  n      549623  APM00065101342-0021   CLSTD  d24        1,2
25  n      549623  APM00065101342-0022   CLSTD  d25        1,2
26  n      549623  APM00065101342-0024   CLSTD  d26        1,2
27  n      549623  APM00065101342-0026   CLSTD  d27        1,2
28  n      549623  APM00065101342-0023   CLSTD  d28        1,2
29  n      549623  APM00065101342-0025   CLSTD  d29        1,2
30  n      549623  APM00065101342-0027   CLSTD  d30        1,2


Note: Use Navicli or EMC Navisphere® Manager to determine which LUNs are on SP A and SP B.

4. Find the names of file systems mounted on all servers by using this command syntax:

$ server_df ALL

For example, type:

$ server_df ALL

Output:

server_2 :
Filesystem          kbytes       used       avail        capacity  Mounted on
S2_Shgvdm_FS1       831372216    5653008    825719208    1%        /root_vdm_5/S2_Shgvdm_FS1
root_fs_vdm_vdm01   114592       7992       106600       7%        /root_vdm_5/.etc
S2_Shg_FS2          831372216    19175496   812196720    2%        /S2_Shg_mnt2
S2_Shg_FS1          1662746472   25312984   1637433488   2%        /S2_Shg_mnt1
root_fs_common      15368        5280       10088        34%       /.etc_common
root_fs_2           258128       80496      177632       31%       /

server_3 :
Filesystem          kbytes       used       avail        capacity  Mounted on
root_fs_vdm_vdm02   114592       7992       106600       7%        /root_vdm_6/.etc
S3_Shgvdm_FS1       831372216    4304736    827067480    1%        /root_vdm_6/S3_Shgvdm_FS1
S3_Shg_FS1          831373240    11675136   819698104    1%        /S3_Shg_mnt1
S3_Shg_FS2          831373240    4204960    827168280    1%        /S3_Shg_mnt2
root_fs_common      15368        5280       10088        34%       /.etc_common
root_fs_3           258128       8400       249728       3%        /

vdm01 :
Filesystem          kbytes       used       avail        capacity  Mounted on
S2_Shgvdm_FS1       831372216    5653008    825719208    1%        /S2_Shgvdm_FS1

vdm02 :
Filesystem          kbytes       used       avail        capacity  Mounted on
S3_Shgvdm_FS1       831372216    4304736    827067480    1%        /S3_Shgvdm_FS1

5. Find the names of file systems mounted on a specific server by using this command syntax:

$ server_df <server_name>

where:
<server_name> = name of the Data Mover or VDM


For example, type:

$ server_df vdm02

Output:

vdm02 :
Filesystem          kbytes       used       avail        capacity  Mounted on
S3_Shgvdm_FS1       831372216    4304736    827067480    1%        /S3_Shgvdm_FS1

6. Find the names of existing file systems that are not mounted by using this command syntax:

$ nas_fs -list

For example, type:

$ nas_fs -list

Output:

id    inuse  type  acl  volume  name                server
1     n      1     0    10      root_fs_1
2     y      1     0    12      root_fs_2           2
3     n      1     0    14      root_fs_3
4     n      1     0    16      root_fs_4
5     n      1     0    18      root_fs_5
6     n      1     0    20      root_fs_6
7     n      1     0    22      root_fs_7
8     n      1     0    24      root_fs_8
9     n      1     0    26      root_fs_9
10    n      1     0    28      root_fs_10
11    n      1     0    30      root_fs_11
12    n      1     0    32      root_fs_12
13    n      1     0    34      root_fs_13
14    n      1     0    36      root_fs_14
15    n      1     0    38      root_fs_15
16    y      1     0    40      root_fs_common      2
17    n      5     0    73      root_fs_ufslog
18    n      5     0    76      root_panic_reserve
19    n      5     0    77      root_fs_d3
20    n      5     0    78      root_fs_d4
21    n      5     0    79      root_fs_d5
22    n      5     0    80      root_fs_d6
25    y      1     0    116     S2_Shg_FS2          2
221   y      1     0    112     S2_Shg_FS1          2
222   n      1     0    1536    S3_Shg_FS1
223   n      1     0    1537    S3_Shg_FS2
384   y      1     0    3026    testdoc_fs2         2


7. Find the names of volumes already mounted by using this command syntax:

$ nas_volume -list

For example, type:

$ nas_volume -list

Part of the output:

id    inuse  type  acl  name            cltype  clid
1     y      4     0    root_disk       0       1-34,52
2     y      4     0    root_ldisk      0       35-51
3     y      4     0    d3              1       77
4     y      4     0    d4              1       78
5     y      4     0    d5              1       79
6     y      4     0    d6              1       80
7     n      1     0    root_dos        0
8     n      1     0    root_layout     0
9     y      1     0    root_slice_1    1       10
10    y      3     0    root_volume_1   2       1
11    y      1     0    root_slice_2    1       12
12    y      3     0    root_volume_2   2       2
13    y      1     0    root_slice_3    1       14
14    y      3     0    root_volume_3   2       3
15    y      1     0    root_slice_4    1       16
16    y      3     0    root_volume_4   2       4
.     .      .     .    .               .       .
.     .      .     .    .               .       .
.     .      .     .    .               .       .
1518  y      3     0    Meta_S2vdm_FS1  2       229
1527  y      3     0    Meta_S2_FS1     2       235

8. Create the first stripe by using this command syntax:

$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of new stripe pair
<stripe_size> = size of the stripe
<volume_set> = set of disks

For example, to create a stripe pair named s2_stripe1 and a depth of 262144 bytes (256 KB) by using disks d9, d14, d11, d16, d17, and d22, type:

$ nas_volume -name s2_stripe1 -create -Stripe 262144 d9,d14,d11,d16,d17,d22


Output:

id          = 135
name        = s2_stripe1
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d9,d14,d11,d16,d17,d22
disks       = d9,d14,d11,d16,d17,d22

Note: For best MPFS performance, in most cases, configure your Celerra volumes by using a volume stripe size of 256 KB. Detailed performance-related information is available in the EMC Celerra MPFS over iSCSI Applied Best Practices Guide.

9. Create the second stripe by using this command syntax:

$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of new stripe pair
<stripe_size> = size of the stripe
<volume_set> = set of disks

For example, to create a stripe pair named s2_stripe2 and a depth of 262144 bytes (256 KB) by using disks d13, d10, d15, d12, d18, and d19, type:

$ nas_volume -name s2_stripe2 -create -Stripe 262144 d13,d10,d15,d12,d18,d19

Output:

id          = 136
name        = s2_stripe2
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d13,d10,d15,d12,d18,d19
disks       = d13,d10,d15,d12,d18,d19


10. Create the metavolume by using this command syntax:

$ nas_volume -name <name> -create -Meta <volume_name>

where:
<name> = name of the new meta volume
<volume_name> = names of the volumes

For example, to create a meta volume s2_meta1 with volumes s2_stripe1 and s2_stripe2, type:

$ nas_volume -name s2_meta1 -create -Meta s2_stripe1, s2_stripe2

Output:

id          = 137
name        = s2_meta1
acl         = 0
in_use      = False
type        = meta
volume_set  = s2_stripe1, s2_stripe2
disks       = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

11. Create the file system by using this command syntax:

$ nas_fs -name <name> -create <volume_name>

where:
<name> = name of the new file system
<volume_name> = name of the meta volume

For example, to create a file system s2fs1 with a meta volume s2_meta1, type:

$ nas_fs -name s2fs1 -create s2_meta1

Output:

id                      = 33
name                    = s2fs1
acl                     = 0
in_use                  = False
type                    = uxfs
worm                    = compliance
worm_clock              = Thu Mar 6 16:26:09 EST 2008
worm Max Retention Date = Fri April 18 12:30:40 EST 2008
volume                  = s2_meta1
pool                    =
rw_servers              =
ro_servers              =
rw_vdms                 =


ro_vdms                 =
auto_ext                = no, virtual_provision=no
stor_devs               = APM00065101342-0012,APM00065101342-0015,APM00065101342-0016,APM00065101342-0019,APM00065101342-001A,APM00065101342-001D,APM00065101342-0013,APM00065101342-0014,APM00065101342-0017,APM00065101342-0018,APM00065101342-001B,APM00065101342-001C
disks                   = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

12. Create the mount point by using this command syntax:

$ server_mountpoint <movername> -create <pathname>

where:
<movername> = name of the Data Mover
<pathname> = path of the new mount point

For example, to create a mount point on Data Mover server_2 with a path of /s2fs1, type:

$ server_mountpoint server_2 -create /s2fs1

Output:
server_2 : done

13. Mount the file system by using this command syntax:

$ server_mount <movername> <fs_name> <mount_point>

where:
<movername> = name of the Data Mover
<fs_name> = name of the file system to mount
<mount_point> = name of the mount point

For example, to mount a file system on Data Mover server_2 with file system s2fs1 and mount point /s2fs1, type:

$ server_mount server_2 s2fs1 /s2fs1

Output:
server_2 : done


14. Export the file system by using this command syntax:

$ server_export <mover_name> -Protocol nfs -name <name> -option <options> <pathname>

where:
<mover_name> = name of the Data Mover
<name> = name of the alias for the <pathname>
<options> = options to include
<pathname> = path of the mount point created

For example, to export a file system on Data Mover server_2 with a path name alias of ufs1 and mount point path /ufs1, type:

$ server_export server_2 -P nfs -name ufs1 /ufs1

Output:
server_2 : done

Related documentation

The following documents contain more information on building the MPFS file system and are available on the Powerlink website:

◆ Celerra Network Server Command Reference Manual

◆ Configuring and Managing Celerra Networking

◆ Managing Celerra Volumes and File Systems Manually

◆ Using MPFS on Celerra


Enabling MPFS for the Celerra Network Server

Start MPFS on the Celerra Network Server by using this command syntax:

$ server_setup <movername> -Protocol mpfs -option <options>

where:
<movername> = name of the Data Mover
<options> = options to include

For example, to start MPFS on Data Mover server_2, type:

$ server_setup server_2 -Protocol mpfs -option start

Output:
server_2 : done

Note: Start MPFS on the same Data Mover on which the file system was exported by using NFS.


Configuring the CLARiiON by using CLI commands

This section presents an overview of configuring the CLARiiON CX3 or CX4 storage array ports for Celerra gateway configurations. Use site-specific parameters for these steps.

The configuration of CLARiiON CX3 or CX4 array ports for a Celerra gateway is done by using CLI commands.

Best practices for CLARiiON and Celerra Gateway configurations

To simplify the configuration and management of the Linux server, EMC recommends that the discovery addresses (IP addresses) and enabled targets for each Linux server be configured so that all the iSCSI target ports on the storage array are equally balanced to achieve maximum performance and availability. Balancing the load across all ports enables speeds up to 4 x 1 Gb/s per storage processor. If one of the iSCSI target ports fails, the other three will remain operational, so one-fourth of the Linux servers will fail over to the native NFS or CIFS protocol, but three-fourths of the Linux servers will continue operating at the higher speeds attainable through iSCSI.

CLARiiON discovery sessions reveal paths to all four iSCSI ports on each storage processor. The ports are described to the iSCSI initiators as individual targets. Each of these connections creates another session. The maximum number of initiator sessions or hosts per storage processor is 128 for the CLARiiON CX3, 256 for the CLARiiON CX4-120, 512 for the CLARiiON CX4-240, 1024 for the CLARiiON CX4-480, and 4096 for the CLARiiON CX4-960. Without disabling other iSCSI targets, a CLARiiON cannot support more than 32 iSCSI initiators. To increase the number of achievable Linux servers for a Celerra gateway, disable access on each Linux server to as many as three out of four iSCSI targets per storage processor. Ensure that the enabled iSCSI targets (CLARiiON iSCSI ports) match the storage group definition.

For Celerra gateway configurations, Access Logix LUN masking by using iSCSI is used to present all Celerra managed LUNs to the Linux servers. The non-Celerra LUNs are protected from the iSCSI initiators. A separate storage group is created for MPFS initiators and all Celerra LUNs that are not Celerra Control LUNs are added to this group. At least one port from each SP should be enabled for each Linux server in this group.


In a gateway environment, iSCSI initiator names are used in providing the path in the storage group for the Linux server to access the iSCSI targets. Unique, known iSCSI names are required when using Access Logix software.

Configuring the SAN and storage

This section describes how to configure the SAN switch along with specific configuration information for Symmetrix and CLARiiON storage arrays.

Installing the Fibre Channel switch (FC configuration)

To set up the Fibre Channel switch, complete these tasks:

1. Install the Fibre Channel switch (if not already installed).

2. Connect cables from each HBA Fibre Channel port to a switch port.

3. Verify the HBA connection to the switch by checking the LEDs for the switch port connected to the HBA port.

Note: Configure zoning as single initiator, meaning that each HBA port will have its own zone in which it is the only HBA port.

Zoning the SAN switch (FC configuration)

This section presents an overview of configuring and zoning a Fibre Channel switch:

1. Record all attached port WWNs.

2. Log in to the iSCSI-to-Fibre Channel bridge or MDS Fibre Channel switch console by using CLI commands or the Fabric Manager.

3. Create a zone for each Fibre Channel HBA port and its associated Fibre Channel Target.

Related documentation

The documentation that ships with the Fibre Channel switch contains additional information on installing or zoning.

Note: Configure the CLARiiON so that each target is zoned to an SP A and SP B port. Configure the Symmetrix so that it is zoned to a single Fibre Channel Director (FA).


Configuring the iSCSI-to-Fibre Channel bridge (iSCSI configuration)

This section presents an overview of configuring and zoning the iSCSI-to-Fibre Channel bridge for an iSCSI configuration. Use site-specific parameters for these steps.

When configuring the iSCSI-to-Fibre Channel bridge for an iSCSI configuration:

◆ The EMC Celerra MPFS over iSCSI configuration does not use the EMC Celerra iSCSI target feature. It relies on the iSCSI initiator and an IP-SAN compatible switch as the iSCSI target to bridge traffic to Fibre Channel. Use “Verifying the IP-SAN switch requirements” on page 50 to identify supported switch models.

◆ The MDS management port must be connected to a different LAN from the iSCSI ports. When properly configured, the iSCSI ports should not be able to ping the MDS management port.

◆ Configure zoning as a single initiator. Each zone entity will contain one initiator and one target HBA. Each server will also have its own zone.

The documents listed in “Related documentation” on page 65 provide additional information.

Enabling iSCSI on the MDS console (iSCSI configuration)

To enable iSCSI on the MDS console:

1. Enable iSCSI and add the iSCSI interface to the VSAN:

switch # config t
switch(config)# iscsi enable
switch(config)# iscsi interface vsan-membership

2. Import the Fibre Channel targets so the MDS console can communicate with the iSCSI initiator and set the authentication:

switch(config)# iscsi import target fc
switch(config)# iscsi authentication none


Configuring the MDS iSCSI port for Gigabit Ethernet (iSCSI configuration)

Configure the MDS iSCSI port as a normal Gigabit Ethernet port:

switch(config)# interface GigabitEthernet2/1
switch(config-if)# ip address <port ip address> <subnet mask>
switch(config-if)# iscsi authentication none
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip routing
switch(config)#

Note: The iSCSI-to-Fibre Channel bridge can also be configured and managed through the Fabric Manager GUI.

The Celerra MPFS over iSCSI Applied Best Practices Guide describes best practices for configuring the MDS iSCSI port for Gigabit Ethernet and can be found on the Powerlink website.

Configuring the iSCSI port proxy and TCP parameters (iSCSI configuration)

Configure the iSCSI-to-Fibre Channel bridge to act as a proxy initiator so that the CLARiiON storage array does not need to be configured with the World Wide Names (WWNs) for all the initiators:

1. Set the iSCSI-to-Fibre Channel bridge to act as a proxy initiator:

switch(config)# interface iscsi2/1
switch(config-if)# switchport proxy-initiator
switch(config-if)#

2. Set the TCP parameters:

switch(config-if)# tcp send-buffer-size 16384
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# exit
switch #

3. Retrieve the system-assigned WWNs:

switch # show interface iscsi2/1


Output:

Hardware is GigabitEthernet
Port WWN is 21:d9:00:0d:ec:01:5f:40
Admin port mode is ISCSI
Port vsan is 2
iSCSI initiator is identified by name
Number of iSCSI session: 0 (discovery session: 0)
Number of TCP connection: 0
Configured TCP parameters
  Local Port is 3260
  PMTU discover is enabled, reset timeout is 3600 sec
  Keepalive-timeout is 60 sec
  Minimum-retransmit-time is 300 ms
  Max-retransmissions 4
  Sack is enabled
  QOS code point is 0
  Maximum allowed bandwidth is 1000000 kbps
  Minimum available bandwidth is 70000 kbps
  Estimated round trip time is 1000 usec
  Send buffer size is 16384 KB
  Congestion window monitoring is enabled, burst size is 50 KB
  Configured maximum jitter is 500 us
Forwarding mode: store-and-forward
TMF Queueing Mode: enabled
Proxy Initiator Mode: enabled
  nWWN is 20:10:00:0d:ec:01:5f:42 (system-assigned)
  pWWN is 20:11:00:0d:ec:01:5f:42 (system-assigned)
5 minutes input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
5 minutes output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
iSCSI statistics
  Input 0 packets, 0 bytes
    Command 0 pdus, Data-out 0 pdus, 0 bytes, Unsolicited 0 bytes
  Output 0 packets, 0 bytes
    Response 0 pdus (with sense 0), R2T 0 pdus
    Data-in 0 pdus, 0 bytes


Creating and configuring a new zone for iSCSI proxy (iSCSI configuration)

Before creating a new zone for the iSCSI proxy, check the FC ports connected to SP A and SP B of the storage array. In the example, SP A is connected to port FC4/2 and SP B is connected to port FC4/4.

In principle, two ports may be zoned with each proxy. However, the throughput of iSCSI will still be 1 Gb. One port must be connected to each SP for high availability.

Using Navicli or Navisphere Manager, obtain the pWWN for SP A and SP B. Create the new zone name by using the name of the iSCSI port. PROXY2-1 is used in the example:

switch # config t
switch(config)# zone name PROXY2-1 vsan 2
switch(config-zone)# member pwwn 50:06:01:60:10:60:20:35
switch(config-zone)# member pwwn 50:06:01:60:10:60:20:3e
switch(config-zone)# member pwwn 20:11:00:0d:ec:01:5f:42
switch(config-zone)# exit
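The pWWNs shown above are examples. One way to list the storage processor port WWNs from the Control Station is the navicli port listing used later in this chapter; the address below is a placeholder:

$ /nas/sbin/navicli -h 123.45.67.890 port -list -all |grep "SP UID:"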

Activating the zone (iSCSI configuration)

After creating the zone, it must be activated for the switch to know it exists.

To include and activate the newly created zone into the active zones, set:

switch(config)# zoneset name PROXIES vsan 2
switch(config-zoneset)# member PROXY2-1
switch(config-zoneset)# exit
switch(config)# zoneset activate name PROXIES vsan 2

Note: The vendor’s documentation contains specific information on these switches.

Enabling SP A and SP B FC interfaces (FC configuration)

Enable the Fibre Channel ports for the storage processors. Use the FC interface data collected in “Configuration planning checklist” on page 41. For example, type:

switch(config)# interface fc4/2
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# interface fc4/4
switch(config-if)# no shutdown
switch(config-if)# exit


Adding a VSAN for iSCSI/FC and enabling zones

Use the following example to add the iSCSI blade in slot 2, port 1 and Fibre Channel (FC) blade in slot 4, port 2 and slot 4, port 4 to MDS VSAN 2. Then, enable the default zone to permit VSANs 1-4093:

1. Add a VSAN for iSCSI and Fibre Channel to the iSCSI-to-Fibre Channel bridge:

switch(config)# vsan database
switch(config-vsan-db)# vsan 2
switch(config-vsan-db)# vsan 2 interface iscsi2/1
switch(config-vsan-db)# vsan 2 interface fc4/4
switch(config-vsan-db)# vsan 2 interface fc4/2
switch(config-vsan-db)# exit

2. Enable zones on the iSCSI-to-Fibre Channel bridge:

switch(config)# zone default-zone permit vsan 1-4093

3. Log off the iSCSI-to-Fibre Channel bridge or MDS Fibre Channel switch console:

switch(config)# exit

Creating a security file on the Celerra Network Server

A storage system will not accept a Secure CLI command unless the user who issues the command has a valid user account on the storage system. A Navisphere 6.X security file can be configured on the server to issue Secure CLI commands. Secure CLI commands require user credentials (or a password prompt) on each command line; they are not needed on the command line if a security file is created.

To create a security file:

1. Log in to the Celerra Network Server Control Station as NAS administrator.

2. Create a security file by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -AddUserSecurity -scope 0 -user nasadmin -password nasadmin

where:<hostname:IP address> = name of the Celerra Network Server or IP address of the CLARiiON storage array

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 -AddUserSecurity -scope 0 -user nasadmin -password nasadmin


Output:

This command produces no system response. When the command has finished executing, only the command line prompt is returned.

3. Check that the security file was created correctly by using the following command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> getagent

where:<hostname:IP address> = name of the Celerra Network Server or IP address of the CLARiiON storage array

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 getagent

Output:

Agent Rev:        6.28.20 (1.40)
Name:             K10
Desc:
Node:             A-APM00065101342
Physical Node:    K10
Signature:        2215975
Peer Signature:   2220127
Revision:         04.28.000.5.706
SCSI Id:          0
Model:            CX4-480
Model Type:       Rackmount
Prom Rev:         4.00.00
SP Memory:        8160
Serial No.        APM00065101342
SP Identifier:    A
Cabinet:          SPE5

If the security file was not created correctly or cannot be found, an error message is displayed:

Security file not found. Already removed or check -secfilepath option.

4. If an error message was displayed, repeat step 2 and step 3 to create the security file.


CLARiiON iSCSI port configuration

This section describes how to set up the CLARiiON storage array for an iSCSI configuration:

Note: The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.
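For example, entries of the following form might be added to /etc/hosts on the Control Station; the names and addresses are illustrative only:

192.168.30.10   cx4_array1_spa
192.168.30.11   cx4_array1_spb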

1. Configure iSCSI target hostname SP A and port IP address 0 on the storage array by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

where:
<hostname:IP address> = name of the Celerra Network Server or IP address of the CLARiiON storage array.
<port IP address> = IP address of a named logical element mapped to a port on a Data Mover. Each interface assigns an IP address to the port.
<subnet mask> = 32-bit address mask used in IP to identify the bits of an IP address used for the subnet address.
<gateway IP address> = IP address of the gateway machine through which network traffic is routed.

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 connection -setport -sp a -portid 0 -address 123.456.789.1 -subnetmask 123.456.789.0 -gateway 123.456.789.2

Output:

It is recommended that you consult with your Network Manager to determine the correct settings before applying these changes. Changing the port properties may disrupt iSCSI traffic to all ports on this SP. Initiator configuration changes may be necessary to regain connections. Do you really want to perform this action (y/n)? y

SP:               A
Port ID:          0
Port WWN:         iqn.1992-04.com.emc:cx.apm00065101342.a0
iSCSI Alias:      2147.a0
IP Address:       123.45.67.890


Subnet Mask:               123.456.789.0
Gateway Address:           123.456.789.2
Initiator Authentication:  false

Note: If the iSCSI target is not configured (by replying with n), the command line prompt is returned.

2. Continue for SP A ports 1–3 and SP B ports 0–3 by using the following command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

The outputs for SP A ports 1–3 and SP B ports 0–3 are the same as SP A port 0 with specific port information for each port.


Access Logix configuration

This section describes how to set up an Access Logix configuration, create storage groups, add LUNs, and set failovermode and the arraycommpath.

Setting failovermode and the arraycommpath

The naviseccli failovermode command enables or disables the type of trespass needed for the failover software. This method of setting failovermode works for storage systems with Access Logix only.

The naviseccli arraycommpath command enables or disables a communication path from the Celerra Network Server to the CLARiiON storage system. This command is needed to configure a storage system when LUN 0 is not configured. This method of setting arraycommpath works for storage systems with Access Logix only.

CAUTION!
Changing the failovermode setting may force the storage system to reboot. Changing the failovermode to the wrong value will make the storage group inaccessible to any connected server.

Note: It is suggested that failovermode and arraycommpath both be set to 1 for MPFS. If EMC PowerPath is enabled, failovermode must be set to 1.

Use this procedure to set failovermode and arraycommpath settings:

1. Set failovermode to 1 (unified storage only) by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode 1

where:<hostname:IP address> = name of the Celerra Network Server or IP address of the CLARiiON storage array

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 -scope 0 -user nasadmin -password nasadmin failovermode 1

Output:

WARNING: Previous Failovermode setting will be lost!
DO YOU WISH TO CONTINUE (y/n)? y


Note: Setting or not setting failovermode produces no system response. When the command has finished executing, only the command line prompt is returned.

2. Set arraycommpath to 1 (unified storage only) by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath 1

where:<hostname:IP address> = name of the Celerra Network Server or IP address of the CLARiiON storage array

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 -scope 0 -user nasadmin -password nasadmin arraycommpath 1

Output:

WARNING: Previous arraycommpath setting will be lost!
DO YOU WISH TO CONTINUE (y/n)? y

Note: Setting or not setting arraycommpath produces no system response. When the command has finished executing, only the command line prompt is returned.

3. Check the failovermode setting (unified storage only) by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode

For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 -scope 0 -user nasadmin -password nasadmin failovermode

Output:

Current failovermode setting is: 1

4. Check the arraycommpath setting (unified storage only) by using the following naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath


For example, type:

$ /nas/sbin/naviseccli -h 123.45.67.890 -scope 0 -user nasadmin -password nasadmin arraycommpath

Output:

Current arraycommpath setting is: 1

To discover the current settings of failovermode or the arraycommpath, also use the port -list -failovermode or port -list -arraycommpath commands.

Note: The outputs of these commands provide more detail than just the failovermode and arraycommpath settings and may be multiple pages in length.
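For example, an invocation along the following lines (using the credential options shown earlier in this section and a placeholder address) displays the per-initiator failovermode settings:

$ /nas/sbin/naviseccli -h 123.45.67.890 -scope 0 -user nasadmin -password nasadmin port -list -failovermode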

Creating storage groups and adding LUNs

This section describes how to create storage groups, add LUNs to the storage groups, and configure the storage groups.

The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file:

Note: Specify the hostname as the name of the Celerra Network Server, for example Server_2.

1. Create a storage group by using the following navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -create -gname MPFS_Clients

where:<hostname:IP address> = name or IP address of the Celerra Network Server

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -create -gname MPFS_Clients

Output:

This command produces no system response. When the command has finished executing, only the command line prompt is returned.


2. Add LUNs to the storage group by using the following navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

where:<hostname:IP address> = name or IP address of the Celerra Network Server

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

Output:

This command produces no system response. When the command has finished executing, only the command line prompt is returned.

3. Continue adding the remaining LUNs to the storage group, incrementing the host LUN number (-hlu) for each LUN:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

where:<hostname:IP address> = name or IP address of the Celerra Network Server

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

Output:

This command produces no system response. When the command has finished executing, only the command line prompt is returned.
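When many LUNs must be added, a small shell loop run on the Control Station can reduce repetition. The following is a sketch only: it assumes ALUs 16 through 25 are to be presented with incrementing host LUN numbers, and the array address, storage group name, and LUN ranges must be adjusted for the site:

# Sketch: add ALUs 16-25 to the MPFS_Clients storage group,
# assigning host LUN numbers 0-9 in order.
ARRAY=123.45.67.890
HLU=0
for ALU in $(seq 16 25); do
    /nas/sbin/navicli -h $ARRAY storagegroup -addhlu \
        -gname MPFS_Clients -hlu $HLU -alu $ALU
    HLU=$(expr $HLU + 1)
done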


Configuring and accessing storage

This section describes how to install the Fibre Channel driver, add hosts to storage groups, configure the iSCSI driver, and add initiators to the storage group.

The arraycommpath and failovermode settings are used to see both active and passive paths concurrently. For a LUN failover, LUNs can be presented from active to passive path or passive to active path. Use the arraycommpath and failovermode settings as described in Table 4 on page 81.

Any MPFS server or MDS port that is connected and logged in to a storage group should have the arraycommpath and failovermode set to 1. For any Celerra port connected to a storage group, these settings should be 0. These settings are on an individual server/port basis and override the global settings on the storage array default of 0.

When using the MDS in proxy-initiator mode, the storage group should contain the WWN of the MDS proxy-initiator ports, not the Linux servers.

When using the CLARiiON CX3 or CX4 storage array in an iSCSI gateway configuration, the iSCSI initiator name, or IQN, is used to define the server, not a WWN.

Installing the Fibre Channel driver (FC configuration)

Install the Fibre Channel driver on the Linux server. The latest driver and qualification information is available on the Fibre Channel manufacturer’s website, the EMC E-Lab Interoperability Navigator, or the documentation provided with the Fibre Channel driver.

Table 4  Arraycommpath and failovermode settings for storage groups

                                                       Celerra ports   Linux FC server/
                                                                       MDS iSCSI proxy-initiators
Gateway units with Access Logix        arraycommpath   0               1
                                       failovermode    0               1
Unified storage, MPFS capable units    arraycommpath   0               1
                                       failovermode    0               1


Adding hosts to the storage group (FC configuration)

Use the following steps to view hosts in the storage group and add hosts to the storage group for SP A and SP B:

1. List the hosts in the storage group by using the following navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

where:<hostname:IP address> = name or IP address of the Celerra Network Server

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 port -list |grep "HBA UID:"

Output:

HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
HBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3A
HBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35
HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Add hosts to the storage group by using the following navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:

<hostname:IP address> = name or IP address of the Celerra Network Server

<gname> = storage group name
<hbauid> = WWN of proxy initiator
<sp> = storage processor
<spport> = port on SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the storage system (1 = enable, 0 = disable)


Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add hosts to storage group A:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Note: The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.

Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

4. Add hosts to storage group B:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp b -spport 0 -failovermode 1 -arraycommpath 1


Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.

Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

Configuring the iSCSI driver for RHEL 4 (iSCSI configuration)

After installing the Linux server for iSCSI configurations, follow these steps to configure the iSCSI driver for RHEL 4 on the Linux server:

1. Edit the /etc/iscsi.conf file on the Linux server.

2. Edit the /etc/initiatorname.iscsi file on the Linux server.

3. Start the iSCSI service daemon.

Note: “Configuring the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 (iSCSI configuration)” on page 88 provides information about RHEL 5, SLES 10, and CentOS 5.

Edit the iscsi.conf file

Using vi, or another standard text editor that does not add carriage return characters, edit the /etc/iscsi.conf file. Modify the file so that the iSCSI parameters shown in Table 5 on page 85 have comments removed and have the required values as listed in this table.

Global parameters should be listed before the DiscoveryAddress, should start in column 1, and should not have any white space in front of them. The DiscoveryAddresses must also start in column 1 and not have any whitespace in front of them.


DiscoveryAddresses should appear after all global parameters. Be sure to read the iscsi.conf man page carefully.

Note: The Discovery Address is the IP address of the IP-SAN iSCSI LAN port. This address is an example of using an internal IP. The actual switch IP address will be different.

Table 5 iSCSI parameters for RHEL 4 using 2.6 kernels

iSCSI parameter Required value

HeaderDigest Never

DataDigest Never

ConnFailTimeout 45

InitialR2T Yes

PingTimeout 45

ImmediateData No

DiscoveryAddress IP address of the iSCSI LAN port on IP-SAN switch
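For reference, a minimal sketch of the global section of /etc/iscsi.conf with the values from Table 5 applied (the discovery address shown is the internal example address used elsewhere in this guide; substitute the iSCSI LAN port address of your IP-SAN switch):

HeaderDigest=Never
DataDigest=Never
ConnFailTimeout=45
InitialR2T=Yes
PingTimeout=45
ImmediateData=No
DiscoveryAddress=45.246.0.41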


For the CLARiiON CX3 or CX4 series storage array configurations, the target name is the IQN of the CLARiiON CX3 or CX4 series storage array ports. Run the following command from the Control Station to get the IQNs of the target ports:

$ /nas/sbin/navicli -h <hostname:IP address> port -list -all |grep "SP UID:"

SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:60:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:61:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:68:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:69:41:E0:05:9F
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a1
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b1

An example of iscsi.conf parameters for a gateway configuration follows (two discovery addresses are shown as there are two zones):

Enabled=no
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a3
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b3
DiscoveryAddress=45.246.0.41
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
    Continuous=no
    Enabled=yes
    LUNs=16,17,18,19,20,21,22,23,24,25
DiscoveryAddress=45.246.0.45
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
    Continuous=no
    Enabled=yes
    LUNs=16,17,18,19,20,21,22,23,24,25


Edit the initiatorname.iscsi file

The iSCSI-to-Fibre Channel bridge and the CLARiiON CX3 or CX4 series storage array automatically generate initiator names for the Linux server. However, initiator names must be unique, and automatically generated initiator names are not guaranteed to be unique.

iSCSI names are generalized by using a normalized character set (converted to lower case or equivalent), with no white space allowed, and very limited punctuation. For those using only ASCII characters (U+0000 to U+007F), the following characters are allowed:

◆ ASCII dash character ('-' = U+002d)

◆ ASCII dot character ('.' = U+002e)

◆ ASCII colon character (':' = U+003a)

◆ ASCII lower-case characters ('a'..'z' = U+0061..U+007a)

◆ ASCII digit characters ('0'..'9' = U+0030..U+0039)

In addition, any upper-case characters input by using a user interface MUST be mapped to their lower-case equivalents. RFC 3722, http://www.ietf.org/rfc/rfc3722.txt, provides more information.

Use the following steps to generate a unique initiatorname:

1. To view the current /etc/initiatorname.iscsi file:

$ more /etc/initiatorname.iscsi
GenerateName=yes

2. When using vi, or another standard text editor that does not add carriage return characters, edit the /etc/initiatorname.iscsi file and comment out the line containing GenerateName=yes.

Example of commented-out line:

# GenerateName=yes

Note: Do not exit the file until step 3 is completed.

3. Place the unique IQN name, iSCSI qualified name, in the /etc/initiatorname.iscsi file:

# GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:<xxxxxxx>

where:

<xxxxxxx> = Server name


In this example, use mpfsclient01 as the Linux server name:

# GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:mpfsclient01

4. Save and exit the editor.

Note: If nodes exist on the switch, issue the show iscsi initiator command to show the IQN name. Care must be taken to not use duplicate IQN names (InitiatorName).

Starting iSCSI

To start iSCSI, as root, type the following command:

$ /etc/init.d/iscsi start

Output:

Starting iSCSI: iscsi iscsid [ OK ]

Configuring the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 (iSCSI configuration)

After installing the Linux server for iSCSI configurations, follow the procedures below to configure the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 on the Linux server.

Installing and configuring RHEL 5, SLES 10, and CentOS 5

To install the Linux Open iSCSI software initiator, consult the README files available within your Linux distribution and the release notes from the distributor.
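As a point of reference only, and assuming the usual distribution package names, the software initiator is typically installed as follows:

• For RHEL 5.x and CentOS 5: yum install iscsi-initiator-utils
• For SLES 10.x: install the open-iscsi package (for example, yast -i open-iscsi)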

Note: Complete the following steps before continuing to the RHEL 5, SLES 10, and CentOS 5 installation subsections. The open-iSCSI persistent configuration is implemented as a DBM database available on all Linux installations.

The database contains two tables:

◆ Discovery table (discovery.db)

◆ Node table (node.db)

The iSCSI database files in RHEL 5, SLES 10, and CentOS 5 are located in /var/lib/open-iscsi/. For SLES 10 SP2 and SP1 they will be found in /etc/iscsi/. The following are MPFS recommendations to complete the installation. These recommendations are generic to all distributions unless noted otherwise.


Use the following steps to configure the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 on the Linux server:

1. Edit the /etc/iscsi/iscsid.conf file.

There are several variables within the file. The default file from the initial installation is configured to operate with the default settings. The syntax of the file uses a pound (#) symbol to comment out a line in the configuration file. You can enable a variable by deleting the pound (#) symbol preceding the variable in the iscsid.conf file. The entire set of variables with the default and optional settings is listed in each distribution’s README file and in the configuration file.

Table 6 on page 89 lists the recommended iSCSI parameter settings.

Table 6 iSCSI parameters for RHEL 5, SLES 10, and CentOS 5

Variable name: node.startup
  Default: manual; MPFS recommended: auto.

Variable name: node.session.iscsi.InitialR2T
  Default: No; MPFS recommended: Yes.

Variable name: node.session.iscsi.ImmediateData
  Default: Yes; MPFS recommended: No.

Variable name: node.session.timeo.replacement_timeout
  Default: 120; MPFS recommended: 60. With the use of multipathing software, you may want to decrease this time to 30 seconds for a faster failover. However, ensure that this timer is greater than the node.conn[0].timeo.noop_out_interval and node.conn[0].timeo.noop_out_timeout times combined.

Variable name: node.conn[0].timeo.noop_out_interval
  Default: 10; MPFS recommended: > 10 on a congested network. Ensure that this value does not exceed the value of node.session.timeo.replacement_timeout.

Variable name: node.conn[0].timeo.noop_out_timeout
  Default: 15; MPFS recommended: > 15 on a congested network. Ensure that this value does not exceed the value of node.session.timeo.replacement_timeout.

Variable name: node.conn[0].iscsi.MaxRecvDataSegmentLength
  Default: 131072; MPFS recommended: 262144 (per the best practice for previous versions).
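A sketch of the corresponding /etc/iscsi/iscsid.conf entries with the MPFS recommended values applied follows. Uncomment and edit the matching lines in the distribution's default file rather than adding duplicates; note that the configuration file spells the startup value out as automatic:

node.startup = automatic
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
node.session.timeo.replacement_timeout = 60
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144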


2. Set the run levels for the iSCSI daemon to automatically start at boot and to shut down when the Linux server is brought down:

• For RHEL 5.x and CentOS 5:

# chkconfig --level 345 iscsid on
# service iscsi start

For RHEL you will need to perform a series of eight iscsiadm commands to configure the targets you want to connect to with open-iSCSI. Consult the man pages for iscsiadm for a detailed explanation of the command and its syntax.

First, discover the targets to connect the server to by using iSCSI.

• For SLES 10.x:

# chkconfig -s open-iscsi 345
# chkconfig -s open-iscsi on
# /sbin/rcopen-iscsi start

You may use the YaST utility on SLES 10 to configure the iSCSI software initiator. It can be used to discover targets with the iSCSI SendTargets command, add targets to be connected to the server, and start and stop the iSCSI service. Open YaST and select Network Services > iSCSI Initiator, then open the Discovered Targets tab and type the IP address of the target:

• For CLARiiON storage arrays:

Specify one of the target IP addresses; the array returns all of its available iSCSI configured targets for you to select. After discovering the targets, click the Connected Targets tab to log in to the targets you want to connect to, and select those that should be logged in to automatically at boot time (a command line alternative is sketched after this list).

• For Symmetrix storage arrays:

Specify each individual target you want to discover; the array returns the specified iSCSI configured targets for you to select. After discovering the targets, click the Connected Targets tab to log in to the targets you want to connect to, and select those that should be logged in to automatically at boot time.
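Where YaST is not used, a similar result can usually be achieved from the command line. The following iscsiadm sketch marks an already discovered target for automatic login at boot; the target name and portal are examples taken from this guide and must be replaced with your own values:

# iscsiadm -m node -T iqn.1992-04.com.emc:cx.hk192201067.a1 -p 45.246.0.41:3260 --op update -n node.startup -v automatic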


Command examples To discover targets:

# iscsiadm -m(ode) discovery -t(ype) s(end)t(argets) -p(ortal) <port IP address>

Output:

<node.discovery_address>:3260,1 iqn.2007-06.com.test.cluster1:storage.cluster1

# iscsiadm -m discovery
<node.discovery_address>:3260 via sendtargets
<node.discovery_address>:3260 via sendtargets

# iscsiadm --mode node    (RHEL 5.0)
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016941e00f1c
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016141e00f1c

# iscsiadm --mode node --targetname iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c
node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c
node.tpgt = 13569
node.startup = automatic
iface.hwaddress = default
iface.iscsi_ifacename = default
iface.net_ifacename = default
iface.transport_name = tcp
node.discovery_address = 128.221.252.200
node.discovery_port = 3260
...

# iscsiadm --mode node    (SUSE 10.0)
[2f21ef] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
[2f071e] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e00f1c

# iscsiadm -m node -r 2f21ef
node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
node.transport_name = tcp
node.tpgt = 13569
node.active_conn = 1
node.startup = automatic
node.session.initial_cmdsn = 0
node.session.auth.authmethod = None


# iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --login

# iscsiadm -m session -i

# iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --logout

To log in to all targets:

# iscsiadm -m node -L all

Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.b1]

Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.a1]

Starting/stopping the iSCSI driver for RHEL 5, SLES 10, and CentOS 5 (iSCSI configuration)

Use the following commands to start and stop the Open-iSCSI driver.

To manually start the iSCSI driver for RHEL 5.x and CentOS 5:

# /etc/init.d/iscsid {start|stop|restart|status|condrestart}

To manually start the iSCSI driver for SLES 10.x:

# /sbin/rcopen-iscsi {start|stop|status|restart}

If there are problems loading the iSCSI kernel module, diagnostic information will be placed in /var/log/iscsi.log.

The open-iscsi driver is a sysfs class driver. Many of its attributes can be accessed under directories of the form:

/sys/class/iscsi_<host, session, connection>

The iscsiadm(8) man page provides information on all administrative functions used for configuration, statistics gathering, target discovery, and so on.

Note: Check that anything that has an iSCSI device open has closed the iSCSI device before shutting down iSCSI. This includes file systems, volume managers, and user applications. If iSCSI devices are open when you attempt to stop the driver, the scripts will error out instead of removing those devices. This prevents you from corrupting the data on iSCSI devices. In this case, iscsid will no longer be running, so if you want to continue using the iSCSI devices, issue /etc/init.d/iscsi start.


Limitations and workarounds

The following are limitations and workarounds:

◆ The Linux iSCSI driver, which is part of the Linux operating system, does not distinguish between NICs on the same subnet. Therefore, to achieve load balancing and multipath failover, each NIC on a Linux server connected to the storage system must be configured on a different subnet.

◆ The open-iSCSI daemon does not find targets automatically on boot when configured to log in at boot time. More information is provided in the Linux iSCSI Attach Release Notes.

Adding initiators to the storage group (FC configuration)

In a Fibre Channel configuration, the storage group should contain the HBA UID of the Linux servers.

The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Use the following steps to add initiators to the storage group for SP A and SP B:

1. Use the following navicli command to list hosts in the storage group:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 port -list |grep "HBA UID:"

Output:

HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3AHBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3AHBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Use the following navicli command to add initiators to the storage group by using this command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>


where:

<gname> = storage group name
<hbauid> = HBA UID of Linux servers
<sp> = storage processor
<spport> = port on SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the storage system (1 = enable, 0 = disable)

Note: Perform this command for each SP.

Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add initiators to the storage group for SP A:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


4. Add initiators to the storage group for SP B:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.
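To confirm that the initiators were added as expected, the storage group can be listed. The following is a sketch; the exact output depends on the storage array software release:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -list -gname MPFS_Clients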

Adding initiators to the storage group (iSCSI configuration)

When using the CLARiiON CX3 or CX4 series storage array in an iSCSI gateway configuration, the iSCSI initiator name, or IQN, is used to define the host, not a WWN.

The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Use the following steps to add initiators to the storage group for SP A and SP B:

1. Find the IQN used to define the host by using the following navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:" |grep iqn

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 port -list |grep "HBA UID:" |grep iqn


Output:

InitiatorName=iqn.1994-05.com.redhat:58c8b0919b31

2. Use the following navicli command to add initiators to the storage group by using this command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:

<gname> = storage group name
<hbauid> = iSCSI initiator name
<sp> = storage processor
<spport> = port on SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the storage system (1 = enable, 0 = disable)

Note: Perform this command for each iSCSI proxy-initiator.

Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add initiators to the storage group for SP A:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.redhat:58c8b0919b31 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

4. Add initiators to the storage group for SP B:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.redhat:58c8b0919b31 -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

Adding initiators to the storage group (iSCSI to FC bridge configuration)

When using the iSCSI-to-Fibre Channel bridge in proxy-initiator mode, the storage group should contain the HBA UID of the Linux servers.

The IP addresses of all storage arrays <hostname:IP address> can be found in the /etc/hosts file on the Celerra Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Use the following steps to add initiators to the storage group for SP A and SP B:


1. Use the following navicli command to list hosts in the storage group:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

For example, type:

$ /nas/sbin/navicli -h 123.45.67.890 port -list |grep "HBA UID:"

Output:

HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3AHBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3AHBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Use the following navicli command to add initiators to the storage group by using this command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:

<gname> = storage group name
<hbauid> = HBA UID of Linux servers
<sp> = storage processor
<spport> = port on SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the storage system (1 = enable, 0 = disable)

Note: Perform this command for each SP.

Examples of adding initiators to storage groups are shown in step 3 and step 4.


3. Add initiators to the storage group for SP A:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

4. Add initiators to the storage group for SP B:

$ /nas/sbin/navicli -h 123.45.67.890 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


Mounting the MPFS file system

A connection between the Linux server and the Celerra Network Server, known as a session, must be established before mounting the MPFS file system. Establish a session by mounting the MPFS file system on the Linux server by using the mount command.

Note: MPFS file systems can be added to the /etc/fstab file to mount the file system automatically after the server is rebooted or shut down.
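For example, a hypothetical /etc/fstab entry (the server name server1, export /fs1, and mount point /mnt/mpfs are placeholders) might look like the following; the mpfs_keep_nfs option is included so the mount falls back to NFS if the disks are unavailable at boot:

server1:/fs1   /mnt/mpfs   mpfs   mpfs_keep_nfs   0 0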

To mount the MPFS file system on the Linux Server, use the mount command with the following syntax:

mount -t mpfs [-o] <MPFS_specific_options> <movername>:/<FS_export> <mount_point>

where:

◆ <MPFS_specific_options> is a comma-separated list (without spaces) of arguments to the -o option that are supported by EMC Celerra MPFS. Most arguments to the optional -o option that are supported by the NFS mount and mount_nfs commands are also supported by MPFS. MPFS also supports the following additional arguments:

• -o mpfs_verbose — Executes the mount command in verbose mode. If the mount succeeds, the list of disk signatures used by the MPFS volume is printed on standard output.

• -o mpfs_keep_nfs — If the mount using MPFS fails, the file system is mounted by using NFS instead. Warning messages inform the user that the MPFS mount failed.

• -o hvl — Specify the volume management type as hierarchical by default if it is supported by the server (-o hvl=1) or as not hierarchical by default (-o hvl=0). Setting this value overrides the default value specified in /etc/sysconfig/EMCmpfs. “Hierarchical volume management” on page 38 describes hierarchical volumes and their management.

The -t option specifies the type of file system (such as MPFS).

Note: The -o hvl option requires NAS software version 5.6 or later.

◆ <movername> is the name of the Celerra Network Server.
◆ <FS_export> is the absolute pathname of the directory that is exported on the Celerra Network Server.


◆ <mount_point> is the absolute pathname of the directory on the Linux server on which to mount the MPFS file system.

Note: To view the man page for the mount command, type man mount_mpfs.

Example The following command mounts an MPFS file system without any MPFS specific options:

mount -t mpfs <hostname:IP address>:/src /usr/src

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

The default behavior of mount -t mpfs is to try to mount the file system. If all disks are not available, the mount will fail with the following error:

$ mount -t mpfs <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume 'APM000643042520000-0008' not found.
Error mounting /mnt/mpfs via MPFS

The following command mounts an MPFS file system and displays a list of disk signatures:

mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src

Output:

Celerra signature vendor product_id device serial number or path

APM000531007850006-001c EMC SYMMETRIX /dev/sdaa path = /dev/sdaa(0x41a0) Active
APM000531007850006-001d EMC SYMMETRIX /dev/sdab path = /dev/sdab(0x41b0) Active
APM000531007850006-001e EMC SYMMETRIX /dev/sdac path = /dev/sdac(0x41c0) Active
APM000531007850006-001f EMC SYMMETRIX /dev/sdad path = /dev/sdad(0x41d0) Active
APM000531007850006-0020 EMC SYMMETRIX /dev/sdae path = /dev/sdae(0x41e0) Active
APM000531007850006-0021 EMC SYMMETRIX /dev/sdaf path = /dev/sdaf(0x41f0) Active


If all disks are not available, the mount will fail with the following error:

$ mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume 'APM000643042520000-0008' not found.
Error mounting /mnt/mpfs via MPFS

The following command mounts an MPFS file system; the mpfs_keep_nfs option causes the file system to mount by using NFS if the mount by using MPFS fails:

mount -t mpfs -o mpfs_keep_nfs <hostname:IP address>:/src /usr/src

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

With the mpfs_keep_nfs option, the behavior is to try to mount the file system by using MPFS. If all the disks are not available, the mount will default to NFS:

$ mount -t mpfs <hostname:IP address>:/rcfs /mnt/mpfs -v -o mpfs_keep_nfs
<hostname:IP address>:/rcfs on /mnt/mpfs type mpfs (rw,addr=<hostname:IP address>)
<hostname:IP address>:/rcfs using disks
No disks found, ignore and work through NFS now!
It will failback to MPFS automatically when the disks are OK.

The following command specifies the volume management type as hierarchical volume management:

mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


If all disks are not available, the mount will fail with the following error:

$ mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src -v

Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume 'APM000643042520000-0008' not found.
Error mounting /mnt/mpfs via MPFS

Never retry I/O through the SAN. For all intents and purposes, the behavior is as if the user typed mount -t nfs. Use this option for mounts that are done automatically and to ensure that the volume is mounted with or without MPFS.


Unmounting the MPFS file system

To unmount the MPFS file systems on the Linux server, use the umount command with the following syntax:

umount -t mpfs [-a] <mount_point>

where:

◆ -t is the type of file system (such as MPFS).
◆ -a specifies to unmount all MPFS file systems.
◆ <mount_point> is the absolute pathname of the directory on the Linux server on which to unmount the MPFS file system.

Example The following command unmounts all MPFS file systems:

umount -t mpfs -a

To unmount a specific MPFS file system, type either of the following commands:

umount -t mpfs /mnt/fs1
or
umount /mnt/fs1

If a file system cannot be unmounted or is not in use, the umount command displays the following error message:

Error unmounting /mnt/fs1/mpfs via MPFS

If a file system cannot be unmounted as it is in use, the umount command displays the following error message:

umount: device busy

Note: These commands produce no system response. When the commands have finished executing, only the command line prompt is returned.


Chapter 3 Installing, Upgrading, or Uninstalling MPFS Software

This chapter describes how to install, upgrade, and uninstall the MPFS software.

Topics include:

◆ Installing the MPFS software ......................................................... 106
◆ Upgrading the MPFS software ....................................................... 110
◆ Uninstalling the MPFS software .................................................... 115



Installing the MPFS software

This section describes the prerequisites for installation and two methods of installing the MPFS software:

◆ Install the MPFS software from a tar file

◆ Install the MPFS software from a CD

Before installing

Before installing the MPFS software, read the prerequisites for the Linux server and storage system listed in this section:

❑ Verify that the Linux server on which you plan to install the MPFS software meets the MPFS configuration requirements specified in the EMC Celerra MPFS for Linux Clients Release Notes.

❑ Ensure that the Linux server has a network connection to the Data Mover on which the MPFS software resides and that you can contact the Data Mover.

❑ Ensure that the Linux server meets the overall system and other configuration requirements specified in the E-Lab Interoperability Navigator.

Install the MPFS software from a tar file

To install the MPFS software from a compressed tar file, download the file from the Powerlink website. Then, uncompress and extract the tar file on the Linux server and execute the install-mpfs script.

Note: The uncompressed tar file needs approximately 17 MB and the installation RPM file needs approximately 5 MB of disk space.

Note: Unless noted as an output, when the commands in the following procedure have finished executing, only the command line prompt is returned.

Follow these steps to uncompress, extract, and install the MPFS software from the compressed tar file:

1. Create the directory /tmp/temp_mpfs if it does not already exist.

2. Locate the compressed tar file on the Powerlink website.

Depending on the specific MPFS software release and version, the filename will appear as:

EMCmpfs.linux.6.0.x.x.tar.Z


3. Download the compressed tar file from the Powerlink website to the directory created in step 1.

4. Change to the /tmp/temp_mpfs directory:

cd /tmp/temp_mpfs

5. Uncompress the tar file by using the following command syntax:

uncompress <filename>

where <filename> is the name of the tar file.

For example, type:

uncompress EMCmpfs.linux.6.0.x.x.tar.Z

6. Extract the tar file by using the following command syntax:

tar -xvf <filename>

where <filename> is the name of the tar file produced by the uncompress command in step 5.

For example, type:

tar -xvf EMCmpfs.linux.6.0.x.x.tar

7. Go to the Linux directory created by the last step: cd /tmp/temp_mpfs/linux

8. Install the MPFS software: $ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.1.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...    ########################################### [100%]
   1:EMCmpfs    ########################################### [100%]
Loading EMC MPFS Disk Protection    [ OK ]
Protecting EMC Celerra disks        [ OK ]
Loading EMC MPFS                    [ OK ]
Starting MPFS daemon                [ OK ]
Discover MPFS devices               [ OK ]
Starting MPFS perf daemon           [ OK ]
[ Done ]

9. Follow the instructions in “Post-installation check” on page 113.


Installing the MPFS software from a CD

To install the MPFS software from the EMC MPFS for Linux Client software CD:

Note: Unless noted as an output, when the commands in the following procedure have finished executing, only the command line prompt is returned.

1. Insert the CD into the CD drive.

2. Mount the CD in the /mnt directory:

$ mount /dev/cdrom /mnt

Output:

mount: block device /dev/cdrom is write-protected, mounting read-only

3. Go to the mnt directory created in the last step: $ cd /mnt

4. View the contents of the CD:

$ ls -lt

Output:

dr-xr-xr-x 2 root root    2048 Jul 31 14:46 Packages
-r--r--r-- 1 root root     694 Jul 31 14:46 README.txt
-r--r--r-- 1 root root     442 Jul 31 14:46 TRANS.TBL

5. Go to the Packages directory: $ cd Packages

6. View the architecture-specific packages:

$ ls -lt

Output:

-r--r--r-- 1 root root 3381317 Jul 31 14:46 EMCmpfs-6.0.1.x-ia64.rpm
-r--r--r-- 1 root root 3955356 Jul 31 14:46 EMCmpfs-6.0.1.x-x86_64.rpm
-r-xr-xr-x 1 root root   11711 Jul 31 14:46 install-mpfs
-r--r--r-- 1 root root    1175 Jul 31 14:46 TRANS.TBL
-r--r--r-- 1 root root 4807898 Jul 31 14:46 EMCmpfs-6.0.1.x-i686.rpm


7. Install the MPFS software: $ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.1.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...    ########################################### [100%]
   1:EMCmpfs    ########################################### [100%]
Loading EMC MPFS Disk Protection    [ OK ]
Protecting EMC Celerra disks        [ OK ]
Loading EMC MPFS                    [ OK ]
Starting MPFS daemon                [ OK ]
Discover MPFS devices               [ OK ]
Starting MPFS perf daemon           [ OK ]
[ Done ]

8. Follow the instructions in “Post-installation check” on page 113.

Post-installation check

After installing the MPFS software:

1. Verify that the MPFS software is installed properly and that the MPFS daemon (mpfsd) has started (example commands follow this list).

2. Start the MPFS software by mounting an MPFS file system.
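For example, the package query and process check used later in this chapter for upgrades apply equally here:

rpm -q EMCmpfs
ps -ef |grep mpfsd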

If the MPFS software does not run, Appendix B, “Error Messages and Troubleshooting,” contains information on troubleshooting the MPFS software.

Operating MPFS through a firewall

For proper MPFS operation, the Linux server must contact the Celerra Network Server (a Data Mover) on its File Mapping Protocol (FMP) port. The Celerra Network Server must also contact the Linux server on its FMP port.

If a firewall resides between the Linux server and the Celerra Network Server, the firewall must allow access to the ports listed in Table 7 on page 109 for the Linux server.

Table 7 Linux server firewall ports

Linux OS: RHEL, SLES, CentOS
Linux server port: 6907
Celerra Network Server ports: 4656, 2079, 1234, 111, 625
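As an illustration only, an iptables rule set on a Linux server that admits this traffic might resemble the following sketch; adapt it to the local firewall policy and verify the protocols (some of these ports may also require UDP, for example the portmapper on port 111):

iptables -A INPUT -p tcp --dport 6907 -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports 4656,2079,1234,111,625 -j ACCEPT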


Upgrading the MPFS software

Use the following procedure to upgrade the MPFS software.

Upgrade the MPFS software

Upgrade the existing MPFS software by using the install-mpfs script.

The install-mpfs script can store information about the MPFS configuration, unmount MPFS file systems, and restore the configuration after an upgrade.

The command syntax for the install-mpfs script is:

install-mpfs [-s] [-r]

where:

-s = silent mode, which unmounts all MPFS file systems and upgrades the RPM without prompting the user.
-r = restores configurations by backing up the current MPFS configurations and restoring them after an upgrade.

The install script will automatically issue rpm -e EMCmpfs to remove the existing MPFS software.

Note: By default, the MPFS configuration files are not backed up and restored.
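For example, to upgrade without prompting and to back up and restore the current MPFS configuration, both options can be combined:

$ ./install-mpfs -s -r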


To upgrade the MPFS software on a Linux server that has an earlier version of MPFS software installed:

1. Type:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.1.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-5.0.11-0 has already been installed.
Do you want to upgrade to new package? [yes/no]
yes
[ Step 2 ] Checking mounted mpfs file system ...
Fine, no mpfs file system is mounted. Install process will continue.
[ Step 3 ] Upgrading MPFS package ...
Preparing...    ########################################### [100%]
   1:EMCmpfs    ########################################### [100%]
Unloading old version modules...
unprotect
Loading EMC MPFS Disk Protection    [ OK ]
Protecting EMC Celerra disks        [ OK ]
Loading EMC MPFS                    [ OK ]
Starting MPFS daemon                [ OK ]
Discover MPFS devices               [ OK ]
Starting MPFS perf daemon           [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in “Post-installation check” on page 113.


Upgrade the MPFS software with the MPFS file system mounted

The install-mpfs script can store information about the MPFS configuration, unmount MPFS file systems, and restore the configuration after an upgrade.

The command syntax for the install-mpfs script is:

install-mpfs [-s] [-r]

where:

-s = silent mode, which unmounts all MPFS file systems and upgrades the RPM without prompting the user.
-r = restores configurations by backing up the current MPFS configurations and restoring them after an upgrade.

The install-mpfs script will attempt to unmount the MPFS file system after prompting the user to proceed.

Note: By default, the MPFS configuration files are not backed up and restored.

To install the MPFS software on a Linux server that has an earlier version of MPFS software installed and an MPFS file system mounted:

1. Type:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.1.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-5.0.11-0 has already been installed.
Do you want to upgrade to new package? [yes/no]
yes
[ Step 2 ] Checking mounted mpfs file system ...
The following mpfs file system are mounted:
/mnt
Do you want installation to umount these file system automatically? [yes/no]
yes
Unmounting MPFS filesystems...
Successfully umount all mpfs file system.
[ Step 3 ] Upgrading MPFS package ...
Preparing...    ########################################### [100%]
   1:EMCmpfs    ########################################### [100%]
Unloading old version modules...
unprotect
Loading EMC MPFS Disk Protection    [ OK ]
Protecting EMC Celerra disks        [ OK ]
Loading EMC MPFS                    [ OK ]
Starting MPFS daemon                [ OK ]
Discover MPFS devices               [ OK ]
Starting MPFS perf daemon           [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in “Post-installation check” on page 113.

Post-installation check

After upgrading the MPFS software:

1. Verify that the MPFS software is upgraded properly and the MPFS daemon mpfsd has started.

2. Start the MPFS software by mounting an MPFS file system.


Verifying the MPFS software upgrade

To verify that the MPFS software is upgraded and that the MPFS daemon is started:

1. Use RPM to verify the MPFS software upgrade:

rpm -q EMCmpfs

If the MPFS software is upgraded properly, the command displays an output such as:

EMCmpfs-6.0.x-x

Note: Alternatively, use the mpfsctl version command to verify the MPFS software is upgraded. The mpfsctl man page or “Using the mpfsctl utility” on page 122 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

ps -ef |grep mpfsd

The output will look like this if the MPFS daemon has started:

root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process is running, as root, start MPFS software by using the following command:

$ /etc/rc.d/init.d/mpfs start


Uninstalling the MPFS software

Use the following steps to uninstall the MPFS software from a Linux server:

1. To uninstall the MPFS software, type the following command:

$ rpm -e EMCmpfs

If the MPFS software was uninstalled correctly, the following message appears on the screen:

Unloading EMCmpfs module...
[root@###14583 root]#

2. If the MPFS software was not uninstalled due to MPFS file systems being mounted, the following error message appears:

[root@###14583 root]# rpm -e EMCmpfs
ERROR: Mounted mpfs filesystems found.
Please unmount all mpfs filesystems before uninstalling the product.
error: %preun(EMCmpfs-6.0.1-x) scriptlet failed, exit status 1

3. Unmount the MPFS file system. Follow the instructions in “Unmounting the MPFS file system” on page 104.

4. Repeat step 1.



Chapter 4 MPFS Command Line Interface

This chapter discusses a variety of MPFS commands, parameters, and procedures that can be used to manage and fine-tune the Linux server. Topics include:

◆ Using HighRoad disk protection ................................................... 118
◆ Using the mpfsctl utility ................................................................. 122
◆ Displaying statistics ......................................................................... 132
◆ Displaying MPFS device information ........................................... 134
◆ Setting MPFS parameters ................................................................ 141



Using HighRoad disk protection

Linux servers provide hard disk protection for Symmetrix and CLARiiON volumes associated with MPFS file systems. These volumes are called File Mapping Protocol (FMP) volumes. The program providing this protection is called the EMC HighRoad® Disk Protection (hrdp) program.

With hrdp read/write protection activated, I/O requests to FMP volumes from the Linux server are allowed, but I/O requests from other sources are denied. For example, root users can use the dd utility to read/write to an MPFS mounted file system, but cannot use the dd utility to read/write to the device files themselves (/dev).

The reason for disk protection is twofold. The first reason is to provide security. Arbitrary users on a Linux server should not be able to access the data stored on FMP volumes. The second reason is to provide data integrity. Hard drive protection prevents the accidental corruption of Celerra file systems.

This section describes the behavior and interface characteristics between the Celerra File Server and Linux servers.

Celerra Network Server and hrdp

Linux servers depend on the Celerra Network Server to tag relevant volumes to identify them as FMP volumes. To accomplish this, the Celerra Network Server writes a Celerra signature on all visible volumes. From a disk protection view, a Celerra and an FMP volume are synonymous.

Discovering disks

When a Linux server performs a disk discovery action, it tries to read a Celerra signature from every accessible volume.

For CLARiiON volumes, which may be accessible through two different service processors (SP A and SP B), hrdp is not able to read a Celerra signature from the passive path. However, hrdp does recognize that the two paths lead to the same device. The hrdp program protects both the passive and active paths to CLARiiON volumes.

Because a set of FMP volumes may change over time, hrdp must perform disk discovery periodically. The hrdp program receives notifications of changes to device paths, and responds accordingly by protecting any newly accessible Celerra devices.


hrdp command syntax

The hrdp program can be used to manually control device protection. Used with no arguments, hrdp identifies all the devices in the system, and protects the devices or partitions with a Celerra disk signature.

Command syntax

hrdp [-d] [-h] [-n] [-p] [-s sleep_time] [-u] [-v] [-w]

where:

-d = run hrdp as a daemon, periodically scan devices, and update the kernel.

-h = print hrdp usage information.

-n = scan for new volumes, but do not inform the kernel about them.

-p = enable protection (read and write) for all Celerra volumes.

-s sleep_time = when run as a daemon, sleep the specified number of seconds between rediscovery. The default sleep time is 900 seconds.

Note: Sleep time can also be set by using HRDP_SLEEP_TIME as an environment variable, or as a parameter in /etc/sysconfig/EMCmpfs. The sysconfig parameter is explained in detail in “Displaying statistics” on page 132.

-u = disable protection (read and write) for all Celerra volumes.

-v = scan in verbose mode; print the signatures of new volumes as they are found.

-w = enable write protection for all Celerra volumes.

Examples

The following examples illustrate the hrdp command output.

The following command runs hrdp as a daemon, periodically scans devices, and updates the kernel:

$ hrdp -d

Note: When the command has finished executing, only the command line prompt is returned.


The following command prints information about hrdp usage:

$ hrdp -h

Output:

Usage: hrdp [options]
Options:
 -d        run as a daemon
 -h        print this help information
 -n        do not update kernel just print results
 -p        enable protection
 -s time   seconds to sleep between reprotection if run as daemon
 -u        disabled (unprotect) protection
 -v        verbose
 -w        enable write protection (i.e. allow reads)
$

The following command scans for new volumes but does not inform the kernel about them:

$ hrdp -n

Note: When the command has finished executing, only the command line prompt is returned.

The following command enables read and write protection for all Celerra volumes:

$ hrdp -p

Output:

protect
$

This command output displays "protect" to show that read and write protection is enabled for all Celerra volumes.

When hrdp is run as a daemon, the following command sets the number of seconds to sleep between rediscoveries:

$ hrdp -s sleep_time

Note: When the command has finished executing, only the command line prompt is returned.
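For example, to run the daemon with a 600-second rediscovery interval (the value is illustrative), or to set the same interval through the HRDP_SLEEP_TIME environment variable described earlier:

$ hrdp -d -s 600
$ export HRDP_SLEEP_TIME=600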


The following command disables read and write protection for all Celerra volumes:

$ hrdp -u

Output:

unprotect
$

This command output displays "unprotect" to show that read and write protection is disabled for all Celerra volumes.

The following command scans in verbose mode and prints the signatures of new volumes:

$ hrdp -v

Output:

Celerra signature  vendor  product_id  device serial number or path info
0001874307271FA0-00f1 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32 path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33 path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44 path = /dev/sdif Active FA-51b /dev/sg239

$

The following command enables write protection for all Celerra volumes:

$ hrdp -w

Output:

protect
$

This command output displays "protect" to show that write protection is enabled for all Celerra volumes.

Viewing hrdp protected devices

Devices protected by hrdp can be seen by listing the /proc/hrdp file. For a list of protected devices, type the following command:

$ cat /proc/hrdp

Output:

Disk Protection Enabled for reads and writes

Device                    Status
274: /dev/sddw 71.224     protected
275: /dev/sddx 71.240     protected
276: /dev/sddy 128.000    protected
277: /dev/sddz 128.016    protected


Using the mpfsctl utility

The MPFS Control Program, mpfsctl, is a command line utility that can be used by MPFS system administrators to troubleshoot and fine-tune their systems. The mpfsctl utility resides in /usr/sbin/mpfsctl. Table 8 on page 122 lists the mpfsctl commands.

Table 8 Command line interface summary

Command                 Description
mpfsctl help            Displays a list of mpfsctl commands. (page 123)
mpfsctl diskreset       Clears any file error conditions and causes MPFS to retry I/Os through the SAN. (page 123)
mpfsctl diskresetfreq   Clears file error conditions and tells MPFS to retry I/Os through the SAN in a specified timeframe. (page 124)
mpfsctl max-readahead   Adjusts the number of kilobytes to prefetch when MPFS detects sequential read requests. (page 125)
mpfsctl prefetch        Sets the number of blocks of metadata to prefetch. (page 127)
mpfsctl reset           Resets the statistical counters. (page 128)
mpfsctl stats           Displays statistical data about the MPFS file system. (pages 128, 132)
mpfsctl version         Displays the current version of MPFS software running on the Linux server. (page 131)
mpfsctl volmgt          Displays the volume management type used by each mounted file system. (page 131)

Error message

If any of these commands are used and an error is received, ensure that the MPFS software has been loaded. Use the command “mpfsctl version” on page 131 to check the version number of the MPFS software.


mpfsctl help

This command displays a list of the various mpfsctl program commands. Each command is explained in the rest of this chapter.

Why use this command

Get a listing of all available mpfsctl commands.

Command syntax

mpfsctl help

Input:

$ mpfsctl help

Output:

Usage: mpfsctl op ...
Operations supported (arguments in parentheses):
 diskreset       resets disk connections
 diskresetfreq   sets the disk reset frequency (seconds)
 max-readahead   set number of readahead pages
 help            print this list
 prefetch        set number of blocks to prefetch
 reset           reset statistics
 stats           print statistics
 version         display product compile time stamp
 volmgt          get volume management type
$

Use the man page facility on the Linux server for mpfsctl by typing man mpfsctl at the command line prompt.

mpfsctl diskreset

This command clears any file error conditions and tells MPFS to retry I/Os through the SAN.

Why use this command

When MPFS detects that I/Os through the SAN are failing, it uses NFS to transport data. There are many reasons why a SAN I/O failure can occur. Use the mpfsctl diskreset command when:

◆ A cable has been disconnected. After the reconnection, use the mpfsctl diskreset command to immediately retry the SAN.

◆ A configuration change or a hardware failure has occurred and the MPFS I/O needs to be reset through the SAN after the repair or change has been completed.

◆ Network congestion has occurred and the MPFS I/O needs to be reset through the SAN when the network congestion has been identified and eliminated.


Command syntax

mpfsctl diskreset

Input:

$ mpfsctl diskreset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

mpfsctl diskresetfreq

This command sets the frequency at which the kernel automatically clears errors associated with the SAN.

Why use this command

Invoke NFS until the errors are cleared either manually with the mpfsctl diskreset command, or automatically when the frequency is greater than zero.

Command syntax

mpfsctl diskresetfreq <interval_seconds>

where:

<interval_seconds> = time between the clearing of errors in seconds

Input:

$ mpfsctl diskresetfreq 650

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new interval has been set, type the following command:

$ cat /proc/mpfs/params

Output:

Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

The default for the disk-reset-interval parameter is 600 seconds with a minimum of 60 seconds and a maximum of 3600 seconds. Note the value change in the example.

mpfsctl max-readahead

This command allows for adjustment of the number of kilobytes of data to prefetch when MPFS detects sequential read requests.

The mpfsctl max-readahead command is designed for 2.6 Linux kernels to provide functionality similar to the kernel parameter entry and /proc/sys/vm/max-readahead for 2.4 Linux kernels. One difference is that in 2.4 Linux kernels, this kernel parameter is system-wide and the mpfsctl max-readahead parameter only applies to I/O issued to MPFS file systems.

This option to the mpfsctl command allows experimentation with different settings on a currently running system. Changes to the mpfsctl max-readahead value are not persistent across system reboots. However, mpfsctl max-readahead value changes take effect immediately for file systems that are currently mounted.

To load a new value every time MPFS starts, remove the comments from the globReadahead parameter in the /etc/mpfs.conf file if it is present. If it is not present, add the globReadahead on a line by itself to change the default value.

Note: The prefetch parameter value can be set to stay in effect after a reboot. “Setting persistent parameter values” on page 143 describes how to set this value persistently.

For example:

globReadahead=120 (120 x 4 K = 480 KB) where 120 equals 480 KB on an x86_64 machine.


Why use this command

Tune MPFS for higher read performance.

Command syntax

mpfsctl max-readahead <kilobytes>

where:

<kilobytes> = an integer between 0 and 32768

The minimum/default value 0 specifies use of the kernel default, which is 480 KB. A maximum value specifies 32,768 KB of data to be read ahead.

Input:

$ mpfsctl max-readahead 0

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new max-readahead value has been set, type the following command:

$ cat /proc/mpfs/params

Output:

Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$
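As a sketch, the following combines the runtime command with the persistent setting described earlier (120 pages x 4 KB = 480 KB). Back up /etc/mpfs.conf before editing it, and if a commented globReadahead line is already present, uncomment and edit that line instead of appending a new one:

$ mpfsctl max-readahead 480                   # takes effect immediately for mounted MPFS file systems
$ echo 'globReadahead=120' >> /etc/mpfs.conf  # applied the next time MPFS starts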


mpfsctl prefetch This command sets the number of data blocks for which metadata is prefetched. Metadata is information that describes the location of file data on the SAN. It is this prefetched metadata that allows for fast, accurate access to file data through the SAN.

Why use this command

Tune MPFS for higher performance.

Command syntax mpfsctl prefetch <blocks>

where:

<blocks> = an integer between 4 and 4096 that specifies the number of blocks for which to prefetch metadata.

A block contains 8 KB of metadata. Metadata can be prefetched that maps (describes) between 32 KB (4 blocks) and 32 MB (4096 blocks) of data.

The default is 256 blocks (2 MB of data for which metadata is prefetched). This value works well for a variety of workloads, so leave it unchanged. However, the prefetch value can be changed in situations where higher performance is required.

Changing the prefetch value does not affect current MPFS mounts, only subsequent mounts.

Note: The prefetch parameter value can be set to stay in effect after a reboot. “Setting persistent parameter values” on page 143 describes how to set this value persistently.

Input:

$ mpfsctl prefetch 256

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new prefetch value has been set, type the following command:

$ cat /proc/mpfs/params


Output:

$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

mpfsctl reset This command resets the statistical counters read by the mpfsctl stats command. “Displaying statistics” on page 132 provides additional information.

Why use this command

By default, statistics accumulate until the system is rebooted. Use the mpfsctl reset command to reset the counters to 0 before executing the mpfsctl stats command.

Command syntax mpfsctl reset

Input:

$ mpfsctl reset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

mpfsctl stats This command displays a set of statistics showing the internal operation of the Linux server. By default, statistics accumulate until the system is rebooted. The command “mpfsctl reset” on page 128 provides information to reset the counters to 0 before executing the mpfsctl stats command.


Why use this command

The output of the mpfsctl stats command can help pinpoint performance problems.

Command syntax mpfsctl stats

Input: $ mpfsctl stats

Output:

=== OS INTERFACE
8534 reads totalling 107683852 bytes
5378 direct reads totalling 107683852 bytes
4974 writes totalling 74902093 bytes
2378 direct writes totalling 107683852 bytes
0 split I/Os
25 commits, 14 setattributes
4 fallthroughs involving 28 bytes

=== Buffer Cache

8534 disk reads totalling 107683852 bytes

4974 disk writes totalling 74902093 bytes

0 failed disk reads totalling 0 bytes

0 failed disk writes totalling 0 bytes

=== NFS Rewrite

6436 sync read calls totalling 107683852 bytes

3756 sync write calls totalling 74902093 bytes

=== Address Space Errors

321 swap failed writes

=== EXTENT CACHE

8364 read-cache hits (97%)

3111 write-cache hits (62%)

=== NETWORK INTERFACE

188 open messages, 187 closes

178 getmap, 1897 allocspace

825 flushes of 1618 extents and 9283 blocks, 43 releases

1 notify messages

=== ERRORS

0 WRONG_MSG_NUM, 0 QUEUE_FULL, 0 INVALIDARG
0 client-detected sequence errors

0 RPC errors, 0 other errors

$


When the command has finished executing, the command line prompt is returned.

Understanding mpfsctl stats output

Each of the output sections is explained next.

OS INTERFACE — The first four lines show the number of NFS message types that MPFS either handles (reads, direct reads, writes, and direct writes) or watches and augments (split I/Os, commits, and setattributes).

The last line shows the number of fallthroughs or reads and writes attempted over MPFS, but accomplished over NFS. The number of fallthroughs should be small. A large number of fallthroughs indicates that the MPFS file system is not being used to its full advantage.

Buffer Cache — The first two lines show the number of disk reads and writes that MPFS performs to and from the cache. The last two lines show the number of failed disk reads and writes to the cache.

NFS Rewrite — The first line shows the number of synchronized read calls that NFS rewrites. The second line shows the number of synchronized write calls that NFS rewrites.

Address Space Errors — This line shows the number of failed writes due to memory pressure, which will be retried later.

EXTENT CACHE — These lines show the cache-hit rates. A low percentage (such as the percentage of write-cache hits in this example) indicates heavy network traffic. Contact EMC Customer Support for advice on increasing the cache capacity.

NETWORK INTERFACE — These lines show the number of FMP messages sent. In this example, the number is 187.

The number of blocks (9283) per flush (825) is also significant; in this case it is an 11:1 ratio. Coalescing multiple blocks into a single flush is a major part of the MPFS strategy for reducing message traffic.

ERRORS — This section shows both serious errors and completely recoverable errors. The only serious errors are those described as either RPC or other. Contact EMC Customer Support if a significant number of errors are reported in a short period of time.
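As a rough health check, the fallthrough count can be pulled out of the statistics with standard text tools. The following is a sketch only; the exact wording of the output lines may vary by MPFS version, and the workload between the two commands is whatever you run on the MPFS mount:

$ mpfsctl reset                          # start from zero counters
$ mpfsctl stats | grep -i fallthroughs
4 fallthroughs involving 28 bytes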


mpfsctl version This command displays the version number of the MPFS software running on the Linux server.

Why use this command

Find the specific version number of the MPFS software running on the Linux server.

Command syntax mpfsctl version

Input:

$ mpfsctl version

Output:

version: EMCmpfs.linux.6.0.1.x.x /emc/test/mpfslinux (test@eng111111), 01/20/10 01:41:24 PM

$

When the command has finished executing, only the command line prompt is returned.

If the MPFS software is not loaded, the following error message appears:

/dev/mpfs : No such file or directory

Install the MPFS software by following the procedure in “Installing the MPFS software” on page 106.

mpfsctl volmgt This command displays the volume management type used by each mounted file system.

Why use this command

Find if the volume management type is hierarchical volume management.

Command syntax mpfsctl volmgt

Input:

$ mpfsctl volmgt

Output:

Fs ID         VolMgtType
1423638547    Hvl management Disk signature
$

When the command has finished executing, only the command line prompt is returned.


Displaying statistics

MPFS statistics for the system can be retrieved by using the mpfsstat command.

Using the mpfsstat command

This command displays a set of statistics. The command reports I/O activity for MPFS file systems. Without options, mpfsstat reports global statistics in megabytes per second. By default, statistics accumulate until the Linux server is rebooted. To reset the counters to 0, run mpfsstat with the -z option.

Why use this command

Help troubleshoot MPFS performance issues or to gain general knowledge about the performance of a given MPFS file system.

Command syntax mpfsstat [-d] [-h] [-k] [-z] [interval [count]]

where:
-d = report statistics about the MPFS disk interface
-h = print mpfsstat usage information
-k = report statistics in kilobytes instead of megabytes
-z = reset the counters to 0 before reporting statistics

Operands:
interval = report statistics every interval seconds
count = print only count lines of statistics

Examples The following examples illustrate the mpfsstat command output. These examples show zero values; actual output varies depending on the system workload.

The following command prints the I/O rate for all MPFS-mounted file systems:
$ mpfsstat

Output:

r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0

$

The following command reports MPFS disk interface statistics:
$ mpfsstat -d


Output:

        disk                   syncnfs                failed          zero
r/s   w/s   mr/s  mw/s    r/s   w/s   mr/s  mw/s    r+w/s  mr+w/s   blocks
0     0     0.0   0.0     0     0     0.0   0.0     0      0.0      0

$

The following command prints information about mpfsstat usage:
$ mpfsstat -h

Output:

Usage: mpfsstat [-dhkz] [interval [count]]
 -d   Print disk statistics
 -h   Print This screen
 -k   Print Statistics in Kilobytes per sec.
 -z   Clear all statistics
$

The following command reports statistics in kilobytes instead of megabytes:
$ mpfsstat -k

Output:

r/s   w/s   dr/s  dw/s  kr/s  kw/s  kdr/s  kdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0

$

The following command resets the counters to zero before reporting statistics:
$ mpfsstat -z

Output:

r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0

$

The following command prints two lines of statistics, waiting one second between prints:
$ mpfsstat 1 2

Output:

r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0

$
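A simple way to capture a short performance sample for later review is to combine a timestamp with a fixed number of mpfsstat intervals. This is a sketch; /tmp/mpfsstat.log is a placeholder path:

$ (date; mpfsstat -k 5 12) | tee /tmp/mpfsstat.log    # 12 samples, 5 seconds apart, in KB/s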


Displaying MPFS device information

Several types of information can be displayed for MPFS devices, including the device’s vendor ID, product ID, active/passive state, and mapped paths. The methods for displaying device information are as follows:

◆ Using the mpfsinq command

◆ Listing the /proc/mpfs/devices file

◆ Using the hrdp command

◆ Using the mpfsquota command

This section describes each of these methods and the type of information provided by each.

Listing devices with the mpfsinq command

The mpfsinq command displays disk signature information, shows the paths where disks are mapped, and identifies whether the devices are active (available for I/O).

The command has the following syntax:
mpfsinq [-c time] [-h] [-m] [-S] [-v] <devices>

where:
-c time = timeout for scsi command in seconds

-h = print mpfsinq usage information

-m = used to write scripts based on the output of mpfsinq; it prints out information in a machine readable format that can be easily edited with awk and sed

-S = tests the disk speed

-v = runs in verbose mode printing out additional information

<devices> = active devices available for I/O


To view the timeout in seconds for a scsi command, type the following command:

$ mpfsinq -c time

Output:

FNM00083700177002E-0007 DGC RAID 5 60:06:01:60:00:03:22:00:27:b4:48:b9:d8:03:de:11

path = /dev/sdbm (0x4400 | 0x4003f00) Active SP-a3 /dev/sg65

path = /dev/sdbi (0x43c0 | 0x3003f00) Passive SP-b3 /dev/sg61
. . . . . .
FNM000837001770000-000e DGC RAID 5 60:06:01:60:00:03:22:00:5d:59:b0:aa:71:02:de:11
path = /dev/sdbh (0x43b0 | 0x1001300) Active SP-a2* /dev/sg60

path = /dev/sdee (0x8260 | 0x2000000) Passive SP-b2 /dev/sg167

* designates active path using non-default controller
$

To print information about mpfsinq usage, type the following command:
$ mpfsinq -h

Output:

Usage: mpfsinq [options] <devices>
Options:
 -c time   timeout for scsi command in seconds
 -h        print this help information
 -m        machine readable format for output
 -S        test disk speed
 -v        verbose
$

To write scripts based on the output of the mpfsinq command with information printed out in a machine readable format that can be edited with awk and sed, type the following command:

$ mpfsinq -m

Output:

b2:38:7e:d9:03:de:11 /dev/sdcy Active /dev/sg103
FNM00083700177002E-0010 DGC RAID 5 60:06:01:60:00:03:22:00:4a:b2:38:7e:d9:03:de:11 /dev/sddb Passive /dev/sg106
FNM000837001770028-000c DGC RAID 5 60:06:01:60:00:03:22:00:8d:e7:5d:67:d9:03:de:11 /dev/sdcq Active /dev/sg95
. . . . . .


FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:6e:02:de:11 /dev/sdee Passive /dev/sg135

To test the disk speed, type the following command:

$ mpfsinq -S

Output:

FNM000837001770000-0017 DGC RAID 5 60:06:01:60:00:03:22:00:5c:59:b0:aa:71:02:de:11
path = /dev/sdbd (0x4370 | 0x1001200) Active SP-a2* /dev/sg56 50MB/s
path = /dev/sdfj (0x8250 | 0x2001200) Passive SP-b2 /dev/sg166
. . . . . .
FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:be:02:de:11
path = /dev/sdc (0x820 | 0x1000000) Active SP-a2 /dev/sg3 50MB/s

path = /dev/sder (0x8130 | 0x2000000) Passive SP-b2 /dev/sg148

* designates active path using non-default controller
$

To display MPFS devices in verbose mode printing out additional information, type the following command:

$ mpfsinq -v

Output:

Celerra signature vendor product_id device serial number or path info
0001874307271FA0-00f1 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32
 path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33
 path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44
 path = /dev/sdif Active FA-51b /dev/sg239

Note: A Passive path will be shown in the output only if there is a secondary path mapped to the device. Only CLARiiON storage arrays have Active/Passive states. Symmetrix arrays are Active/Active only as shown in Table 9 on page 136.

Table 9 MPFS device information

Vendor ID State I/O available

Symmetrix Active Yes

CLARiiON Active Yes

CLARiiON Passive No


Listing devices with the /proc/mpfs/devices file

The state of MPFS devices may also be shown by listing the /proc/mpfs/devices file.

To list the MPFS devices in the /proc/mpfs/devices file, type the following command:

$ cat /proc/mpfs/devices

Output:
Celerra Signature           Path        State
FNM000837001770000-0001     /dev/sdc    active
FNM000837001770034-0001     /dev/sdak   active
FNM000837001770000-0002     /dev/sdf    active
FNM000837001770034-0002     /dev/sdan   active
FNM000837001770028-0002     /dev/sdg    active
FNM00083700177002E-0002     /dev/sdv    active
FNM000837001770000-0003     /dev/sdaq   active
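Because the file lists one device per line with the state in the third column, a quick check for paths that are not active can be scripted. This is a sketch that assumes the layout shown above:

$ awk 'NR > 1 && $3 != "active" {print $1, $2, $3}' /proc/mpfs/devices

If every device is active, the command prints nothing.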

Displaying mpfs disk quotas

Use the mpfsquota command to display a user’s MPFS file system disk quota and usage. Log in as root to use the optional username argument and view the limits of other users. Without options, mpfsquota displays warnings about mounted file systems where usage is over quota. Remote mounted file systems that do not have quotas turned on are ignored.

Note: If quota is not turned on in the file system, log in to the Celerra Network Server and execute the following nas_quotas commands.

Example To set quotas in the server, type the following command:
$ nas_quotas -edit -user -fs server2_fs1 501

Output:
Userid : 501
fs "server2_fs1" blocks (soft = 2000, hard = 3000) inodes (soft = 0, hard = 0)

To turn on the quotas, type the following command:
$ nas_quotas -on -user -fs server2_fs1

Output:
done
$


To run a report on the quotas, type the following command:
$ nas_quotas -report -fs server2_fs1

Output:

Report for user quotas on file system server2_fs1 mounted on /server2fs1
+------------+-------------------------------------+---------------------------+
|User        |          Bytes Used (1K)            |           Files           |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|            | Used      | Soft  | Hard  |Timeleft | Used | Soft| Hard|Timeleft|
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|#501        | 8         |  2000 |  3000 |         | 1    |    0|    0|        |
|#32769      | 1864424   |     0 |     0 |         | 206  |    0|    0|        |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+

done
$

Mount the file system with the mpfs option from a Linux server.

The command has the following syntax:
mpfsquota -v [username/UID]

where:
-v = required option

username/UID = the user ID

To display all MPFS-mounted file systems where quotas exist, type the following command:

$ mpfsquota -v

Output:
Filesystem   usage  quota  limit  timeleft  files  quota  limit  timeleft
/mnt         8      2000   3000             1      0      0

To view the quota of UID 501, type the following command:$ mpfsquota -v 501

Output:
Filesystem   usage  quota  limit  timeleft  files  quota  limit  timeleft
/mnt         8      2000   3000             1      0      0

Example If quota is turned off in the server, the following appears:
$ mpfsquota 501

Output:
No quota


Validating a Linux server installation

Use the mpfsinfo command to validate a Linux server and Celerra Network Server installation by querying an FMP server (Data Mover) and validating that the Linux server can access all the disks required to use MPFS for each exported file system.

The user must supply the name or IP address of at least one FMP server and have TCL and TCLX installed. Multiple FMP servers may be specified, in which case the validation is done for the exported file systems on all the listed servers.

mpfsinfo command The command has the following syntax:
mpfsinfo [-v] [-h] <fmpserver>

where:
-v = runs in verbose mode printing out additional information

-h = print mpfsinfo usage information

<fmpserver> = name or IP address of the FMP server

To query FMP server ka0abc12s402, type the following command:

$ mpfsinfo ka0abc12s402

Output:
ka0abc12s402:/server4fs1 OK
ka0abc12s402:/server4fs2 OK
ka0abc12s402:/server4fs3 OK
ka0abc12s402:/server4fs4 OK
ka0abc12s402:/server4fs5 OK
$

When the Linux server cannot access all of the disks required for each exported file system, the output in the following example appears.

To query FMP server kc0abc17s901, type the following command:

$ mpfsinfo kc0abc17s901

Output:
kc0abc17s901:/server9fs1 MISSING DISK(s)
 APM000637001700000-0049 MISSING
 APM000637001700000-004a MISSING
 APM000637001700000-0053 MISSING
 APM000637001700000-0054 MISSING
kc0abc17s901:/server9fs2 MISSING DISK(s)
 APM000637001700000-0049 MISSING
 APM000637001700000-004a MISSING
 APM000637001700000-0053 MISSING
 APM000637001700000-0054 MISSING


To run in verbose mode printing out additional information, type the following command:
$ mpfsinfo -v 123.45.67.89

Output:
123.45.67.89:/S2_Shg_mnt1 OK
 FNM000836000810000-0007 OK
 FNM000836000810000-0008 OK
123.45.67.89:/S2_Shg_mnt2 OK
 FNM000836000810000-0009 OK
 FNM000836000810000-000a OK
123.45.67.89:/S2_Shg_mnt3 OK
 FNM000836000810000-000d OK
 FNM000836000810000-000e OK
$

To print mpfsinfo usage information, type the following command:
$ mpfsinfo -h

Output:
Usage: /usr/sbin/mpfsinfo [options] fmpserver...
options: -h help
         -v verbose

If the server is not available, the following error message is displayed:

$ mpfsinfo -v ka0abc12s402

Warning: No MPFS disks found
ka0abc12s402: Cannot reach server.
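When several Data Movers need to be validated, a small loop can summarize the results. This is a sketch only; the server names are placeholders and the MISSING and Cannot reach strings are taken from the examples above:

$ for srv in ka0abc12s402 kc0abc17s901; do
>     if mpfsinfo "$srv" | grep -Eq 'MISSING|Cannot reach'; then
>         echo "$srv: problem found, rerun mpfsinfo -v $srv"
>     else
>         echo "$srv: all exported file systems OK"
>     fi
> done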


Setting MPFS parameters

A list of MPFS parameters may be found in the /proc/mpfs/params file. The parameter settings shown are the default or recommended values.

If a Linux server reboot is performed, several of these parameters revert to the default value unless they are set to a persistent state. “Setting persistent parameter values” on page 143 explains the procedure for applying these parameters across the reboot process.

Kernel parameters

Use Table 10 as a guide for minimum and maximum settings for each parameter. To display the current settings:

$ cat /proc/mpfs/params

Output:

Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
AllowMultiMount=0
$

Note: Contact EMC Customer Service before making any changes to any kernel parameters.


Table 10 MPFS kernel parameters

defer-close-max (default 1024, minimum 0, no maximum): Closes 12 files when the number of open files exceeds the defer-close-max value.

defer-close-seconds (default 60, minimum 0, no maximum): When an application closes a file, the FMP module will not send the FMP close command to the server until the defer-close-seconds time has passed.

DirectIO (default 1, minimum 0, maximum 3): Allows file reads and writes to go directly from an application to a storage device, bypassing the operating system buffer cache.

disk-reset-interval (default 600, minimum 60, maximum 3,600): Sets the timeframe in seconds for failback to start by using the SAN for all open files.

ecache-size (default 2,047, minimum 31, maximum 16,383): Sets the number of extents per file to keep in the extent cache.

ExostraMode (default 0; do not change): Use default setting; for EMC use only.

MaxCommitBlocks (default 2,048; do not change): Sets the maximum number of blocks to commit in a single commit command.

MaxConcurrentNfsWrites (default 128; do not change): Sets the maximum number of concurrent NFS writes allowed.

max-retries (default 10, minimum 2, maximum 20): Sets the maximum number of SAN-based retries before failing over to NFS.

NotifyPort (default 6,907; do not change): The notification port that is used by default.

prefetch-size (default 256, minimum 4, maximum 2,048): Sets the number of prefetch blocks. Recommended size: no larger than 512 unless instructed to do so by your EMC Customer Support Representative.

Readahead (default 0; do not change): Specifies the read ahead in pages. This parameter only applies to 2.6 kernels.

ReadaheadForRandomIO (default 0, minimum 0, maximum 1): When an application reads a file randomly, the readahead size is reduced by the kernel. By setting this parameter to 1, the readahead size is not reduced by the kernel.

SmallFileThreshold (default 0, minimum 0, no maximum): Sets the size threshold for files. For files smaller than this value, I/O will go through NFS instead of MPFS. When set to 0, this function is disabled.

StatfsBsize (default 65,536, minimum 8,192, maximum 2 M): The file system block size as returned by the statfs system call. This value is not used by MPFS, but some applications choose this as the size of their writes.

UsePseudo (default 1, minimum 0, maximum 1): Enables MPFS to use pseudo devices created by multipathing software, such as PowerPath (a) and the Device-Mapper Multipath tool (b).

a. MPFS supports PowerPath version 5.3 on RHEL 4 U4-U7, RHEL 5 U1-U3, and SLES 10 SP2.
b. MPFS supports the Device-Mapper Multipath tool on RHEL 4 U4-U7 and RHEL 5 U1-U3.


Setting persistent parameter values

Parameters in /etc/mpfs.conf and /etc/sysconfig/EMCmpfs can be set so that they remain in effect after a Linux server reboot.

mpfs.conf parameters

Prefetch, along with several other MPFS parameters, may be set to a persistent state by modifying the /etc/mpfs.conf file. These parameters are:

◆ globPrefetchSize — Sets the number of blocks to prefetch when requesting mapping information.

◆ globMaxRetries — Sets the number of retries for all FMP requests.

◆ globDiskResetInterval — Sets the number of seconds between retrying by using SAN.

To view the /etc/mpfs.conf file:
$ cat /etc/mpfs.conf

Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#



#
# Users who supply the direct I/O flag when opening a file will get
# behavior that is dependant on the global setting of a parameter
# called "globDirectIO".
#
# There are three valid values for this parameter. They are:
#
# 0 -- No direct I/O support, return ENOSUPP
# 1 -- Direct I/O via MPFS
# 2 -- Direct I/O via NFS even on MPFS file systems
# 3 -- Direct I/O via MPFS, and optimized for DIO to pre-allocated file,
#      DIO to non-allocated file will fallback to NFS
#
# globDirectIO=1

#
# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600

#
# Set number of extents per file to keep in the extent cache. This
# should be a power of 2 minus 1.
# Too many extents means that searching the extent cache may be slow.
# Too few, and we will have to do too many RPCs.
#
# globECacheSize=2047

#
# Set the number of retries for all FMP requests
#
# globMaxRetries=10

#
# Set the number of blocks to prefetch when requesting mapping
# information
#
# globPrefetchSize=256

#
# Set number of simultaneous NFS writes to dispatch on SAN failure
#
# globMaxConcurrentNfsWrites=128

#
# Set optimal blocksize for MPFS file systems
#
# globStatfsBsize=65536

#
# Set number of readahead pages
# This is only used for 2.6 kernels. For 2.4 kernel users, please set
# vm.max_readahead


#
# globReadahead=250

#
# Readahead support for random I/O
# When an application reads a file randomly, the readahead size is
# reduced by kernel.
# By setting this parameter to 1, the readahead size is not reduced.
#
# globReadaheadForRandomIO=0

#
# Set maximum number of blocks to commit in a single commit command.
#
# globMaxCommitBlocks=2048

#
# Set the notification port if the mpfsd is unable to get the requested
# port when it starts.
#
# globNotifyPort=6907

#
# Enable MPFS to use Pseudo devices created by Multipathing software,
# namely PowerPath and Device-Mapper Multipath tool.
# globUsePseudo=1

#
# Set Defer Close Second for FileObj, 0 to disable
#
# globDeferCloseSec=60

#
# Set maximum Defer Close files
#
# globDeferCloseMax=1024

#
# Set size threshold for files
# For file smaller than this value, IO will go through NFS instead of
# MPFS
# When set to 0, this function is disabled.
#
# globSmallFileThreshold=0

To modify the /etc/mpfs.conf file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. The following example shows this file after modification:


#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#
# Set the number of blocks to prefetch when requesting mapping
# information
#
# globPrefetchSize=256

# Set number of readahead pages
#
# globReadahead=250

# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600

# Set the number of retries for all FMP requests
#
globMaxRetries=8

# Set the number of seconds between retrying via SAN
#
globDiskResetInterval_sec=700
#

In the example, globMaxRetries was changed to 8 and globDiskResetInterval_sec was changed to 700.
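The same change can also be made non-interactively. The sed commands below are only a sketch; they touch only commented parameter lines of the form "# globName=value", and a backup copy of the file is made first:

$ cp /etc/mpfs.conf /etc/mpfs.conf.bak
$ sed -i -e 's/^# *globMaxRetries=.*/globMaxRetries=8/' \
>        -e 's/^# *globDiskResetInterval_sec=.*/globDiskResetInterval_sec=700/' /etc/mpfs.conf
$ grep -E '^glob(MaxRetries|DiskResetInterval_sec)=' /etc/mpfs.conf
globMaxRetries=8
globDiskResetInterval_sec=700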

DirectIO support DirectIO allows file reads and writes to go directly from an application to a storage device bypassing operating system buffer cache. This feature was added for applications that use the O_DIRECT flag when opening a file.

When MPFS opens files by using DirectIO, the read/write behavior depends on the global setting of a parameter called DirectIO.

Note: DirectIO is only a valid option for 2.6 kernels (such as RHEL 4, RHEL 5, SLES 10, and CentOS 5).

To examine the value of the MPFS DirectIO setting:

$ grep DirectIO /proc/mpfs/params
globDirectIO=1
$

Note: The default value is 1, meaning that DirectIO is enabled.


To change the DirectIO parameter value, use vi or another text editor that does not add carriage return characters to the file. Remove the comment from the following line in the /etc/mpfs.conf file on the server:

globDirectIO=1

Change the value 1 to the desired DirectIO action value,

where:

0 = No DirectIO support, return ENOSUPP

1 = DirectIO via MPFS

2 = DirectIO via NFS even on MPFS file systems

3 = DirectIO via MPFS and optimized for DirectIO to a pre-allocated file, DirectIO to a non-allocated file will fallback to NFS

After changing the DirectIO parameter in the /etc/mpfs.conf file, perform the following steps to activate DirectIO for MPFS:

1. Unmount all MPFS file systems:

$ umount -a -t mpfs

2. Stop the MPFS service:

$ service mpfs stop

3. Restart the MPFS service:

$ service mpfs start

4. Remount the MPFS file systems:

$ mount -a -t mpfs

Rebooting the Linux server will also activate the changes made in the /etc/mpfs.conf file.

Changes to global parameters in the /etc/mpfs.conf file persist across reboots.
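As a sketch, the restart sequence above can be run in one pass after editing globDirectIO in /etc/mpfs.conf, with a final check that the new value is in effect:

$ umount -a -t mpfs
$ service mpfs stop
$ service mpfs start
$ mount -a -t mpfs
$ grep DirectIO /proc/mpfs/params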

Example In the following example, the /etc/mpfs.conf file has been modified so that MPFS does not use DirectIO when writing to and reading from MPFS file systems.

Type the command:

$ cat /etc/mpfs.conf


Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi
# module is loaded
#

#
# Users who supply the direct I/O flag when opening a
# file will get behavior that is dependant on the global
# setting of a parameter called "globDirectIO".
#
# There are three valid values for this parameter. They
# are:
# 0 -- No direct I/O support, return ENOSUPP
# 1 -- Direct I/O via MPFS
# 2 -- Direct I/O via NFS even on MPFS file systems
# 3 -- Direct I/O via MPFS, and optimized for Direct I/O
#      to pre-allocated file, Direct I/O to non-allocated file will fallback to NFS
#
globDirectIO=0
#

Note: The O_DIRECT flag is used by the DirectIO parameter. The man 2 open man page contains detailed information on the O_DIRECT flag.

Asynchronous I/O interfaces allow an application thread to dispatch an I/O without waiting for the I/O operation to complete. Later the thread can check to see if the I/O has completed. This feature is for applications that use the aio_read and aio_write interfaces. Asynchronous I/O is now supported natively in 2.6 Linux kernels.

EMCmpfs parameters

The /etc/sysconfig/EMCmpfs file contains the following parameters:

◆ The MPFS_DISCOVER_SLEEP_TIME parameter sets the number of seconds the MPFS discovery daemon sleeps before it performs a disk rediscovery process. The default is 900 seconds.

If an error occurs on any Celerra volume, the daemon wakes so that it can perform a rediscovery without waiting the full sleep time.

◆ The HRDP_SLEEP_TIME parameter sets how often the hrdp daemon wakes up, checks whether additional disks are present, and protects them if they are Celerra disks. The default is 300 seconds.


◆ The MPFS_ISCSI_PID_FILE parameter can be used to customize the name of the file containing the Process ID (PID) of the iSCSI daemon.

◆ The MPFS_ISCSI_REDISCOVER_TIME parameter sets the number of seconds to wait to allow iSCSI to rediscover new LUNs. The default is 10 seconds.

◆ The MPFS_SCSI_CMD_TIMEOUT parameter sets the number of seconds to wait for SCSI commands before timing out. The default is 5 seconds.

◆ The PERF_TIMEOUT parameter sets the number of seconds to wait before sending performance packets after the last hello message. The default is 900 seconds.

◆ The MPFS_DISKSPEED_BUF_SIZE parameter sets the default disk speed test buffer size. The default is 5 MB.

◆ The MPFS_MOUNT_HVL parameter sets the default behavior for hierarchical volume management (HVM). HVM uses protocols that allow the Linux server to conserve memory and CPU resources. The default value of 1 uses hierarchical volume management if it is supported by the server; a value of 0 does not use hierarchical volume management. The default behavior can be overridden on the mount command by using the -o hvl=0 option to disable HVM or the -o hvl=1 option to enable HVM. “Hierarchical volume management” on page 38 describes hierarchical volumes and their management.

◆ The MPFS_DISCOVER_LOAD_BALANCE parameter is based on Celerra best practices to statically load-balance the CLARiiON storage array. Load-balancing the Symmetrix storage array is not necessary. The default is to disable userspace load-balancing.

To view the parameters:

$ cat /etc/sysconfig/EMCmpfs
# Default values for MPFS daemons
#
#
# /** Default amount of time to sleep between rediscovery */
# MPFS_DISCOVER_SLEEP_TIME=900
#
# /** Default amount of time to sleep between reprotection of disks */
# HRDP_SLEEP_TIME=300
#
# /** Default name of iscsi pid file */
# MPFS_ISCSI_PID_FILE=/var/run/iscsi.pid
#


# /** Default time to allow iscsi to rediscover new LUNs */
# MPFS_ISCSI_REDISCOVER_TIME=10
#
# /** Default timeout for scsi commands (inquiry, etc) in seconds */
# MPFS_SCSI_CMD_TIMEOUT=5
#
# /** Number of seconds to send performance packets after last hello
#     message */
# PERF_TIMEOUT=900
#
# /** Default disk speed test buffer size, unit is MB */
# MPFS_DISKSPEED_BUF_SIZE=5
#
# /** The value of this determines the default behavior for using
#     hierarchical volume management.
#     Assign a value of 1 to use hierarchical volume management by default
#     if it is supported by the server
#     Assign a value of 0 to not use hierarchical volume management by
#     default. The default value
#     can be changed by using the -o hvl=0 or -o hvl=1 option on the mount
#     command. */
# MPFS_MOUNT_HVL=1
#
# /** Default value for Multipath static load-balancing */
# The static load-balancing is based on Celerra best practice for
# CLARiiON backend.
# It is not useful for Symmetrix backend
# Set 1 to get optimized load-balancing for multiple clients
# Set 2 to get optimized load-balancing for single client
# Set 0 to disable userspace load-balancing
# MPFS_DISCOVER_LOAD_BALANCE=0

To modify the /etc/sysconfig/EMCmpfs file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. The following example shows this file after modification:

# Default values for MPFS daemons
#
#
# /** Default amount of time to sleep between rediscovery */
MPFS_DISCOVER_SLEEP_TIME=800
$


Appendix A: File Syntax Rules

This appendix describes the file syntax rules to follow when creating a .txt file to create a site and add Linux hosts. This appendix includes the following topics:

◆ File syntax rules for creating a site
◆ File syntax rules for adding hosts


File syntax rules for creating a site

The file syntax rules for creating a text (.txt) file used to create a site are described below.

Celerra with iSCSI ports

To create a text file for a site with a Celerra with iSCSI ports:

Command syntax cssite sn=<site-name> spw=<site-password> un=<celerra-user-name> pw=<celerra-password> addr=<celerra-name>

where:
<site-name> = name of the site
<site-password> = password for the site
<celerra-user-name> = username of the Celerra Control Station
<celerra-password> = password for the Celerra Control Station
<celerra-name> = network name or IP address of the Celerra Control Station

Example To create a site with a site name of mysite, a site password of password, a Celerra username of celerratest, a Celerra password of swlabtest, and an IP address of 123.45.67.890:

cssite sn=mysite spw=password un=celerratest pw=swlabtest addr=123.45.67.890


Celerra with iSCSI-to-Fibre Channel bridge

To create a text file for a site with a Celerra and an iSCSI-to-Fibre Channel bridge:

Command syntax switchsite sn=<site-name> spw=<site-password> un=<celerra-user-name> pw=<celerra-password> addr=<celerra-name> mdsun=<mds-user-name> mdspw=<mds-password> mdsaddr=<mds-name>

where:
<site-name> = name of the site
<site-password> = password for the site
<celerra-user-name> = username of the Celerra Control Station
<celerra-password> = password for the Celerra Control Station
<celerra-name> = network name or IP address of the Celerra Control Station
<mds-user-name> = username of the iSCSI-to-Fibre Channel bridge
<mds-password> = password for the iSCSI-to-Fibre Channel bridge
<mds-name> = network name or IP address of the iSCSI-to-Fibre Channel bridge control port

Example To create a site with a site name of mysite, a site password of password, a Celerra username of celerratest, a Celerra password of swlabtest, a Celerra Control Station IP address of 123.45.67.890, an MDS username of mdslab, an MDS password of polonium, and an MDS IP address of 135.79.124.680:

switchsite sn=mysite spw=password un=celerratest pw=swlabtest addr=123.45.67.890 mdsun=mdslab mdspw=polonium mdsaddr=135.79.124.680


File syntax rules for adding hosts

The file syntax rules for creating a text (.txt) file used to add Linux hosts are described below.

Linux host To create one or more Linux hosts that share the same username (root) and password:

Command syntax linuxhost un=<host-root-user-name> pw=<host-password> <host-name1>[...<host-nameN>]

where:
<host-root-user-name> = root username of the Linux host
<host-password> = password for the Linux host
<host-name1>[...<host-nameN>] = one or more Linux hostnames or IP addresses

Example To create a Linux host with a root username of test, a Linux host password of swlabtest, and Linux host IP addresses of 123.45.678.90 and 135.79.124.68:

linuxhost un=test pw=swlabtest 123.45.678.90 135.79.124.68
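Assuming that site and host records may be combined in one file, as the introduction to this appendix suggests, a complete input file might look like the following sketch; all names, passwords, and addresses here are placeholders taken from the examples above:

cssite sn=mysite spw=password un=celerratest pw=swlabtest addr=123.45.67.890
linuxhost un=test pw=swlabtest 123.45.678.90 135.79.124.68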


Appendix B: Error Messages and Troubleshooting

This appendix describes messages that the Linux server writes to the system error log and troubleshooting problems, causes, and solutions. The appendix includes the following topics:

◆ Linux server error messages
◆ Troubleshooting
◆ Known problems and limitations


Linux server error messages

Table 11 describes Linux server error messages.

Table 11 Linux server error messages

Message: notification error on session create
Explanation: The session was not created. Verify the mpfsd process is running.

Message: session to server lost
         <server_name> session expired now=<time>, expiration=<time>
Explanation: The Linux server has lost contact with the Celerra Network Server. This loss of contact is probably due to a network or server problem and not an I/O error.

Message: reestablished session OK
         handles may have been lost
Explanation: The first message indicates the Linux server has re-established contact with the Celerra Network Server. The second indicates an attempt at re-establishing contact has been made, but has not succeeded. Neither message indicates an I/O error.

Message: could not find disk signature for <nnnnn> (<nnnnn> is the disk signature)
Explanation: The Celerra Network Server specified a storage location, <nnnnn>, that is inaccessible from the Linux server.

Message: could not start <xxx> thread (<xxx> is a component of the Linux server)
Explanation: A component of the Linux server failed to start.

Message: error accessing volume. I/O routed to LAN
Explanation: This message is printed in the log file when the Linux server receives an error message while communicating with the Symmetrix storage system over Fibre Channel. All subsequent I/O operations for the file are done over NFS until the file is reopened. After the file is reopened, the Fibre Channel SAN path is retried.


Troubleshooting

This section lists problems, causes, and solutions in troubleshooting the MPFS software.

The EMC Celerra MPFS for Linux Clients Release Notes provide additional information on troubleshooting, known problems, and limitations.

Installing MPFS software

The following problems may be encountered while installing the MPFS software.

Problem Installation of the MPFS software fails with an error message such as:
Installing ./EMCmpfs-6.0.1.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...    ##################################### [100%]
   1:EMCmpfs    ##################################### [100%]
The kernel that you are running, 2.6.22.18-0.2-default, is not
supported by MPFS.
The following kernels are supported by MPFS on SuSE:
 SuSE-2.6.16.27-0.6-default
 SuSE-2.6.16.27-0.6-smp
 SuSE-2.6.16.46-0.12-default
 SuSE-2.6.16.46-0.12-smp
 SuSE-2.6.16.53-0.8-default
 SuSE-2.6.16.53-0.8-smp
 SuSE-2.6.16.60-0.21-default
 SuSE-2.6.16.60-0.21-smp
 SuSE-2.6.5-7.282-default
 SuSE-2.6.5-7.282-smp
 SuSE-2.6.5-7.283-default
 SuSE-2.6.5-7.283-smp
 SuSE-2.6.5-7.286-default
 SuSE-2.6.5-7.286-smp
 SuSE-2.6.5-7.287.3-default
 SuSE-2.6.5-7.287.3-smp
 SuSE-2.6.5-7.305-default
 SuSE-2.6.5-7.305-smp
 SuSE-2.6.5-7.308-default
 SuSE-2.6.5-7.308-smp

Cause The kernel being used is not supported.

Solution Use a supported OS kernel. The EMC Celerra MPFS for Linux Clients Release Notes provide a list of supported kernels.


Mounting and unmounting a file system

The following problems may be encountered in mounting or unmounting a file system.

Problem The MPFS software does not run or the MPFS daemon did not start.

Cause The MPFS software may not be installed.

Solution Verify that the MPFS software is installed and the MPFS daemon has started by using the following procedure:
1. Use RPM to verify the installation:

rpm -q EMCmpfs

If the MPFS software is installed properly, the output is displayed as:

EMCmpfs-6.0.x-x

Note: Alternatively, use the mpfsctl version command to verify that Linux server is installed. The mpfsctl man page or “Using the mpfsctl utility” on page 122 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

ps -ef |grep mpfsd

The output will look like this if the MPFS daemon has started:

root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process is running, as root, start MPFS by using the following command:

$ /etc/rc.d/init.d/mpfs start

Problem The mount command displays messages about unknown file systems.

Cause An option was specified that is not supported by the mount command.

Solution Check the mount command options and correct any unsupported options:
1. Display the mount_mpfs man page to find supported options: man mount_mpfs
2. Run the mount command again with the correct options.


Problem The mount command displays the following message:
mount: must be root to use mount

Cause Permissions are required to use the mount command.

Solution Log in as root and try the mount command again.

Problem The mount command displays the following message:
nfs mount: get_fh: <hostname>:: RPC: Rpcbind failure - RPC: Timed out

Cause The Celerra Network Server or NFS server specified is down.

Solution Check that the correct server name was specified and that the server is up with an exported file system.

Problem The mount command displays the following message:
$ mount -t mpfs 123.45.67.890:/rcfs /mnt/mpfs
Volume ’APM000643042520000-0008’ not found.
Error mounting /mnt/mpfs via MPFS

Cause The MPFS mount operation could not find the physical disk associated with the specified file system.

Solutions Use the mpfsinq command to verify the physical disk device associated with the file system is connected to the server over Fibre Channel and is accessible from the server.

Problem The mount command displays the following message:
mount: /<filesystem>: No such file or directory

Cause No mount point exists.

Solution Create a mount point and try the mount again.


Problem The mount command displays this message:
mount: fs type mpfs not supported by kernel.

Cause The MPFS software is not installed.

Solution Install the MPFS software and try the mount command again.

Problem A file system cannot be unmounted. The umount command displays this message:
umount: Device busy

Cause Existing processes were using the file system when an attempt was made to unmount it, or the umount command was issued from the file system itself.

Solution Identify all processes, stop all processes, and unmount the file system again:
1. Use the fuser command to identify all processes using the file system.
2. Use the kill -9 command to stop all processes.
3. Run the umount command again.

Problem The mount command hangs.

Cause The server specified with the mount command does not exist or cannot be reached.

Solution Stop the mount command, check for a valid server, and retry the mount command again:
1. Interrupt the mount command by using the interrupt key combinations (usually Ctrl-C).
2. Try to reach the server by using the ping command.
3. If the ping command succeeds, retry the mount.


Problem The mount command displays the message:
permission denied.

Causes
Cause 1: Permissions are required to access the file system specified in the mount command.
Cause 2: You are not the root user on the server.

Solutions
Solution 1: Ensure the file system has been exported with the right permissions, or set the right permissions for the file system (the Celerra Network Server Command Reference Manual provides information on permissions).
Solution 2: Use the su command to become the root user.

Problem The mount command displays the message:
RPC program not registered.

Cause The server specified in the mount command is not a Celerra Network Server or NFS server.

Solution Check that the correct server name was specified and the server has an exported file system.

Problem The mount command logs this message in the /var/log/messages file: Couldn’t find device during mount.

Cause The MPFS mount operation could not find the physical disk associated with the specified file system.

Solution Use either the fdisk command or the mpfsinq command to verify the physical disk device associated with the file system is connected to the server over Fibre Channel and is accessible from the server.


Miscellaneous issues

The following miscellaneous issues may be encountered with a Linux server.

Problem The mount command displays this message:
RPC: Unknown host.

Cause The server name specified in the mount command does not exist on the network.

Solution Check the server name and use the IP address, if necessary, to mount the file system:
1. Make sure the correct server name is specified in the mount command.
2. If the correct name was not specified, check whether the host’s /etc/hosts file or the NIS/DNS map contains an entry for the server.
3. If the server does appear in /etc/hosts or the NIS/DNS map, check whether the server responds to the ping command.
4. If the ping command succeeds, try using the server’s IP address instead of its name in the mount command.

Problem The mount command displays the following message:
$ mount -t mpfs ka0abc12s401:/server4fs1 /mnt
mount: fs type mpfs not supported by kernel

Cause The MPFS software is not installed on the Linux server.

Solution Install the MPFS software and try the mount command again:
1. Install the MPFS software on the Linux server.
2. Run the mount command again.

Problem User cannot write to a mounted file system.

Cause Write permission is required on the file system or the file system is mounted as read-only.

Solution Verify that you have write permission and try writing to a mounted file system again:
1. Check that you have write permission on the file system.
2. Try unmounting the file system and remounting it in read/write mode.


Problem The following message appears:
NFS server not responding.

Cause The Celerra Network Server is unavailable due to a network-related problem, a reboot, or a shutdown.

Solution Check whether the server responds to the ping command. Also try unmounting and remounting the file system.

Problem Removing the MPFS software package fails.

Causes
Cause 1: The MPFS software package is not installed on the Linux server.
Cause 2: Trying to remove the MPFS software package while one or more MPFS-mounted file systems are active, and I/O is taking place on the active file system. A message appears on the Linux server such as:
ERROR: Mounted MPFS filesystems found on the system.

Please unmount all MPFS filesystems before removing the product.

Solutions
Solution 1: Make sure the MPFS software package name is spelled correctly, with uppercase and lowercase letters specified. If the MPFS software package name is spelled correctly, verify that the MPFS software is installed on the Linux server:
$ rpm -q EMCmpfs

If the MPFS software is installed properly, the output is displayed as: EMCmpfs-6.0.1-xxx

If the MPFS software is not installed, the output is displayed as:
Package "EMCmpfs" was not found.

Solution 2: Unmount the MPFS file systems and try removing the MPFS software package again:
1. Stop the I/O.
2. Unmount all active MPFS file systems by using the umount command.
3. Restart the removal process.
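As a sketch, once applications and I/O are stopped, the removal sequence can be run as follows; rpm -e is the standard RPM removal command, and verifying the package name with rpm -q first avoids spelling problems:

$ umount -a -t mpfs
$ rpm -q EMCmpfs
EMCmpfs-6.0.1-xxx
$ rpm -e EMCmpfs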


Known problems and limitations

The following sections describe known problems and limitations for Linux servers.

Auto discovery The iSCSI software initiator in RHEL 4.0 does not properly auto-discover LUs and can cause some delay in the mounting process. To fix this issue, edit the /etc/sysconfig/iscsi file on the server. Uncomment the following line and change the value to 60. For example:

# ESTABLISHTIMEOUT=30

The uncommented line with the new value follows:
ESTABLISHTIMEOUT=60

A server reboot is required for this change to take effect.

automount limitation

When using the automount capabilities with NFS or MPFS, the mount fails with the following messages:

Linux server side message in /var/log/messages: mount: <hostname:IP address>:/fs1 failed, reason given by server: No such file or directory

Data Mover message in the server_log: NFS: 3: checkExportedByPath: verifyPath failed with status 7 for path /fs1 client <NFS client name>

Ensure that there are no escaped white spaces at the end of the path in the map file used by autofs service on the Linux server.

The Linux vendor's support website provides more information on this issue.

automount -t mpfs flag

automount does not accept the -t mpfs flag.

To fix this issue, edit the /etc/auto.misc file and add the following line:

mnt -fstype=mpfs /server/fs

Thereafter, the mount -t mpfs command will function normally.


Character disk I/O An application or utility using character disk I/O should not run on the same system as the MPFS software. The sg_dd utility, for example, is used to copy data to and from Linux SCSI generic (sg) devices. As a data protection safeguard, do not run this or other character disk I/O utilities on the same system as the MPFS software.

Hardware initiators EMC has not tested MPFS on any hardware iSCSI initiators and recommends that you use the tested software initiators.

I/O fallthrough When the MPFS file system is full, or when a user exceeds a set quota, MPFS is designed to allow I/Os to fall through to NFS.

Informational messages

During an MPFS installation, messages similar to the following may get logged in the /var/log/messages file:

Feb 2 12:07:15 hvcwy3889 modprobe: Warning: loading /lib/modules/2.4.21-27.0.2.ELsmp/kernel/fs/mpfs/mpfs.o will taint the kernel: non-GPL license - Proprietary

You can disregard these messages. They indicate that the proprietary EMC Celerra MPFS module has been loaded. GPL compatible licenses are not impacted in any way.

iSCSI port failure When using a single Cisco MDS iSCSI port with 30 or more servers active simultaneously, and the vm.max-readahead parameter is set to a value greater than 95, the iSCSI port fails over to NFS.

The default value for the vm.max-readahead parameter is 31. Depending on the number of servers, this parameter can be increased to improve performance.

This is only a problem under a very high load/stress, and with many servers sharing the same port. For less loaded environments, a value of 128 or 256 is adequate.

MPFS data protection

MPFS protects Celerra disks from being used by programs other than MPFS with the HighRoad Disk Protection (hrdp) program. When the MPFS service is started at boot time, all EMC disks attached to the Linux server are protected.


To protect any LUs added after boot time, use the hrdp command to protect those LUs. For example, type:

$ /usr/sbin/hrdp

Use the following steps to verify disks are protected:
1. Run the mpfsinq command to view the Celerra disk signatures by typing:

$ mpfsinq -v

Output:

Celerra signature vendor product_id device serial number or path
APM000531007850006-001c EMC SYMMETRIX /dev/sdaa
 path = /dev/sdaa(0x41a0) Active
APM000531007850006-001d EMC SYMMETRIX /dev/sdab
 path = /dev/sdab(0x41b0) Active
APM000531007850006-001e EMC SYMMETRIX /dev/sdac
 path = /dev/sdac(0x41c0) Active
APM000531007850006-001f EMC SYMMETRIX /dev/sdad
 path = /dev/sdad(0x41d0) Active
APM000531007850006-0020 EMC SYMMETRIX /dev/sdae
 path = /dev/sdae(0x41e0) Active
APM000531007850006-0021 EMC SYMMETRIX /dev/sdaf
 path = /dev/sdaf(0x41f0) Active

2. Choose one of the disks for verification. For example, type:
$ dd if=/dev/sdc count=1

Output:
dd: opening `/dev/sdc': No such device
$

The disk is protected if No such device appears.


MPFS software hangs on reboot

When using the Broadcom Tigon3 Ethernet drivers on an Intel IA64 processor-based Linux server running RHEL 4 - U7, the Linux server hangs on reboot after the MPFS software is installed.

To work around this issue, add a line containing sleep 10 to the /etc/init.d/iscsi file on the server. By using vi or another text editor that does not add carriage returns to the end of each line, locate the line containing modprobe. Add sleep 10 before the line containing modprobe.

For example, type:

sleep 10
modprobe sd_mod > /dev/null 2>&1
#

Note: This workaround must be applied to every Linux server that uses the configuration described above.

mpfsinfo command error

When using the mpfsinfo command, an error message similar to the following may be encountered:

$ mpfsinfo
can't find package Tclx while executing
"package require Tclx"
 (file "/usr/sbin/mpfsinfo" line 236)
$

If this happens, Tool Command Language scripting (TclX) is not installed on the server. TclX is required for mpfsinfo command usage. TclX may be found on the SLES distribution CD or on open-source Internet websites.

Multiple mount points

When using MPFS v6.0, you may not mount a file system by using NFS on one mount point, for example, /mnt/share1nfs, and on another mount point, /mnt/share1mpfs, using mpfs. If you attempt to simultaneously mount the same share at two different places, one with MPFS and one without, the mount command will return an error such as:
MPFS mount: Name not unique on network
Error mounting /sgmnt via MPFS


Multiple MPFS RHEL 5 hosts using iSCSI

When using Red Hat Enterprise Linux 5 or any of its updates, if more than one MPFS host shares the same pair of iSCSI ports on the CLARiiON, the total write throughput drops significantly. To solve this problem, follow these steps:

1. Add net.ipv4.tcp_congestion_control = htcp and net.ipv4.tcp_ecn = 1 to the /etc/sysctl.conf file on all the MPFS Linux hosts, as shown in the example after these steps.

2. Run /sbin/sysctl -p at the command prompt to load the new settings from /etc/sysctl.conf.

3. Restart all the MPFS Linux hosts.
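For example, the additions to /etc/sysctl.conf on each host from step 1 look like this (a sketch containing exactly the two settings named above):

# Improve write throughput when several MPFS hosts share CLARiiON iSCSI ports
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_ecn = 1

The settings can then be loaded immediately with /sbin/sysctl -p and are reapplied automatically when the hosts restart.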

NFS or MPFS mounting may fail

When an NFS mount is performed by using the autofs service from a Linux server, the mount might fail when the map file has escaped white space at the end of the path. This Linux issue can also affect MPFS mounts.

If the path entry has a tab character at the end, the mount fails with the following messages:

◆ Linux server side messages in /var/log/messages:

mount: <hostname:IP address>/fs1 failed, reason given by server: No such file or directory

◆ Data Mover message in the server_log:

NFS: 3: checkExportedByPath: verifyPath failed with status 7 for path /fs1 client <NFS client name>

Ensure that there is no escaped white space at the end of the path in the map file used by the autofs service on the Linux server. The Linux vendor’s support website contains more information on this issue.
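For example, an indirect autofs map of the following form mounts cleanly as long as nothing (not even an escaped space or tab) follows the path. The map file name, mount root, and server name are hypothetical:

/mpfs   /etc/auto.mpfs                 (line in /etc/auto.master)
fs1     -fstype=nfs   server1:/fs1     (entry in /etc/auto.mpfs; no trailing white space after /fs1)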

PowerPath

PowerPath is supported but not recommended with MPFS because path failover is built into the Linux server. If you use PowerPath, performance degradation of the MPFS system is expected. Knowledgebase article emc165953 provides more details.


Server load averages

Load averages may appear high under heavy I/O conditions, for example:

3:50pm up 4:54, 3 users, load average: 18.24, 18.28, 18.93

High load averages (4 and above) occur because processes waiting for I/O are counted along with processes waiting for available CPU time. This degree of load average is normal under heavy I/O.

Symmetrix microcode 5771/5772

Symmetrix microcode version 5772 and some 5771 versions (.86.95, .97.95, .91.99, .94.102, .95.103 and later) have been enhanced to prevent prefetch tasks spawned by sequential reads from flooding the Symmetrix cache. This is done by quickly reusing the Symmetrix cache used by the prefetch tasks. For environments where it is highly desirable that this prefetched data not be flushed out (for example, OLTP, or configurations where a file is read sequentially into cache and the cached data is repeatedly reused), contact your EMC Customer Support Representative and reference Knowledgebase article emc164574.

Uninstalling HRDP

After uninstalling the EMCmpfs RPM, hrdp is still loaded in the kernel. Reboot the server to remove it.
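To confirm whether the module is still loaded before rebooting, lsmod can be checked. The hrdp module name shown here is an assumption; match it against the name reported when the module loads on your system:

$ lsmod | grep -i hrdp

If the module is listed, reboot the server to unload it.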

Unmounting a file system

MPFS does not implement the umount -f option to force the unmount of a file system that is in use. As a result, if processes are hung or the network has been disconnected, it may not be possible to unmount all MPFS mount points. In such a situation, shut down any applications before forcing a reboot of the system. Then use the reboot command with the -f and -n options to force the machine to reboot.

“Unmounting the MPFS file system” on page 104 describes how to unmount an MPFS file system.
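A minimal sketch of the sequence (the mount point is a placeholder):

$ umount /mnt/mpfs1          (hangs or reports that the device is busy)
  ...stop or kill the applications that are using the mount point...
$ reboot -f -n               (-f forces the reboot without calling shutdown; -n reboots without syncing)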


Connecting CLARiiON CX3-40C iSCSI Cables

After the components are installed in a cabinet and cabled, the next step is to connect the iSCSI ports on the CLARiiON CX3-40C-based storage array. If the cables are already connected, verify that they are connected properly and all of the connectors are fully seated. This appendix includes the following topic:

◆ iSCSI cabling..................................................................................... 172


iSCSI cabling

The CLARiiON CX3-40C has four 10/100/1000-Mb/s Ethernet ports (RJ45 connectors) for iSCSI I/O to a network switch, server NIC, or HBA:

1. Before configuring the ports, record the network information for the ports in Table 12 on page 172.

2. Connect up to four copper Ethernet cables from ports 0 iSCSI through 3 iSCSI on each SP, as shown in Figure 8 on page 173, to the iSCSI network.

Table 12 CLARiiON SP iSCSI IP address

Port             IP                            Netmask                       Gateway
0 iSCSI (SP A)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
1 iSCSI (SP A)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
2 iSCSI (SP A)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
3 iSCSI (SP A)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
0 iSCSI (SP B)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
1 iSCSI (SP B)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
2 iSCSI (SP B)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
3 iSCSI (SP B)   _____ . _____ . _____ . _____  _____ . _____ . _____ . _____  _____ . _____ . _____ . _____
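After the addresses recorded in Table 12 have been assigned to the ports, basic IP connectivity from a Linux server can be confirmed with ping. The address shown is a placeholder for whatever was recorded for port 0 iSCSI on SP A:

$ ping -c 3 <SP A port 0 iSCSI IP address>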


Figure 8 CLARiiON CX3-40C storage processor ports

[Figure 8 shows the rear of a CX3-40C storage processor: the back-end Fibre Channel ports (BE 0, BE 1), the front-end Fibre Channel ports (4 Fibre, 5 Fibre; CX3-40C only), the iSCSI ports (0 iSCSI through 3 iSCSI), the power and fault LEDs, the service and SPS ports, the management LAN and service LAN ports, and the AC cord.]


Glossary

This glossary defines terms useful for MPFS administrators.

C

Celerra MPFS over iSCSI Multi-Path File System over iSCSI-based clients. MPFS client running an iSCSI initiator works in conjunction with an IP-SAN switch containing an iSCSI to SAN blade. The IP-SAN blade provides one or more iSCSI targets that transfer data to the storage area network (SAN) storage systems. See also Multi-Path File Systems (MPFS).

Celerra Network Server EMC network-attached storage (NAS) product line.

Challenge Handshake Authentication Protocol (CHAP) Access control protocol for secure authentication using shared passwords called secrets.

client Front-end device that requests services from a server, often across a network.

command line interface (CLI) Interface for typing commands through the Control Station to perform tasks that include the management and configuration of the database and Data Movers and the monitoring of statistics for the Celerra cabinet components.

Common Internet File System (CIFS) File-sharing protocol based on the Microsoft Server Message Block (SMB). It allows users to share file systems over the Internet and intranets.

Control Station Hardware and software component of the Celerra Network Server that manages the system and provides the user interface to all Celerra components.


D

daemon UNIX process that runs continuously in the background, but does nothing until it is activated by another process or triggered by a particular event.

Data Mover In a Celerra Network Server, a cabinet component running its own operating system that retrieves data from a storage device and makes it available to a network client. This is also referred to as a blade. A Data Mover is sometimes internally referred to as DART since DART is the software running on the platform.

disk volume On Celerra systems, a physical storage unit as exported from the storage array. All other volume types are created from disk volumes.

E

extent Set of adjacent physical blocks.

F

fallthrough Fallthrough occurs when MPFS temporarily employs the NFS or CIFS protocol to provide continuous data availability, reliability, and protection while block I/O path congestion or unavailability is resolved. This fallthrough technology is seamless and transparent to the application being used.

Fast Ethernet Any Ethernet specification with a speed of 100 Mb/s. Based on the IEEE 802.3u specification.

Fibre Channel Nominally 1 Gb/s data transfer interface technology, although the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data can be transmitted and received simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run over Fibre Channel. Consequently, a single connectivity technology can support high-speed I/O and networking.

File Mapping Protocol (FMP) File system protocol used to exchange file layout information between an MPFS client and the Celerra Network Server. See also Multi-Path File Systems (MPFS).

file system Method of cataloging and managing the files and directories on a storage system.


FLARE Embedded operating system in CLARiiON disk arrays.

G

gateway Celerra network server that is capable of connecting to multiple storage arrays, either directly (direct-connected) or through a Fibre Channel switch (fabric-connected).

Gigabit Ethernet Any Ethernet specification with a speed of 1000 Mb/s. IEEE 802.3z defines Gigabit Ethernet over fiber and cable, which has a physical media standard of 1000Base-X (1000Base-SX short wave, 1000Base-LX long wave) and 1000Base-CX shielded copper cable. IEEE 802.3ab defines Gigabit Ethernet over an unshielded twisted pair (1000Base-T).

H

host Addressable end node capable of transmitting and receiving data.

I

Internet Protocol (IP) Network layer protocol that is part of the Open Systems Interconnection (OSI) reference model. IP provides logical addressing and service for end-to-end delivery.

Internet Protocol address (IP address) Address uniquely identifying a device on any TCP/IP network. Each address consists of four octets (32 bits), represented as decimal numbers separated by periods. An address is made up of a network number, an optional subnetwork number, and a host number.

Internet SCSI (iSCSI) Protocol for sending SCSI packets over TCP/IP networks.

iSCSI initiator iSCSI endpoint, identified by a unique iSCSI name, which begins an iSCSI session by issuing a command to the other endpoint (the target).

iSCSI target iSCSI endpoint, identified by a unique iSCSI name, which executes commands issued by the iSCSI initiator.


K

kernel Software responsible for interacting most directly with the computer’s hardware. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates computer resources.

L

logical device One or more physical devices or partitions managed by the storage controller as a single logical entity.

logical unit (LU) For iSCSI on a Celerra Network Server, a logical unit is an iSCSI software feature that processes SCSI commands, such as reading from and writing to storage media. From an iSCSI host perspective, a logical unit appears as a disk device.

logical unit number (LUN) Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

logical volume Logical devices aggregated and managed at a higher level by a volume manager.

M

metadata Data that contains structural information, such as access methods, about itself.

metavolume On a Celerra system, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hyper volume or hyper. Every file system must be created on top of a unique metavolume.

mirrored pair Logical volume with all data recorded twice, once on each of two different physical devices.

mirroring Method by which the storage system maintains two identical copies of a designated volume on separate disks.

mount Process of attaching a subdirectory of a remote file system to a mount point on the local machine.


mount point Local subdirectory to which a mount operation attaches a subdirectory of a remote file system.

MPFS session Connection between an MPFS client and a Celerra MPFS Network Server.

MPFS share Shared resource designated for multiplexed communications using the MPFS file system.

Multi-Path File Systems (MPFS) Celerra Network Server feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix or CLARiiON array. MPFS adds a lightweight protocol called File Mapping Protocol (FMP) that controls metadata operations.

N

nested mount file system (NMFS) File system that contains the nested mount root file system and component file systems.

nested mount file system root File system on which the component file systems are mounted read-only except for mount points of the component file systems.

network-attached storage (NAS) Specialized file server that connects to the network. A NAS device, such as Celerra Network Server, contains a specialized operating system and a file system, and processes only I/O requests by supporting popular file sharing protocols such as NFS and CIFS.

network file system (NFS) Network file system protocol that allows a user on a client computer to access files over a network as easily as if they were attached to its local disks.

P

PowerPath EMC host-resident software that integrates multiple path I/O capabilities, automatic load balancing, and path failover functions into one comprehensive package for use on open server platforms connected to Symmetrix or CLARiiON enterprise storage systems.


R

Redundant Array of Independent Disks (RAID) Method for storing information where the data is stored on multiple disk drives to increase performance and storage capacities and to provide redundancy and fault tolerance.

S

server Device that handles requests made by clients connected through a network.

slice volume On a Celerra system, a logical piece or specified area of a volume used to create smaller, more manageable units of storage.

small computer system interface (SCSI) Standard set of protocols for host computers communicating with attached peripherals.

storage area network (SAN) Network of data storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. See also network-attached storage (NAS).

storage pool Automatic Volume Management (AVM), a Celerra feature, organizes available disk volumes into groupings called storage pools. Storage pools are used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.

storage processor (SP) On a CLARiiON storage system, a circuit board with memory modules and control logic that manages the storage system I/O between the host’s Fibre Channel adapter and the disk modules.

Storage processor A (SP A) Generic term for the first storage processor in a CLARiiON storage system.

Storage processor B (SP B) Generic term for the second storage processor in a CLARiiON storage system.

stripe size Number of blocks in one stripe of a stripe volume.


stripe volume Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible.

See also disk volume, metavolume, slice volume, and volume.

Symmetrix Remote Data Facility (SRDF) EMC technology that allows two or more Symmetrix systems to maintain a remote mirror of data in more than one location. The systems can be located within the same facility, in a campus, or hundreds of miles apart using fibre or dedicated high-speed circuits. The SRDF family of replication software offers various levels of high-availability configurations, such as SRDF/Synchronous (SRDF/S) and SRDF/Asynchronous (SRDF/A).

T

tar Backup format in PAX that traverses a file tree in depth-first order.

Transmission Control Protocol (TCP) Connection-oriented transport protocol that provides reliable data delivery.

U

unified storage Celerra network server that is connected to a captive storage array that is not shared with any other Celerra network servers and is not capable of connecting to multiple storage arrays.

V

Virtual Storage Area Network (VSAN) SAN that can be broken up into sections allowing traffic to be isolated within the section.

volume On a Celerra system, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives.

See also disk volume, metavolume, slice volume, and stripe volume.

Index

A
Access Logix configuration 77
accessing storage 81
administering MPFS 117
architecture
   Celerra gateway
      Fibre Channel 20, 43
      iSCSI 22, 44
      iSCSI (MDS-based) 23, 44
   Celerra unified storage
      Fibre Channel 19, 43
      iSCSI 21, 44
      iSCSI (MDS-based) 21, 44
arraycommpath 77, 79, 81, 82, 94, 96, 98
Asynchronous I/O support 148
authentication, CHAP 35

B
best practices
   Celerra Gateway 67
   Celerra volumes 55
   Celerra with MPFS 32
   CLARiiON 67
   file system 55, 56
   LUNs 56
   MDS iSCSI ports 70
   MPFS 32, 55, 62
   MPFS threads 32
   storage systems 34
   stripe size 62

C
Celerra gateway
   Fibre Channel 20, 43
   iSCSI 22, 44
   iSCSI (MDS-based) 23, 44
Celerra Network Server
   configuring 56
   enabling MPFS 66
   setup 53
Celerra Startup Assistant (CSA) Utility 54
Celerra unified storage
   Fibre Channel 19, 43
   iSCSI 21, 44
   iSCSI (MDS-based) 21, 44
Celerra with MPFS configuration 32
Challenge Handshake Authentication Protocol. See CHAP
CHAP
   one-way authentication 35
   reverse authentication 35
   secret 35
   session authentication 35
CLARiiON
   best practices 67
   configuring using CLI commands 67
   CX3-40C iSCSI cables 171
   iSCSI port configuration 75
   storage array requirements 47
command line interface. See mpfsctl commands
commands
   /proc/mpfs/devices 137
   mpfsctl diskreset 123
   mpfsctl diskresetfreq 124


   mpfsctl help 123
   mpfsctl max-readahead 125
   mpfsctl prefetch 127
   mpfsctl reset 128
   mpfsctl stats 128
   mpfsctl version 131
   mpfsctl volmgt 131
   mpfsinfo 139
   mpfsinq 134
   mpfsquota 137
   mpfsstat 132
comments 15
configuration
   overview 30
   planning checklist 41
configuring
   Celerra Network Server 56
   Gigabit Ethernet ports 45
   iSCSI target 75
   storage 81
   storage access 67, 69
   zones 67, 69
creating
   file system 63
   metavolume 63
   mountpoint 64
   MPFS file system 56
   security file 73
   storage groups 79
   stripe 61

D
Data Mover capacity 33
DirectIO support 146
disabling
   arraycommpath 77, 82, 94, 96, 98
   failovermode 77, 82, 94, 96, 98
   HVM 149
   iSCSI targets 67
   read and write protection for Celerra volumes 119
displaying
   accessible LUNs 56
   disks 58
   MPFS devices 134, 136
   MPFS software version 131
   MPFS statistics 128

E
EMC HighRoad Disk Protection (hrdp) program 118
EMCmpfs parameters
   hrdp_sleep_time 148
   mpfs_discover_sleep_time 148
   mpfs_diskspeed_buffer_size 149
   mpfs_iscsi_pid_file 149
   mpfs_iscsi_rediscover_time 149
   mpfs_mount_hvl 149
   mpfs_scsi_cmd_timeout 149
   perf_timeout 149
enabling
   arraycommpath 77, 82, 94, 96, 98
   failovermode 77, 82, 94, 96, 98
   SP interfaces 72
error messages 156

F
failovermode 77, 79, 81, 82, 94, 96, 98
Fibre Channel
   adding hosts to storage groups 82, 93, 98
   configuring adapters 36
   driver installation 81
   switch installation 68
   switch requirements 49
File Mapping Protocol (FMP) 28
file syntax rules 151
file system
   creating 56
   exporting 65
   mounting 64
   names of mounted 59
   names of unmounted 60
   setup 55
firewall FMP ports 109

G
Gigabit Ethernet port configuration 45

H
Hierarchical volume management (HVM)


   default settings 149
   enable/disable 149
   overview 38
   values 149

I
I/O sizes 32
installing
   Fibre Channel switch 50
   MPFS software 106
   MPFS software, troubleshooting 157
   storage system 47
IP-SAN switch requirements 50
iSCSI CHAP authentication 35
iSCSI discovery address 45
iSCSI driver
   starting 92
   stopping 92
iSCSI driver configuration
   CentOS 5.3 88
   RHEL 4 84
   RHEL 5 88
   SLES 10 88
iSCSI initiator
   communication with MDS console 69
   configuring 90
   configuring ports 36, 67
   connection to a SAN switch 34
   names 68
   show IQN name 88
iSCSI proxy new zone 72
iSCSI target configuration 75
iSCSI-to-Fibre Channel bridge configuration 69

L
Linux Server
   configuration 34
   error messages 156
LUNs
   accessible by Data Movers 56
   adding 79
   best practices 56
   displaying 59
   displaying all 86
   failover 81
   maximum number supported 56
   mixed not supported 47
   rediscover new 149
   total usable capacity 56

M
managing using mpfs commands 117
MDS iSCSI port configuration 70
metavolume 63
mounting an MPFS file system 100
   troubleshooting 158
mountpoint, creating 64
MPFS client troubleshooting 157
mpfs commands
   /proc/mpfs/devices 137
   mpfsinfo 139
   mpfsinq 134
   mpfsquota 137
   mpfsstat 132
MPFS configurations 19, 21, 24
MPFS devices, displaying 134
MPFS file system
   creating 56, 63
   exporting 65
   mounting 64, 100
   setup 55
   storage requirements 47
   unmounting 104, 160
MPFS overview 18
MPFS parameters
   conf parameters 143
   kernel parameters 141
   persistent parameters 143
MPFS software
   before installing 106
   blocks per flush 130
   install over existing 110
   installation 106
   installing from a CD 108
   installing from a tar file 106
   managing using hrdp commands 119
   managing using mpfs commands 117
   managing using mpfsctl commands 117
   post installation instructions 113
   starting 110, 112
   uninstalling 115
   upgrading 110


   upgrading from an earlier version 111
   upgrading with MPFS file system mounted 112
   verifying upgrade 114
   version number 131
MPFS threads 32
mpfsctl commands
   mpfsctl diskreset 123
   mpfsctl diskresetfreq 124
   mpfsctl help 123
   mpfsctl max-readahead 125
   mpfsctl prefetch 127
   mpfsctl reset 128
   mpfsctl stats 128
   mpfsctl version 131
   mpfsctl volmgt 131
mpfsinq
   troubleshooting 159

N
number of blocks per flush 130

O
overview of configuring MPFS 30

P
performance
   Celerra Network Server 48
   Celerra volumes 55
   CLARiiON 67
   file systems 55
   gigabit ethernet ports 45
   iSCSI ports 165
   Linux server 51, 67
   MPFS 32, 33, 34, 55, 62, 127, 132, 168
   MPFS reads 126
   MPFS threads 32
   MPFS with Data Movers 33
   MPFS with PowerPath 32
   problems with Linux server 129
   read ahead 126
   storage system 34
   stripe size 32, 55, 62
post installation instructions 109
PowerPath support 32
PowerPath with MPFS 77
Prefetch requirements 34

R
Rainfinity Global Namespace 37
Read ahead performance 126
Read cache requirements 34
removing MPFS software, troubleshooting 163

S
SAN switch zoning 68
secret (CHAP) 35
security file creation 73
SendTargets discovery 90
setting
   arraycommpath 78, 82, 94, 96, 98
   failovermode 77, 82, 94, 96, 98
setting up
   Celerra Network Server 53
   MPFS file system 55
software components
   Celerra Network Server NAS software 46
   iSCSI initiator 46
   MPFS software 46
   Red Hat Enterprise Linux 46, 168
   SUSE Linux Enterprise 46
starting MPFS 110, 112
starting the iSCSI driver 92
statistics
   displaying 128, 132
   resetting counters 128
stopping the iSCSI driver 92
storage group
   adding Fibre Channel hosts 82, 93, 98
   adding initiators 93
   configuring 81
storage guidelines 34
storage pool, created CSA 40, 54
storage system
   configuration recommendations 34
   configuring 68
   installation 47
   requirements 47
   storage array requirements 47
stripe size 55
system component verification 43


T
TCP parameters 70
troubleshooting
   cannot write to a mounted file system 162
   installing MPFS software 157
   Linux client 157
   mounting an MPFS file system 158
   mpfsinq command 159
   NFS server response 163
   removing MPFS software 163
   uninstalling MPFS software 115
   unmounting a file system 160

U
uninstalling MPFS software 115
Unisphere software 54
unmounting an MPFS file system 104, 160, 169
upgrading MPFS software 110

V
verifying an MPFS software upgrade 114
version number, displaying 131
VMware ESX server
   Fibre Channel adapters 36
   iSCSI initiator ports 36
   limitations 36
   requirements with Linux 36
volume stripe size 32
volumes mounted 61
volumes, names of 61

W
write throughput 168

Z
zone activation 72
