Celerra ICON Lab Guide


  • Celerra ICON Celerra Training for Engineering

    Celerra Clients

    IP Network

    Back-end Storage

    SAN

    EMC Education Services

    February, 2006

  • Copyright 2006 EMC Corporation. All Rights Reserved. Version 5.4 Page 2 of 9

  • Copyright:

    Copyright 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC, ICDA (Integrated Cached Disk Array), and EMC2 (the EMC logo), and Symmetrix, are registered trademarks of EMC Corporation. EMC and SRDF are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.

  • Celerra ICON Student Lab Guide


    Trademark Information:

    EMC Trademarks

    EMC2, EMC, Symmetrix, Celerra, CLARiiON, CLARalert, Connectrix, Dantz, Documentum, HighRoad, Legato, Navisphere, PowerPath, ResourcePak, SnapView/IP, SRDF, TimeFinder, VisualSAN, and where information lives are registered trademarks and EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, Access Logix, AutoAdvice, Automated Resource Manager, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CentraStar, CLARevent, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, NetWin, OnAlert, OpenScale, Powerlink, PowerVolume, RepliCare, SafeLine, SAN Architect, SAN Copy, SAN Manager, SDMS, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, Universal Data Tone, and VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.

    Third Party Trademarks

    AIX is a registered trademark of International Business Machines Corporation. Brocade, SilkWorm, SilkWorm Express, and the Brocade logo are trademarks or registered trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Compaq and the names of Compaq products referenced herein are either trademarks and/or service marks or registered trademarks and/or service marks of Compaq. Hewlett-Packard, HP, HP-UX, OpenView, and OmniBack are trademarks, or registered trademarks of Hewlett-Packard Company. McDATA, the McDATA logo, and ES-2500 are registered trademarks of McDATA Corporation. Microsoft, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. NobleNet is a registered trademark of Rogue Wave Software, Inc. SANbox is a trademark of QLogic Corporation. Sun, Sun Microsystems, the Sun Logo, SunOS and all Sun-based trademarks and logos, Java, the Java Coffee Cup Logo, and all Java-based trademarks and logos, Solaris, and NFS, are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. UNIX is a registered trademark of The Open Group.


    Document Revision History:

    Rev # File Name Date

    1  CelerraICON_Lab Guide.doc  2/20/06

  • Celerra ICON Student Lab Guide


    Table of Contents: Celerra ICON Student Lab Guide

    Copyright
    Trademark Information
    Document Revision History
    Lab 1: Celerra Gateway Installation
    Lab 2: Basic Configuration
    Lab 3: File System and NFS Exports
    Lab 4: CIFS Configuration
    Lab 5: SnapSure and Celerra Replicator
    Lab 6: iSCSI on Celerra
    Appendix: Hurricane Marine Case Study


    Format of Lab Guide

    This Lab Guide introduces Celerra Manager v5.4. Since not all functionality is available within the GUI, you will see a combination of Celerra Manager GUI and CLI commands. The CLI commands can be executed from one of three places:

    1. PuTTY on your workstation

    2. CLI Command in the Celerra Manager tree hierarchy. CLI from the GUI supports most CLI commands; however, it does not support interactive commands, such as nas_quotas, and it does not support running vi or other text editors.

    3. Tools in the tree hierarchy > SSH shell. This option gives you full access to all CLI commands, including interactive commands and text editors, and is the option we recommend throughout this lab. When logging in to SSH from Celerra Manager, be sure to log in as nasadmin. You can su to root from there if required. As always, be careful when logged in as root.

    Some of the Celerra Manager screen shots will show different values than what you will be entering, because the guide was created in a lab environment with different IP addresses and a CLARiiON instead of Symmetrix storage.

    Be aware that Celerra Manager can be purchased as Basic Edition or Advanced Edition. The Celerras used in this lab are set up for Advanced Edition with an Enterprise License (NFS and CIFS). All licensing has been enabled. When using other Celerras, be aware that some of the features may not be available because of licensing choices.

  • Celerra ICON: Lab 1

    Celerra ICON Celerra Training for Engineering

    Lab 1: Installing a Gateway System

    EMC Education Services

    Date: September, 2005



    Document Revision History:

    Rev # File Name Date

    2.0  01_Lab1_Celerra Install.doc  February, 2006
    2.1  01_Lab1_Celerra Install.doc  April, 2006

    Table of Contents:

    Exercise 1: Planning and Data Collection

    Exercise 2: Verify Cable Connections

    Exercise 3: Configure the Boot Array

    Exercise 4: Install and Configure NAS Software

    Exercise 5: Verify Installation

    Appendix: Sample Zoning Script


    Lab 1: Installing a Gateway System

    Purpose:

    In this lab you will verify the proper cabling of a Celerra Gateway system and perform a complete installation of LINUX and the NAS code on the Control Station and DART on the Data Movers.

    Objectives: Using appropriate documentation:

    Verify correct cabling between the Data Movers and the Control Station, from the Data Movers and the Control Station to the IP switch, and the back-end cabling to the storage system
    Install LINUX and NAS code on the Control Station
    Configure a CLARiiON back-end
    Configure switch zoning
    Complete the installation of DART on the Data Movers
    Verify successful installation

    References: Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    P/N 300-002-071 Rev A03 Version 5.4


    Lab 1 Exercise 1: Planning and Data Collection

    Reference Appendix G of Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    1.

    Gather software:
    What version of NAS are you going to install? ____________________________
    Do you have the CD and boot floppy? __________________________________

    2.

    Gather information about the CLARiiON back-end storage:
    Is this a dedicated or shared back-end? __________________________________
    SPA Management Port IP address: _____________________________________
    SPB Management Port IP address: _____________________________________
    Navisphere Administrator User ID: _____________________________________
    Navisphere Administrator Password: _______________________________

    3.

    Gather information about the Fibre Channel switch:
    IP address: _______________________________________________________
    Administrator User ID: ______________________________________________
    Administrator Password: _______________________________


    4. Control Station Network Worksheet:

    Primary Internal (Private) Network Setup
    Hostname: emcnas_i0
    IP Address: _____ . _____ . _____ . 100 (default 192.168.1.100)
    Netmask: 255.255.255.0

    Backup Internal (Private) Network Setup
    Hostname: emcnas_i1
    IP Address: _____ . _____ . _____ . 100 (default 192.168.2.100)
    Netmask: 255.255.255.0

    Configure TCP/IP (if Static IP Address selected)
    IP Address: _____ . _____ . _____ . _____ (the IP address of port eth3, which is cabled to the customer's public or maintenance LAN)
    Netmask: _____ . _____ . _____ . _____
    Default Gateway (IP) Address: _____ . _____ . _____ . _____
    Primary Nameserver: _____ . _____ . _____ . _____

    Configure Network
    Domain Name: ___________________________________________ (the domain name of the customer's network to which the Control Station is attached)
    Host Name: ______________________________________________
    Secondary Nameserver (optional): _____ . _____ . _____ . _____
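The static-IP entries on this worksheet are related: the netmask you record determines the network and broadcast addresses for the interface address. A minimal sketch of that relationship using Python's `ipaddress` module; the address shown is the worksheet's default internal address, not a customer value.

```python
import ipaddress

# Worksheet default for the primary internal network: 192.168.1.100 / 255.255.255.0
iface = ipaddress.ip_interface("192.168.1.100/255.255.255.0")

print(iface.network)                    # 192.168.1.0/24
print(iface.network.broadcast_address)  # 192.168.1.255
print(iface.netmask)                    # 255.255.255.0
```

The broadcast address derived here is the value later passed to server_ifconfig when interfaces are configured in Lab 2.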


    Lab 1 Exercise 2: Verify Cable Connections

    1.

    Reference Chapters 3-10 of Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    Reference the appropriate chapter depending on the specific model and configuration.

    2.

    Verify the Fibre Channel cables from the Data Mover BE ports to the Fibre Channel switch(es).

    3.

    Verify Fibre Channel cables from Switch to Storage Array.

    4.

    Verify Serial Cables from Control Station to the Data Movers.

    5.

    Verify Private LAN cables.

    6.

    Verify External Network cables.


    Lab 1 Exercise 3: Configure the Boot Array

    1. Reference Chapter 11 of Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    2. Power On the Boot Array.

    3. Verify CLARiiON Array Software Versions.

    Verify Release Levels

    Verify all software is committed

    Verify Access Logix is enabled

    4. Verify CLARiiON Array Write Cache settings.

    Write cache is enabled

    Memory is allocated to write cache

    5. Verify Array Privileged User Account Settings.

    The privileged user list is empty

    Or, it includes the root user from the Control Station


    Lab 1 Exercise 4: Install and Configure NAS Software

    1.

    Reference Chapter 12 of Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    2.

    Connect the null modem cable and configure a HyperTerminal session:

    19200 bps
    8 data bits
    No parity
    1 stop bit
    No flow control

    3. Insert the CD and the floppy and boot the Control Station.

    4.

    Select serialinstall when prompted, and follow the prompts to install LINUX on the Control Station.

    5.

    When prompted, remove the boot media and press Return to boot from the newly installed LINUX on the Control Station.

    6.

    Enter the Network parameters when prompted. Use information gathered in Exercise 1 of this lab.

    7.

    Select Gateway System configuration, CLARiiON back-end, and Fabric Connect.

    8.

    Specify the IP address of the CLARiiON Back-end.

    9.

    Select NO when prompted "Do you want to use Celerra Auto-config?" In many cases you would allow the auto-config scripts to perform all of the configuration, but for this exercise we want to perform the steps manually.


    10. Reference Appendix E of Celerra Network Server Celerra NS500G/NS600G/NS700G Gateway Configuration

    Reference Appendix E Zoning FC switches and Manually configuring Control LUNs.

    Because you entered No at the previous prompt, the Control Station will query the Data Movers and report the WWNs of their Fibre Channel HBAs. Record this information below:

    Data Mover 2 HBA 0 : WWPN = _______________________________________

    Data Mover 2 HBA 1 : WWPN = _______________________________________

    Data Mover 3 HBA 0 : WWPN = _______________________________________

    Data Mover 3 HBA 1 : WWPN = _______________________________________

    Note: It might be best to cut and paste this information into a Notepad document. It is critical that you capture this information exactly!

    11.

    Collect the WWPNs of the CLARiiON SP ports. Using either NaviCLI or the Navisphere Manager GUI, collect the WWPNs of the SP ports and record the information below:
    SPA Port 0 : WWPN = ______________________________________________
    SPA Port 1 : WWPN = ______________________________________________
    SPB Port 0 : WWPN = ______________________________________________
    SPB Port 1 : WWPN = ______________________________________________
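A WWPN is a 64-bit identifier, conventionally written as eight colon-separated hex octets. Since the installation fails if a WWPN is mistyped, a quick format check catches most transcription errors. A hypothetical sketch, not part of the lab procedure; the helper name is illustrative.

```python
import re

# Eight colon-separated pairs of hex digits, e.g. 50:06:01:60:30:60:2f:3b
WWPN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$", re.IGNORECASE)

def is_valid_wwpn(wwpn: str) -> bool:
    """Return True if the string looks like a well-formed WWPN."""
    return WWPN_RE.match(wwpn) is not None

print(is_valid_wwpn("50:06:01:60:30:60:2f:3b"))  # True
print(is_valid_wwpn("50:06:01:60:30:60:2f"))     # False (only 7 octets)
```

A check like this validates only the format; it cannot tell you whether the value belongs to the right HBA, so still compare against the switch name server if in doubt.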

    12.

    Zone the Fibre Channel switch: Using the WWPNs recorded previously, create zones from each Data Mover port to each port on the SP. Create and activate the zone configuration. Reference the sample zoning script on the last page of this lab for examples of the zoning commands.

    13.

    Create a Navisphere Storage Group for the Celerra Data Movers.


    14. Register the Initiator Records for each Data Mover port. Right click on the CLARiiON array and select Connectivity Status from the drop down menu. Initiator records are created for each connection from a Data Mover HBA to a CLARiiON Storage Processor port. Each Initiator Record is configured with the specific operating parameters required for the Celerra when it is registered. Registration can be performed one record at a time, or all at once by performing a Group Edit. Use the following parameters when configuring Initiator Records:

    Initiator Type - CLARiiON Open
    Array Comm Path - Disabled
    Failover Mode - 0
    Unit Serial Number - Array
    Vendor - Celerra
    Model -
    This HBA Belongs to - New Host (IP Address of the Control Station)

    15.

    Create a RAID 5 RAID Group: Using either the CLI or the GUI, create a RAID 5 RAID group that includes 5 disks.

    16.

    Bind two LUNs of size 11 GB.

    17.

    Bind four additional LUNs of size 2 GB.

    18.

    Configure a hot spare: Create a RAID Group with one disk in it and bind a LUN of type HS.

    19.

    Add the newly created LUNs to the Storage Group you previously created.

    Note: It is critical that the Host LUN numbers (hlu) be correct:
    The two 11 GB volumes must be hlu 0 and 1
    The four 2 GB volumes must be hlu 2-5
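The HLU rule above amounts to mapping the control LUNs in size order: the two 11 GB LUNs take HLU 0-1 and the four 2 GB LUNs take HLU 2-5. A small sketch of that assignment; the LUN names here are illustrative, not array values.

```python
# Illustrative (name, size-in-GB) pairs for the six control LUNs bound above.
luns = [("lun0", 11), ("lun1", 11), ("lun2", 2),
        ("lun3", 2), ("lun4", 2), ("lun5", 2)]

def assign_hlus(luns):
    """Assign HLUs largest-first: 11 GB LUNs get 0-1, 2 GB LUNs get 2-5."""
    ordered = sorted(luns, key=lambda lun: -lun[1])  # stable, largest first
    return {name: hlu for hlu, (name, _size) in enumerate(ordered)}

hlus = assign_hlus(luns)
print(hlus)  # {'lun0': 0, 'lun1': 1, 'lun2': 2, 'lun3': 3, 'lun4': 4, 'lun5': 5}
```

On the real array the HLU is set per LUN when it is added to the Storage Group; the sketch only captures the ordering rule.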

    20.

    Add the Initiator records that you registered in step 14 to the storage group.

    21.

    Complete the installation. Return to the Control Station HyperTerminal session and enter c to continue with the installation. The installation will continue, create the appropriate file systems, and load the DART software. Finally, the Data Movers will reboot off the newly installed software. This will take 30 minutes or longer. Do not walk away, as you will be prompted one more time to enable UNICODE.


    Lab 1 Exercise 5: Verify Installation

    1.

    Using the Serial console connection or SSH, connect to the Control Station.

    2.

    Verify the software version of the code running on the Control Station. $ nas_version

    What is the version number? __________________________________________

    3.

    Verify the software version of the DART code running on the Data Movers. $ server_version ALL

    What is the version number? __________________________________________

    Verify that the Data Movers have access to the Control Volumes. $ nas_disk -list

    What are the IDs and sizes of the disk volumes?

    ID: _________________ Size: __________________

    ID: _________________ Size: __________________

    ID: _________________ Size: __________________

    ID: _________________ Size: __________________

    ID: _________________ Size: __________________

    ID: _________________ Size: __________________


    4. List the file systems that are mounted on each Data Mover.
    $ server_df ALL
    What file systems are mounted on each Data Mover? _______________________
    __________________________________________________________________

    5.

    Verify the connection between the Data Movers and the Control Station over both internal networks using the ping command.
    $ ping -c 1 server_2
    $ ping -c 1 server_2b
    $ ping -c 1 server_3
    $ ping -c 1 server_3b

    End of Exercise


    Sample Zoning Script for Brocade and NS704/CX700

    zonecreate "DM_2_BE_0_SPA_Port0","50:06:01:60:30:60:2f:3b;50:06:01:60:00:60:02:42"
    zonecreate "DM_2_BE_0_SPB_Port0","50:06:01:60:30:60:2f:3b;50:06:01:68:00:60:02:42"
    zonecreate "DM_2_BE_1_SPA_Port1","50:06:01:61:30:60:2f:3b;50:06:01:61:00:60:02:42"
    zonecreate "DM_2_BE_1_SPB_Port1","50:06:01:61:30:60:2f:3b;50:06:01:69:00:60:02:42"
    zonecreate "DM_3_BE_0_SPA_Port0","50:06:01:68:30:60:2f:3b;50:06:01:60:00:60:02:42"
    zonecreate "DM_3_BE_0_SPB_Port0","50:06:01:68:30:60:2f:3b;50:06:01:68:00:60:02:42"
    zonecreate "DM_3_BE_1_SPA_Port1","50:06:01:69:30:60:2f:3b;50:06:01:61:00:60:02:42"
    zonecreate "DM_3_BE_1_SPB_Port1","50:06:01:69:30:60:2f:3b;50:06:01:69:00:60:02:42"
    zonecreate "DM_4_BE_0_SPA_Port0","50:06:01:60:30:60:2e:d2;50:06:01:60:00:60:02:42"
    zonecreate "DM_4_BE_0_SPB_Port0","50:06:01:60:30:60:2e:d2;50:06:01:68:00:60:02:42"
    zonecreate "DM_4_BE_1_SPA_Port1","50:06:01:61:30:60:2e:d2;50:06:01:61:00:60:02:42"
    zonecreate "DM_4_BE_1_SPB_Port1","50:06:01:61:30:60:2e:d2;50:06:01:69:00:60:02:42"
    zonecreate "DM_5_BE_0_SPA_Port0","50:06:01:68:30:60:2e:d2;50:06:01:60:00:60:02:42"
    zonecreate "DM_5_BE_0_SPB_Port0","50:06:01:68:30:60:2e:d2;50:06:01:68:00:60:02:42"
    zonecreate "DM_5_BE_1_SPA_Port1","50:06:01:69:30:60:2e:d2;50:06:01:61:00:60:02:42"
    zonecreate "DM_5_BE_1_SPB_Port1","50:06:01:69:30:60:2e:d2;50:06:01:69:00:60:02:42"
    cfgcreate "Celerra_cfg", "DM_2_BE_0_SPA_Port0; DM_2_BE_0_SPB_Port0; DM_2_BE_1_SPA_Port1; DM_2_BE_1_SPB_Port1; DM_3_BE_0_SPA_Port0; DM_3_BE_0_SPB_Port0; DM_3_BE_1_SPA_Port1; DM_3_BE_1_SPB_Port1; DM_4_BE_0_SPA_Port0; DM_4_BE_0_SPB_Port0; DM_4_BE_1_SPA_Port1; DM_4_BE_1_SPB_Port1; DM_5_BE_0_SPA_Port0; DM_5_BE_0_SPB_Port0; DM_5_BE_1_SPA_Port1; DM_5_BE_1_SPB_Port1"
    cfgenable "Celerra_cfg"
    cfgsave
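The script above follows a regular pattern: each Data Mover HBA n is zoned to port n on both SPs, one initiator and one target per zone. A hypothetical Python sketch that generates such a script from recorded WWPNs; the dictionaries below cover only Data Mover 2, with WWPNs taken from the sample script, and the generator is an illustration, not an EMC tool.

```python
# Generate Brocade zonecreate/cfgcreate commands from recorded WWPNs.
dm_hbas = {  # (data_mover, hba) -> HBA WWPN
    (2, 0): "50:06:01:60:30:60:2f:3b",
    (2, 1): "50:06:01:61:30:60:2f:3b",
}
sp_ports = {  # (sp, port) -> SP port WWPN; HBA n is zoned to port n on both SPs
    ("A", 0): "50:06:01:60:00:60:02:42",
    ("B", 0): "50:06:01:68:00:60:02:42",
    ("A", 1): "50:06:01:61:00:60:02:42",
    ("B", 1): "50:06:01:69:00:60:02:42",
}

zones = []
for (dm, hba), hba_wwpn in sorted(dm_hbas.items()):
    for sp in ("A", "B"):
        name = f"DM_{dm}_BE_{hba}_SP{sp}_Port{hba}"
        zones.append(f'zonecreate "{name}","{hba_wwpn};{sp_ports[(sp, hba)]}"')

script = "\n".join(zones + [
    'cfgcreate "Celerra_cfg", "' + "; ".join(z.split('"')[1] for z in zones) + '"',
    'cfgenable "Celerra_cfg"',
    "cfgsave",
])
print(script)
```

Generating the script rather than hand-typing it avoids exactly the transcription errors the earlier note warns about; the output for Data Mover 2 matches the first four zonecreate lines above.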


  • Celerra ICON: Lab 2

    Celerra ICON Celerra Training for Engineering

    Lab 2: Basic Configuration

    EMC Education Services

    Date: February, 2006



    Document Revision History:

    Rev # File Name Date

    1.0 02_Lab2_CelerraICON_Basic Config February, 2006

    Table of Contents:

    Exercise 1: Configure Data Mover Failover Policy

    Exercise 2: Configure Network Interface

    Exercise 3: Configure Network High Availability

    Exercise 4: Testing Basic Data Mover Failover


    Lab 2: Basic Configuration

    Purpose:

    In this lab you will examine the components of your Data Mover to determine if they are compatible for Data Mover failover. Then you will configure server_2 to use server_3 as its standby Data Mover. Next you will configure network interfaces and high availability networks using Ethernet trunking and FailSafe Networks. Finally, you will test Data Mover failover.

    Depending on the back-end storage being used, the Data Movers may already be configured with a standby. For example, an NS700 initial setup configures one Data Mover as a primary and one as a standby.

    Objectives:
    Configure Data Mover failover
    Configure Celerra Data Mover network interface cards
    Configure a Data Mover for EtherChannel trunking
    Configure and test a Data Mover for FailSafe Network
    Test Data Mover failover

    References: Configuring Celerra Quick Start P/N 300-002-003 Rev A01 Version 5.4 April 2005

    Configuring Standbys on Celerra P/N 300-002-034 Rev A01 Version 5.4 April 2005

    Configuring and Managing Celerra Networking P/N 300-002-016 Rev A01 Version 5.4 April 2005

    Configuring and Managing Celerra Network High Availability P/N 300-002-034 Rev A01 Version 5.4 April 2005


    Lab 2 Exercise 1: Configuring Data Mover Failover

    1.

    Connect to your Celerra Control Station: Using PuTTY or another SSH tool, connect to your Celerra Control Station and log in as nasadmin.

    2.

    View the current Data Mover configuration: View the server table for your Celerra and record the following information for each Data Mover:
    $ nas_server -list
    id  type  slot  name
    _________________________________
    _________________________________
    _________________________________
    _________________________________
    _________________________________
    What does DM type 1 indicate? ______________________________
    What does DM type 4 indicate? _______________________________

    3.

    Confirm that server_2 and server_3 have the same hardware components.
    $ server_sysconfig server_2 -Platform
    Record the following information from server_2:
    Processor speed (MHz) _____________
    Total main memory (MB) ___________
    Motherboard _____________________
    Bus speed (MHz) __________________
    $ server_sysconfig server_2 -pci | more
    Make note of all of the PCI devices in your Data Mover. Now issue the same commands for server_3 and compare the results.
    Do your two Data Movers have the same hardware components? ______


    4. Configure Data Mover failover standby relationships: Use the server_standby command to configure server_2 to use server_3 as its standby Data Mover; use the auto policy.
    $ server_standby server_2 -c mover=server_3 -policy auto

    5.

    Verify the standby configuration for the servers:
    $ nas_server -info server_2
    Does the output identify server_3 as the standby server? _______
    $ nas_server -info server_3
    Does the output indicate that server_3 is a standby for server_2? ______

    6.

    Using Celerra Manager, confirm Data Mover failover standby relationships: Log on to Celerra Manager as nasadmin by launching Internet Explorer and entering the IP address of your Control Station.

    7.

    Verify Celerra Manager Licenses: Click on the IP address of your Control Station > Licenses tab to verify that all licenses have been enabled. If they have not, enable them.

  • Celerra ICON: Lab 2

    8.

    Verify that server_3 is configured as type Standby:

    Expand Data Movers in the tree hierarchy > select server_3. View the Role field to determine whether server_3 is configured as a Standby. If not, select standby from the Role: dropdown and click Apply (the Data Mover will reboot).

    9.

    Confirm that server_2 is configured to use server_3 as its standby Data Mover and the Failover policy is auto:

    Expand Data Movers in the tree hierarchy > select server_2


    Lab 2 Exercise 2: Configure Data Mover Network Interface Cards

    In this lab you will be configuring Celerra with networking settings. This is a basic task that will likely be required for all Celerra configurations.

    1.

    Gather required information: Before you begin, you will want to confirm the following key information from Appendix E:

    Hurricane Marine's DNS domain name is: hmarine.com
    The DNS IP address is: 10.127.*.161
    Hurricane Marine's NIS domain name is: hmarine.com
    The NIS IP address is: 10.127.*.163

    The IP address of your Control Station:
    ______ . ______ . ______ . ______

    The IP address information for server_2 and server_3. Note: while each Data Mover has multiple network devices, we are only going to configure a single interface, so we only need one IP address per Data Mover.

    server_2:
    IP address: ______ . ______ . ______ . ______
    Subnet mask: ______ . ______ . ______ . ______
    Broadcast address: ______ . ______ . ______ . ______
    Default gateway: ______ . ______ . ______ . ______

    server_3:
    IP address: ______ . ______ . ______ . ______
    Subnet mask: ______ . ______ . ______ . ______
    Broadcast address: ______ . ______ . ______ . ______
    Default gateway: ______ . ______ . ______ . ______


    2. Connect to your Celerra Control Station: Using PuTTY or another SSH tool, connect to your Celerra Control Station and log in as nasadmin.

    3.

    Verify hardware configuration for Network Interface Card (NIC): Determine the physical network hardware in server_2.

    $ server_sysconfig server_2 -pci
    What kind of network devices are in server_2? _____________________________

    4. Configure speed and duplex for the network cards in server_2: Configure the Ethernet card in server_2 to use a transmission speed of 100 Mb/s and full duplex mode.
    $ server_sysconfig server_2 -pci x -o speed=100,duplex=full
    (where x is the port name, for example cge0)
    Repeat step 4 for ports cge1, cge2, and cge3.
    Note: While auto-negotiate is the default, it is a best practice to hard-configure the speed and duplex of the network devices.
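Since the same command is repeated for each of the four ports, the command strings can be generated in a loop. A small sketch; the port names are the ones used in this lab, and the flag spelling follows the command shown in the step above (consult the server_sysconfig man page for the authoritative syntax).

```python
# Build the speed/duplex configuration command for each candidate port.
ports = ["cge0", "cge1", "cge2", "cge3"]
commands = [
    f"server_sysconfig server_2 -pci {port} -o speed=100,duplex=full"
    for port in ports
]
for cmd in commands:
    print(cmd)
```

Each printed line is then run on the Control Station; generating them keeps the four invocations consistent.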

    5.

    Configure an IP address for a network interface in server_2: Configure the IP address for the 10/100/1000 Ethernet device (cge0) on server_2.
    $ server_ifconfig server_2 -c -D cge0 -n cge0_if0 -p IP <IP Address> <Netmask> <Broadcast Address>
    Hint: Reference the manual page (man server_ifconfig)

    6.

    Verify the settings for the interface:
    $ server_ifconfig server_2 cge0_if0
    Verify that the IP, netmask, and gateway addresses match what was identified in step 1.


    7. Test the interface: Test the network interface by pinging the external IP address of your Control Station.
    $ server_ping server_2 <Control Station IP>
    Note: You may also want to ping back from the Windows workstation to your Data Mover by connecting to your Windows workstation and issuing the ping command from a command prompt.

    8.

    Configure the default gateway: Configure server_2 to use the default gateway recorded in step 1.
    $ server_route server_2 -a default <Gateway IP>

    9.

    Test connectivity through the gateway: Test your configuration by pinging from your Data Mover to your Windows workstation.
    $ server_ping server_2 <Workstation IP>

    10.

    Configure the Data Mover to use DNS and NIS: Configure server_2 to use the names and addresses of the DNS and NIS servers that you recorded in step 1.
    $ server_dns server_2 hmarine.com 10.127.*.161
    $ server_nis server_2 hmarine.com 10.127.*.163


    11. Using Celerra Manager, confirm that the Fast Ethernet card in server_2 is configured to use a transmission speed of 100Mbs, and full duplex mode:

    Select Network in the tree hierarchy > click the Devices tab
    Select server_2 in Show Network Devices for:
    To set speed and duplex, double click on ana0 (cge0) to view its properties > select 100FD for Speed/Duplex:

    12. Using Celerra Manager, test the network interface by pinging the external IP address of your Control Station:

    Select the Ping tab > select Data Mover: server_2 > select the Interface address you just configured > enter the IP address of your control station in Destination:

    Click OK

    You should see output indicating a successful ping.


    Answer the following question(s) using the Celerra man pages as a resource.

    How would you shut down the network interface temporarily? (See man page for server_ifconfig.)
    ___________________________________________________________________

    How would you delete this configuration? (See man page for server_ifconfig.)
    ___________________________________________________________________

    How would you configure the Celerra to use a specific network interface to get to the DNS and NIS servers? (See man page for server_route.)
    ___________________________________________________________________
    ___________________________________________________________________


    Lab 2 Exercise 3: Configuring EtherChannel

    EtherChannel trunking provides network high availability in the event of an adapter, cable, switch port, or switch failure. It is a Cisco-specific implementation and an alternative to the open-standard LACP. EtherChannel is configured either by itself or in conjunction with FailSafe Networks (FSN), which will be configured in the following exercise.


    1. Planning and Verification of Ethernet switch Configuration: Prior to configuring EtherChannel Trunking on the Data Mover, it is critical that you verify the correct setup of the Ethernet switch. Gather and record the following information:

    The ports on the Data Movers that will be setup as an EtherChannel are cge0 and cge1 for this exercise; and cge2 and cge3 for the following exercise.

    Referencing Appendix G, note which physical ports on the Ethernet switch connect to the Data Mover ports.

    Primary Data Mover
    DM port        Switch port
    cge0 (ana0)    __________
    cge1 (ana1)    __________
    cge2 (ana2)    __________
    cge3 (ana3)    __________

    Standby Data Mover
    DM port        Switch port
    cge0 (ana0)    __________
    cge1 (ana1)    __________
    cge2 (ana2)    __________
    cge3 (ana3)    __________

    Note: The Ethernet switch must be configured for EtherChannel. Confirm with your instructor that the appropriate ports are channeled together.


2. Delete existing network interface configurations: Confirm that the Data Mover ports you wish to configure as an EtherChannel trunk are not already in use.
$ server_ifconfig server_2 -a
Delete any existing interface configuration:
$ server_ifconfig server_2 -delete cge0

3. Confirm that all interfaces are set for 100 Mb/Full Duplex:
$ server_sysconfig server_2 -pci

4. Create a trunk virtual device: Configure a virtual device on the Data Mover as a trunked EtherChannel using cge0 and cge1. Call the virtual device trk0.
$ server_sysconfig server_2 -virtual -name trk0 -create trk -option device=cge0,cge1
Verify the trunk virtual device was created:
$ server_sysconfig server_2 -virtual -info trk0

5. Assign an IP address to the virtual device you just configured: You do this in the same manner as you did previously for a physical device.
$ server_ifconfig server_x -create -Device trk0 -name trk0_if0 -protocol IP <IP_Address> <Netmask> <Broadcast_Address>
Verify the configuration of the interface:
$ server_ifconfig server_x trk0_if0

  • Celerra ICON: Lab 2

    Copyright 2006 EMC Corporation. All Rights Reserved. Lab 2 Page 17

6. Test your IP configuration by pinging the Data Mover's default gateway:
$ server_ping server_x 10.127.*.y
(Where 10.127.*.y is the IP address of the Data Mover's default gateway.)

7. Generate network traffic over the EtherChannel interface: On the Control Station, set up a continuous ping.
$ while true
> do
> server_ping server_2 10.127.*.y
> done
(Where 10.127.*.y is the IP address of the Data Mover's default gateway.)

8. Monitor which of the two interfaces is being used: Do not close the window that is running the while loop above. In a separate terminal window, log on to the Celerra's Control Station and use server_netstat in a while true loop to identify and monitor which network interface is being used for the connection.
$ while true
> do
> server_netstat server_2 -i
> sleep 1
> done
You should see only one cge port with increasing packet counts.

9. Simulate a switch port failure: Ask your instructor to disable the switch port to which your active Data Mover port is connected. Examine the activity from step 8; you should see that the traffic has switched to the other Ethernet port. Ask your instructor to re-enable the switch port that was disabled.

10. Stop the while true loops using Ctrl-C.
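The monitoring loops above run until interrupted. A sketch of the same polling pattern in plain shell, with `echo` standing in for `server_ping`/`server_netstat` (those commands exist only on the Control Station), shows a bounded variant that ends on its own instead of needing Ctrl-C:

```shell
# Poll a fixed number of times instead of looping forever.
# "echo poll $count" stands in for the real monitoring command;
# in the lab you would substitute server_netstat and a longer sleep.
count=0
while [ "$count" -lt 3 ]
do
    echo "poll $count"
    count=$((count + 1))
    sleep 1
done
echo "done polling"
```

The unbounded `while true` form used in the lab is convenient for watching a failover live; the bounded form is the safer default in scripts.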


11. Remove the IP configuration from the EtherChannel trunk: The IP address is associated with the interface that is configured on the device. Delete the interface on trk0.
$ server_ifconfig server_2 -delete trk0_if0
Note: The trunk is still configured, but no IP address is configured on it.

12. Verify that the interface was removed but the virtual device is still configured:
$ server_ifconfig server_2 -all
$ server_sysconfig server_2 -virtual -info trk0


Lab 2 Exercise 4: Configuring a FailSafe Network device from two EtherChannel trunks and testing FSN functionality

In this exercise you will configure a FailSafe Network that consists of two EtherChannel trunks. Note: This configuration is not truly highly available; an FSN should be configured with multiple Ethernet switches interconnected with an ISL trunk.

Step Action

1. Verify that trk0 is still configured: In this exercise you will create a FailSafe Network (FSN) device using two EtherChannel trunks. In the previous exercise you created trk0. Verify that it is still configured:
$ server_sysconfig server_2 -virtual

2. Configure a second EtherChannel trunk virtual device on the Data Mover: Previously you used the CLI to create trk0. This time, do the same thing using Celerra Manager. Create a new EtherChannel trunk virtual device called trk1 using cge2 and cge3.

Open Celerra Manager and select Networks from the tree. In the Devices tab, click New, enter the requested information, and click OK.



3. Create an FSN virtual device from the two EtherChannel trunk virtual devices, then assign your Data Mover's IP address to the FSN virtual device:

Select Network in the tree hierarchy > Devices tab > New. Select your server, set Type to Fail Safe Network, and set Device Name to fsn0. Select trk0 and trk1 as the devices (one active, one standby), then click OK.

To assign the IP address to the FSN device: select the Interfaces tab > New. Select your Data Mover, set Device Name to fsn0, enter the appropriate Address and Netmask, and click OK.


4. Verify the FSN configuration using the CLI:
$ server_sysconfig server_x -virtual -info fsn0
Which trunk (trk0 or trk1) is active? __________________________________

5. Generate network traffic over the FSN device: Log in to the Control Station and set up a continuous ping.
$ while true
> do
> server_ping server_2 10.127.*.y
> done
(Where 10.127.*.y is the IP address of the Data Mover's default gateway.)

6. Monitor which of the two interfaces is being used: Do not close the window that is running the while loop above. In a separate terminal window, log on to the Celerra's Control Station and use server_netstat in a while true loop to identify and monitor which network interface is being used for the connection.
$ while true
> do
> server_netstat server_2 -i
> sleep 1
> done
Which network device is showing I/O activity? ______________________________


7. Test FSN failover: Ask your instructor to disable the Ethernet connections one at a time, and notice which interface is carrying the network traffic.
Disable the active port. Does the traffic move to the second device in trk0? ______
Disable the new active port. Does the traffic move to one of the devices in trk1? ______
Disable the port that is currently active. Does the traffic move to the last remaining port? ______
Is the while loop still running? _________________

8. Verify the status of the FSN: Log on to your Control Station and verify the status of your virtual devices.
$ server_sysconfig server_x -virtual -info fsn0
Which trunk (trk0 or trk1) is active? __________________________________

9. Re-enable all of the Ethernet ports for your Data Mover.


Lab 2 Exercise 5: Testing Basic Data Mover Failover

Step Action

1. Verify that your Data Movers have been configured in a primary/standby relationship: View the server table for your Celerra and verify that server_2 is type primary and server_3 is type standby.
$ nas_server -info server_2
$ nas_server -info server_3
If server_2 is not type primary and server_3 is not type standby, reference Lab 2 and configure the relationship appropriately.

2. Verify the status of the FSN:
$ server_sysconfig server_x -virtual -info fsn0
Which trunk (trk0 or trk1) is active? __________________________________

3. Generate network traffic over the FSN device: On the Control Station, set up a continuous ping.
$ while true
> do
> server_ping server_2 10.127.*.y
> done
(Where 10.127.*.y is the IP address of the Data Mover's default gateway.)

4. Perform Data Mover failover: From your Control Station, activate Data Mover failover. server_3 should assume the entire configuration of server_2, including the FSN configuration.
$ server_standby server_x -activate mover
Observe the ping command as the failover test takes place.


5. Confirm the failover: Verify that the failover took place.
$ nas_server -list
You should see that server_2 is now listed as server_2.faulted.server_3; the Data Mover that was the standby should now be listed as server_2.

6. Verify the status of the FSN: server_3 (now called server_2) should have a configuration identical to that prior to the failover, including the FSN configuration.
$ server_sysconfig server_x -virtual

7. Restore server_2.faulted.server_3 to primary status:
$ server_standby server_2 -restore mover

8. Clean up the network configuration: From your Celerra Control Station, delete the interface on fsn0 and remove all virtual network devices (FSN and trunks). Note: When configuring an interface using Celerra Manager, the default name assigned is the same as the IP address.
Confirm the name of the interface:
$ server_ifconfig server_2 -all
Delete the interface:
$ server_ifconfig server_x -delete <interface_name>
Delete the virtual devices:
$ server_sysconfig server_x -virtual -delete fsn0
$ server_sysconfig server_x -virtual -delete trk1
$ server_sysconfig server_x -virtual -delete trk0

    End of Lab Exercise


    Celerra ICON Celerra Training for Engineering

    Lab 3: Creating & Testing File Systems for NFS Client Access

    EMC Education Services

    Date: February, 2006


    Document Revision History:

    Rev # File Name Date

    1.0 03_Lab3_CelerraICON_File Systems February, 2006

Table of Contents:

Exercise 1: Creating Volumes and File Systems

    Exercise 2: Mounting and Exporting File Systems for NFS Access

    Exercise 3: Integrating a Celerra File Server with NIS

    Exercise 4: Exporting File Systems with Root Privileges

    Exercise 5: Exporting a File System for Read Mostly Permissions

    Exercise 6: Extending File Systems

    Exercise 7: Testing Data Mover Failover with NFS Clients


    Lab 3: Creating and Exporting File Systems

    Purpose:

In this lab you will manually create slice, stripe, and meta volumes and then use them to add a file system. Next you will mount the file system and export it for NFS client access with various options. On a UNIX workstation, you will NFS-mount the file system and test access. Finally, you will test Data Mover failover and its effect on NFS clients.

Objectives:
- Create volumes and file systems using the CLI, AVM, and Celerra Manager
- Mount and export file systems for NFS client access
- Export file systems: assigning root to another host
- Export file systems: read mostly
- Integrate into a NIS domain
- Test Data Mover failover and understand the impact to client systems

References:
- Managing Celerra Volumes, P/N 300-01-977 Rev A01, Version 5.4, April 2005
- Managing NFS Access to the Celerra Network Server, P/N 300-002-036 Rev A01, Version 5.4, April 2005
- Configuring and Using Secure NFS with Celerra, P/N 300-002-082 Rev A01, Version 5.4, April 2005


    Lab 3 Exercise 1: Configure File Systems for Celerra - CLI

In this exercise you will configure a Celerra file system and its underlying volume structure. In the following exercises, it will be exported for client access.

1. Connect to the Control Station: Open an SSH session to your Control Station.

2. Determine what disks are available: Before configuring file systems, first verify what disks are available to your Data Movers. Use the following command to list the disk volumes that are seen by your Data Movers.
$ nas_disk -l
Note: The disk numbers will differ depending on whether you are configured with a Symmetrix or CLARiiON back-end.
- With a Symmetrix back-end, available disk volumes begin at d3
- With a CLARiiON back-end, available disk volumes begin at d7
How many standard (STD/CLSTD) disk volumes are available (not in use)?
__________________________________________________________________

3. Verify the disk configuration: When creating stripe volumes, it is a best practice to make sure that the disk volumes used are on separate physical drives on the back-end. Confirm that disks d7, d8, d9, and d10 (d3, d4, d5, and d6 for Symmetrix) are on different physical drives. First, map the Celerra disk names to the logical volume IDs on the back-end.
$ nas_disk -info disk_volume_name
What are the LUN numbers (stor_dev) associated with the following disk volumes?
d7 (d3): ______________________  d8 (d4): ___________________________
d9 (d5): ______________________  d10 (d6): ___________________________


Determine the serial number of the attached storage array.
$ nas_storage -list
Determine the RAID groups associated with the LUNs above.
$ nas_storage -info | more
The output will be verbose. For each RAID group you will see a list of the LUNs associated with that RAID group.
Other best practices:
- Stripe using LUNs across (not within) RAID groups
- RAID groups should be of the same protection type and architecture
- Generally, random workloads perform best with four-volume stripes
Refer to Celerra Network Server Best Practices for Performance.

4. Create four 250 MB slices on separate disks: Create a 250 MB slice on each of disks d7-d10 (d3-d6 for Symmetrix). Name these slices sl1 through sl4.
$ nas_slice -name slx -create dy 250
(Where x is the number of the slice you are creating, and y is the number of the disk volume.)
Verify that the slices were created:
$ nas_slice -list
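As a sanity check on the numbers in this step: striping four 250 MB slices (next step) yields one volume whose usable capacity is simply the sum of the slices, since striping, unlike parity RAID, reserves nothing for redundancy. A quick shell arithmetic sketch:

```shell
# Four 250 MB slices striped into one volume: capacity is the sum,
# because striping distributes data but adds no parity overhead.
slice_mb=250
slices=4
total_mb=$((slice_mb * slices))
echo "usable stripe capacity: ${total_mb} MB"
```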

5. Create a stripe volume from the four slices you just created: Create a stripe volume from sl1, sl2, sl3, and sl4. Make the stripe depth 32768 bytes for a Symmetrix back-end and 8192 for a CLARiiON back-end. Name the stripe volume str1.
$ nas_volume -name str1 -create -Stripe 8192 sl1,sl2,sl3,sl4

Best practice: A stripe depth of 32768 (32 KB) is recommended for both CLARiiON and Symmetrix.


6. Verify volumes: Confirm that your slices now show as being in use.
$ nas_volume -l
(Reference the second column for in-use status.)

7. Create a meta volume from the stripe volume that you just created: File systems reside on meta volumes. Create a meta volume from the stripe volume created above. Name this meta volume mtv1.
$ nas_volume -name mtv1 -create -Meta str1

8. Create a file system from the meta volume that you just created: Name this file system fs1.
$ nas_fs -name fs1 -create mtv1

9. Verify the file system was created:
$ nas_fs -list


    Lab 3 Exercise 2: Mount and Export File System

In this exercise you will mount the file system you just created on a Data Mover and export it for access on the network by NFS clients.

1. Configure the IP interface: Configure the Fast Ethernet card in server_2 to use a transmission speed of 100 Mb/s and full duplex mode.
$ server_sysconfig server_2 -pci x -option speed=100,duplex=full
(Where x is the port name, for example cge0.)
Configure the IP address for the 10/100 Fast Ethernet NIC (port cge0) on server_2. Reference the IP address information you gathered in Lab 2, Exercise 2, Step 1.
$ server_ifconfig server_2 -create -Device cge0 -name cge0 -protocol IP <IP_Address> <Netmask> <Broadcast_Address>
Verify that the IP, netmask, and broadcast addresses match what was identified in Lab 2, Exercise 2, Step 1.
$ server_ifconfig server_2 cge0

2. Verify that the interface you just configured can be used to connect to the outside network:
$ server_ping server_2 <remote_IP_address>

3. Confirm that the file system previously created exists:
$ nas_fs -list

  • Celerra ICON: Lab 3

    Copyright 2006 EMC Corporation. All Rights Reserved. Lab 3 Page 9

4. Create a mountpoint on the Data Mover, and mount your file system to it: Create a mountpoint on your Data Mover.
$ server_mountpoint server_x -create /mp1
(Where x is the number of your Data Mover.)
Verify the mountpoint was created:
$ server_mountpoint server_x -list
(Where x is the number of your Data Mover.)

5. Mount your file system to this mountpoint: Mount your file system using the default mount options.
$ server_mount server_x fs1 /mp1
(Where x is the number of your Data Mover.)

6. Export this file system for NFS client access: Export this file system for the NFS protocol, assigning anonymous users root access, UID 0 (zero).
$ server_export server_x -option anon=0 /mp1
(Where x is the number of your Data Mover.)
Note: Giving anonymous users root access is NOT a best practice! Why?
____________________________________________________________


7. Confirm the file system is exported: Verify that your exported file system is listed in the export table for your Data Mover.
$ server_export server_x -list

8. Log on to your UNIX workstation as the root user: In the following steps you will perform an NFS mount of the file system you previously exported on the Data Mover. Change directories to the root directory and confirm.
# cd /
# pwd

9. Create a local mountpoint: In UNIX, mountpoints are simply empty directories. Create a directory called studentx.
# mkdir studentx
(Where x is your Celerra number.)

10. Verify that the /studentx directory is empty:
# ls studentx
(Where x is your Celerra number.)

11. Perform an NFS mount on the client system: NFS-mount this directory to the exported file system on your Data Mover.
# mount zzz.zzz.zzz.zzz:/mp1 /studentx
(Where zzz.zzz.zzz.zzz is the IP address of the Fast Ethernet port of your Data Mover, and x is your Celerra number.)


12. Verify the contents of the mounted NFS file system: Check the contents of your /studentx directory.
# ls studentx
(Where x is your Celerra number.)
What is in this directory now? __________________________________________
By default a new directory is empty; in comparison, a new file system contains a lost+found directory. When you created your /studentx directory it should have been empty. After NFS-mounting your new file system to the /studentx directory, you now have full access to the contents of the file system.

13. Verify write access to the NFS file system: Change directory to /studentx and create a new file.
# cd /studentx
# touch filex
(Where x is your Celerra number.)
Were you able to create a new file? ____________

14. Unmount the NFS file system: When unmounting a file system, you specify the name of the mountpoint.
# cd /
# umount /studentx
(Where x is your Celerra number.)

15. Connect back into the Celerra Control Station: Log in to the Celerra Control Station as nasadmin.


16. Permanently unexport your file system: By default, an unexport is temporary, and on the next reboot of the Data Mover the file system will automatically be re-exported. Specifying the -perm parameter makes the unexport permanent.
$ server_export server_x -unexport -perm /mp1

17. Permanently unmount your file system: The same is true of Data Mover unmounts; unless you specify the -perm parameter, the unmount is temporary.
$ server_umount server_x -perm /mp1


    Lab 3 Exercise 3: Integrating Celerra File Server with NIS

Client users must present user credentials when connecting to a Celerra File Server. One way the Celerra can authenticate users is through the Network Information Service (NIS). In the lab configuration, NIS services are already set up. In this exercise you will configure your Data Mover to access these services. In later exercises you will test the ability of users to access their own directories on the Data Mover's file system; the Data Mover will authenticate the users through NIS.

1. Configure your Data Mover to use NIS: Log on to your Celerra's Control Station as nasadmin. Verify whether your Data Mover is configured to integrate with Hurricane Marine's NIS server.
$ server_nis server_x
If not, configure the Data Mover to use NIS.
$ server_nis server_x hmarine.com 10.127.*.163
(Where x is the number of your Data Mover.)


    Lab 3 Exercise 4: Exporting File Systems with root Privileges

In the prior exercise you exported a file system for NFS access; however, there was minimal security on the file system. A production system requires a higher level of security for file systems that are available on the network. In this exercise you will make the export more secure by granting root access only to a specific host.

1. Preparation: In this exercise you will use your assigned UNIX workstation and one other workstation. Before you begin, record the following information:

The IP address of your UNIX workstation: 10.127. ______ . ______

The IP address of another UNIX workstation (see the table below): 10.127. ______ . ______

If you are at:   Use:
sun1             sun2
sun2             sun1
sun3             sun4
sun4             sun3
sun5             sun6
sun6             sun5

2. Confirm that the file system you created in a previous exercise still exists: List the status of all configured file systems.
$ nas_fs -list
The file system you created previously should exist but not be in use. To determine the size of the file system, use the -size parameter.
$ nas_fs -size fs1


3. Create a mountpoint on the Data Mover: Create a new mountpoint on your Data Mover named /mp2.
$ server_mountpoint server_x -create /mp2
(Where x is the number of your Data Mover.)

4. Mount the file system on the Data Mover: Mount your file system to this mountpoint using the default mount options. Note: By default, a file system is mounted for read/write access. Only one Data Mover at a time can have a file system mounted read/write.
$ server_mount server_x fs1 /mp2
(Where x is the number of your Data Mover.)

5. Export the file system, assigning root permission to your UNIX workstation: Use the server_export command to assign root permission to only your UNIX workstation.
$ server_export server_x -option root=yyy.yyy.yyy.yyy /mp2
(Where x is the number of your Data Mover, and yyy.yyy.yyy.yyy is the IP address of your UNIX workstation.)

6. Log on to your UNIX workstation as root: Change directories to the root directory and verify that the local mountpoint directory /studentx still exists.
# cd /
# ls
If it is missing, make a directory for your NFS mount.
# mkdir /studentx
(Where x is the number of your Celerra.)


7. On the UNIX workstation, NFS-mount the file system: NFS-mount the exported file system on your Data Mover to this directory.
# mount zzz.zzz.zzz.zzz:/mp2 /studentx
(Where zzz.zzz.zzz.zzz is the IP address of the Fast Ethernet port of your Data Mover, and x is your Celerra number.)

8. Test write access to the file system: Change to the /studentx directory and create a new directory and a new file in that directory.
# cd /studentx
# mkdir dirx
# cd dirx
# echo THIS IS A TEST > filex
(Where x is the number of your Celerra.)
Were you able to create a new directory and file? ____________

9. Connect to another UNIX workstation and NFS-mount the file system: Telnet to the other UNIX workstation as root.
# telnet 10.127.*.x
(Where x is the address of the other UNIX workstation that you recorded in step 1.)
Create a directory off of the root file system; name the directory /remotestudentx.
# cd /
# mkdir /remotestudentx
(Where x is the number of your Celerra.)
NFS-mount the exported file system to the above directory.
# mount zzz.zzz.zzz.zzz:/mp2 /remotestudentx
(Where zzz.zzz.zzz.zzz is the IP address of your Data Mover, and x is your Celerra number.)


10. Test read access to the file system: Verify that you can navigate to and read the file that you created on your UNIX workstation.
# cd /remotestudentx/dirx
# cat filex
(Where x is the number of your Celerra.)
Are you able to read the file? ___________________

11. Test write permissions to the file system: Try to create a new file.
# touch filex
(Where x is the number of your Celerra.)
Do you have write permissions? _____________ Why/Why not? _____________________

12. Exit your telnet session to the other UNIX workstation:
# exit

13. As the root user, administer the permissions of the file system from your UNIX workstation: Verify that you are logged on to your UNIX workstation as root.
# who am i
# hostname
Change directories to the NFS-mounted file system.
# cd /studentx
Assign all users full access to dirx.
# chmod 777 dirx
(Where x is the number of your Celerra.)
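To see exactly what `chmod 777` grants (read, write, and execute for owner, group, and everyone else), here is a small self-contained shell sketch on a scratch directory. It assumes GNU coreutils for `stat -c` (on BSD/macOS the equivalent is `stat -f '%Lp'`); the Celerra file system itself is not involved:

```shell
# Demonstrate the effect of chmod 777 on a scratch directory.
d=$(mktemp -d)
chmod 700 "$d"               # owner-only access, like a fresh root-created dir
before=$(stat -c '%a' "$d")
chmod 777 "$d"               # rwx for owner, group, and other
after=$(stat -c '%a' "$d")
echo "before=$before after=$after"
rmdir "$d"
```

This is why, after the `chmod 777 dirx` above, non-owner users on the remote workstation gain write access to dirx while files they did not create remain governed by their own permission bits.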

14. Connect back into the other UNIX workstation: Telnet back to the other UNIX workstation.
# telnet 10.127.*.x
(Where x is the address of the other UNIX workstation that you recorded in step 1.)
Change user to Selma Witt.
# su switt

15. Test permissions from the other UNIX workstation: Navigate (change directory) to /remotestudentx/dirx.
% cd /remotestudentx/dirx
(Where x is the number of your Celerra.)
Can you create a subdirectory in dirx? _____________
% mkdir swittx
(Where x is the number of your Celerra.)

16. Verify that only the owner can write to the directory that was created: Change user to Earl Pallis.
% su epallis
password: epallis
Can you get into swittx and create another subdirectory? ______
% cd swittx
% mkdir epallisx
(Where x is the number of your Celerra.)
Can Earl Pallis create a subdirectory inside dirx? _______
% cd ..
% mkdir epallisx
(Where x is the number of your Celerra.)


    Answer the following questions:

    What were the effective permissions in /remotestudentx/dirx when you exported the file system? (Who had write access?)

    _________________________________________________________________

    What were the effective permissions after you changed the permissions to 777 on dirx?

    _________________________________________________________________ _________________________________________________________________

17. Change user back to root: Exit from user epallis.
% exit
Exit from user switt.
% exit

18. Unmount /remotestudentx at the remote workstation:
# cd /
# umount /remotestudentx
(Where x is the number of your Celerra.)
Exit the telnet session to the other UNIX workstation.
# exit

19. Unmount /studentx at your UNIX workstation:
# cd /
# umount /studentx
(Where x is the number of your Celerra.)

20. Permanently unexport your file system: Log on to your Celerra's Control Station as nasadmin.
$ server_export server_x -unexport -perm /mp2


    Lab 3 Exercise 5: Exporting File System with Read mostly Permission

In this exercise you will test the ability to provide read-write permission to some users while allowing read-only access to others. In the Celerra documentation this is referred to as "read mostly".

1. Log on to your Celerra's Control Station as nasadmin.

2. Provide read-write permission to your UNIX workstation while giving read-only access to everyone else: Export your file system as read mostly. Give your UNIX workstation read-write permission, and everyone else read-only permission.
$ server_export server_x -option ro,rw=10.127.*.1z,root=10.127.*.1z /mp2
(Where x is the number of your Data Mover, and z is the number of your UNIX workstation.)
Verify that the file system was exported as specified.
$ server_export server_x -list
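The option string in this step packs three access rules into one comma-separated list: `ro` (read-only for everyone), `rw=<host>` (read-write for one host), and `root=<host>` (root privileges for that host). A shell sketch composing the same string, using a placeholder IP rather than an address from your lab:

```shell
# Compose the "read mostly" export option string: read-only for all
# clients, read-write plus root privileges for one trusted host.
# 10.127.0.10 is a placeholder, not an address from this lab setup.
host="10.127.0.10"
opts="ro,rw=${host},root=${host}"
echo "server_export server_x -option ${opts} /mp2"
```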

    3. Log on to your UNIX workstation as root:

4. Test read-write permission from your workstation: Mount the exported file system to the /studentx directory. If you cannot remember the command syntax, refer to the prior exercise. Verify that you have read-write permission by creating a directory named unixz.
# cd /studentx
# mkdir unixz
(Where x is the number of your Celerra, and z is the number of your UNIX workstation.)


    Were you able to create the directory? ________________________

5. Verify that all other hosts have read-only access: Log on to your other UNIX workstation as root. Mount the exported file system to your /remotestudentx directory. Verify that you have read-only permission by trying to create a directory named remoteunixz.
# cd /remotestudentx
# mkdir remoteunixz
(Where x is the number of your Celerra, and z is the number of your other UNIX workstation.)
Were you able to create the directory? ________________________
Can you read the directory created in step 4? ______________
# cd unixz
# ls -al
(Where unixz is the directory that was created in step 4.)

Answer the following question: How could you have provided read-write access to all hosts in your subnet? (Hint: man server_export)
_____________________________________________________________
_____________________________________________________________

6. Unmount the exported file system on both UNIX clients: Unmount /remotestudentx.
# cd /
# umount /remotestudentx
End your telnet session to the other UNIX workstation.
# exit
Unmount /studentx.
# cd /
# umount /studentx

7. Permanently remove all exports from your Data Mover: Log on to your Control Station as nasadmin.
$ server_export server_x -unexport -perm -all


Lab 3 Exercise 6: Extending a File System

In this lab you will examine the status of the file system you previously created and then extend it.

1. Determine what volumes are available: Log on to your Celerra Control Station and obtain a list of all Celerra volumes.
$ nas_volume -list
Display detailed information on the slice, stripe, and meta volumes that you created in the prior exercise.
$ nas_volume -info volume_name
What client (clnt) volumes and file systems are associated with your str1 and mtv1 volumes?
_______________________________________________
_______________________________________________
What percentage of your mtv1 meta volume is available?
$ nas_volume -size mtv1
_______________________________________________

    2. View file system information: Obtain a list of all Celerra file systems.
    $ nas_fs -list

    Display detailed information on fs1.
    $ nas_fs -info fs1
    Which disks are being used for the file system?__________________________


    3. Determine file system size and utilization:
    $ nas_fs -size fs1
    What percentage of fs1 has been utilized?______________________________

    4. Create a new volume that will be used to extend the file system: Create a new stripe volume, using the entire capacity of disks d11, d12, d13, and d14 (d7, d8, d9, and d10 for Symmetrix). Name the stripe volume str2. Previously, you used the CLI to create the components of a meta volume. This time, try to do it using the Celerra Manager.
    Best practices for extending a file system:
    Use only stripe volumes containing an equal (or greater) number of component spindles.
    Do not extend a file system using the same spindles on the back-end.
    Extend a volume using the same drive type (parity, FC vs. ATA, spindle size).
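One way to see why equal-size stripe members matter: the usable capacity of a stripe is bounded by its smallest member, so one undersized member wastes space on every other member. A hedged sketch of that rule in generic shell arithmetic (illustrative GB figures, not queried from a real back-end):

```shell
# Usable stripe capacity = smallest member size * number of members.
# Member sizes are invented GB values for illustration only.
stripe_capacity() {
    min=$1
    count=0
    for size in "$@"; do
        count=$((count + 1))
        if [ "$size" -lt "$min" ]; then min=$size; fi
    done
    echo $(( min * count ))
}

stripe_capacity 100 100 100 100   # four equal members: prints 400
stripe_capacity 100 100 90 100    # one small member caps all: prints 360
```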

    5. Extend the capacity of the file system: Verify the str2 volume was created.
    $ nas_volume -list

    Extend your file system to include your new stripe volume.
    $ nas_fs -xtend fs1 str2

    Check the detailed information on your meta volume.
    $ nas_volume -info mtv1

    What volume sets make up your meta volume now?________________________

    6. Verify the new size of the file system:
    $ nas_fs -size fs1
    What is the new size of the volume?_____________________________________
    Could this operation be performed while the file system was mounted and exported?____________________________________


    Lab 3 Exercise 7: Testing Data Mover Failover with NFS Clients
    Step Action

    1. Verify that your Data Movers have been configured in a Primary/Standby relationship: View the server table for your Celerra and verify that server_2 is type primary and server_3 is type standby.
    $ nas_server -info server_2
    $ nas_server -info server_3
    Alternatively:
    $ nas_server -list

    If server_2 is not type primary and server_3 is not type standby, refer to Lab 2 and configure the relationship appropriately.

    2. Mount and export the file system for client access: On the Control Station, verify that the file system you previously created still exists.
    $ nas_fs -list
    Verify that the mountpoint /mp2 still exists.
    $ server_mountpoint server_x -list
    If not, recreate it. Mount your file system to this mountpoint using the default mounting options.
    $ server_mount server_x fs1 /mp2
    (Where x is the number of your Data Mover.)
    Use the server_export command to export the file system for client access and assign root permission to only your UNIX workstation.
    $ server_export server_x -o root=yyy.yyy.yyy.yyy /mp2

    (Where yyy.yyy.yyy.yyy is the IP address of your UNIX workstation.)


    3. Log on to your UNIX workstation and mount the NFS file system: Log in to the UNIX workstation as the root user and NFS mount the exported file system to the directory /studentx.
    # mount zzz.zzz.zzz.zzz:/mp2 /studentx
    (Where zzz.zzz.zzz.zzz is the IP address of the FastEthernet port of your Data Mover, and x is your Celerra number.)

    4. Simulate I/O activity to the file system: For this test, we want to simulate continual client access, so we need to run a small script. Change to the /studentx directory, copy /sbin as /mysbin, and run a do-while loop in the directory to generate file system access from the UNIX host to the Data Mover.
    # cd /studentx
    # cp -R /sbin ./mysbin
    # while true
    > do
    > ls -al ./mysbin
    > done
    Do not close the window that is running the do-while loop.
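If you want to rehearse this step outside the lab, a bounded variant of the same idea can be sketched as follows; the scratch directory here is a local stand-in for the NFS-mounted /studentx, and the loop stops on its own after a fixed count:

```shell
# Bounded stand-in for the lab's while-true listing loop: build a small
# scratch tree locally and list it a fixed number of times.
workdir=$(mktemp -d)
mkdir -p "$workdir/mysbin"
touch "$workdir/mysbin/tool1" "$workdir/mysbin/tool2"

i=0
while [ "$i" -lt 5 ]; do
    ls -al "$workdir/mysbin" > /dev/null
    i=$((i + 1))
done
echo "completed $i listings"

rm -rf "$workdir"   # clean up the scratch tree
```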

    5. Test network connectivity to the Data Mover: In a separate terminal window, log on to the Celerra's Control Station, and use server_netstat in a while-true loop to monitor network connectivity.
    $ while true
    > do
    > server_netstat server_x -i
    > sleep 1
    > done
    (Where x is the number of your Data Mover.)


    6. Perform Data Mover failover: Verify that the do-while loop is still running on your UNIX workstation. From your Control Station, activate Data Mover failover. The standby Data Mover should take over the IP address of the primary, mount the file system, and re-export it.
    $ server_standby server_x -activate mover
    Observe the activity on your UNIX workstation as the failover takes place. Roughly how long did it take for the failover to complete and the file system to become available again?
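To answer the "roughly how long" question with a number rather than a stopwatch, you could time how long a repeated probe keeps failing. The sketch below uses a stand-in probe() that fakes recovery on its third attempt; in the lab, the probe would instead be an `ls` against the mounted /studentx:

```shell
# Hypothetical timing sketch: poll until a probe succeeds and report the
# elapsed wall-clock time. probe() fakes recovery on the third attempt.
tries=0
probe() {
    tries=$((tries + 1))
    [ "$tries" -ge 3 ]
}

start=$(date +%s)
until probe; do
    sleep 1
done
end=$(date +%s)
echo "service restored after $((end - start)) seconds"
```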

    7. Confirm that the failover was successful:
    $ nas_server -list
    You should see that server_2 is now listed as server_2.faulted.server_3; the Data Mover that was the standby should now be listed as server_2.

    8. Fail back to the original primary: Restore server_2.faulted.server_3 to primary status.
    $ server_standby server_2 -restore mover
    Watch the while-true loop on your UNIX workstation to monitor connectivity throughout the process.

    9. End the test and clean up: Stop the running while-true loops (by pressing CTRL-C) on the Control Station and the UNIX workstation. Unmount the NFS file system.
    # cd /
    # umount /studentx
    (Where x is your Celerra number.)


    10. Clean up the file system on the Data Mover: Log on to the Celerra Control Station as nasadmin. Permanently unexport your file system.
    $ server_export server_x -u -p /mp2
    Permanently unmount your file system.
    $ server_umount server_x -p /mp2
    Using the Celerra man pages if needed, delete all of the following from the Data Mover:
    file systems
    meta volumes
    stripe volumes
    slice volumes
    mountpoints

    End of Lab Exercise

  • Celerra ICON: Lab 4

    Celerra ICON Celerra Training for Engineering

    Lab 4: Configuring and Testing a Windows CIFS Environment

    EMC Education Services

    Date: February, 2006

    Copyright 2006 EMC Corporation. All Rights Reserved. Lab 4 Page 1


    Document Revision History:

    Rev # File Name Date

    1.0 04_Lab4_CIFS.doc February, 2006

    Table of Contents:
    Exercise 1: Configuring Usermapper

    Exercise 2: Configuring CIFS Servers and verifying client access

    Exercise 3: Configure Local Groups and test permissions

    Exercise 4: Removing CIFS configuration

    Exercise 5: Configure Mixed Windows and UNIX file system access

    Exercise 6: Configuring Home Directory support

    Exercise 7: Configuring DFS Root file system

    Exercise 8: Virtual Data Movers


    Lab 4: Configuring and Testing a CIFS Environment

    Purpose:

    The purpose of this exercise is to configure and test the CIFS service on the Data Mover. It also explores some of the optional CIFS features, such as Home Directory support and DFS. In addition, it includes an exercise in which CIFS Servers are configured inside a Virtual Data Mover (VDM) and the VDM is then relocated to a second Data Mover.

    Objectives:
    Verify the Usermapper configuration
    Configure CIFS servers
    Join the CIFS Server to a Windows domain
    Start and stop CIFS services
    Configure Home Directory support
    Configure DFS Root file systems on a Data Mover
    Configure a VDM
    Relocate a VDM to another Data Mover

    References:
    Configuring CIFS on Celerra, P/N 300-001-974, Rev A01, Version 5.4, April 2005
    Configuring Virtual Data Movers for Celerra, P/N 300-001-978, Rev A01, Version 5.4, April 2005


    Lab 4 Exercise 1: Configure Internal Usermapper
    Because the Celerra Network Server uses UIDs and GIDs to identify users, Windows clients must be assigned UIDs and GIDs so that the Celerra Network Server can determine access to system objects, such as files and directories. Internal Usermapper is a Celerra service that automatically maps each Windows user and group to a UNIX-style user ID (UID) and group ID (GID).
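The mapping idea above can be sketched as a tiny allocator: the first time a name is seen it receives the next free UID, and the pairing is persisted so later lookups always return the same UID. This is only an illustration of the concept (plain shell, a temp file as the "database", and an arbitrary starting UID), not how Usermapper is actually implemented:

```shell
# Toy Usermapper-like allocator. The map file and starting UID of 32768
# are invented for this sketch; real Usermapper manages its own database.
map=$(mktemp)
next_uid=32768

lookup_uid() {
    name=$1
    uid=$(awk -F: -v n="$name" '$1 == n { print $2 }' "$map")
    if [ -z "$uid" ]; then
        uid=$next_uid
        next_uid=$((next_uid + 1))
        echo "$name:$uid" >> "$map"    # persist so the mapping is stable
    fi
    echo "$uid"
}

lookup_uid alice   # first sight: allocates 32768
lookup_uid bob     # next free UID: 32769
lookup_uid alice   # repeat lookup: still 32768
```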

    Step Action
    1. Verify the status of the Usermapper service: Internal Usermapper is automatically installed and configured as part of the DART installation on the Data Mover. There should be only one instance of Usermapper in a single Celerra environment. This instance is referred to as the primary and runs on the Data Mover in slot 2 (server_2). Verify the status of the Usermapper service.
    $ server_usermapper ALL
    Is the service enabled, and is server_2 the primary? ____________________

    2. Verify that the Usermapper database is empty: Because no Windows users have accessed the Data Mover since the Celerra was installed, the Usermapper database should be empty. Later in the lab we will look at this again. On the Control Station, export the Usermapper database.
    $ server_usermapper server_2 -Export user mymapfile
    View the contents of the Usermapper database.
    $ more mymapfile


    Lab 4 Exercise 2: Configure CIFS Servers
    Step Action
    1. Gather required information: Before you begin, you need to record the following key information:
    The IP address of the DNS server: 10.127. ______ . 161
    The Fully Qualified Domain Name (FQDN) of the Windows 2000 domain that will hold the CIFS Server's computer account: corp.hmarine.com
    The computer name for your CIFS Server on the Data Mover: _________________
    (See Appendix E, Hurricane Marine, IP Addresses and Schema)

    2. Log on to the Control Station as nasadmin.

    3. Configure NTP services: Set your Data Mover to synchronize its time with an NTP time server, as required for Kerberos authentication.
    $ server_date server_2 timesvc start ntp 10.127.*.162
    Verify that you were able to get updates from the time service.
    $ server_date server_2 timesvc update
    $ server_date server_2 timesvc stats


    4. Confirm that DNS has been configured to allow dynamic updates: Confirm with the DNS administrator (in this case, your instructor) that the forward (corp.hmarine.com) and reverse (10.127.x.161 subnet, where x is the number of your Celerra) zones have Allow dynamic updates set to Yes.

    Instructor to demonstrate the following: To verify that Allow dynamic updates is set to Yes, perform the following while logged on to the DC for hmarine.com.

    Click Start > Run, type dnsmgmt.msc, and click OK. This opens the Windows 2000 DNS Management Console.

    Click the + sign to the left of HM-1 to expand the domain. You should see the forward and reverse lookup zones.

    Click the + sign to the left of Forward Lookup Zones Click the + sign to the left of Reverse Lookup Zones

    Your display should now show both the Forward and Reverse Lookup Zones expanded.



    To verify that Allow dynamic updates is set to yes, confirm the following zones: hmarine.com Forward Lookup Zone, ALL Subnet Reverse Lookup Zones

    Click on a zone (for example 10.127.50.x Subnet) to highlight it. Right-click on the same zone and select Properties.

    In the zone's Properties window, the Allow dynamic updates option is near the center of the window. It should be set to Yes.

    5. Remove any previous DNS configuration: Determine whether DNS is configured.
    $ server_dns server_2
    Delete the current DNS configuration, if one is configured:
    $ server_dns server_2 -delete



    6. Configure DNS on the Data Mover: Configure the Data Mover for DNS in the corp.hmarine.com domain. Then stop and start DNS.
    $ server_dns server_2 corp.hmarine.com 10.127.*.161
    $ server_dns server_2 -o stop
    $ server_dns server_2 -o start

    7. Start the CIFS service on your Data Mover:
    $ server_setup server_2 -P cifs -o start

    8. Configure the CIFS Server: Set up the CIFS Server on the Data Mover for the Windows 2000 domain corp.hmarine.com.
    $ server_cifs server_2 -add compname=celydm2,domain=corp.hmarine.com,interface=cge0
    (Where y is the number of your Celerra.)

    9. Join the CIFS Server to the domain: Configure the CIFS Server on the Data Mover to join the Windows 2000 domain.
    $ server_cifs server_2 -Join compname=celydmx,domain=corp.hmarine.com,admin=administrator
    (Where x is the number of your Data Mover and y is the number of your Celerra.)
    When prompted, enter the domain administrator's password.
    server_x: Enter Password


    10. Verify that the CIFS Server successfully joined the domain:
    $ server_cifs server_2
    Ask your instructor to show you the following results:
    The container that was created (EMC Celerra)
    The computer account in that container

    11. Create a file system to be used by your Windows clients: In prior exercises, we manually created volumes and file systems. In this exercise, you will create a file system named fs4 using the Automatic Volume Manager. Determine which storage pools have available space:
    $ nas_pool -size -all
    Make the file system at least 5 GB in size, using an available storage pool.
    $ nas_fs -n fs4 -c size=5G pool=
    This command may take a few minutes to complete. Verify the actual size of the file system:
    $ nas_fs -size fs4
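When checking a size= request against the free space a pool reports, it can help to normalize units first. The helper below is hypothetical (the G/M suffix handling mirrors the size=5G style of argument, but it is not part of the Celerra CLI):

```shell
# Convert a G/M-suffixed size string (as in size=5G) to megabytes.
# Hypothetical helper for sanity checks, not a nas_fs feature.
to_mb() {
    case $1 in
        *G) echo $(( ${1%G} * 1024 )) ;;
        *M) echo "${1%M}" ;;
        *)  echo "unrecognized size: $1" >&2; return 1 ;;
    esac
}

to_mb 5G     # prints 5120
to_mb 512M   # prints 512
```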

    12. Mount the file system: Mount the file system to a mountpoint on your Data Mover; name the mountpoint /win2k.
    $ server_mountpoint server_2 -c /win2k
    $ server_mount server_2 fs4 /win2k

    13. Export the file system: Export /win2k for CIFS using the share name w2kdata.
    $ server_export server_2 -P cifs -n w2kdata /win2k


    14. From your Windows client, connect to the share: Log on to your Windows 2000 workstation as administrator of corp.hmarine.com (CORP). Connect to your Data Mover using the UNC path:
    Start > Run > type \\celydmx\w2kdata
    (Where y is the number of your Celerra and x is the number of your Data Mover.)
    You should be able to connect to the share and view the contents of your exported file system.
    What contents are displayed in the w2kdata share? _________________________________________________ _________________________________________________
    You should see the lost&found and .etc directories. What could you have done differently in order to hide these two directories from the CIFS share? ___________________________________________________________ ____________________________________________________________

    NOTE: The Data Movers do not display in Network Neighborhood because there are no Microsoft Windows computers in that network segment. If it is desired that the Data Mover appear in the browse list, then at least one Microsoft Windows computer must reside in the same network segment. For more information on Network Neighborhood and the browse list see the Microsoft Knowledge Base at http://support.microsoft.com. (See articles Q117633, Q120151, etc.)


    Lab 4 Exercise 3: Configure Local Groups and Test Permissions
    NOTE: What follows is an example. The creation of the CIFS Full and CIFS Read-only Local Groups, as well as managing permissions with these groups, is simply to illustrate basic user/group/permissions management concepts. In production environments, careful planning is required to architect an appropriate scheme.

    Step Action

    1. Log on to your Windows 2000 workstation as the Administrator of the CORP domain.

    2. Connect to the Computer Management console on the CIFS Server: Using the Windows 2000 Computer Management console, create a Local Group on the Data Mover.
    Start > Run > type compmgmt.msc and click OK.
    Verify that Computer Management (Local) at the top of the Tree window pane is highlighted.
    From the menu, choose Action > Connect to another computer...
    In the Select Computer dialog box, select the computer name of your CIFS Server and click OK.
    Result: The Computer Management console should now say Computer Management (CELxDMy.CORP.HMARINE.COM) at the top of the Tree window pane.

    3. Create a Local Group on the Data Mover: In the Tree window pane:
    a) Expand System Tools
    b) Expand Local Users and Groups
    c) Click on the Groups folder.


    Create a local group for full access on the CIFS Server. Add the Domain Admins, Managers, and the two engineering Global Groups from Hurricane Marine's CORP domain to this local group.

    a. With the Groups folder highlighted, choose Action from the main menu and choose New Group...

    b. In the New Group dialog box, enter the group name CIFS_Full and a description of Permissions Test.

    c. Click the Add button. In the Select Users or Groups dialog box, choose corp.hmarine.com from the Look in: drop-down menu. Double-click the following groups to add them to the group, then click OK to return to the New Group dialog box:
    Domain Admins
    Engineering Propulsion
    Engineering Structural
    Managers

    d. In the New Group dialog box, you should now see these four groups in the list. Click the Create button to complete creation of this Local Group on the Data Mover.

    The New Group dialog box will remain open for creation of additional groups.

    4. Create another Local Group: Repeat the previous step (omitting part a) to create a Local Group named CIFS Read Only and add the Sales East and Sales West groups (from the CORP domain) to the CIFS Read Only group. After clicking the Create button in part d, click the Close button to close the New Group dialog box. You should now see the two groups that you created in the right window pane of the Computer Management console.


    5. Add the Local Groups you just created to the Share Permissions: With the Computer Management console still connected to your Data Mover, expand the Shared Folders object in the Tree window pane. Click on the Shares folder to open the list of shares on the Data Mover's CIFS server. Right-clic