
Netapp Commands Helpful

Filer General

Console messages are configured via

/etc/syslog.conf

By default there is no such file, only a sample at /etc/syslog.conf.sample; if the user copies and edits it, they will have

/etc/syslog.conf ----------- which tells where to direct console messages ( typically /etc/messages )

sysconfig -t ( tape information )

source -v /etc/rc - this command reads and executes a file of filer commands line by line

Autosupport ( user-triggered support )

options autosupport.doit <string>

Telnetting to Filer

Only one user can telnet in at a time

options telnet

Autosupport Configuration

Filer> options autosupport

autosupport.mailhost < >

autosupport.support.to < email address >

autosupport.doit <string>


autosupport.support.transport https or smtp

autosupport.support.url < url address must be reachable >

Autosupport troubleshooting

1. ping netapp.com from filer

2. TCP 443 ( SSL ) should be open at the SMTP server

The SMTP server may sit on the DMZ side

3. Mail relay in Exchange must be specified. The filer's host name or IP address must be listed in the mail relay. Relaying for netapp.com, or relaying by this host or this IP, must be enabled for the filer. The filer acts as an SMTP client; in a typical mail setup, no SMTP client can send mail through the mail server to another SMTP server when the host's identity differs from the mail ID. Relaying is generally blocked.

4. The http / https proxy server must pass the http URL through

Raid Scrub weekly

a. raid.scrub.duration 360

b. raid.scrub.schedule sun@01

a. limits the scrub to 6 hrs ( 360 minutes )

b. forces the scrub to start on Sunday at 1 am

RAID group

vol add vol0 -g rg0 2 - adds 2 disks to raid group rg0 of vol0

vol options yervol raidsize 16 - changes the raidsize setting of the vol yervol to 16

vol create newvol -t raid_dp -r 16 32@36

- newvol creation with raid_dp protection.


RAID group size is 16 disks. Since the vol consists of 32 disks, they will form 2 RAID groups, rg0 & rg1
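As a quick sanity check of the raid-group math above (a shell sketch; disk counts taken from the vol create example):

```shell
# 32 disks with raidsize 16 form 2 RAID groups (rg0 & rg1);
# raid_dp dedicates 2 parity disks per group.
disks=32
raidsize=16
groups=$((disks / raidsize))
echo "raid groups: $groups"
echo "parity disks total: $((groups * 2))"
```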

Max Raid groupsize

Raid DP 28

Raid 4 14

vol options for snapshots

nosnapdir off < default off >

nosnap off < default off >

Disk Fail/unfail

priv set advanced

disk fail

disk unfail

When a disk goes bad partially, a prefail copy is seen when sysconfig -r or sysconfig -d is done. Sometimes it may just hang there, so disk fail -i <disk name> would release the disk & reconstruct the RAID group.

Disk troubleshoot

priv set advanced

led_on < drive id, eg 1d.16 >

led_off < drive id >


blink_on 4.19 ( the failed disk will now show orange )

blink_off 4.19

Spare disks in vol

vol status -s

FAS270 ( this must be done, otherwise spares are not seen ):

priv set advanced

disk show -v ----- to see who owns it. If the disk has come from another filer, its disk block header needs to be removed. For that:

disk unfail <disk id>

disk assign 0b.23

fcadmin device_map

If a drive is not shown in FilerView

Filer> storage show disk -p

Zeroing disks

priv set advanced

disk zero spares --- to zero out the data on all spares

sysconfig -r ( will show % zeroed ) - for spare disks

R100 & R150 Disk Swap

1. Find the bad disk and identify it

2. Type disk swap < disk id >

3. Remove the disk

4. Wait 20 sec

5. Type disk swap again

6. Insert the new disk

7. Wait 20 sec for the rescan

Out of inodes

1. Check the % of inodes used with

Filer> df -i

2. To increase

Filer> maxfiles < vol name > <max>

NVRAM

Battery check

Filer> priv set diag

Filer> nv

=> should show battery status as OK and, for NVRAM3, voltage as 6V

raid.timeout in options raid controls ( default 24 hr ) the trigger when the battery is low

In the 940s, NVRAM5 is used as the cluster interconnect card as well - "two in one" - on slot 11

Time Daemon


(port 123, 13, 37 must be open)

When there is a large skew, lots of messages appear from

CfTimeDaemon : displacements /skews:10/3670,10/3670, 11/3670

Because of this, hourly snapshot creation also fails or an "in progress" message appears.

Because timed.max_skew is set to 30 min, we may see the above message every 30 min - 1 hr

If we set this to 5s and watch how the skew happens - if we see lots of skew messages ( once timed.log is turned ON ), motherboard replacement may be required.

As a temporary measure do

cf.timed.enable ON on both cluster filers and watch for those errors

Checking from unix host

# ntptrace -v filername

From filer check

Filer>options timed

( check all the options of this )

From FilerView => set date and time : Synchronize now < ip of NTP server > => do synchronize now and check NTP from the unix host.

Tip : if there are multiple interfaces in the filer, make sure they are properly listed in the NIS or DNS server - the same host name with multiple ip addresses may be required

BPS ( Bytes Per Sector ) of Disk

Block Append Checksum requires each disk drive to be formatted to 520 rather than 512 bytes per sector. This provides a total of 4160 bytes in 8 sectors. This space is broken into two parts. The first part is 4096 bytes ( 4K - the WAFL block size ) of file system data. The remaining 64 bytes contain the checksum data for the previous 4096 bytes. In this manner, the checksum block is appended to each block of data.
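The sector arithmetic above can be verified quickly in shell:

```shell
# 8 sectors x 520 bytes = 4160 bytes: one 4096-byte WAFL block
# of data plus 64 bytes of checksum appended to it.
total=$((8 * 520))
data=4096
echo "total: $total, checksum: $((total - data))"
```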

Environmental Status


The top line in each channel says failures: yes, if there are any.

Subsequent messages should say

Power

Cooling

Temperature

Embedded switching

[ all to none ] ( if there is no problem )

Volume

vol options vol0

vol status vol0 -r ( raid info of the volume )

sysconfig -r

vol options vol0 raidsize 9

vol add vol0 < number of disks >

vol status -l ---- to display all volumes

Aggr Volume creation

Filer> aggr create aggr1 10

Filer> vol create log1 aggr1 20g

When a vol has gone bad

vol wafliron start <vol name> -f


To list broken disks in a volume

vol status -f

sysconfig -r will also list the failed disks

Double Parity

vol create -t raid_dp -r 2 ( minimum of two )

(there are two parity disks for holding parity and double parity data)

Environment status - like temp/shelf issues

environment chassis list_sensors

environment dump

RSH options - rsh access of filer

options rsh.enable on

An adminhost needs to be added to do RSH ( can be done from FilerView ) - not root. RSH security settings must be set with either ip or hostname, but with a matching username for logon accounts ( not root, but the domain admin account )

RSH access from a unix host

# rsh -l root <console p/w> <ip of filer> "<command>"

( add this unix host to the /etc/hosts.equiv file - similar for a windows host as well )

( this command can be cron'd in unix to schedule it )
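For example, a hypothetical crontab entry on the unix admin host (the filer name "filer1" and the log path are placeholders) that schedules the rsh command nightly:

```
# m h dom mon dow  command -- run sysconfig -r on the filer at 02:00 daily
0 2 * * * rsh filer1 -l root -n sysconfig -r >> /var/log/filer1_sysconfig.log 2>&1
```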

RSH Port 514 / TCP

Registry Walk

Filer> registry walk status.vol.<vol name>


NFS

Scheduling any job at filer

From a windows host ( admin host ), enable rsh ( windows 2003 box )

C:\> rsh sim -l root -n sysconfig -r - gives the output ( sim is the filer )

Filer http access

1. license http

2. httpd.enable ON

3. httpd.rootdir xxxx ---- location like /vol/vol0/< share path or qtree >

Volume performance optimization

vol options volname minra on

( minimal read ahead )

P/W

To change the admin host administrator's p/w

Filer> passwd

Filer> login: administrator

Filer> new password: .....

To change root p/w

1. attach to console - a straight console connection

2. press Ctrl-C while booting

3. on the menu choose option 3 - password change - root

Ctrl-C - boot menu options

1. Password reset --- root

2. Disk initialize, destroy and set up a new filer

New filer setup

software get <url> -f filename

software install <url>

Environment

Environment status all

Previous ONTAP on flash

priv set diag

version -b --- will show the contents of the flash

Previous firmware upgrade of disks

priv set advanced

Filer*> disk_fw_update

Quotas

Lines in /etc/quotas


/vol/vol0/testftp tree 10m

WAFL stuffs

vol wafliron - checks the vol at the WAFL level

WAFL_check ( when inconsistencies happen, when a vol becomes restricted all of a sudden )

to correct volume inconsistencies

1. Ctrl-C while booting

2. Options - Selection ? WAFL_Check -z

For slow access or backup or performance issues

Filer> wafl scan measure_layout vol0

Filer> wafl scan measure_layout /vol/vol0/filename

Filer > wafl scan status [vol|file] ---- to view

NFS General

/etc/exports

/vol/test -rw,root=sun1

/vol/vol1 -rw,root=sun1


#mkdir /mnt/filer

#mount filer1:/vol/vol1 /mnt/filer

/etc/mnttab - maintains the mount points

/etc/hosts - name and IP address

/etc/nsswitch.conf - resolution order file

Filer> exportfs

Filer > rdfile /etc/exports

filer> exportfs –a

filer> exportfs -i -o rw=<ip address>,root=<ip address>

NFS troubleshooting

wcc -u <unix user> ---------- unix credential

>exportfs -c host pathname ro|rw|root # checks the access cache for host permission

>exportfs -s pathname # verifies the path to which a vol is exported

>exportfs -f # flushes cache access entries and reloads

>exportfs -r # ensures only persistent exports are loaded

NFS error 70 - stale file handle


>vol read_fsid

# mount --- will display what protocol is being used for mounting ( on the unix host )

# mount -o tcp < >

Qtree security

portmap -d

rpcinfo -p < filer ip >

Lock Manager Release

Priv set advanced

sm_mon -u < NFS_client_hostname >

While changing the mode

chmod 4710 oidldapd

chmod: changing permissions of `oidldapd': Input/output error

If I look in /var/log/message I see the following error:

Mar 30 19:44:59 bilbo kernel: nfs_refresh_inode: inode number mismatch

Mar 30 19:44:59 bilbo kernel: expected (0x950485c3/0x9b7609), got (0x950485c3/0x7d0b11)

Told customer to get rid of the nosuid on the exports file and that solved the issue.


Permission Denied : File handle

67000000 6ad77710 20000000 107754a 99f750f 84ce0064 67000000 6ad77700

First two numbers FSID

Next three : FID, Inode, FID

Next three : FID export point

Now, the fsid is different for each volume

It is found by

priv set diag

vol read_fsid vol0

=> gives a hex number - it should match a number above, which indicates the file of which volume has the problem. The hex number can be converted to a decimal value as well

On the unix side

# find <mount point> -inum < decimal value >

# find /mnt/clearcase -inum _________

( checking the FID for the above mount point )

( checking FID for above mount point )

# cat /etc/mnttab

( look here to find that number as well )

# ls -li prints inode numbers - in decimal - convert that to hex

# find . -inum < number > -print

( Sometimes, the vol fsid number found must be byte-reversed to get the exact place of the inode )

General Permission Problems

Check the export permissions

Check the local unix system - file level and owner level permissions, and also the qtree security

( Sometimes the filer permissions come to sit on top of the local permissions on the unix box, so they cannot be seen - they become hidden )

To fix, use

# chmod

# chown

Read unix files

# cat

# more

# vi

NFS Performance

pktt start e5a, pktt dump e5a, pktt stop ( all three - start to end )

sysstat

nfsstat -d ( displays cache statistics )

-z ( zeroes out the stats )


-m ( mount point statistics )

perfstat -b -f filename > perfstat.begin

perfstat -e -f filename > perfstat.end

# time mkfile 10m test ( time it takes )

# time cp test

Windows host > sio_ntap_sol 100 100 4096 100m 10 2 a.file b.file -noflock

CPU utilization 100 percent

The customer needs to collect and send

perfstat -f <file name> -t 5 > perfstat.out

More detailed perfstat

perfstat -t 2 -f nasx > text.txt

perfstat -t 2 -f nasx -P flat > text.txt

-P domains ( SMP )

~ flat

~ kahuna

~ network

~ raid

~ storage

Other NFS options


options wafl.root_only_chown on

options cifs.nfs_root_ignore_acl on

Common NFS error messages

nfs mount : /remote_file_system_name : Stale NFS file handle

this error message means that an opened file or directory has been destroyed or recreated

NFS error 70

A file or directory that was opened by the NFS client was either removed or replaced on the NFS file server

Locked file findings in NFS

Filer> priv set advanced

Filer> lock_dump -h | -f ( h or f )

21048 0x00000687 : 0x00088720 0 : 0 1/3 :3 LOCK_ (0xfffffc000598, ……….)

a. 21048 is the pid of the process; check in solaris that it is running

b. take the hex value and convert it to decimal to obtain the inode number

( in solaris $ echo 0x000006d7=D|adb will convert )

c. to find the file

solaris $ find . -inum 1751 -print
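Where adb is not handy, the same hex-to-decimal conversion can be done with printf; this sketch uses the value from the example above:

```shell
# convert the lock_dump hex value to a decimal inode number
inum=$(printf '%d' 0x000006d7)
echo "$inum"   # 1751, as used in: find . -inum 1751 -print
```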


Networking Troubleshooting

Filer> traceroute

Filer> ping

Filer> ifconfig - for IP address related issues

Filer> routed status

Filer> routed off

Filer> routed on

DHCP

The filer cannot have a dynamic DHCP address. The address is stored in the /etc/rc file as static even if DHCP is chosen.

Packet

netstat -i

netstat -I < interface name like ns0, e5a etc >

netdiag -vV

ifstat -a - flow control information at the bottom

10/100/1Gb flow etc is purely switch based : whatever the switch is set to, the filer takes

Routing table of filer

netstat -rn

route -f -------------- to flush


Port

netstat -a - to check all open ports on the filer

netstat ----- to see all established connections

Port numbers

514 / tcp rsh

135 tcp/udp rpc

111 udp rpc for sun

Network troubleshooting

Cannot Ping to other subnet

1. netstat -rn should have the default route address at the top

2. do routed status if there is no entry

3. Even if the rc file shows a default gateway address - add it manually

route add default <ip address> 1 and check the above

Checking steps

> rdfile /etc/rc

> ifconfig -a

> netstat -rn ---- the gateway line must be there

> routed status

> routed on --- if the gateway is not there, add it manually


Packet Tracing on filer

1. pktt start e0 -b 1m -i 192.168.136.130

2. pktt status e0 ( should show some traces )

3. pktt dump e0 -f /mytrace.trc

4. pktt stop all

5. the file is created at C$ of the filer

6. make a cifs connection to the filer and point to \\<filer>\C$

7. get the mytrace.trc file

8. open it with Ethereal or Packetyzer

Brocade Switch

#switchshow

# wwn

10:00:00:05:1e:34:b0:fc - may be the output

# ssn "10:00:00:05:1e:34:b0:fc" - setting the switch serial number to the wwn

MCData Switch

If direct connection works but not thru mcdata, verify that OSMS is licensed and enabled.


> config features show

> config features opensysMS 1

> storage show switch

switchshow

cfgshow

portdisable

portenable

switchdisable

portperfshow


CIFS

CIFS setup

Cifs setup

Cifs configuration files

/etc/cifsconfig_setup.cfg

/etc/usermap.cfg

/etc/passwd

/etc/cifsconfig_share.cfg

Cifs general

cifs shares

cifs access permissions

cifs restart

cifs shares eng

cifs shares -add eng /vol/cifsvol/eng

cifs access eng full control

cifs sessions

cifs sessions -s

cifs terminate -t 10

priv set advanced

cifs prefdc add <domain name> <ip address>

cifs prefdc add pdc <pdc ip address>


cifs homedir load # 7.0 - load into the registry

cifs nbalias

cifs testdc

cifs domaininfo - also check the /etc/rc file

options cifs.trace_login on - to trace cifs issues

CIFS performance

cifs stat

smb_hist -z

sysstat -c 15 2 ( 15 iterations every 2 seconds )

statit

WAFL_susp

ifstat -a

netstat -m -r -i ( any one can be used )

netdiag -v, -sap

cifs sessions

cifs performance optimization

options cifs.oplocks.enable on

options cifs.tcp_window_size 64240

options cifs.max_mpx 253

options cifs.neg_buf_size 65340 - max

( 32K + 260 =~ 33028 ; this number can also be set )
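The alternate buffer size quoted above is just 32K plus 260 bytes:

```shell
# cifs.neg_buf_size alternative: 32K + 260
echo $((32 * 1024 + 260))   # 33028
```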

Check switches to enable forwarding mode immediately


# set spantree portfast module/port enable

options cifs.oplocks_opendelta 0

( if clients disconnect too much after this change, change it back to 8 ( the default ) )

Cifs home directory

1. volume snapvol is created

2. a qtree is created at the root of this vol => snapvol ; sec is unix

3. a share named snaphome is created for this qtree as /vol/snapvol/home with everyone / full control

4. options cifs.home_dir /vol/snapvol/home

options cifs.home_dir_namestyle <blank>

5. edit the /etc/cifs_homedir.cfg file and add at the end

/vol/snapvol/home

CIFS troubleshooting NT4 domain

Cifs setup error : the filer's security information differs from the domain controller, CIFS could not start

Solution :

NT4 PDC/BDC : Server Manager - delete the account, recreate the account and rerun the setup.

NT4 PDC and BDC secure channel communication/verification

BDC c:\> netdom bdc \\bdcname /query


CIFS troubleshooting

wcc -s domain\name ----- windows - match with the /etc/lclgroups.cfg file - any changes here require a reboot

wcc -u username -------------- unix

cifs domaininfo - tells the dns entry

rdfile /etc/rc --------- will have dns info

options wafl

should show the default unix user

pcuser

/etc/usermap.cfg

/etc/passwd - these two files are read at first use

Cannot Ping DNS server

A.

1. Enter the host address in dns

2. Make sure that there is no deny/untrusted entry in the /etc/rc file

3. Check FilerView -> Networking -> DNS entry

4. If a qtree is created and shared for CIFS access, make sure that the qtree settings are correct, otherwise you may get an access denied error


B.

1. Check the DNS servers - AD DNS must point to itself and must have at least the 4-5 AD services registered

C.

1. Check where currently pointing to ( DNS )

Filer> priv set diag

Filer> registry walk auth

If cifs setup needs to be rerun, this registry tree can be deleted

Filer> registry deltree auth

D.

net view \\filername should show all shares from the windows side, and cifs shares should show them from the filer side

But when the share is accessed from a windows machine, we get No Network Provider Present. Ping works, iscsi works, iscsi drives are OK - they can be accessed. But the cifs shares do not work. On the filer side we see 'Called name not present ( 0x82 )'. Cifs resetdc also gives the same message.

Check :

a. If the filer and the windows DC are rebooted at the same time because of a power failure, this is seen. The filer needs to come up first and then the DC

b. make sure that there are no virus-related activities going on on that host. A virus scan on the windows host or the filer can also make this happen

When trust is there

When a trust is newly established - No Logon Server Available may come up while accessing. Cifs resetdc will make it work. Also for some permission issues.

Disable WINS on interface e0 ( if it is required to go by pure DNS only )

Filer> ifconfig e0 -WINS

( so that the filer does not talk to the WINS server )

Process to find a CIFS problem

cifs shares - should see everyone / full control

qtree security NTFS

Check options wafl

< > blank

< > unix

< > pcuser

Check /etc/usermap.cfg

/etc/passwd file

/vol/test - check whether this is UNIX or NTFS

When WINS address is changed

options cifs.wins_servers ( ip address, , , ) ---- to view WINS

cifs resetdc

Common Cifs issues - cannot access , access denied

1. time lag between the pc and the filer ( change it from FilerView )

2. qtree security [ unix | ntfs | mixed ] - change temporarily from ntfs to unix and back to ntfs, or from ntfs to mixed and back to ntfs

( when the folder is mapped ... on its drive letter you do not see the security tab as well )

Cifs Options

cifs.show_snapshot on

options cifs.netbios_aliases --- alternate names of the filer

wafl.nt_admin_priv_map_to_root on*

options cifs.trace_login on

* to allow the windows top-level administrator to take ownership of a file created from the unix side that has only unix permissions

CIFS + NFS both

Scenario A

1. qtree in vol is created with mixed sec

2. share that qtree

3. group-wise user access in unix is defined in the /etc/group file

/etc/group -> is on the unix side, on the client or NIS server

Eng::gid:khanna,Uddhav

In client side

ls -l - file / directory listing

chmod

chgrp

chown

( to see both permissions on cifs shares - the permission from unix and the permission from windows - use SecureShare Access )

4. In windows create group and give access

5. the /etc/usermap.cfg file is used to map user accounts in windows to their corresponding accounts in unix to access/manage resources

win unix

- <= - ( unix to windows )

- >= - ( windows to unix )

- == - ( both )

Test\* == - ( all users of the test windows domain )

Domain\<user> => root

( if a user is not able to see their home directory but can see all the other users' folders : restart CIFS and access home )

6. when a file is created in that cifs directory or nfs-mounted place, ownership is kept by whoever created it, and access is granted per the usermap.cfg file

7. Make sure that

wafl.nt_admin_priv_map_to_root is ON

( sometimes permissions get locked and some files get corrupted; while accessing, it says you do not have access or the file is encrypted, while every other file works fine. In this case change options cifs.nfs_root_ignore_acl from off to ON, change the permission from the NFS-mounted side ( unix ) to chmod 777, and access the file. Change the option back to OFF. It will work after this all the time. )

( this was the cause when a user upgraded from 6.4 to 6.5 and some files in a mixed qtree's folders could not be accessed, nor their permission changed, even by the root user, from the NFS side. The above permission reset made it work. )

Scenario B

1. Qtree is created its security is unix

2. Share is created of that qtree – so location is the same

3. A cifs client cannot chdir into a directory if the user has only execute permission - d--x--x--x, eg MODE == 111. The user gets an NT_STATUS_ACCESS_DENIED message when it is accessed

4. If the user is granted read only ( MODE == 444 ), chdir is successful.

CIFS audit

options cifs.audit.enable on

options cifs.audit.file_access_events.enable on

options cifs.audit.logon_events.enable on

options cifs.audit.logsize 524288

options cifs.audit.saveas /etc/log/adtlog.evt

Filer> cifs audit save -f

Read /etc/log/adtlog.evt as an event log through windows

CIFS errors

LSAOpenPolicy2 : Exception rpc_s_assoc_grp_max exceeded


Veritas Backup Exec 9.1 : My Computer -> shares -> sessions shows Veritas Backup Exec administrative account connections for every share on the filer. One connection per share, and it grows each day as well as staying there every day. These must be cleared out.

Virus Scan

vscan ---- to see the status of virus scanning

vscan on

vscan off

vscan options

vscan scanners

vscan options client_msgbox [ on | off ]

vscan scanners secondary_scanners ip1 [ ip address ]

Fpolicy

fpolicy show

fpolicy enable

fpolicy options

fpolicy server

Quotas

rdfile /etc/quotas


Cluster Prerequisite

vol options <vol> create_ucode on

options coredump.timeout.enable on

options coredump.timeout.seconds 60 or less

Cluster

cf disable

cf enable

cf status

partner cifs terminate -t 0

cf giveback

F1 : cf takeover ( F2 can then be shut down )

When F2 comes back up, it waits, showing "waiting for giveback from partner"

F1 : cf giveback

Sometimes, due to the active state, this may not run. Make sure that no cifs sessions are running. Snapmirror should also be off


San FCP

Switch>cfgshow

>fcp show cfmode (standby,partner,mixed)

>fcp set cfmode mixed

>fcp show adapters

>fcp show initiators

>fcp setup

>fcp set cfmode [dual_fabric | mixed | partner | standby ]

>fcp nodename

>fcp config

>fcp status

>fcp start

>fcp config 10b

>igroup show

>fcp stats vtic

( virtual target interconnect adapter )

>fcp stats 5a

>sysstat –f 1

igroup show

lun show -m

lun show -v

/usr/sbin/lpfc/lputil - to verify the bindings


/opt/NTAPsanlun/bin/create_binding.pl -l root -n <filer ip>

/kernel/drv/sd.conf (make sure that target id and adapters are here)

Lputilnt - utility from windows host attach kit from Netapp

# sanlun lun show

# devfsadm - to allow discovery of the new lun

# newfs /dev/rdsk/c1t1d0s6

# sanlun fcp show adapter -v

# reboot -- -r

# sanlun

igroup bind <initiator group> <portset>

igroup unbind <initiator group>

OSSV and VSS

C:\> vssadmin list shadows

C:\> vssadmin list writers

C:\> vssadmin list providers

LUN

lun create

lun setup

lun show -m, -v

lun stats -a -o -i 2

lun destroy -f < lun path > ( the -f flag destroys the lun even if it is mapped )

lun move

lun map | unmap <lun path> <initiator group> [<lun ID>]

lun online

priv set diag

lun geometry

SNAP drive LUN creation process

1. create qtree

2. share qtree

3. create the lun - SnapDrive can be used - so that the lun is created inside the qtree

( if the qtree is not set up properly, the cifs shares cannot be accessed - an access denied error message occurs )

LUN restore from snapshot ( snap restore of a lun - snap restore license required )

Filer> snap restore -t file -s snap1 /vol/lunvol/lun1/lun1.lun

When asked - choose Y ; Proceed => Y

Filer> lun unmap <lun path> <initiator group>

Filer> lun map <lun path> <initiator group> [lun id]

Filer> lun online <lun path>


Space reservation for volumes, qtrees and files is disabled by default; to change this we must do :

vol options vol1 create_reserved on | off

lun create -o noreserve -f ( overrides the default settings on the volume, including the file level )

lun set reservation

Solaris lun increase

# dd if=/dev/zero of=/dev/rdsk/c1t0d1s2 count=1 bs=512

# format c1t0d1

Snapshot of LUN

An rws file is created when a snapshot of a LUN is taken. Event ID 124 is generated by SnapDrive. When deletion of this snapshot LUN is tried, 134 is created as well. When there is a busy snapshot, other snapshots may hang and 134 is also generated

124 -> 249 -> 134 can be seen

( must see kb2370 )

NDMP copy of LUN

ndmpcopy -da root:netapp /vol/vol0/lun/test.lun 10.1.1.1:/vol/vol0/lun/test.lun

( lun files can only be restored to either the root volume or qtree root directories )

( Also, when the lun is copied it may not be full, so it may go fast while copying )

After this - on the destination :

Dest filer> reallocate start -o -n lunpath

LUN backup from snapmirrored volume


1. on both source and destination, create_ucode and convert_ucode ON

2. from the destination filer : snapmirror update [options] <dest_filer:dest_vol>

3. on the source filer : lun share <lun path> read

4. run the snapmirror update command

iSCSI

iscsi show interface

iscsi

fcp

iswt interface show iswta --- shows sessions and their initiator information

( iswt = iscsi software target )

igroup

iscsi show initiator

iscsi stats

sysstat -i 1

iscsi config

iscsi status - to make sure that iscsi is running; also check that the iscsi license is enabled on the filer

iscsi windows troubleshooting

iscsicli – command line version from Microsoft

SuSe iscsi LUN setup – Chap authentication


1. filer> iscsi security generate

2. filer> iscsi security add -i initiator -s method -p inpassword -n inname [ -o outpassword -m outname ]

( particular initiator connection )

OR

Filer> iscsi security default -s method -p inpassword -n inname [ -o outpassword -m outname ]

( any initiator connection ) [[ only this one works ]]

Troubleshooting

1. filer> iscsi config

2. linux # cat /etc/iscsi.conf

3. linux # cat /etc/fstab.iscsi

4. linux # uname -r

5. linux # cat /etc/issue

6. filer> iscsi show initiator

7. filer> iscsi security show

8. linux # cat /etc/initiatorname.iscsi

Iscsi private network connection

Filer> iswt interface show

Filer> iscsi show adapter

Filer> iswt session show -v iswta

( will show the tcp connections - ip addresses )


Now to change this to use the private network only

a. Snapdrive -> iscsi management -> disconnect

b. From the filer, disable iscsi on the public nic

iswt disable adapter < >

c. Then reconnect and use the private ip from SnapDrive

Space Reservation

df -r

.snapshot

Hourly snapshot create failed - kb 4764

See Time Daemon under Filer General as well

to see snapshots from a windows client

check two things

a. vol options vol0

nosnap = off, nosnapdir = off < default >

These should be off. When they are turned on, the cifs windows client cannot access and restore the snapshots, though they can see them

b. options cifs


cifs.show_snapshot on

To get access to \\<ip of filer>\vol0\.snapshot from a windows cifs access host, vol0 must be shared, otherwise \vol0\.snapshot cannot be accessed

Nfs snapshot

.snapshot directory is shown only at the mount point, although it actually exists in every directory in the tree

Cifs snapshot

.snapshot directory appears only at the root of the share.

priv set advanced

snap status

( blocks owned x 4K = space used )

snap list

( generally the snap reserve is 20 % )

Solaris troubleshooting for lun

solaris_info [ -d <directory name> ] [ -n <name> ]

SnapDrive troubleshooting tool

SnapDrvDc.exe

Snapdrive snapshot lun restore from mirror site

1. Break mirror

2. Check that lun is online

3. if using terminal services and you get the Failure in Checking Policies error, Error Code : 13040, then log off and log back in, or if this does not work, reboot the windows host.

Single File SnapRestore ( SFSR ) is done before snapdrive makes the connection. During this time snapdrive virtually does not work and issues the 13040 error.

No other lun restore can be done from the same host. As SFSR is going on in the background, the solution is : wait patiently. Log off and log back in after a while; the drive should come up.

Snap restore

Volume Restore

snap restore -t vol path_and_vol_name

snap restore -t vol -s snapshot_name path_and_vol_name

File restore

snap restore -t file path_and_file_name

snap restore -t file -s snapshot_name -r new_path_and_file_name path_and_file_name

Snapshot restore

snap restore -t file -s winblocktemp /vol/winblocks/winblocksum

Qtree or directory restore

snap restore -f -t file -s < snapshot > /vol/vol0/<directory name> - to restore a directory

Vol

vol status -b

vol create vol1 2

vol restrict vol1

vol copy start vol0 vol1

vol online vol1

snap list vol1

… snapshot_for_volcopy.0

snap create vol1 snap1

Snap Mirror

/etc/snapmirror.conf


vol status -b vol1 ( size in blocks )

vol status vol1

options snapmirror.access host=filerA

filerB>vol restrict vol2

>wrfile /etc/snapmirror.conf

filerA:vol1 filerB:vol2 - * * * * (min hour day-mon day-week)

filerA:vol1 filerB:vol2 - 45 10,11,12,13,14,15,16 * 1,2,3,4,5
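The schedule fields read like a cron entry; the second line above, annotated (filer and volume names as in the example):

```
# /etc/snapmirror.conf: source destination arguments minute hour day-of-month day-of-week
# update at 45 minutes past each hour from 10:00 through 16:00, Monday(1) to Friday(5)
filerA:vol1 filerB:vol2 - 45 10,11,12,13,14,15,16 * 1,2,3,4,5
```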

snapmirror on

vol status –v

filerB>snapmirror initialize –S filerA:vol1 filerB:vol2 #baseline data transfer

snapmirror status

snapmirror status -l - more detailed info

snapmirror off

snapmirror break filerb:vol2

snapmirror on

snapmirror quiesce filerB:/vol/vol0/mymirror ( quiesce before breaking a qtree snapmirror )

snapmirror resync –S filerB:vol2 filerA:vol1

----

snapmirror update filerb:vol2

snapmirror off #disable snapmirror


snapmirror on #resume snapmirror,reread /etc/snapmirror.conf

snapmirror break vol2 # converts a mirror to a read/write vol or qtree on dest filer

snapmirror destinations -s source_volname

snapmirror release vol1 filerc:vol1

snapmirror status -l vol1

for qtree:

snapmirror quiesce mirror_qtree

snapmirror break mirror_qtree

Breaking snapmirror

1. snapmirror quiesce <destination path>  (check the destination path in the /etc/snapmirror.conf file)

2. snapmirror off

3. snapmirror break <destination path>

To Resume the operation

Have to resync
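The quiesce/off/break sequence and the later resync can be sketched as helpers that emit the console commands in order (a hypothetical illustration; paths and filer names are placeholders, and the commands would still be run manually or via rsh):

```python
# Sketch: build the command sequence for breaking a volume snapmirror
# relationship, mirroring the three steps listed above.
def break_commands(dest_path):
    return [
        f"snapmirror quiesce {dest_path}",
        "snapmirror off",
        f"snapmirror break {dest_path}",
    ]

# Resync runs on the destination filer with -S pointing at the source.
def resync_commands(src_path, dest_path):
    return [f"snapmirror resync -S {src_path} {dest_path}"]

break_commands("filerB:vol2")
# ["snapmirror quiesce filerB:vol2", "snapmirror off", "snapmirror break filerB:vol2"]
```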

snapmirror store  # initialize a volume snapmirror from tape, on the source vol

snapmirror retrieve # on mirror vol


Synchronous Snapmirror

/etc/snapmirror.conf

filera:/vol1 filerb:/vol2 - sync

#multi path

src_con = multi()

src_con:/vol1 dest:/vol2 - sync

#src_con = failover()

Steps to create a mirror

1. Enter the license on both filers

2. Use the snapmirror.access option to specify the destination filer

3. On the destination filer, edit the /etc/snapmirror.conf file

4. On both source and destination filers, enter the snapmirror on command

5. On the destination filer, run the snapmirror initialize <destination> command

Requirement

Destination vol must be restricted

Everything in the destination will get deleted once initialized

snapmirror optimization

filer > options snapmirror.window_size 199475234 (default )


(This causes large bursts of packets and does not work well over a WAN. It may cause large packet drops, resulting in termination of the snapmirror transfer or very low throughput.)

To change this

Dest filer > options snapmirror.window_size <between 8760 and 199475234>

Window size calculation

Window size = round-trip delay * desired throughput

E.g., for 10 Mb/s desired throughput and 100 ms round-trip delay:

(100/1000) * 10,000,000 / 8 = 125,000 bytes
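The arithmetic above can be checked with a tiny helper (a sketch of the stated rule, not a NetApp utility):

```python
# Window size (bytes) = round-trip delay (s) * desired throughput (bit/s) / 8
def snapmirror_window_bytes(rtt_ms, throughput_bps):
    return int((rtt_ms / 1000.0) * throughput_bps / 8)

# 100 ms round-trip delay on a 10 Mb/s link:
snapmirror_window_bytes(100, 10_000_000)  # 125000 bytes
```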

options snapmirror.delayed_acks.enable off

Snapmirror problem

On the source filer

Snapmirror source transfer from <vol> to <destination filer>:<vol>: request denied, previous request still pending

Socket connect error : resource temporarily unavailable

Sol: On the destination

1. make sure that the vol is there

2. make sure the source is pingable

Destination mirror filer> snapmirror abort netapp01:vol1 pcnetapp01:vol1 OR snapmirror abort netapp01 pcnetapp01

Destination filer> snapmirror status


( see transfer has stopped )

Destination filer> snapmirror resync -S netapp01:vol1 pcnetapp01:vol1

Snapvault

>options snapvault.enable on

>options snapvault.access host=name

baseline qtree

>snapvault start -S filer:/vol/vol1/c1-v1-q1 vault:/vol/volx/t1-v1-q1

>snapvault modify -S src_filer:qtree_path

>snapvault update dest

>snapvault snap sched <volume> <snapname> count@day_list@hour_list

>snapvault snap sched vol1 sv_1900 4@mon-sun@19
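The sv_1900 schedule above uses the count@day_list@hour_list form. As a hedged sketch (decoding only the assumed string format, not validating against ONTAP), a helper can pull the pieces apart:

```python
# Sketch: decode a snapvault schedule string "count@day_list@hour_list",
# e.g. "4@mon-sun@19" = retain 4 snapshots, taken Mon-Sun at 19:00.
def parse_snapvault_sched(spec):
    count, days, hours = spec.split("@")
    return {"retain": int(count), "days": days, "hours": hours}

parse_snapvault_sched("4@mon-sun@19")
# {"retain": 4, "days": "mon-sun", "hours": "19"}
```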

>snapvault snap unsched

>snapvault snap create #manually create a snapshot on the primary or secondary

#snapshot name must exist

>snapvault restore -s snapname -S srcfiler:/vol/volx/qtree

>snapvault stop destfiler:/vol/volx/qtree

>snapvault abort dest_qtree

>snapvault release src_qtree dest_qtree

>snapvault status


>snapvault start -r <source qtree> <destination qtree>

Snapvault troubleshooting

If a backup relationship from OSSV is created and then deleted from the secondary, any attempt to recreate it fails with the error message:

“Transfer aborted: the qtree is not the source for the snapmirror destination”

Example

Twain*> snapvault start -S fsr-pc1:E:\smalldir /vol/tiny/smalldir

(error at console: snapvault: destination transfer from fsr-pc1:E:\smalldir to /vol/tiny/smalldir: the qtree is not the source for the snapmirror destination)

Transfer aborted: the qtree is not the source for the snapmirror destination

On the primary log: error: E:\smalldir twain:/vol/tiny/smalldir Invalid qtree/snapshot requested

Log: e:\smalldir twain:/vol/tiny/smalldir unexpected close getting qsm data

To work around this

Release the relationship on the primary using

snapvault.exe release e:\smalldir twain:/vol/tiny/smalldir

To see which relationships are releasable, type


snapvault.exe destinations

backup with DFM

>options ndmpd.enable on

>options ndmpd.access dfm-host

> options ndmpd.authtype <challenge | plaintext >

A non-root user gets an NDMP password with:

> ndmpd password <user name >

> ndmp password ……..

add snapvault license
>options snapvault.enable on
>options snapvault.access host
>options ndmpd.preferred_interface e2  # optional

Importing an existing relationship: just add the relationship; the schedule/retention are not imported.

DFM and NDMP

a. First set this at the filer
Options ndmpd.enable ON
Options ndmpd.access <dfm host>

b. While DFM is downloaded and installed
Primary storage system
Primary System Name
NDMP user <root>, if no other users are defined
NDMP p/w < >

c. Telnet to port 10,000 and make sure that it can talk and is not blocked.

Diagnosis between DFM host and filer

At the host where DFM is downloaded
C:\> dfm host diag <primary filer>
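The port check in step c (telnet to port 10,000) can be scripted. This is a generic TCP reachability sketch, not a DFM tool, and the host name in the usage comment is a placeholder:

```python
import socket

# Sketch: check whether the NDMP control port (10000) on the filer is
# reachable from the DFM host, like the manual telnet test above.
def ndmp_port_open(host, port=10000, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# ndmp_port_open("primary-filer")  # hypothetical filer host name
```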


DFM database files on the Windows host machine

C:\> dfm database get
dbDir c:/Program files/Network Appliance/Data Fabric Manager/DFM/Data
dblogDir
dbCacheSize

Snaplock

vol create src1 -L 2  (at this point a question is asked; read it carefully — this volume cannot be deleted; it is one-time)
vol create dst1 2
vol status  (you will see the snaplock compliance vol here)
snapmirror initialize -S giardia:src1 -L giardia:dst1

VIF create steps

a. vif create vif1 e0 e7a e6b e8  (single mode)
OR
vif create multi vif0 e4 e10  (multi mode)
b. ifconfig vif1 <ip of vif> netmask 255.255.255.0 mediatype 100tx-fd
c. update the /etc/rc file
d. reboot

Tip 1
check
filer > routed status
filer > routed ON

Tip 2
If there are 3 ports (e.g. 2 Gig and 1 100Base-T Ethernet), the e0 port (the default, 100Base-T) must be turned off.

Vfiler

The hosting filer administrator does not have CIFS or NFS access to the data contained in vfilers, except for that in vfiler0. After a storage unit is assigned to a vfiler, the hosting filer administrator loses access to that storage unit. The vfiler administrator gains access to the vfiler by rsh to the vfiler's IP address.

As hosting filer administrator, before you create a vfiler with the /vol/vol1 volume, you can configure the /etc/exports file so that you can mount the /vol/vol1 volume. After you create the vfiler, an attempt to mount the /vol/vol1 volume would result in the "Stale NFS file handle" error message. The vfiler administrator can then edit the vfiler's /etc/exports file to export /vol/vol1, run the exportfs -a command on the vfiler, then mount /vol/vol1, if allowed.


>ipspace create vfiler1-ipspace
>ipspace assign vfiler1 e3a
>ifconfig e3a 0.0.0.0
>ipspace destroy e3a_ipspace
>ipspace list
>vfiler create vfiler2 -s vfiler2 -i 10.41.66.132 /vol/vfiler/vfiler2
>vfiler status -a
>vfiler status -r  # running
>vfiler run vfiler1 setup
>vfiler stop|start|destroy
does it need to be started after setup?

VFM

Cache location
C:\documents and settings\all users\application data\nuview\storage\server\cache

To change the location of the VFM application directory <C:\Documents and Settings\All Users\Application Data\NuView\>, which contains the cache directory:

1. Take a snapshot of the application in case there is a need to return to the working state. This can be done through VFM in the Tools menu by selecting Take Application Snapshot Now. Have the user create a snapshot and save it.

2. Save a copy of the VFM application folder <C:\Documents and Settings\All Users\Application Data\NuView> somewhere for backup purposes.

3. Exit VFM and stop the StorageXReplicationAgent service and the StorageXServer service.

4. Create a folder on a different drive on the VFM server where the application directory should reside in the future. Use a local destination for the folder, for example D:\VFMAppData; a mapped drive does not work in this situation. Create a new subdirectory called NuView in the new location. Ex: D:\VFMAppData\NuView

5. Go to the C:\Documents and Settings\All Users\Application Data\NuView directory and copy the StorageX directory to the new location created under the NuView subdirectory. The new location should look something like this: D:\VFMAppData\NuView\StorageX

6. Open the registry with regedit.exe and find the HKEY_LOCAL_MACHINE\SOFTWARE\NuView\StorageX key. Add a new String Value here with the name AppDataDir and set the value data to the root of the new cache location. Ex: D:\VFMAppData

7. Close regedit and start the StorageX Server and Replication Agent services.

8. Start VFM and wait as it reads through the new cache directory and loads the roots and information that were copied to the new location.

Backup media fundamentals

ndmpd should be ON. To check:
Filer> ndmpd status
Filer> ndmpd probe 0  [session 0; can be from 0-5]

sysconfig -t  (gives some backup media information)
mt -f nrst0a status
restore tf nrst0a  (displays the file list; there can be multiple backups in a backup file)


mt -f nrst0a fsf 1
storage disable adapter <port>
storage enable adapter <port>
storage show tape supported  (should show the wwn if supported)
(sysconfig -a will tell the port and also shows whether the adapter card is online or offline; usually slot 10)
/etc/log/backup  (log files)

List all the files in a backup
Filer> restore tf rst0a rewind  (rewind the tape)
Filer> mt -f rst0a fsf 6  (move the head to file 6)
Filer> mt -f rst0a status  (make sure)
Filer> restore -xvbfDH 60 rst0a /vol/vol0/…  (restore)

Testing
dump 0f rst0a /vol/vol0/etc/usermap.cfg  (example)

SCSI tape diagnostics to send to the vendor (more detailed messages)
Filer> mt -f diag 1  (ON)
Filer> mt -f diag 0  (OFF)

Copy and paste the console messages and send them to the vendor. (With diag 1 ON, all the messages go to /etc/messages whenever any backup job or command is executed, such as mt -f rewind, offline, erase, status, diag, etc.)

Some issues
a. If Veritas is showing RED for the LTO tape devices, reboot the LTO and restart the Veritas services.
b. If backup is done from Veritas software, make sure that no sessions are staying back as cifs share sessions. Go to My Computer -> Manage -> connect to filer -> Shares -> Sessions. Administrative shares of backups can be seen sticking here (not going away even after backup is complete) and you see a huge list.

Fiber channel backup device

Filer> fcadmin online adapter 8a
Filer> fcadmin online adapter 8b
Filer> fcp show adapter
filer> storage show tape

Tape Drive: FPN[200300051e35353d]:0.80
Description: HP Ultrium 2-SCSI
Serial Number: HUM2M00009
World Wide Name: WWN[5:006:0b0000:1e01ae]
Alias Name(s): st0
Device State: available (does not support reservations)

McData side
CNXNAS*> storage show switch
Switch: WWN[1:000:080088:020751]
Fabric: WWN[1:000:080088:020751]
Name: CNX01
Domain: 97
Type: switch
