NetApp and Others



NETAPP_SNAPLOCK

One day I was in a technical interview and the interviewer asked me a question about NetApp storage technology: what is a “compliance volume”? I got stuck and could not give any reply, because I was not aware of compliance volumes. After the interview finished, I started searching for the compliance volume in NetApp, and I found that a compliance volume is also known as a SnapLock volume.

I knew about SnapLock volumes in NetApp, but I did not know that a SnapLock volume is also called a compliance volume.

After getting rejected in that interview, I thought of writing something about the SnapLock volume. SnapLock is not used much in most companies, so many NetApp administrators are not aware of this feature, and some who know about it have never used it.

SnapLock is NetApp's WORM (Write Once Read Many) solution.

Some regulated environments require that business records be archived on WORM media to maintain a non-erasable and non-rewritable electronic audit trail that can be used for discovery or investigation. Data ONTAP SnapLock software complies with the regulations of those markets. With SnapLock, the media itself is rewritable, but the integrated hardware and software controlling access to the media prevent modification or deletion.

SnapLock is available in two versions: SnapLock Compliance, for strict regulatory environments, and SnapLock Enterprise, for more flexible environments.

SnapLock integrates with SnapMirror and SnapVault: SnapMirror allows SnapLock volumes to be replicated to another storage system, and SnapVault backs up SnapLock volumes to a secondary storage system, so that if the original data is destroyed it can still be restored or accessed from another location.

Once data is created in a SnapLock volume it falls under the retention period and the files are treated as WORM, so nobody can delete or modify the data until it reaches its retention period; the SnapLock volume cannot be deleted by the user, the administrator, or the application. The retention date on a WORM file is set when the file is committed to WORM state, but it can be extended at any time. The retention period can never be shortened for any WORM file.

SnapLock Compliance (SLC)



SnapLock Compliance is used in strictly regulated environments, where data is retained for long periods of time and is accessed frequently but only for reading.

SnapLock Compliance does not allow even the storage administrator to perform any operation that might modify a file. It uses a feature called the ComplianceClock to enforce retention periods. SnapLock Compliance requires the SnapLock license, which enables the SnapLock features and restricts administrative access to the files.

SnapLock Enterprise (SLE)

SnapLock Enterprise allows the administrator to destroy a SnapLock volume before all the files on the volume reach their expiration dates. However, no one else can delete or modify the files.

It requires the snaplock_enterprise license.

NOTE:

1. In Data ONTAP 7.1 and later, SnapLock Compliance and SnapLock Enterprise can be installed on the same storage system.

2. Uninstalling the SnapLock license does not undo the SnapLock volume or WORM file properties. After the license is removed, existing SnapLock volumes are still accessible for general reading and writing, but new files cannot be committed to WORM state on those volumes and no new SnapLock volumes can be created.

SnapLock Configuration Steps

1. Verify that the storage system's time zone, time, and date are correct.

2. License SnapLock.

3. Initialize the compliance clock.

4. Create a SnapLock aggregate and volume.

5. Set retention periods.

6. Create files in the SnapLock volume.

7. Commit files to WORM state.

Use the "date -c initialize" command to initialize the compliance clock.

After the compliance clock is initialized, it cannot be altered, and WORM files, SnapLock Compliance volumes, and aggregates cannot be destroyed. The compliance clock stays in sync with the system clock while the volume is online; when a volume is taken offline, its compliance clock falls out of sync with the system clock, and when the volume is brought back online Data ONTAP starts to make up the time difference. The rate at which the compliance clock catches up depends on the version of Data ONTAP your system is running.
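Putting the configuration steps together, a minimal end-to-end walkthrough could look like the following. This is only a sketch: the license code, the aggregate and volume names, the sizes, and the 7-year default period are placeholders, and the prompts assume a 7-Mode system.

System> date                                            (step 1: confirm time zone, time, and date)
System> license add XXXXXXX                             (step 2: add the snaplock or snaplock_enterprise license)
System> date -c initialize                              (step 3: initialize the compliance clock; this is irreversible)
System> aggr create slaggr -L 3                         (step 4: create a SnapLock aggregate ...)
System> vol create slvol slaggr 100m                    (... and a SnapLock flexible volume inside it)
System> vol options slvol snaplock_default_period 7y    (step 5: set a default retention period of 7 years)

Steps 6 and 7 (creating files and committing them to WORM) are then performed from a CIFS or NFS client, as shown in the sections below.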

Page 3: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 3/104

Creating the SnapLock Volumes

The SnapLock type, Compliance or Enterprise, is determined by the installed license.

Creating a SnapLock traditional volume:

"vol create vol_name -L <n_disks>"

For example:

System> vol create viptrd -L 14

To create a SnapLock flexible volume, you need to create the SnapLock aggregate first:

"aggr create aggr_name -L <n_disks>"

"vol create vol_name aggr_name <size>"

For example:

System> aggr create vipaggr -L 3

System> vol create vipflex vipaggr 100m

System> vol status vipflex (to check the status of the vipflex volume)

Volume Retention Periods.

The retention periods are volume attributes. If you do not explicitly specify an expiration date for a WORM file, the file inherits the retention period set at the volume level.

There are three retention periods per volume: a minimum, a maximum, and a default.

1. Until you explicitly reconfigure it, the maximum retention period for all WORM files in a SnapLock Compliance volume is 30 years.

2. If you change the maximum retention period on a SnapLock Compliance volume, you also change the

default retention period.

3. If you change the minimum retention period on a SnapLock Enterprise volume, you also change the default retention period. The minimum retention period is 0 years.

How to set the Retention Periods.

Use the vol options command to set the retention period values.

For example:

System> vol options vol_name snaplock_minimum_period {<count>d | <count>m | <count>y | min | max | infinite}

System> vol options vipvol snaplock_minimum_period 1d

System> vol options vipvol snaplock_default_period min

min is the retention period specified by the snaplock_minimum_period option, max is the retention period specified by the snaplock_maximum_period option, and infinite specifies that files committed to WORM state are retained forever.

For storage systems running Data ONTAP 7.1 and later, the maximum retention period can be extended up to 70 years.
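For instance, to raise the volume maximum (and, on a Compliance volume, therefore the default) to 70 years and then confirm the settings, something like the following could be used. This is a sketch; vipvol is the example volume from above.

System> vol options vipvol snaplock_maximum_period 70y
System> vol options vipvol    (lists the volume options, including the three snaplock_*_period values)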

You can also extend the retention date of a WORM file, before or after its retention date has expired, by updating its last-accessed timestamp. This re-enables WORM protection on the file as if it were being committed for the first time. Once the retention date expires, you can re-enable the write permission on the file and then delete it; however, the file contents still cannot be modified.

Note: the SnapLock volume's maximum retention period restriction is not applied when extending the retention date of a WORM file.
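For example, if document.txt was originally committed with a retention date in 2020, its retention could later be pushed out to 21 November 2025 from a UNIX client simply by touching the access time again (a sketch; the file name and dates are illustrative):

touch -a -t 202511210600 document.txt    # move the last-accessed timestamp, i.e. the retention date, to 2025
ls -lu document.txt                      # -u displays the access time, which is the new retention date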

Committing Data to WORM

After a file is placed on a SnapLock volume, its attributes must be explicitly changed to read-only before it becomes a WORM file. This can be done interactively or programmatically; the exact command or program required depends on the file access protocol (CIFS, NFS, and so on) and the client operating system being used.

The following commands could be used to commit the file document.txt to WORM state, with a retention date of November 21, 2020, using a UNIX shell:

touch -a -t 202011210600 document.txt

chmod -w document.txt

For a Windows client, to make a file read-only, right-click the file, select Properties, check the Read-only checkbox, and then click OK.


The last-accessed timestamp of the file at the time it is committed to WORM state becomes its retention date, unless it is limited by the minimum or maximum retention period of the SnapLock volume. If the date was never set, the default retention period of the SnapLock volume is used.

If a file is initially copied read-only, it is not automatically committed to WORM state. The only way to guarantee that a file is committed to WORM state is to explicitly assign the read-only attribute, and optionally modify the last-access time to set a retention date, while the file is in the SnapLock volume.
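As a programmatic example, the interactive touch and chmod commands above can be wrapped in a small shell loop to commit everything in a directory on the SnapLock volume. This is only a sketch: the mount point /mnt/slvol/archive and the retention date are illustrative.

#!/bin/sh
# Commit every regular file under the archive directory to WORM,
# with a retention date of 21 November 2020.
RETENTION=202011210600
for f in /mnt/slvol/archive/*; do
    [ -f "$f" ] || continue          # skip anything that is not a regular file
    touch -a -t "$RETENTION" "$f"    # set the last-accessed time, which becomes the retention date
    chmod -w "$f"                    # removing write permission commits the file to WORM
done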

Auto-Committing Files.

A time-delayed commit to WORM with an adjustable timer is available in Data ONTAP 7.2 and later. If a file does not change during the delay period, it is committed to WORM at the end of that period. Auto-commit does not take place instantly when the delay period ends; auto-commits are performed by a scanner and can take some time. The retention date on the committed file is determined by the volume's default retention period.

To specify a time delay, set the global option snaplock.autocommit_period to a value consisting of an integer count followed by an indicator of the time period: "h" for hours, "d" for days, "m" for months, or "y" for years.

System> options snaplock.autocommit_period [none | <count>(h | d | m | y)]

The default value is none and the minimum delay that can be specified is two hours.

The following example sets the auto-commit period to 24 days.

System> options snaplock.autocommit_period 24d
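You can check the current setting at any time by entering the option name with no value, and set it back to none to disable time-delayed auto-commit again (a quick sketch):

System> options snaplock.autocommit_period         (prints the current value, e.g. 24d)
System> options snaplock.autocommit_period none    (disables auto-commit)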

Appending to a WORM File

A WORM file can be appended to by simply creating an empty file on the SnapLock volume, appending data to it, and then locking the file in place again. Data is committed to WORM state in 256 KB chunks; as each new chunk is written, the previous 256 KB segment automatically becomes immutable. This procedure is supported from Data ONTAP 7.1 onward.

For example:

1. Inside a WORM volume create a zero-length file with the desired retention date.

touch -a -t 202012150600 file

2. Update the file access time to indicate the file’s desired expiration.

3. Make the file read-only.


chmod 444 file

4. Make the file writable again.

chmod 755 file

5. Start writing data to the file.

echo test data > file

At this stage you have a WORM-appendable file. Data is committed to WORM in 256 KB chunks: once data is written to byte n*256K+1 of the file, the previous 256 KB segment is in WORM state and cannot be rewritten.

6. Make the file read-only again. The entire file is now in the WORM state and cannot be overwritten or

erased.

chmod 444 file
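Putting the six steps together, the whole append cycle from a UNIX client looks roughly like this (a sketch; the file name and retention date are illustrative):

touch -a -t 202012150600 file    # steps 1-2: create a zero-length file and set its retention date
chmod 444 file                   # step 3: commit it to WORM (read-only)
chmod 755 file                   # step 4: make the WORM-appendable file writable again
echo "test data" >> file         # step 5: append data; each completed 256 KB chunk becomes immutable
chmod 444 file                   # step 6: lock the whole file; it can no longer be overwritten or erased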

Deleting WORM volumes and Aggregates

You cannot delete SnapLock Compliance volumes that contain unexpired WORM data. You can delete the volume only when all of its WORM files have passed their retention dates.

Note: If a Compliance volume cannot be destroyed, it remains offline. It should be brought back online immediately so that the retention period problem can be fixed and the ComplianceClock does not fall behind.

You can destroy Compliance aggregates only if they do not contain volumes; the volumes contained by a SnapLock Compliance aggregate must be destroyed first.

You can destroy SnapLock Enterprise volumes at any time, provided you have the appropriate authority.
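For a SnapLock Enterprise volume, for example, an administrator with the appropriate authority could take it offline and destroy it in the usual way (a sketch; vipsle is a hypothetical volume name, and vol destroy will ask for confirmation):

System> vol offline vipsle
System> vol destroy vipsle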

………..if you like this blog please comment……..Thanks…….


Posted 1 week ago by vipulvajpayee


Quantum DXi6700 series vs. EMC DataDomain

As we know, the Quantum DXi series appliance is a disk-based backup solution, quite similar to the Data Domain solutions; one thing Quantum claims in the market is that DD uses Quantum's patented variable-size segment technology in the DD box. So Quantum is giving quite a tough fight to EMC Data Domain, but Quantum is new to the Indian market, and it will take time for it to make its footprint there.


Quantum has launched its DXi Accent software with its new DXi6700 series boxes, the DXi6701 and DXi6702.

DXi Accent distributes the deduplication work between the backup software and the DXi appliance. It is similar to EMC's DD Boost software.

Quantum has one big plus point: the direct tape-out capability of the DXi storage box. Quantum knows very well that tape is not going to be replaced, because it is still the cheaper option for long-term retention of data, and Quantum produces its own tape devices. With this direct tape-out capability, the DXi boxes can be attached to Quantum tape libraries and send data directly to tape. Data Domain has no such feature, which creates a performance bottleneck when we need to send data from the Data Domain boxes to tape, because that transfer has to use a separate or existing network. And since EMC does not manufacture any tape devices, if an issue occurs while sending data from the Data Domain to tape over the network, we can get stuck between two different vendors for support: EMC for the Data Domain and the tape vendor for the tape. With Quantum, if you are using the DXi appliance and Quantum tape in your environment and some issue happens, you can get complete support for both the DXi appliance and the tape device, which is again a plus point for Quantum in support and services.

Quantum vmPRO is software specially designed for VMware backup. As we know, in today's world the faster things get virtualized, the more data gets created and the tougher it becomes to back up these virtual machines properly. Quantum vmPRO is the solution for taking deduplicated backups of virtual machines properly and quickly to the DXi appliance.

The Quantum DXi6700 series is comparable to the EMC DD670, and now I would like to show you some of the performance details published by the Evaluator Group regarding DXi6702 performance in their report.

The DXi6702 was the highest-performing data deduplication system evaluated by the Evaluator Group to date. Perhaps more important than the raw performance level, the price/performance of the DXi6700 systems outpaces all currently competing systems in their category.

Performance met or exceeded claimed values for the DXi6700 with multiple protocols (NFS, VTL, and OST):

1. OST performance was 5.9 TB/hr for the DXi6702 (vs. 5.4 TB/hr for the Data Domain DD670 with DD Boost, per published specifications).

2. VTL performance was 5.9 TB/hr for the DXi6701 (vs. 3.6 TB/hr for the Data Domain DD670, per published specifications).

3. NFS performance was 4.4 TB/hr for the DXi6701 (vs. no specification published for the Data Domain DD670).

As per the Evaluator Group evaluation report:


1. Quantum's deduplication delivered the best results in the midrange category across multiple protocols, with a minimal disk configuration.

2. Industry-leading midrange performance for CIFS, NFS, VTL, and OST with distributed deduplication.

3. Performance with or without distributed deduplication (DXi Accent) met or exceeded that of midrange industry leaders.

4. Throughput wasn't limited by the number of drives: maximum performance was available with the minimum configuration of 8 TB in a RAID 6 configuration.

Additionally, the following results were observed:

1. Hybrid backup modes using DXi Accent and traditional backup are supported simultaneously.

2. DXi Accent is able to improve backup performance when network bottlenecks exist and significantly accelerates WAN-based backup.

3. Performance scaled rapidly, surpassing the stated performance levels with 35 streams.

4. The unique path-to-tape feature can enhance performance when moving a copy of data to tape, providing a practical method for creating data copies for long-term retention.

As we know, deduplication is a resource-intensive technology, so proper care should be taken with indexing of the data files. The DXi with solid-state disk not only increases performance by storing the index files on solid state, it also tunes the indexing operations.

Data Domain also does its indexing operations in cache, but it does not store the index in cache, so loading the index file from SATA disk back into cache reduces performance somewhat.

Some of the plus-point features of Quantum:

Virtual backup enhancement: The new Quantum vmPRO virtual backup appliance combines a DXi system with software to enhance backing up VMware virtual machines. The software runs as a virtual appliance and discovers all virtual machines on the network. Each VM's datastore is then mapped as an individual file system, with complete access to the internal file system. This enables the backup application to treat and protect VMware virtual machines as a collection of files.

Data Domain does not have a specific software feature to take care of virtual machine backup, so mostly we need to depend on the backup software for backing up virtual machines, which again can be a bottleneck when we face issues backing up virtual machines.

DXi6700 Direct Tape Creation: I would say this is one of the best features; it will not only reduce EMC's footprint in the backup world but also help many customers send data back to tape for long-term retention by using Quantum's direct-tape-path feature. As we know, keeping data long term on disk can be very costly, so long-term retention data has to be moved to tape. Sending data to tape from a disk-based backup solution then becomes a big issue when there is no direct tape-out feature, because we need to use either the production LAN or a separate network to send the data from the disk-based backup device to the tape device.

Quantum has integrated the use of the D2D appliance with tape by including direct tape creation in its DXi6700 D2D and VTL products. Quantum's path-to-tape supports application-directed movement of data to tape with either the VTL or OST protocol. This capability offloads the production SAN and LAN networks and the backup server when making secondary copies to tape.

With direct connectivity between the DXi system and tape via dedicated Fibre Channel ports, data may be sent directly to tape drives, offloading backup servers, primary storage systems, and application servers. Data is sent under the direction of the data protection software, allowing the backup application to maintain awareness of where all copies of data reside. Data copied to tape is expanded back to its native format and then compressed using industry-standard algorithms. By storing data on tape in its non-deduplicated native format, long-term retention is enhanced while the requirements for data restoration from tape are minimized. This provides a LAN- and SAN-free method of moving data to tape for off-site storage or long-term retention.

EMC Data Domain does not have a direct-tape-path feature, so sending data from the Data Domain device to tape can again be a headache for the customer or the backup administrator.

Ease of Management

In the end, the management tools come into the picture after you buy the appliance: if its GUI is difficult to understand or not very user friendly, then the survival chances of that particular product in the market are very low.

So management of the device from its GUI should be very user-friendly, and the GUI should also provide useful monitoring data to keep records for future reference; proper graphs and charts and as much detail as possible should be gatherable and viewable with that tool.

I have not used any Quantum management tool, so I cannot speak from my own experience, but below is the data the Evaluator Group has provided on the Quantum DXi GUI features and ease of use.

Quantum DXi GUI:

The Evaluator Group found that Quantum's DXi GUI and other management tools score well for ease of use in comparison to the other deduplication systems it has evaluated. This is based on the following factors:

1. Bundled pricing without capacity-based licensing results in lower and more predictable cost for users (a significant benefit vs. many competitors).

2. Both the GUI and the CLI can be used to perform every task.

3. The GUI was intuitive and as easy to use as any competing system.

4. The embedded GUI is accessible via a web browser, requiring no external software installation.


5. The additional management tools were well integrated and provided significant value.

6. The DXi Advanced Reporting tool provided significant performance-analysis benefits, such as:

The ability to view six years of historical data for logged parameters, which exceeds the leading competitor.

DXi Advanced Reporting can be run simultaneously for multiple appliances through Quantum Vision.

It enables administrators to find and resolve bottlenecks and generate historical trend reports.

7. Quantum Vision now provides tighter integration and the ability to monitor, troubleshoot, and analyze performance in an enterprise setting.

8. There is a new ability to access the Vision server remotely via an iPhone, iPad, or iPod Touch, even over a VPN.

9. Integrated path-to-tape provides the capability to move data to tape without impacting the production storage or servers, thus offloading data copies to tape for archive or compliance.

With good management tools, good throughput, direct tape-out, and its other features, Quantum is really going to rock the Indian market by giving a tough fight to EMC Data Domain.

Posted 3 weeks ago by vipulvajpayee


Points to remember when expanding the RAID group in Hitachi HUS (Hitachi Unified Storage)

RAID groups can be expanded on Hitachi HUS storage, but we need to take a lot of care during the RAID group expansion; if you do not take care of certain points during the expansion, you can end up with data loss.

1. A RAID group can be expanded by adding disks.

2. A maximum of 8 disks can be added at a time, or fewer if the maximum RG width is reached.

3. RAID 0 (R0) cannot be expanded.

4. Any number of RG expansion requests can be submitted, but at any point in time each controller will perform only one RAID group expansion.

5. Expanding the RAID group will not expand the LUNs inside the RAID group (LUNs are known as LUs in Hitachi).

6. When the RAID group is expanded, extra space is created, in which we can create additional LUs in that RAID group.

7. RAID group expansion takes time and performance decreases slightly, so Hitachi recommends doing this operation when IOPS on the Hitachi storage are low.

8. Expanding the RAID group will not change the RAID level; R5 (RAID 5) remains R5 after it is expanded.

9. Only RGs whose PG depth is 1 can be expanded.

10. A RAID group can be expanded but cannot be shrunk.

11. There are two states while a RAID group is being expanded: the waiting state and the expanding state.

12. An expansion in the waiting state can be cancelled; this state means that the RAID group expansion has not yet started, so you can cancel it if you want.

13. An expansion in the expanding state cannot be cancelled; the expansion has already started, and if we forcefully try to cancel it we can end up with data loss.

Rules for expanding a RAID group

You cannot expand the RAID group in the following conditions:

1. If an LU's forced parity correction status is Correcting, Waiting, Waiting Drive Reconstruction, Unexecuted, Unexecuted 1, or Unexecuted 2. In other words, when parity correction is going on, do not perform the RAID group expansion; let the activity complete and then do the expansion.

2. If an LU is being formatted and it is part of the RAID group you need to expand, do not expand the RAID group until the formatting has completed.

3. If you are expanding a RAID group after setting or changing the Cache Partition Manager configuration, the storage system must be rebooted; expand the RAID group after the reboot. In a storage system where the power-saving function is set, change the status of the power-saving feature to "Normal (spin-on)" and then expand the RAID group.

4. If dynamic sparing, correction copy, or copy back is operating, expand the RAID group after the drive has been restored.

5. If firmware is being installed, expand the RAID group after the firmware installation has completed.

Best practices for RAID group expansion

1. You can assign priority either to host I/O or to the RAID group expansion.

2. Perform a backup of all data before executing the expansion.

3. Execute the RAID group expansion at a time when host I/O is at a minimum.

4. Add drives with the same capacity and rotational speed as the target RAID group to maximize performance.

5. Add drives in multiples of 2 when expanding RAID-1 or RAID-1+0 groups.

Point 2 mentions taking a backup of all data before doing the expansion because, if there is a power failure during the expansion or some other disaster happens to the system, the LUs associated with the expansion can become unformatted and there is a chance of data loss. So, to be on the safer side, please take a backup of the data before performing the expansion.

Posted 13th December 2012 by vipulvajpayee


Avamar

As I was going through the EMC Avamar product, I thought of writing something about it in my blog. As we know, backup is an evergreen technology that keeps improving day by day, and IT companies are really investing a lot of money in backup. I would say that Symantec products like Backup Exec and NetBackup were the most heard names in the backup world, but nowadays EMC is eating their market with its Avamar, NetWorker, and Data Domain products; EMC has 68% of the backup market, and these products are really giving a tough fight to Symantec and other backup vendors.

Let me introduce Avamar.

Avamar is a software + hardware appliance that takes backups to disk. Avamar does not come model-wise; rather, it comes capacity-wise, such as 3.9 TB and so on. Avamar does source-based deduplication, so it becomes easy to send data over the LAN or WAN at a nice speed. Another good thing about this product is that it takes a daily full backup, so restoration of the data becomes much faster than from traditional tape. As we know, because of virtualization lots of physical servers have moved into the virtual world, so the complexity of backing up those virtual machines has also increased; with Avamar we can back up those virtual machines easily and with good speed, and even NAS devices. Sometimes NDMP backup becomes quite time-consuming; with Avamar the NDMP backup really becomes fast and smooth, and the backup window also gets reduced. One more thing I forgot to mention is laptop and desktop backup: in most IT and other companies we don't back up our laptops and desktops, and if a laptop is lost or crashes it becomes difficult to retrieve the data; Avamar has the capability of backing up desktops and laptops. Data encryption in flight and at rest is an added advantage from a security perspective, and centralized management makes protecting hundreds of remote offices easy. With Avamar data transport it can move the deduplicated data to tape for long-term retention. Finally, the Avamar grid architecture provides online scalability, and the patented redundant array of independent nodes (RAIN) technology provides high availability.

Now, when I was telling one of my friends about these Avamar features, he laughed and asked what is new in this technology; other products also offer deduplication, so what's new in this deduplication? Right, what's new?

I would say that other companies have their own deduplication algorithms, and their deduplication software uses those algorithms to deduplicate the data. I don't want to go deep into algorithms, but I have worked on NetApp deduplication technology, which breaks up or scans the data in 4 KB fixed blocks to find duplicate data, and so it really takes a lot of CPU; NetApp deduplication is always recommended to run in non-production hours, like at night after 12, or on Saturdays and Sundays. From this we can understand that its dedup technology eats a lot of CPU. Now, I am not saying that NetApp deduplication is not good; what I want to say is that dedup technology depends very much on the type of algorithm used, for example whether the data is scanned in fixed blocks or in variable-length segments.

Variable- vs. fixed-length data segments

Segment size is a key factor in eliminating redundant data at a segment or sub-file level. Fixed-block and fixed-length segments are commonly used by many deduplication technologies, but a small change to the dataset (for example, adding a small word at the beginning of a file) can change every fixed-length segment in the dataset, despite the fact that very little of the data set has actually changed. Avamar uses an intelligent variable-length method for determining the segment size that looks at the data itself to determine the logical boundary points, eliminating this inefficiency.

Logical segment determination

Avamar's algorithm analyzes the binary structure of the dataset (the 0s and 1s that make up the data set) in order to determine segment boundaries that are context dependent, so that Avamar's client agent will identify the exact same segments for a dataset no matter where the dataset is stored in the enterprise. Avamar's variable-length segments average 24 KB in size and are then compressed to an average of just 12 KB. Because it analyzes the binary structure, Avamar's method works for all file types and sizes, including databases; for instance, if a paragraph is added at the beginning or in the middle of a text file, the Avamar algorithm will identify and back up only the newly added segments rather than backing up the whole file again.

For each 24 KB segment, Avamar generates a 20-byte ID using the SHA-1 algorithm; this unique ID is like a fingerprint for that segment. The Avamar software then uses the unique ID to determine whether a data segment has been stored before, and only unique data is stored, eliminating the duplicate data.
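To make the fingerprint-and-store idea concrete, here is a toy shell sketch. It uses fixed 24 KB chunks purely for simplicity (Avamar's real segmentation is variable-length, as described above), and the bigfile.dat input and the chunks/ and store/ directories are purely illustrative:

mkdir -p chunks store
split -b 24k bigfile.dat chunks/seg_            # cut the file into 24 KB pieces
for seg in chunks/seg_*; do
    id=$(sha1sum "$seg" | cut -d' ' -f1)        # the SHA-1 hash plays the role of the 20-byte segment ID
    if [ ! -e "store/$id" ]; then
        gzip -c "$seg" > "store/$id"            # only previously unseen segments are stored (compressed)
    fi
done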

Benefits of Avamar

1. Because of client-side deduplication there is a tremendous reduction in daily network bandwidth and backup data; the Avamar whitepapers say there is up to a 500x reduction in bandwidth and backup data.

2. Because only changed data is sent, you can see 10x faster daily full backups.

3. Restoration is fast because of the daily full backups.

4. If you are sending the backup data over the LAN/WAN to a remote site, there too you can see a tremendous amount of bandwidth reduction.

5. All the benefits of disk now apply to your backup: end-to-end protection, fast restoration, fast backup, a reduced backup window, and so on.

6. As we know, deduplication technology is resource intensive, which would suggest that the client will be heavily utilized while an Avamar backup is running; however, the Avamar client deduplication feature runs at low priority and does not take much CPU.

Grid Server Architecture

The Avamar grid server architecture is another feature that makes Avamar a more reliable, scalable, performance-oriented, flexible, and highly available solution.

Avamar's global deduplication feature also works on the unique-ID process I have already discussed above; because of it, duplicated data is not copied again. But to maintain this there must be good indexing technology. As we know, a centralized index is not very reliable: if the index file is corrupted we will not be able to retrieve the data from tape or anywhere else, because we will not know where the data is stored. So Avamar uses a distributed indexing feature.

Distributed Indexing

As the data volume increases, a centralized index becomes increasingly complex and difficult to manage, often introducing a bottleneck to backup operations. In addition, corruption of a centralized index can leave an organization unable to recover its backup data. Avamar uses a distributed indexing method to overcome this challenge. Avamar uses the segment ID in a manner similar to a landline phone number: the area code identifies the general area where the call needs to be routed, and the number itself gives the exact location the call is targeted at. Avamar uses one portion of each unique ID to determine which storage node a data segment will be stored on, and another portion of the unique ID to identify where the data will be stored within that specific storage node. This makes identification of the data quite easy, and hence restoration becomes faster and easier. The automatic load-balancing feature distributes the load, and the data, across the storage nodes.

RAIN Architecture (Redundant Array of Independent Nodes)

Avamar also supports a RAIN architecture across its nodes in order to provide fault tolerance and failover, so if any node fails there is no effect on the data, and online reconstruction of that node's data is started. In addition, RAID 5 or RAID 1 within a node protects against disk failure. Avamar also performs daily checking of the data, so that the data that has been backed up can be restored properly; because Avamar is a disk-based backup solution, block checksums are performed on disk to check for bad blocks.

Flexible deployment options

Agent-only option: For smaller or remote offices, you can install the agents on servers, desktops, or laptops, dedupe the data on the source side, and send it to a centrally located Avamar server over the WAN.

EMC-certified server: Avamar software installed on an EMC-certified server running Red Hat Enterprise Linux, from vendors including Dell, HP, and IBM.

EMC Avamar Virtual Edition for VMware: This is the industry's first deduplication virtual appliance, meaning you can install it as a virtual machine on an ESX server host, leveraging the existing server CPU and disk storage.


EMC Avamar Data Store: An all-in-one packaged solution (hardware + software), it comes in two models: a scalable model and a single-node model. The scalable model can be placed in a centralized datacenter, where it can store the data coming from all the remote sites and grow up to petabytes of storage.

A single-node Avamar solution is ideal for deployment at remote offices that require fast local recovery performance; it provides up to 1 TB, 2 TB, or 3.3 TB of deduplicated backup capacity, which under a typical tape-and-disk backup solution could take up to tens of terabytes of space.

Conclusion

Avamar is a great backup solution for enterprise customers who are really facing a lot of problems backing up their data, but it becomes a costly solution for some SMB customers. So even though the Avamar solution is among the best in the industry for desktop/laptop and VMware backup, it still faces a lot of challenges when it comes to cost, and because of the cost, Data Domain is taking a lot of the backup market in the SMB sector.

Avamar is best known for its desktop/laptop backup and VMware backup solutions, and because it does only full backups, it makes restoration much faster.

One of the good things about this solution is that it is end to end, meaning software and hardware together, so if we implement it in our environment we get end-to-end support from EMC. If a backup fails or some issue happens, I do not need to call my backup software vendor and my hardware vendor separately, which usually happens when the backup software is from one vendor and the hardware from another, forcing you to log two different support calls if something goes wrong with the backup.

Posted 30th November 2012 by vipulvajpayee

Comments

Rama krishna (4 December 2012 05:23): As always you are super sir.

vipulvajpayee (13 December 2012 02:10): Thank you Rama Krishna.


Hitachi Unified Storage Specification and Features

A few days back I attended the Hitachi HUS modular training, and while the training was going on, since I have already worked on NetApp storage, I kept comparing each Hitachi feature with the NetApp unified storage features; apart from the architecture (hardware, CPU, and cache) I have not found anything new in Hitachi unified storage. One thing I can say is that installing Hitachi storage and upgrading Hitachi firmware are quite a bit easier than on NetApp storage, and even management is easier than with the NetApp GUI.

Hitachi unified storage is not unified in hardware, but it is unified on the software front: they have Hitachi Command Suite for management of Hitachi SAN/NAS and even the other products such as VSP, AMS, and others.

When I asked why they have not merged all the hardware into a single box like NetApp, their straight answer was that they don't want to compromise on performance. More hardware means better performance, because each piece of hardware has its own memory, RAM, and CPU to perform its own task, while merging into a single piece of hardware increases the load on the central CPU, leading to more hardware failures and decreased performance.

Well, I have not used any Hitachi product, so I cannot say whether they are good in performance or not, but the customer feedback examples they presented in the training do suggest that Hitachi is a really good-performing storage box. If you look at their controller architecture, you can see that there is 6 GB/s connectivity between the RAID processor (DCTL) chips, or we can say that between the two controllers there is 6 GB/s connectivity, which helps the controllers do load balancing. That's good, and it was new to me, because I have not seen this in NetApp; yes, controller failover and failback happen in NetApp, but NetApp never says that its failover/failback link does any kind of load balancing.

Yes, I know that in NetApp the disks are owned by their respective controller and their load is handled by the owning controller only, and each controller has a certain load-handling capacity. But a lot of customers don't understand that, and they keep adding disk shelves to one of the controllers in the hope that in the future they will add shelves to the other controller; because of this, the controller that owns more disks than the other has to handle a lot more IOPS, so its utilization increases and performance decreases.

I have also worked on some EMC VNX series storage boxes, which are EMC's unified storage boxes, and there EMC strictly recommends adding expansion shelves alternately: if one expansion shelf is attached to one controller's bus, the next expansion shelf should be attached to the other controller's bus, for load balancing.

So that clearly shows that neither NetApp nor EMC has this kind of 6 GB/s internal connectivity between the controllers that can do automatic load balancing like Hitachi. Still, I cannot write much, because I do not have any experience with Hitachi's automatic load-balancing feature, so I cannot say whether it really works well.

But on enquiring with some colleagues there who have good experience with Hitachi storage, they stated that they have never seen much hardware failure in Hitachi except for disks; I mean things like controller failures, PCI card failures, fan failures, or power supply failures. I can say that hardware failure in NetApp is quite a bit higher than in other vendors' storage products. That's my personal experience and I don't know why; you may even have experienced some mail suddenly dropping into your inbox from NetApp stating that you should urgently do this or that firmware upgrade to save your controller from failing (W.T.F.), and then you have to plan for that activity. So, in my personal experience, NetApp has lots of bugs, and the good part is that they keep working on them.

In the Hitachi training they were focusing more on the cache part. In Hitachi storage you can do a lot of cache tuning, like setting the cache size as per your application. Data is first written to the cache and then to the disk, so if there is a power failure the cache data gets copied to flash memory, where the content can remain indefinitely (that is, until you recover from your power failure).

There is one more feature in Hitachi unified storage: you can set a larger block size. Every storage system divides data into some default block size, like 4 KB, and then stores it on disk, but in Hitachi you can also increase the block striping size. However, all such changes should only be made on the recommendation of the Hitachi technical team.

Now, striping with a bigger block size is really a fantastic feature and it really improves performance, so from all these good features you can see that Hitachi is very focused on performance, and that is one of the best things about Hitachi storage. Apart from this, basic installation of Hitachi is quite easy compared to NetApp and EMC, and part replacement is also easier than on NetApp and EMC; for part replacement you don't need to apply any extra brainpower, just go and remove the faulty part, that's all.

But still, today's storage world is talking about deduplication and automatic tiering, which are not yet there in Hitachi storage; these features will be launched soon in a coming version.

Below are the file module specifications:

Page 19: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 19/104

Below is HUS block storage specification:

Now, for HUS unified storage you have to buy both the block-level and the file-level storage; only then is it unified. So when you say you want Hitachi unified, you get both block and file.

I have not explained the features of Hitachi storage in depth, because I still need to work with this product, and by working on any product you can understand it easily. In the future I will be writing some more blogs on Hitachi products.

Posted 30th October 2012 by vipulvajpayee


NetApp Snap Mirror

Well, every NetApp engineer will be aware of SnapMirror; it's a common and important feature of NetApp, so today I thought of writing something about it. Maybe my blog on SnapMirror can help you understand it more easily.

Why we need SnapMirror

SnapMirror is NetApp's replication feature: a fast and flexible enterprise solution to replicate your critical and very precious data over local-area, wide-area, and Fibre Channel networks to a destination at a different location. It is a very good solution for disaster recovery, and also a good solution for online data migration without any additional overhead.

SnapMirror has three modes:

Async: Replicates Snapshot copies from a source volume or qtree to a destination volume or qtree. Incremental updates are based on schedules or are performed manually using the snapmirror update command. It works at both the volume level and the qtree level.

Sync: Replicates writes from a source volume to a secondary volume at the same time they are written to the source volume. SnapMirror Sync is used in environments that have zero tolerance for data loss.

Semi-sync: A mode between async and sync, with less impact on performance. You can configure a SnapMirror semi-sync replication to lag behind the source volume by a user-defined number of write operations or milliseconds.

Volume SnapMirror performs block-for-block replication. The entire volume, including its qtrees and all the associated Snapshot copies, is replicated to the destination volume. The source volume is online/writable and the destination volume is online/read-only; when the relationship is broken, the destination volume becomes writable.

Initial Transfer and Replication.

To initialize a SnapMirror relationship, you first have to restrict the destination volume in which the replica will reside. During the baseline transfer, the source system takes a Snapshot copy of the volume. All data blocks referenced by this Snapshot copy, including volume metadata such as language translation settings, as well as all Snapshot copies of the volume, are transferred and written to the destination volume.

After the initialization completes, the source and destination file systems have one Snapshot copy in common. Updates occur from this point onward and are based on the schedule specified in a flat-text configuration file known as snapmirror.conf, or are performed using the snapmirror update command.

To identify new and changed blocks, the block map in the new Snapshot copy is compared to the block map of the baseline Snapshot copy. Only the blocks that are new or have changed since the last successful replication are sent to the destination. Once the transfer has completed, the new Snapshot copy becomes the baseline Snapshot copy and the old one is deleted.

Requirements and Limitations

The destination's Data ONTAP version must be equal to or more recent than the source's. In addition, the source and the destination must be on the same Data ONTAP release.

Volume SnapMirror replication can occur only between volumes of the same type: both traditional volumes or both flexible volumes.

The destination volume's capacity must be equal to or greater than the size of the source. Administrators can thin provision the destination so that it appears to be equal to or greater than the size of the source volume.

Quotas cannot be enabled on the destination volume.

It is recommended that you allow the range of TCP ports from 10565 to 10569.

Qtree SnapMirror

Qtree SnapMirror is logical replication: all the files and directories in the source file system are created in the destination qtree.

Qtree SnapMirror replication occurs between qtrees regardless of the type of volume (traditional or flexible). Qtree replication can even occur between different releases of Data ONTAP.

In qtree replication, the source volume and qtree are online/writable, and the destination volume is also online/writable (the replicated destination qtree itself is read-only).

NOTE: Unlike volume SnapMirror, qtree SnapMirror does not require that the size of the destination volume be equal to or greater than the size of the source qtree.

For the initial baseline transfer you do not need to create the destination qtree; it gets created automatically upon the first replication.

Requirements and limitations

Supports async mode only.

The destination volume must contain 5% more free space than the source qtree consumes, and the destination qtree cannot be /etc.

Qtree SnapMirror performance is impacted by deep directory structures and by replicating large numbers (tens of millions) of small files.

Configuration process for SnapMirror

1. Install the SnapMirror license.

For example: license add <code>

2. On the source, specify the host name or IP address of the SnapMirror destination systems you wish to authorize to replicate this source system.

For example: options snapmirror.access host=dst_hostname1,dst_hostname2

3. For each source volume and qtree to replicate, perform an initial baseline transfer. For volume SnapMirror, restrict the destination volume:

For example: vol restrict dst_volumename

Then initialize the volume SnapMirror baseline, using the following syntax on the destination:

For example: snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol

For a qtree SnapMirror baseline transfer, use the following syntax on the destination:

snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree

4. Once the initial transfer completes, set the SnapMirror mode of replication by creating the /etc/snapmirror.conf file in the destination's root volume.

Snapmirror.conf


The snapmirror.conf configuration file entries define the relationship between the source and

the destination, the mode of replication, and the arguments that control SnapMirror when

replicating data.

An entry in the snapmirror.conf file looks like this:

For example: Fas1:vol1 Fas2:vol1 - 0 23 * 1,3,5

Fas1:vol1: the source storage system hostname and path

Fas2:vol1: the destination storage system hostname and path

"-": the arguments field; it lets you define the transfer speed and restart mode, and "-" indicates that the defaults are used

Schedule (minute hour day-of-month day-of-week):

0: update at minute 0, i.e. on the hour

23: update at hour 23 (11 PM)

*: update on all applicable days of the month

1,3,5: update on Monday, Wednesday, and Friday
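The arguments field can also carry real values instead of "-". For instance, a throttled hourly update that always restarts from its checkpoint could be appended to the destination's snapmirror.conf like this (a sketch; the kbs value and the host and volume names are illustrative):

For example (on the destination system): wrfile -a /etc/snapmirror.conf "Fas1:vol1 Fas2:vol1 kbs=5000,restart=always 15 * * *"

Here kbs=5000 limits the transfer to 5,000 kilobytes per second, restart=always restarts an interrupted transfer from its checkpoint, and the schedule 15 * * * runs an update at minute 15 of every hour.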

You can monitor transfers by running the command "snapmirror status"; this command can be run on the source as well as on the destination. It comes with two options, -l and -q:

-l: displays the long format of the output.

-q: displays which volumes or qtrees are quiesced or quiescing.

You can list all the Snapshot copies of a particular volume with the "snap list volumename" command. SnapMirror Snapshot copies are distinguished from system Snapshot copies by a more elaborate naming convention, and the snap list command displays the keyword snapmirror next to the relevant Snapshot copies.

Log files

SnapMirror logs record whether a transfer finished successfully or failed. If there is a problem with updates, it is useful to look at the log file to see what has happened since the last successful update. The log includes the start and end of each transfer, along with the amount of data transferred.

For example: options snapmirror.log.enable (on/off); by default it is on.

The log files are stored in the root volume of both the source and the destination storage systems, as the /etc/log/snapmirror files.

The following guides you quickly through SnapMirror setup and commands.

1) Enable SnapMirror on the source and destination filers

source-filer> options snapmirror.enable
snapmirror.enable            on
source-filer> options snapmirror.access
snapmirror.access            legacy
source-filer>

2) Snapmirror Access

Make sure the destination filer has SnapMirror access to the source filer. The destination filer's name or IP address should be in /etc/snapmirror.allow on the source. Use wrfile to add entries to /etc/snapmirror.allow.

source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>

3) Initializing a SnapMirror relationship

Volume SnapMirror: Create a destination volume on the destination NetApp filer, of the same size as the source volume or greater. For volume SnapMirror, the destination volume should be in restricted mode. For example, let us consider that we are snapmirroring a 100G volume; we create the destination volume and make it restricted.

destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination


Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy isreferred to as the baseline Snapshot copy. After performing an initial transfer of all data in thevolume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changedsince the last successful replication. When SnapMirror performs an update transfer, it createsanother new Snapshot copy and compares the changed blocks. These changed blocks are sentas part of the update transfer.

SnapMirror is always destination-filer driven, so the snapmirror initialize has to be done on the destination filer. The command below starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>

Qtree snapmirror : For qtree SnapMirror, you should not create the destination qtree. The snapmirror command automatically creates the destination qtree, so creating a volume of the required size is good enough.

Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then looking through the changed inodes of the interesting qtree for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy that is associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status : SnapMirror data transfer status can be monitored from either the source or the destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status
Snapmirror is on.
Source                          Destination                          State          Lag  Status
source-filer:demo_source        destination-filer:demo_destination   Uninitialized  -    Transferring (1690 MB done)
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   Uninitialized  -    Transferring (32 MB done)
destination-filer>

5) Snapmirror schedule : This is the schedule used by the destination filer for updating the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word "sync" to specify synchronous mirroring or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields. If you want to sync the data on a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to UNIX cron. You can set a synchronous SnapMirror schedule in /etc/snapmirror.conf by adding "sync" instead of the cron-style frequency.

destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * *          # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * *    # This syncs every day at 9:00 pm
destination-filer>

6) Other Snapmirror commands

To break a snapmirror relation - do snapmirror quiesce and snapmirror break.
To update snapmirror data - do snapmirror update.
To resync a broken relation - do snapmirror resync.


To abort a relation - do snapmirror abort
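A minimal sketch of these operations, run on the destination filer with the demo volumes used earlier (sequence only; output omitted):

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
destination-filer> snapmirror resync -S source-filer:demo_source demo_destination
destination-filer> snapmirror update demo_destination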

SnapMirror does provide multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load balanced between these paths and provides for failover in the event of a network outage.

Some important points to know about SnapMirror

Clustered failover interaction. The SnapMirror product complements NetApp clustered failover

(CF) technology by providing an additional level of recoverability. If a catastrophe disables

access to a clustered pair of storage systems, one or more SnapMirror volumes can

immediately be accessed in read-only mode while recovery takes place. If read-write access is

required, the mirrored volume can be converted to a writable volume while the recovery takes

place. If SnapMirror is actively updating data when a takeover or giveback operation is

instigated, the update aborts. Following completion of the takeover or giveback operation,

SnapMirror continues as before. No specific additional steps are required for the

implementation of SnapMirror in a clustered failover environment

Adding disks to SnapMirror environments. When adding disks to volumes in a SnapMirror

environment always complete the addition of disks to the destination storage system or

volume before attempting to add disks to the source volume.

Note: The df command does not immediately reflect the disk or disks added to the SnapMirror volume until after the first SnapMirror update following the disk additions.

Logging. The SnapMirror log file (located in /etc/logs/snapmirror.log) records the start and end of an update as well as other significant SnapMirror events. It records whether the transfer finished successfully or whether it failed for some reason. If there is a problem with updates, it is often useful to look at the log file to see what happened since the last successful update. Because the log file is kept on both the source and the destination storage systems, quite often the source or the destination system may log the failure while the other partner knows only that there was a failure. For this reason, you should look at both the source and the destination log files to get the most information about a failure. The log file contains the start and end time of each transfer, along with the amount of data transferred. It can be useful to look back and see the amount of data needed to make the update and the amount of time the updates take.

Note: The time vs. data sent is not an accurate measure of the network bandwidth because

the transfer is not constantly sending data

Destination volume. For SnapMirror volume replication, you must create a restricted volume

to be used as the destination volume. SnapMirror does not automatically create a volume.

Destination volume type. The mirrored volume must not be the root volume.

Data change rate. Using the 'snap delta' command, you can now display the rate of change

stored between two Snapshot copies as well as the rate of change between a Snapshot copy

and the active file system. Data ONTAP displays the rates of change in two tables. The first

table displays rates of change between successive Snapshot copies. The second table displays

a summary of the rate of change between the oldest Snapshot copy and the active file

system.
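For ex (illustrative invocations; the volume and Snapshot copy names are placeholders):

filer> snap delta vol1
filer> snap delta vol1 nightly.1 nightly.0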

Failed updates. If a transfer fails for any reason, SnapMirror attempts a retransfer

immediately, not waiting for the next scheduled mirror time. These retransfer attempts

continue until they are successful, until the appropriate entry in the /etc/snapmirror.conf file is

commented out, or until SnapMirror is turned off. Some events that can cause failed transfers

include:

Loss of network connectivity


Source storage system is unavailable

Source volume is offline

SnapMirror timeouts. There are three situations that can cause a SnapMirror timeout:

Write socket timeout. If the TCP buffers are full and the writing application cannot hand off data to TCP within 10 minutes, a write socket timeout occurs. Following the timeout, SnapMirror resumes at the next scheduled update.

Read socket timeout. If the TCP socket that is receiving data has not received any data from

the application within 30 minutes, it generates a timeout. Following the timeout, SnapMirror

resumes at the next scheduled update. By providing a larger timeout value for the read socket

timeout, you can be assured that SnapMirror will not time out while waiting for the source system

to create Snapshot copies, even when dealing with extremely large volumes. Socket timeout

values are not tunable in the Data ONTAP and SnapMirror environment.

Sync timeouts. These timeouts occur in synchronous deployments only. If an event such as a network outage occurs and no ACK is received from the destination system, the synchronous deployment reverts to asynchronous mode.

Open Files

If SnapMirror is in the middle of a transfer and encounters an incomplete file (a file that an FTP server is still transferring into that volume or qtree), it transfers the partial file to the destination. Snapshot copies behave in the same way: a Snapshot copy of the source captures the in-progress file, so the destination shows the partial file.

A workaround for this situation is to copy the file to the source under a temporary name. When the file is complete on the source, rename it to the correct name. This way the partial file has an incorrect (temporary) name, and only the complete file has the correct name.

Posted 26th September 2012 by vipulvajpayee



15th September 2012

How to configure the netApp FC lun in windows server

Check the ports: for FC LUN configuration we need to change the ports from initiator to target mode; by default the ports are in initiator mode.

The cmd to change the port from initiator to the target mode is

Netapp> fcadmin config -d 0c 0d

Netapp> fcadmin config -t target 0c 0d

Netapp> reboot

NOTE: To change a port from initiator to target mode, the storage system needs a reboot to apply the changes.

NOTE: For an FC LUN, the ports need to be in target mode.

Then you can add the cluster license and enable the cluster.

Configure the FC HBA on each storage controller.

Netapp> fcp status (this cmd checks whether the FCP service is running or not.)

The output will also show whether the FCP service is licensed; if not, add the license with the license add cmd.

Identify the WWNN of your storage controller.

Netapp> fcp nodename

Record the WWNN number of the storage controller.

Important note : In single image cluster failover mode (cfmode) both storage controllers (system1

and system2) in the storage system pair need to have the same FCP nodename. You can verify using

the command.

Netapp> lun config_check

If lun config_check reports a mismatch in the WWNNs of the storage controllers in the storage system pair, you need to set the nodename of one of the storage controllers to be the same as the other.

For ex: to change the nodename of NetApp storage 2, where the nodename was not the same as that of storage netapp1:

Netapp2> fcp nodename 500a098086982dd0

List the installed FC target HBAs.

Netapp> fcp show adapters

The above cmd shows whether each FCP adapter port is online or offline, which vendor's HBA card it is, and the WWN of the port, which you can note down.

If you find that an FC port is down and you want to bring it up, the cmd for that is:

Netapp> fcp config 0c up

Now that you have brought port 0c up from the storage end, if the cable goes to a switch, check the switch side as well.

Switch> switchshow

Here you should see four ports in online mode: two ports belong to your storage system and the other two belong to its partner node. So cross-check carefully which ports are your storage system's and which are your partner's.

Write down the WWPNs of ports 0c and 0d of your storage system and of your partner system.

Now enter this cmd on your storage system:

Netapp> fcp show initiators

This cmd shows the Windows host initiators visible on the storage side; it tells you whether your storage system has detected the host.

After this you need to enable the multipath (MPIO) feature on the Windows side, install the NetApp DSM for Windows, and install the Windows Host Utilities kit.

Posted 15th September 2012 by vipulvajpayee


15th September 2012

NAS

A NAS device is optimized for file-serving functions such as storing, retrieving, and accessing files for applications or clients. A NAS device is dedicated to file serving; just as a general-purpose server has its own operating system, the NAS device also has its own operating system, specialized for file serving using open, standard protocols.

Benefits of the NAS


1. Supports one-to-many and many-to-one access of data: one NAS device can serve many clients at the same time, and one client can connect to many NAS devices simultaneously. This enables efficient sharing of files between many clients.

2. Improved efficiency: with NAS devices in the picture, the burden of file services is removed from the general-purpose server, reducing the performance bottleneck, because the NAS device uses an operating system specialized for file serving.

3. Improved flexibility: the same file can be accessed by different clients using different operating systems, meaning Windows and UNIX servers can access the same file from the NAS device.

4. Centralized storage saves space by reducing duplicate copies on different clients; a file can now be stored once on the NAS device and accessed by different clients.

5. Centralized management also saves a lot of time in managing files and data.

6. Because of the failover capability of NAS devices, data is highly available, meaning data is online all the time.

7. Security is also improved through file locking and user-based authentication.

NAS I/O operations.

1. The requestor packages the I/O request into TCP/IP packets and sends it to the NAS device through

the network.

2. The NAS device receives the packets, converts the file-level I/O request into block-level I/O requests to physical storage, executes the requests, and then packages the result into the appropriate file protocol response (CIFS/NFS).

3. The NAS head then packs the response data into TCP/IP packets and sends them back to the requestor through the network.

Factors Affecting NAS Performance and Availability

Because NAS uses IP for communication, the bandwidth and latency issues associated with IP networks affect NAS performance.

1. Number of hops: A large number of hops can increase latency because IP processing is

required at each hop, adding to the delay caused at the router.

2. Authentication with a directory service such as LDAP, Active Directory, or NIS: The

authentication service must be available on the network, with adequate bandwidth, and must have

enough resources to accommodate the authentication load. Otherwise, a large number of

authentication requests are presented to the servers, increasing latency. Authentication adds to

latency only when authentication occurs.

3. Retransmission: Link errors, buffer overflows, and flow control mechanisms can result in

retransmission. This causes packets that have not reached the specified destination to be resent.

Care must be taken when configuring parameters for speed and duplex settings on the network

devices and the NAS heads so that they match. Improper configuration may result in errors and

retransmission, adding to latency.

4. Over utilized routers and switches: The amount of time that an over-utilized device in a

network takes to respond is always more than the response time of an optimally utilized or

underutilized device. Network administrators can view vendor-specific statistics to determine the

utilization of switches and routers in a network. Additional devices should be added if the current

devices are over utilized.

5. File/directory lookup and metadata requests: NAS clients access files on NAS devices. The

processing required before reaching the appropriate file or directory can cause delays. Sometimes a

delay is caused by deep directory structures and can be resolved by flattening the directory structure.

Poor file system layout and an over utilized disk system can also degrade performance.

6. Over utilized NAS devices: Clients accessing multiple files can cause high utilization levels on

a NAS device which can be determined by viewing utilization statistics. High utilization levels can be

caused by a poor file system structure or insufficient resources in a storage subsystem.

7. Over utilized clients: The client accessing CIFS or NFS data may also be over utilized. An over

utilized client requires longer time to process the responses received from the server, increasing

latency. Specific performance-monitoring tools are available for various operating systems to help


determine the utilization of client resources

Posted 15th September 2012 by vipulvajpayee


15th September 2012

How to create a VLAN and assign the IP alias to the interfaces on NetApp

NOTE: VLAN commands are NOT persistent across a reboot and must be added in /etc/rc

files to make them permanent.

NetApp> vlan create <interface name> <vlan id> [<vlan id> ...]

For ex:

NetApp> vlan create np1-vif1 801 803

NetApp> ifconfig np1-vif1-801 192.168.0.51 netmask 255.255.255.0

NetApp> ifconfig np1-vif1-803 192.168.0.52 netmask 255.255.255.0

You need to make the same entries in the /etc/rc file to make them permanent; if you do not, these VLANs will be lost on reboot.

Now if you need to add IP aliases to the VLAN interface np1-vif1-803, you can do so; you can add up to 3 aliases (to the best of my knowledge).

For ip alias

NetApp> ifconfig np1-vif1-803 alias 192.168.0.34 netmask 255.255.255.0

NetApp> ifconfig np1-vif1-803 alias 192.168.0.44 netmask 255.255.255.0

Similarly, for IP aliases you also need to make the entries in the /etc/rc file to make them permanent.

If you have made an entry in the /etc/rc file and you want to reload it into the NetApp memory, you do not need to reboot the filer; you can run the "source" cmd.

For ex:

NetApp> source /etc/rc

Then all the entries in the /etc/rc file will get loaded into memory without rebooting the filer.

Below I show sample /etc/rc entries for your reference, so that you can understand how the entries are made in the /etc/rc file.


Note: There should be no blank lines between entries, and always make your entry above the savecore line, never below it. If you add a # in front of any line, that line acts as a comment, which is equivalent to deleting it; it will not get loaded into the NetApp storage memory.

Sample /etc/rc

#Auto-generated by setup Mon Mar 14 08:18:30 GMT 2005

hostname FAS1

vif create multi MultiTrunk1 e0 e1

ifconfig MultiTrunk1 172.25.66.10 partner MultiTrunk2

vif favor e0

ifconfig e5a `hostname`-e5a mediatype auto flowcontrol full netmask 255.255.255.0 partner

10.41.72.101

vlan create np1-vif1 801 803

ifconfig np1-vif1-801 192.168.0.51 netmask 255.255.255.0

route add default 10.41.72.1 1

routed on

options dns.domainname corp.acme.com

options dns.enable on

options nis.enable off

savecore

Now suppose your IT head asks you to add a partner interface for the VLAN you created, so that when a failover happens it fails over to the partner interface of that particular VLAN interface. During the initial setup the storage asks for the failover partner interface for each vif, but it does not ask for a VLAN failover interface, because VLANs are created after the initial setup is done; so you need to manually run the cmd to set the failover partner for the specific VLAN interfaces.

For ex: In the example above I created two VLANs and assigned IPs to them. Now suppose the partner filer also has the same two VLANs with other IPs; assume the partner filer has the VLAN interfaces np2-vif1-801 and np2-vif1-803 with the IPs 192.168.0.53 and 192.168.0.54. Now I want my filer1 VLAN interface np1-vif1-801 to fail over to np2-vif1-801, so the cmd for it is:

NetApp> ifconfig np1-vif1-801 192.168.0.51 partner 192.168.0.53

Again you need to make the same entry in the /etc/rc file to make it permanent; if you do not, the entry will be lost after a reboot.

If you look at the sample /etc/rc file above, I have shown one VLAN interface entry; just append "partner 192.168.0.53" to that VLAN interface's ifconfig entry in /etc/rc, and your entry becomes permanent.

For ex:

ifconfig np1-vif1-801 192.168.0.51 netmask 255.255.255.0 partner 192.168.0.53

Don't add a separate new ifconfig entry for the partner interface in /etc/rc, because sometimes, with multiple ifconfig entries for the same interface, they do not all get loaded into memory. If you then reboot the filer you will see the entry in /etc/rc, but you will not be able to fail over to the partner interface because the entry was never loaded into memory.

So be careful while editing /etc/rc; always take a backup of /etc/rc before making any modification to it.

I hope you were able to follow what I was trying to explain in this post. I faced difficulty while doing this activity myself, and I want you all to be aware of these points when doing this type of activity.


Posted 15th September 2012 by vipulvajpayee


15th September 2012

FC switch configuration.

To configure the SAN environment with NetApp, first check with the Interoperability Matrix Tool that each and every component is supported: check whether the switch, the Data ONTAP version, the switch firmware version, and the host OS version are supported by NetApp or not.

Data ONTAP supports the following fabric configurations:

Single-fabric

Multifabric

Direct-attached

Mixed

Each switch should have a unique domain ID. NetApp recommends starting domain IDs from 10, because some devices already reserve the lower IDs, so starting from 10 avoids conflicts. And if there are two fabrics, give odd numbers to the second fabric.

Zoning:

Advantage of zoning

Zoning reduces the number of paths between a host and logical unit number (LUN).

Zoning keeps the primary and the secondary path in different zones.

Zoning improves the security by limiting the access between nodes.

Zoning increases the reliability by isolating the problems.

Zoning also reduces crosstalk between host initiator HBAs.

The two methods of zoning:

Port zoning: zoning done by grouping physical switch ports.

WWN zoning: World Wide Name zoning, where zones are created using the WWPN (worldwide port name) or the WWNN (worldwide node name).

Brocade switches use only hardware zoning enforcement, while Cisco switches use both hardware and software enforcement. With hardware enforcement, a single WWN name format is used throughout, which gives the best performance; with software enforcement, WWPN and WWNN names are mixed, which does not perform as well as the hardware enforcement method.

FC zoning recommendations by NetApp:

Use WWPN zoning.

Use zoning whenever there are 4 or more servers.

Limit the zone size.

Use single-initiator zoning.

Configuring the Brocade FC switch.

1. Perform the initial configuration steps.

2. Upgrade firmware if needed.

3. Assign the domain ID.

4. Assign the port speed.

5. Verify the configuration.

6. Verify the host and storage connectivity.

7. Create the FC zones.

8. Save and backup the configuration.

9. Obtain the technical support.

Brocade Switch: Perform the initial Configuration steps.

Configure the Management Interface

Give the IP address, subnet mask, Gateway address.

Give the Host name of the switch

Give the administration password

The default access for the brocade switch is login: admin and password: password

It is best practice to use the same firmware or Fabric OS version on every switch in the SAN fabric. The "version" cmd shows the current version on the switch, and the "firmwareDownload" cmd is used to upgrade the Brocade switch.

By default the domain ID of the switch is 1; you should change the ID as per the best practices in the Brocade documentation.

Steps to change the domain ID of the switch (an illustrative transcript follows the steps):

1. Enter the switchDisable cmd.

2. Enter the configure cmd

3. At the Fabric parameter prompt enter “yes”.

4. At the Domain prompt enter the Domain ID.

5. At all the other prompts press Enter to accept the defaults.

6. Use the switchEnable cmd to enable the switch.
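An illustrative transcript of these steps (prompt wording may vary by Fabric OS release; only the two answered prompts are shown, all other prompts are left at their defaults):

Brocade> switchDisable
Brocade> configure
Fabric parameters (yes, y, no, n): [no] yes
Domain: (1..239) [1] 10
(press Enter at the remaining prompts)
Brocade> switchEnable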

After you assign the Domain Id you should assign the port speed to avoid negotiation error.

The cmd to configure the port speed is portCfgSpeed [slot/]port, speed

For ex:

Brocade>portCfgSpeed 0, 2

Use the switchShow cmd to verify the switch configuration.

Check the switch domain ID.

Check the port speed; the port speeds are 1G, 2G, 4G, 8G, and the negotiated speeds show as N1, N2, N4, N8.

Now the last step is to create the Alias and the zone.

1. Create alias by aliCreate cmd

For ex:

aliCreate WinA, 21:00:00:00:1b:32:1a:8b:1c


2. Create Zones by zoneCreate cmd

For ex:

zoneCreate WinpriA, “winA;Fas01A”

3. create configuration cfgCreate cmd

For ex:

cfgCreate Wincfg, “WinpriA;WinpriB”

4. store the configuration by cfgSave cmd

5. Activate the configuration by cfgEnable cmd.

We can use the supportSave cmd to retrieve support data. This cmd generates the support log files, and you can save these files to a server by giving the server IP.

Configuring the Cisco FC switch.

1. Perform the initial configuration steps.

2. Upgrade firmware if needed.

3. Create the VSAN and assign the port.

4. Assign the domain ID.

5. Assign the port speed.

6. Verify the configuration.

7. Verify the host and storage connectivity.

8. Create the FC zones.

9. Save and backup the configuration.

10. Obtain the technical support.

Cisco Switch: Upgrade the Firmware steps.

1. Use the show version cmd to check the current FabricWare version.

2. Use the show install all impact cmd to verify the image and the system impact.

3. Use the install all system command to install and upgrade.

4. Use the show install all status to confirm the upgrade.

Cisco switches allow you to create VSANs. A VSAN is a virtual switch created on the physical switch that works as an independent fabric. Each Cisco switch has at least one active VSAN, VSAN 1, and VSAN 1 should not be used for production traffic.

How to create a VSAN on a Cisco switch (an illustrative transcript follows the steps):

1. Go to the configuration mode by typing config t cmd.

2. Go to the VSAN database configuration mode by typing the vsan database cmd.

3. Create the VSAN by typing the vsan <id> cmd, for ex: vsan 2

4. A VSAN becomes active when it has at least one physical port assigned to it.

5. Assign a physical port by typing the cmd vsan <id> interface fc<slot>/<port>, for ex: vsan 2 interface fc1/8

6. Repeat the assignment step for each interface that has to be added to the VSAN.
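An illustrative transcript of these steps, using the VSAN ID and interface from the list above (prompt names may vary slightly by SAN-OS/NX-OS release):

Switch# config t
Switch(config)# vsan database
Switch(config-vsan-db)# vsan 2
Switch(config-vsan-db)# vsan 2 interface fc1/8
Switch(config-vsan-db)# end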

Assign the Domain Id.

Each VSAN has a unique domain id.

In Cisco there are two types of domain ID.

Static: join with the given ID or not at all.

Preferred: try to join with the given ID but join with another ID if necessary.

Use the fcdomain domain <domain_id> static vsan <vsan_id> cmd.

For ex:

Page 34: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 34/104

Switch# config t

Switch(config)# fcdomain domain 10 static vsan 2

Switch(config)# end

Assign port speed.

1. Enter the config t

2. Enter the interface fc slot/port, where slot is the slot number and port is the port number.

3. Enter the switchport speed x, where x is 1000,2000,4000, or 8000 Mbps.

For ex:

Switch# config t

Switch(config)# interface fc1/1

Switch(config-if)# switchport speed 8000

Switch(config-if)# end   (or press Ctrl-Z)

Verify the switch configuration

1. Verify the VSAN port assignment with show vsan membership.

2. Verify one or more domain IDs with show fcdomain.

3. Verify the port speed configuration with show interface brief and confirm that it shows the speed as 1G, 4G, or 8G, not as auto.

4. Verify that the host and storage ports are online: show flogi database

Cisco switch: Create FC zones

Create aliases, repeating the commands for each alias:

Enter the device-alias database

Enter the device-alias name aliname pwwn wwpn

Where aliname is the name of the new alias and wwpn is the WWPN of device.

After creating the aliases, commit the changes:

Enter the cmd: exit

Enter the cmd: device-alias commit

Create the zone:

zone name zname vsan vid, where zname is the name of the new zone and vid is the VSAN ID.

Assign aliases to the zone, repeating the cmd for each alias:

member device-alias aliname, where aliname is the name of the alias that you are going to assign to the zone.

Exit configuration mode.

exit.

For ex:

Switch# config t

Switch(config)# device-alias database

Switch(config-device-alias-db)# device-alias name x pwwn 21:01:00:e0:8b:2e:80:93

Switch(config-device-alias-db)# end

Switch# config t

Switch(config)# device-alias commit

Switch(config)# zone name myzone vsan 2

Switch(config-zone)# member device-alias x

Switch(config-zone)# exit

Switch#


Create and activate zoneset.

Create a zoneset:

Switch# config t

Switch(config)# zoneset name zoneset1 vsan 2

Switch(config-zoneset)# member myzone

Switch(config-zoneset)# end

Switch# config t

Switch(config)# zoneset activate name zoneset1 vsan 2

Switch(config)#

Save the configuration

Switch# copy running-config startup-config

Posted 15th September 2012 by vipulvajpayee


11th August 2012

Cifs TOP command

How to get CIFS client information by using the "cifs top" cmd


The cifs top command is used to display CIFS client activity based on a number of different criteria. It can display which clients are generating large amounts of load, as well as help identify clients that may be behaving suspiciously.

This command relies on data collected when the cifs.per_client_stats.enable option is “on”, so it

must be used in conjunction with that option. Administrators should be aware that there is overhead

associated with collecting the per-client stats. This overhead may noticeably affect the storage system

performance.
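For ex, the option can be turned on like this (and turned off again once the investigation is finished, to avoid the overhead):

Netapp> options cifs.per_client_stats.enable on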

Options

-s <sort> specifies how the client stats are to be sorted. Possible values of <sort> are ops, reads, writes, iops, and suspicious. These values may be abbreviated to the first character, and the default is ops. They are interpreted as follows.

ops: sort by the number of operations per second of any type.

suspicious: sort by the number of "suspicious" events sent per second by each client. Suspicious events are operations typical of the patterns seen when viruses or other badly behaved software or users are attacking a system.

For ex:

cifs top -n 3 -s w

If vfiler volumes are licensed, the per-user statistics are only available when in a vfiler context. This

means the cifs top command must be invoked in a vfiler context.

For ex:

System> vfiler run vfiler0 cifs top.

Posted 11th August 2012 by vipulvajpayee


11th August 2012

NFS: Network File System

NFS is a widely used protocol for sharing files across networks. It is designed to be stateless to allow

for easy recovery in the event of server failure.

As a file server, the storage system provides services that include the mount daemon (mountd), the Network Lock Manager (nlm_main), the NFS daemon (nfsd), the status monitor (sm_l_main), the quota daemon (rquot_l_main), and portmap or rpcbind. Each of these services is required for successful operation of an NFS process.

On the client, update the /etc/fstab file for persistent mounting of the file system across reboots. Alternatively, we can mount by running the automounter service, which mounts the file system on demand and unmounts it if it is not accessed within a few minutes.
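A minimal illustrative /etc/fstab entry on a Linux client (the hostname, export path, and mount point are placeholders):

netapp1:/vol/vol1   /mnt/vol1   nfs   rw,hard,intr,rsize=32768,wsize=32768   0 0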


What does it mean for a protocol to be stateful or stateless?

If a protocol is stateless, it means that it does not require that the server maintain any session state

between messages; instead, all session states are maintained by the client. With a stateless protocol,

each request from client to server must contain all of the information necessary to understand the

request and cannot take advantage of any stored context on the server. Although NFS is considered a

stateless protocol in theory, it is not stateless in practice.

NIS (Network Information Service): provides a simple network lookup service consisting of databases and processes. Its purpose is to provide information that has to be known throughout the network, to all machines on the network. Information likely to be distributed by NIS is:

1. Login names/passwords/home directories (/etc/passwd)

2. Group information(/etc/group)

3. Host names and IP numbers(/etc/hosts)

Some of the commands on NetApp storage for troubleshooting NIS (an illustrative invocation follows the list):

1. ypcat mapname: prints all of the values in the given NIS map.

2. ypgroup: displays the group file entries that have been locally cached from the NIS server.

3. ypmatch key mapname: prints every value in the NIS map mapname whose key matches one of the keys given.

4. ypwhich: prints the name of the current NIS server if NIS is enabled.
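For ex (illustrative invocations; the map name is just a common NIS map used as a placeholder):

Netapp> ypwhich
Netapp> ypmatch root passwd.byname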

Some cmds for troubleshooting NFS exports (a sample /etc/exports entry follows the note below):

1. Keep the option nfs.export.auto-update "on" so that the /etc/exports file is automatically updated when a volume is created, renamed, or destroyed.

2. exportfs : displays all current exports in memory.

3. exportfs -p [options] path : adds an export to the /etc/exports file and to memory.

4. exportfs -r : reloads only the exports in the /etc/exports file.

5. exportfs -uav : unexports all exports.

6. exportfs -u [path] : unexports a specific export.

7. exportfs -z [path] : unexports an export and removes it from /etc/exports.

8. exportfs -s pathname : verifies the actual path to which a volume is exported.

9. exportfs -q pathname : displays export options per file system path.

NOTE: Be careful not to export resources with the -anon option. If NFS is licensed on the storage

system, and you specify exports with the -anon option, everyone is able to mount the resource, which could cause a security risk.
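As referenced above, a minimal illustrative /etc/exports entry and reload (the path, client network, and admin host are placeholders):

/vol/vol1  -sec=sys,rw=192.168.0.0/24,root=adminhost
Netapp> exportfs -r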

WAFL credential cache

The WAFL credential cache (WCC) contains the cached user mappings from UNIX user identities (UID and GID) to Windows identities (SIDs for users and groups). After a UNIX-to-Windows user mapping is performed (including group membership), the results are stored in the WCC.

The wcc command does not look in the WCC, but performs a current user-mapping operation and displays the result. This command is useful for troubleshooting user-mapping issues.

NOTE: the cifs.trace_login option must be enabled.
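For ex (an illustrative sketch; the user names are placeholders):

Netapp> options cifs.trace_login on
Netapp> wcc -u unixuser1
Netapp> wcc -s DOMAIN\winuser1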

Some data collection cmds to troubleshoot NFS performance issues:

1. nfsstat: displays statistical information about NFS and remote procedure calls (RPC) for the storage system.

Syntax: nfsstat <interval> [ip_address | name] {-h, -l, -z, -c, -t, -d, -C}

It can display an interval-based or continuous statistics summary. Per-client stats can be collected and displayed via nfs.per_client_stats.enable. If an optional IP address or host name is specified with the -h option, that client's statistics are displayed.

nfsstat output with -d: The nfsstat -d command displays reply cache statistics as well as incoming messages, including allocated mbufs. This diagnostic option allows for debugging of all NFS-related traffic on the network.

2. NFS mount monitoring: NFS mountd tracing enables tracing of denied mount requests against the storage system.

Enable this option only during a debug session, as there is a possibility of numerous syslog hits during DoS attacks.

Enter the following cmd:

options nfs.mountd.trace on


Posted 11th August 2012 by vipulvajpayee


2nd August 2012

TYPES OF VMWARE DATASTORES

An introduction to storage in virtual infrastructure

VMware ESX supports three types of storage configuration when connecting to a shared storage array:

VMFS: Virtual Machine File System datastore.

NAS: network-attached storage datastore.

RDM: raw device mapping datastore.

Shared storage is required for HA (High Availability), DRS (Distributed Resource Scheduler), vMotion, and Fault Tolerance.

The 80/20 rule :

This is a well-known rule when designing a virtual data center. The 80/20 rule means that 80% of all systems virtualized are part of consolidation efforts. The remaining 20% of systems are classified as business-critical applications. Although these applications can be virtualized successfully, they tend to be deployed on shared storage pools but in what we refer to as isolated datasets.

THE CHARACTERISTICS OF CONSOLIDATION DATASETS

Consolidation datasets have the following characteristics:

• The VMs do not require application-specific backup and restore agents.

• The dataset is the largest in terms of the number of VMs and potentially the total amount of storage addressed.

• Individually, each VM might not address a large dataset or have demanding IOP requirements; however, the collective whole might be considerable.

• These datasets are ideally served by large, shared, policy-driven storage pools (or datastores).

THE CHARACTERISTICS OF ISOLATED DATASETS (FOR BUSINESS-CRITICAL APPLICATIONS)

Isolated datasets have the following characteristics:

• The VMs require application-specific backup and restore agents.


• Each individual VM might address a large amount of storage and/or have high I/O requirements.

• Storage design and planning apply in the same way as with physical servers.

• These datasets are ideally served by individual, high-performing, nonshared datastores.

Consolidated datasets work well with Network File System (NFS) datastores because this design

provides greater flexibility in terms of capacity than SAN datastores when managing hundreds or

thousands of VMs. Isolated datasets run well on all storage protocols; however, some tools or

applications might have restrictions around compatibility with NFS and/or VMFS.

Unless your data center is globally unique, the evolution of your data center from physical to virtual will

follow the 80/20 rule. In addition, the native multiprotocol capabilities of NetApp and VMware will allow

you to virtualize more systems more quickly and easily than you could with a traditional storage array

platform.

VMFS DATASTORES

The VMware VMFS is a high-performance clustered file system that provides datastores, which are shared storage pools. VMFS datastores can be configured with logical unit numbers (LUNs) accessed by FC, iSCSI, and FCoE. VMFS allows traditional LUNs to be accessed simultaneously by every ESX server in the cluster.

Applications traditionally require storage considerations to make sure their performance can be

virtualized and served by VMFS. With these types of deployments, NetApp recommends deploying the

virtual disks on a datastore that is connected to all nodes in a cluster but is only accessed by a single

VM.

This storage design can be challenging in the area of performance monitoring and scaling. Because

shared datastores serve the aggregated I/O demands of multiple VMs, this architecture doesn’t

natively allow a storage array to identify the I/O load generated by an individual VM.

SPANNED VMFS DATASTORES

VMware provides the ability, via VMFS extents, to concatenate multiple LUNs into a single logical datastore, which is referred to as a spanned datastore. Although a spanned datastore can overcome the 2TB LUN size limit, it can affect performance, because each individual LUN has only so much capacity to handle IOPs. NetApp does not recommend spanned datastores.

NFS DATASTORE.

vSphere allows customers to leverage enterprise-class NFS arrays to provide datastores with concurrent access by all of the nodes in an ESX cluster. The access method is very similar to the one used with VMFS. Deploying VMware with NetApp's advanced NFS results in a high-performance, easy-to-manage implementation that provides VM-to-datastore ratios that cannot be accomplished with other storage protocols such as FC. This architecture can result in a 10x increase in datastore density with a correlating reduction in the number of datastores. With NFS, the virtual infrastructure receives operational savings because there are fewer storage pools to provision, manage, back up, replicate, and so on.

SAN RAW DEVICE MAPPING

ESX gives VMs direct access to LUNs for specific use cases such as P2V clustering or storage vendor management tools. This type of access is called raw device mapping, and it supports the FC, iSCSI, and FCoE protocols. In this design the ESX host acts as a connection proxy between the VM and the storage array. RDM provides direct LUN access to the host, so VMs can achieve high individual disk I/O performance and can easily be monitored for disk performance.

RDM LUNS ON NETAPP

RDM is available in two modes: physical and virtual. Both modes support key VMware features such as vMotion and can be used in both HA and DRS clusters.


NetApp enhances the use of RDMs by providing array-based LUN-level thin provisioning, production-

use data deduplication, advanced integration components such as SnapDrive, application-specific

Snapshot backups with the SnapManager for applications suite, and FlexClone zero-cost cloning of

RDM-based datasets.

Note: VMs running MSCS must use the path selection policy of Most Recently Used (MRU). Round

Robin is documented by VMware as unsupported for RDM LUNs for MSCS.

Datastore supported features

Capability/Feature                        | FC/FCoE                 | iSCSI                 | NFS
Format                                    | VMFS or RDM             | VMFS or RDM           | NetApp WAFL
Maximum number of datastores or LUNs      | 256                     | 256                   | 64
Maximum datastore size                    | 64TB                    | 64TB                  | 16TB or 100TB*
Maximum LUN/NAS file system size          | 2TB minus 512 bytes     | 2TB minus 512 bytes   | 16TB or 100TB*
Optimal queue depth per LUN/file system   | 64                      | 64                    | N/A
Available link speeds                     | 4 and 8Gb FC and 10GbE  | 1 and 10GbE           | 1 and 10GbE

*100TB requires 64-bit aggregates.

Posted 2nd August 2012 by vipulvajpayee


19th July 2012

Fc switch configuration of Brocade switch


To configure the SAN environment with NetApp, first check with the Interoperability Matrix Tool that each and every component is supported: check whether the switch, the Data ONTAP version, the switch firmware version, and the host OS version are supported by NetApp or not.

Data ONTAP supports the following fabric configurations:

Single-fabric

Multifabric

Direct-attached

Mixed

Each switch should have a unique domain ID. NetApp recommends starting domain IDs from 10, because some devices already reserve the lower IDs, so starting from 10 avoids conflicts. And if there are two fabrics, give odd numbers to the second fabric.

Zoning:

Advantage of zoning

Zoning reduces the number of paths between a host and logical unit number (LUN).

Zoning keeps the primary and the secondary path in different zones.

Zoning improves the security by limiting the access between nodes.

Zoning increases the reliability by isolating the problems.

Zoning also reduces crosstalk between host initiator HBAs.

The two methods of zoning:

Port zoning: zoning done by grouping physical switch ports.

WWN zoning: World Wide Name zoning, where zones are created using the WWPN (worldwide port name) or the WWNN (worldwide node name).

Brocade switches use only hardware zoning enforcement, while Cisco switches use both hardware and software enforcement. With hardware enforcement, a single WWN name format is used throughout, which gives the best performance; with software enforcement, WWPN and WWNN names are mixed, which does not perform as well as the hardware enforcement method.

FC zoning recommendations by NetApp:

Use WWPN zoning.

Use zoning whenever there are 4 or more servers.

Limit the zone size.

Use single-initiator zoning.

Configuring the Brocade FC switch.

1. Perform the initial configuration steps.

2. Upgrade firmware if needed.

3. Assign the domain ID.

4. Assign the port speed.

5. Verify the configuration.

6. Verify the host and storage connectivity.

7. Create the FC zones.

8. Save and backup the configuration.

9. Obtain the technical support.

Brocade Switch: Perform the initial Configuration steps.

Configure the Management Interface

Give the IP address, subnet mask, Gateway address.

Give the Host name of the switch

Give the administration password

The default access for the brocade switch is login: admin and password: password


It is best practice to use the same firmware or Fabric OS version on every switch in the SAN fabric. The "version" cmd shows the current version on the switch, and the "firmwareDownload" cmd is used to upgrade the Brocade switch.

By default the domain ID of the switch is 1; you should change the ID as per the best practices in the Brocade documentation.

Steps to change the domain ID of the switch:

1. Enter the switchDisable cmd.

2. Enter the configure cmd

3. At the Fabric parameter prompt enter “yes”.

4. At the Domain prompt enter the Domain ID.

5. At all the other prompts press Enter to accept the defaults.

6. Use the switchEnable cmd to enable the switch.

After you assign the Domain Id you should assign the port speed to avoid negotiation error.

The cmd to configure the port speed is portCfgSpeed [slot/]port, speed

For ex:

Brocade>portCfgSpeed 0, 2

Use the switchShow cmd to verify the switch configuration.

Check the switch domain ID.

Check the port speed; the port speeds are 1G, 2G, 4G, 8G, and the negotiated speeds show as N1, N2, N4, N8.

Now the last step is to create the Alias and the zone.

1. Create alias by aliCreate cmd

For ex:

aliCreate WinA, 21:00:00:00:1b:32:1a:8b:1c

2. Create Zones by zoneCreate cmd

For ex:

zoneCreate WinpriA, “winA;Fas01A”

3. create configuration cfgCreate cmd

For ex:

cfgCreate Wincfg, “WinpriA;WinpriB”

4. store the configuration by cfgSave cmd

5. Activate the configuration by cfgEnable cmd.

We can use the supportSave cmd to retrieve support data. This cmd generates the support log files, and you can save these files to a server by giving the server IP.

Posted 19th July 2012 by vipulvajpayee


7th July 2012

NetApp performance Data Collection Command & Tools(NFS)

The following Data ONTAP tools can be used to collect performance data:

sysstat, nfsstat, nfs.mountd.trace, netstat, ifstat, nfs_hist, stats, statit, netdiag, wafl_susp, and pktt

And from the client side:

Ethereal, netapp-top.pl, perfstat, sio.

nfsstat : Display statistical information about NFS and remote procedure call (RPC) for storage

system. Per client stats can be collected and displayed via nfs.per_client_stats.enable

nfsstat options

nfsstat -h : displays per-client statistics since last zeroed.

nfsstat -l : displays the list of clients whose statistics were collected on a per-client basis.

nfsstat -z : zeroes the current cumulative and per-client statistics.

nfsstat -c : includes reply cache statistics.

nfsstat -t : displays incoming messages in addition to reply cache statistics.

nfsstat -C : displays the number and type of NFS v2 and v3 requests received by all FlexCache volumes.

nfsstat -d : this diagnostic option allows for debugging of all NFS-related traffic on the network; this is the option most commonly used to debug export and mountd problems.

NFS Mounting monitoring

NFS mountd traces enables tracing of denied mount requests against the storage system

-nfs.mountd.trace

-Enable this option only during a debug session, as there is a possibility of numerous syslog hits during DoS attacks.

This option should only be enabled during a debug session.

“options nfs.mountd.trace on”

nfs_hist command Overview

Advanced command that display NFS delay time distributions

Syntax nfs_hist [-z]

-z reinitializes delay distributions so that subsequent use displays information about messages

processed since distributions were zeroed.

This is an advanced-mode command.

In takeover mode it displays combined delay distributions for the live storage system and the failed system.

This command is good for understanding how the system is working when one is attempting to understand an NFS performance issue.

pktt overview

pktt is the Data ONTAP utility for packet capture; it captures data for further analysis by support personnel.

Syntax: pktt start <if>|all [-d dir] [-s size] [-m pklen] [-b bsize] [-i ipaddr [-i ...]]

For ex:

pktt start fa3 -d / -s 100m -b 128k

This starts capturing traffic on the "fa3" interface, writing to a file called "/fa3.trc", which will be allowed to grow to a maximum size of 100MB with a 128KB buffer.

Reading the packet trace.

You can refer to www.tcpdump.org and www.ethereal.com.

Netapp-top.pl Script Overview

A Perl script, downloadable from the NOW site, that displays the top NFS clients currently most active against the storage system.

-Assists in identifying problematic clients by providing per-client NFS operation statistics.

-Use on Solaris, Linux or other versions of UNIX

-Patches also available for unresolved hostnames and for the limitation of intervals recalculation.

To use it

-Download netapp-top.pl script to your UNIX home directory

-Run it from the shell prompt , specifying the storage system you want.

Recommended statistics to collect

From the client

# nfsstat -z (zero the NFS statistics at the client)

# netstat -I (network statistics before the tests)

-Mount the storage system volume with rsize, wsize =32768

Syntax

# mount -o rsize=32768,wsize=32768 storagesystem:/<export> <mountpoint>

# cd <mountpoint>

# nfsstat -m (output of the mountpoints and the mount flags)

Time the mkfile command

# time mkfile 1g test (write test)

Time the dd command

# time dd if=/<mountpoint>/test of=/test (read test)

Time the copy command

# time cp test test1 (read and write test)

Verify nfsstat output

nfsstat -c

Check the following parameters:

1. timeout > 5%: requests are timing out before the server can answer them.

2. badxid ~ timeout: the server is slow; check nfsstat -m.

3. badxid ~ 0 and timeouts > 3%: packets are being lost in the network; check netstat. If this number is the same as bad calls, the network is congested.

4. retrans: may indicate a network or routing problem if retransmits > 5%.

sio utility

Overview

-Acronym for simulated I/O

-General-purpose load generator

-Allows for different block-size read and write ops


-Performs synchronous I/Os to the specified file(s)

-Collects basic statistics

Syntax

sio Read% Rand% Blk_Size File_Size Seconds Thread Filename [Filename ]
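For ex (an illustrative run: 100% reads, 100% random, 4KB blocks, against a 1GB file for 60 seconds with 4 threads; the file path is a placeholder):

# sio 100 100 4k 1g 60 4 /mnt/vol1/testfile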

Posted 7th July 2012 by vipulvajpayee


28th June 2012

Root volume migration procedure:

1. vol restrict newroot

2. vol copy start -S root newroot or ndmpcopy /vol/vol0 /vol/newroot

3. vol online newroot

4. vol options newroot root    # ("aggr options <aggrname> root" is the equivalent from maintenance mode; once this is done, the aggregate containing newroot becomes the root aggregate since it holds the root volume)

5. vol status # confirm newroot is "diskroot"

6. reboot

7. vol status # confirm newroot is now root

8. vol offline root

9. vol destroy root

10. vol rename newroot root

### fix cifs shares and nfs exports if needed...

Also, SSL in FilerView doesn't usually survive a root volume move, so just set it up again after the reboot:

secureadmin disable ssl

secureadmin setup -f ssl

secureadmin enable ssl

Posted 28th June 2012 by vipulvajpayee


28th June 2012

NETAPP_STORAGE_SAVING_Concept

Physical & Effective Storage View: The Physical storage view summarizes how storage is allocated, while the effective storage view projects the effects of applying NetApp storage efficiency features to usable and RAID capacities. The effective storage view provides an estimate of what it takes to implement the current capacity in a traditional storage environment without any NetApp storage efficiency features. For example, Physical Used Capacity represents the current raw storage used by applications. Effective Used Capacity represents how much traditional storage you would have needed without using Dedupe, FlexClone, and Snapshot technologies.


[Chart legend: # Thin Provisioning savings, ## Dedupe + FlexClone savings, ### Snapshot savings, #### RAID-DP savings; areas with no change are marked in grey text.]

RAID-DP Savings Calculations: RAID-DP savings are based on comparing the cost of deploying a traditional mirroring solution to achieve a similar level of data protection and performance. For example, consider a simple RAID-DP aggregate of 9 disks (7 data disks + 2 RAID-DP parity disks).

A total of 14 disks are required to provide similar protection in a mirroring implementation.


In this example, deploying RAID-DP results in a savings of 14 - 9 = 5 disks.


Snapshot Savings Calculations: Snapshot savings are calculated based on the benefit of not having to take frequent full backups. Most customers take a full volume copy every 7 days and incremental backups in between.

Snapshot Savings = (Total Volume Copies x Volume Size) - (physical space consumed by Snapshots)

where Total Volume Copies = 1 + ((# of Snapshots) / (estimated number of incremental backups before a full backup))

The default value for the estimated number of incremental backups before a full backup is 7, but it is user adjustable to reflect a different full backup schedule.

For example: Volume Size = 100 GB

Snapshot sizes:

Day 1 - 1 GB
Day 2 - 3 GB
Day 3 - 5 GB
Day 4 - 2 GB
Day 5 - 4 GB
Day 6 - 7 GB
Day 7 - 3 GB
Day 8 - 6 GB
Day 9 - 8 GB

Total Snapshot Size for 9 days = 39 GB
Total Volume Copies = 1 + ((# of snapshots) / 7) = 1 + 9/7 = 1 + 1 = 2
Total Snapshot Savings = ((Total Volume Copies) x Volume Size) - Sum of all Snapshot Sizes = (2 x 100 GB) - 39 GB = 161 GB

Dedupe Saving Calculation: The output from the df -sh command provides the Dedupe savings calculation. We summarize across all the volumes that have Dedupe enabled. For example:

toaster1> df -sh

Filesystem used saved %saved

/vol/densevol/ 37GB 363GB 91%

Used = Used space in the volume

Saved = Saved space in the volume

% savings = saved / (used + saved) * 100
Total data in the volume = used + saved

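To reproduce this kind of report on a 7-Mode filer, the sketch below shows the usual sequence; the volume name densevol is just the example name from the output above.

filer> sis on /vol/densevol        # enable deduplication on the volume
filer> sis start -s /vol/densevol  # scan and deduplicate the existing data (first run)
filer> sis status /vol/densevol    # check progress
filer> df -sh /vol/densevol        # report used space vs. space saved by dedupe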

FlexClone Saving Calculation: A FlexClone volume is a writable, point-in-time, clone of a

parent FlexVol volume. FlexClone volumes share blocks with the parent volume, and only

occupy additional storage for changed or added content. FlexClone space saving is

computed as apparent space used by FlexClone volumes less actual physical space

consumed by those volumes. The shared space between the clone and its parent is not

reported in AutoSupport. So FlexClone savings are estimated by comparing aggregate and

volume usage.


File and LUN clones are supported starting from Data ONTAP 7.3.1. These savings are rolled

up into savings reported by deduplication and are not reported in the FlexClone savings

section. FlexClone savings will display "NONE" as a result.

Thin Provisioning Saving Calculation: Thin Provisioning supports efficient pooling of

storage for sparsely populated volumes and is enabled on a volume by setting "space-reserve" to none. Thin Provisioning saving is the difference between the apparent space

available for each volume and actual space available in the aggregate.
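As a rough sketch of what thin provisioning looks like on a 7-Mode system (the volume, LUN and aggregate names here are hypothetical): the volume-level control is the space guarantee, and the LUN-level control is the reservation flag.

filer> vol options thinvol guarantee none                             # volume draws space from the aggregate only as data is written
filer> lun create -s 100g -t windows -o noreserve /vol/thinvol/lun0   # thin-provisioned LUN inside the volume
filer> df -A aggr1                                                    # compare aggregate usage with the apparent volume sizes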


SnapVault Saving Calculation: A single SnapVault secondary system can be a backup

target for multiple SnapVault primary systems. Therefore, SnapVault savings are calculated at

the SnapVault secondary system level. It is based on the benefits of not having to take

frequent full backups on the primary SnapVault servers.

Total SnapVault Savings = Savings from not having to take full backups on attached

SnapVault Primary systems - Sum of all Snapshot overheads

where the formula for calculating SnapVault savings is the same formula used for calculating

Snapshot savings.

Posted 28th June 2012 by vipulvajpayee


24th June 2012

NFS Troubleshooting

As you recall, setting up NFS on clients and a storage system involves:

1. Licensing NFS on the storage system.

2. Configuring and starting the NFS services.

3. Exporting the file system on the storage system.

4. Mounting file systems on clients.


Steps to configure NFS:

Step 1: License NFS on the storage system:

license add command

Step 2: Configure the NFS service:

Set the version of NFS to use.

Set the transport protocol (TCP or UDP).

nfs on command.

Step 3: Export the file system on the storage system:

Updating the /etc/exports file as needed

Running the exportfs command

FilerView administration tool

Step 4: Mount the file system on clients:

Running the mount command

Updating /etc/fstab
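Putting the four steps together, a minimal 7-Mode walkthrough might look like the sketch below; the license code, volume name and client hostname are placeholders.

filer> license add XXXXXXX                              # step 1: license NFS
filer> options nfs.tcp.enable on                        # step 2: allow NFS over TCP
filer> nfs on                                           # step 2: start the NFS service
filer> exportfs -p rw=client1,root=client1 /vol/vol1    # step 3: add the export to /etc/exports and export it
filer> exportfs                                         # confirm the current exports
client1# mount -t nfs filer:/vol/vol1 /mnt/vol1         # step 4: mount on a Linux client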

When troubleshooting NFS, we need to investigate the storage system, the client, and the network.

Storage system:

1. Verify the NFS service:

-NFS licensed

-NFS properly configured

-Interface properly configured

2. Verify the exports:

-exportfs -v

-/etc/exports

Hostname to IP resolution

Can you turn hostnames into IP addresses?

- If not, look at:

. nsswitch.conf

. hosts file

. resolv.conf

. On the storage system, DNS or NIS options

. Changing the order of DNS or NIS servers

. Consider circumventing DNS/NIS by temporarily entering hosts into the hosts file.

Remember:

-Data ONTAP caches NIS maps in slave mode.

-Data ONTAP caches DNS.

The nsswitch.conf file is the place to start when troubleshooting name-resolution issues. Make sure that you are using the name services you intend to be using. If that file is correct, move to the services listed: files = /etc/hosts, DNS = /etc/resolv.conf, NIS = domainname and ypwhich for starters.

Remember, there are several options in Data ONTAP used to configure and manage DNS:

dns.cache.enable – used to enable/disable DNS name-resolution caching.

dns.domainname – the storage system DNS domain name.

dns.enable – enable/disable DNS name resolution.

dns.update.enable – used to dynamically update the storage system 'A' record.

dns.update.ttl – time to live for a dynamically inserted 'A' record.

One troubleshooting method when dealing with a name-resolution problem is to enter hostnames/addresses in the /etc/hosts file on the storage system or hosts, thereby eliminating the external name-resolution services.

Remember that Network Information Service (NIS) maps in slave mode are cached, as is DNS. You can flush the DNS cache at any time by entering the "dns flush" command.
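For example, you can review and adjust these options from the storage system console (the domain name shown is just an illustration):

filer> options dns                        # list all dns.* options and their current values
filer> options dns.enable on              # turn DNS name resolution on
filer> options dns.domainname lab.local   # hypothetical DNS domain name
filer> dns flush                          # clear the DNS cache
filer> rdfile /etc/resolv.conf            # confirm which name servers are configured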


Client system:

-ping, dig, host, getent

-ypwhich, ypcat, domainname

- showmount -e|-a

- /etc/init.d/autofs start|stop

-nfsstat -m

Check:

- /etc/nsswitch.conf

- /etc/vfstab or /etc/fstab

- /etc/resolv.conf

- /etc/mtab

Dig – Use dig (domain information groper) to gather information from the DNS servers.

Host – A simple utility for performing DNS lookups. It is normally used to convert names to IP

addresses and vice versa.

getent – gets a list of entries from the administrative databases. For example:

# getent passwd

Or

# getent hosts v210-inst

The yp* commands are used to examine the NIS configuration.

ypwhich – returns the name of the NIS server that supplies the NIS name services to the NIS client.

ypcat mapname – prints the values of all keys from the NIS database specified by mapname.

domainname – shows or sets the system's NIS domain name.

showmount – queries the mount daemon on a remote host for information about the state of the NFS server on that machine.

autofs – controls the operation of the automount daemons.

nfsstat – displays statistical information about the NFS and remote procedure call (RPC) interface to the kernel.

Check the ports; they must be open on clients and storage systems.

Portmap TCP 111

Nfsd TCP 2049

Mountd TCP 4046
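A quick client-side pass over these checks might look like the sketch below (hostnames and paths are placeholders; rpcinfo is an extra check not listed above, but widely available):

client# ping filer1                              # basic reachability
client# showmount -e filer1                      # list the exports the filer is advertising
client# rpcinfo -p filer1                        # confirm portmapper, mountd and nfsd are registered
client# mount -t nfs filer1:/vol/vol1 /mnt/test  # try a manual mount
client# nfsstat -m                               # show the server and mount options for each NFS mount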

Hard mount and soft mount

A UNIX client can mount either:

Soft: the client tries the mount/export request a few times and then returns an error.

Hard: the client will continue to send out the request indefinitely until the server responds.

Automounter: Using the NFS automounter feature on NFS clients generates an automount map file on the NFS server that contains entries for every file system that is permanently exported for NFS access. After creating and resolving conflicts in the file, this map file can be copied to NFS clients or the Network Information Service (NIS) server to use as input to the automounter configuration.

Network:

The “cleaner” the better

-Matched parameters all the way through

-Not saturated (no quality of service in Ethernet)

-Auto versus half/full duplex.

Use TCP instead of UDP

-TCP is acknowledged

-UDP is not acknowledged

Are there firewalls (network or clients) in the way?

-Remember RPC ports should not be blocked.

Posted 24th June 2012 by vipulvajpayee


24th June 2012

How to hide folders in NetApp from users who don't have access to them, using the ABE option

Access Based Enumeration

Hello friends, lots of people are not aware of the ABE feature of NetApp; it is one of the interesting CIFS features used in NetApp.

We know that we can put user-level permissions on the qtree and then on the respective folders, but sometimes storage administrators want users who have access to their own folder to be able to see only that folder in the qtree; they should not be able to see the folders where they do not have access, because there is even a chance of leakage of information through the folder name alone.

This type of security feature can be enabled on NetApp by enabling ABE; it works only for CIFS, not for NFS.

Enable/Disable ABE through the NetApp Storage CLI

To enable ABE on an existing share:

FAS1> cifs shares -change <sharename> -accessbasedenum

To disable ABE on an existing share:

FAS1> cifs shares -change <sharename> -noaccessbasedenum

To create a share with ABE enabled:

FAS1> cifs shares -add <sharename> <path> -accessbasedenum

After enabling ABE on a share, you need to log off and log on again; then you can see the effect.

For Example: Refer the below step

1. We will use a share called DATA, located at /vol/DATA.

SERVER> Net use T: \\FAS1\DATA

2. At the root of the share, make a folder called \Software.

SERVER> MKDIR T:\SOFTWARE

3. Underneath \SOFTWARE, create three directories: FilerView, SnapManager, and NDA.

SERVER> MKDIR T:\SOFTWARE\FilerView

SERVER> MKDIR T:\SOFTWARE\SnapManager

SERVER> MKDIR T:\SOFTWARE\NDA

4. We have two users which were previously created in Active Directory, Fred and Wilma.

5. SERVER> Start Explorer, go to drive T:, select Properties on each of the folders specified and assign the following permissions:


Create Folder     Assign Fred      Assign Wilma
\FilerView        Full Control     Full Control
\SnapManager      Full Control     Full Control
\NDA              No Access        Requires the following as a minimum: List Folder/Read Data, Read Extended Attributes, Read Permissions

6. Disconnect from drive T:

SERVER> Net use T: /delete /yes

7. Map Fred to the DATA share

SERVER> From the desktop, double click on the DEMO.MSC shortcut.

This will allow you to remotely connect to the VISTA workstation.

On the left column of the MSC, expand 'Remote Desktop'. Double-click on 'Connect as Fred'.

Once connected, click Start, Run, cmd.

8. VISTA> net use T: \\FAS1\data

9. Open the SOFTWARE folder.

10. Fred will see all three sub-folders even though he doesn’t have access rights to the NDA

folder.

11. Verify this by clicking on each sub-folder.

12. VISTA> Logoff Fred

13. Connect Wilma.

SERVER> From the desktop, double click on the DEMO.MSC shortcut.

This will allow you to remotely connect to the VISTA workstation.

On the left column of the MSC, expand 'Remote Desktop'. Double-click on 'Connect as Wilma'.

Once connected, click Start, Run, cmd.

VISTA> net use T: \\FAS1\data

14. Open the SOFTWARE folder.

Notice Wilma can also see all folders.

15. Verify Wilma has access to each folder by clicking on each folder's name.

16. Enable Access Based Enumeration

FAS1> cifs shares -change data -accessbasedenum

17. Wilma can still access all three folders, as she was given permission.

18. VISTA> Logoff Wilma

19. Reconnect Fred to the DATA share.

SERVER> From the desktop, double click on the DEMO.MSC shortcut.

This will allow you to remotely connect to the VISTA workstation.

On the left column of the MSC, expand 'Remote Desktop'. Double-click on 'Connect as Fred'.

Once connected, click Start, Run, cmd.

VISTA> net use t: \\FAS1\data

20. Notice Fred now can only see the folders he has access to.

21. VISTA> Logoff Fred

Posted 24th June 2012 by vipulvajpayee

Comments:

lock folder, 20 September 2012 23:30:
It is a bit difficult to understand your instruction to hide my folder, please help me.


vipul 26 September 2012 09:51

What's difficult to understand? Just enable the ABE option, apply the permissions for the user, and test. That's all.

7th June 2012

FlexClone volumes got created by running SMSQL (issue resolution)

Hello friends, today I need to tell you something about a small issue which I faced recently with SnapManager for SQL.

As I was checking my volumes on the filer, I observed that clone volumes had been created of those volumes which contain the SQL database LUNs, and those LUNs were used by SnapManager for SQL.

Those clone volumes follow the naming convention CL_SD_(original volume name).

So I contacted the NetApp technical team regarding this issue, and the solution they gave me is mentioned below.

The SMSQL verification process creates clones of the databases so that it can do verification of them

from a consistent snapshot with the logs. So it creates a clone of the volume that usually consists of

the naming convention CL_SD_(original volume name) and mounts the volumes to do the verification.

Typically we would expect those clones to be unmounted and removed at the completion of the verification. However, that does not always occur, sometimes due to a hang or slow response in the host; all that needs to be done is to take the volumes offline and delete them afterwards.

So friends, if you ever face this type of issue there is no need to panic: just take those clone volumes offline and delete them.
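For reference, the cleanup described above is just the standard offline/destroy sequence; the clone name below is a hypothetical example following the CL_SD_ pattern.

filer> vol status                        # identify any leftover CL_SD_* clone volumes
filer> vol offline CL_SD_sqldatavol      # take the stale clone offline
filer> vol destroy CL_SD_sqldatavol -f   # and then delete it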


Posted 7th June 2012 by vipulvajpayee


24th May 2012

CAS

What is CAS? CAS stands for Content Addressed Storage.

As we know, when data ages it becomes fixed: changes rarely happen to it anymore, but it is still used by users and applications. Such data is called content data, or fixed content.

Initially these types of data were stored on tape; some companies keep them on production storage for many years. I have seen companies keeping their 10-11 year old data in their production environment. They told me that sometimes this data is required, and a lot of manual work is needed to retrieve it from tape or other storage media, so they keep the data in their production environment. So, if we look at it, they are spending a lot of money on data which is rarely used.

Now here comes CAS (Content Addressed Storage) into the picture.

CAS (Content Addressed Storage)

CAS is an object-based system that has been purposely built for storing fixed content data. It is designed for secure online storage and retrieval of fixed content. Unlike file-level and block-level data access that use file names and the physical location of data for storage and retrieval, CAS stores user data and its attributes as separate objects. The stored object is assigned a globally unique address

known as a content address (CA). This address is derived from the object’s binary representation.

CAS provides an optimized and centrally managed storage solution that can support single-instance

storage (SiS) to eliminate multiple copies of the same data.

Types of Data

Now let us understand what types of data are called fixed content. As we know, lots of data is created day by day by companies. Some data requires frequent changes, like online data. Some data typically does not change but is allowed to change when required, for example bills of materials and design data. And the other type of data is fixed content, which is not allowed to change, like X-ray data, and other data that is kept exactly as it is for a specific period of time due to government regulations and legal obligations, like e-mails, web pages, and digital media.

Features and Benefits of CAS

CAS has emerged as an alternative to tape and optical solutions because it overcomes many of their obvious deficiencies. CAS also meets the demand to improve data accessibility and to properly protect, dispose of, and ensure service-level agreements for archived data. The features and benefits of CAS include the following:

Content authenticity: It assures the genuineness of stored content. This is achieved by generating

a unique content address and automating the process of continuously checking and recalculating the

content address for stored objects. Content authenticity is assured because the address assigned to

each piece of fixed content is as unique as a fingerprint. Every time an object is read, CAS uses a

hashing algorithm to recalculate the object’s content address as a validation step and compares the

result to its original content address. If the object fails validation, it is rebuilt from its mirrored copy.

Content integrity: Refers to the assurance that the stored content has not been altered. Use of

hashing algorithm for content authenticity also ensures content integrity in CAS. If the fixed content is

altered, CAS assigns a new address to the altered content, rather than overwrite the original fixed

content, providing an audit trail and maintaining the fixed content in its original state. As an integral

part of maintaining data integrity and audit trail capabilities, CAS supports parity RAID protection in

addition to mirroring. Every object in a CAS system is systematically checked in the background. Over

time, every object is tested, guaranteeing content integrity even in the case of hardware failure,

random error, or attempts to alter the content with malicious intent.

Location independence: CAS uses a unique identifier that applications can leverage to retrieve

data rather than a centralized directory, path names, or URLs. Using a content address to access fixed

content makes the physical location of the data irrelevant to the application requesting the data.

Therefore the location from which the data is accessed is transparent to the application. This yields

complete content mobility to applications across locations.

Single-instance storage (SiS): The unique signature is used to guarantee the storage of only a

single instance of an object. This signature is derived from the binary representation of the object. At

write time, the CAS system is polled to see if it already has an object with the same signature. If the

object is already on the system, it is not stored, rather only a pointer to that object is created. SiS

simplifies storage resource management tasks, especially when handling hundreds of terabytes of

fixed content.

Retention enforcement: Protecting and retaining data objects is a core requirement of an archive

system. CAS creates two immutable components: a data object and a meta-object for every object stored. The meta-object stores the object's attributes and data-handling policies. For systems that support object-retention capabilities, the retention policies are enforced until the policies expire.

Record-level protection and disposition: All fixed content is stored in CAS once and is backed

up with a protection scheme. The array is composed of one or more storage clusters. Some CAS

architectures provide an extra level of protection by replicating the content onto arrays located at a

different location. The disposition of records also follows the stringent guidelines established by

regulators for shredding and disposing of data in electronic formats.

Technology independence: The CAS system interface is impervious to technology changes. As

long as the application server is able to map the original content address, the data remains accessible.

Although hardware changes are inevitable, the goal of CAS hardware vendors is to ensure

compatibility across platforms.

Fast record retrieval: CAS maintains all content on disks that provide sub-second "time to first byte"

(200 ms–400 ms) in a single cluster. Random disk access in CAS enables fast record retrieval.

CAS Architecture

In the CAS architecture, a client accesses the CAS-based storage over a LAN through a server that runs the CAS API (application programming interface). The CAS API is responsible for performing functions that enable an application to store and retrieve the data.

CAS architecture is a Redundant Array of Independent Nodes (RAIN). It contains storage nodes and

access nodes networked as a cluster by using a private LAN that is internal to it. The internal LAN can

be reconfigured automatically to detect the configuration changes such as the addition of storage or

access nodes. Clients access the CAS on a separate LAN, which is used for interconnecting clients

and servers to the CAS. The nodes are configured with low-cost, high-capacity ATA HDDs. These

nodes run an operating system with special software that implements the features and functionality

required in a CAS system.

When the cluster is installed, the nodes are configured with a “role” defining the functionality they

provide to the cluster. A node can be configured as a storage node, an access node, or a dual-role

node. Storage nodes store and protect data objects. They are sometimes referred to as back-end

nodes. Access nodes provide connectivity to application servers through the customer’s LAN. They

establish connectivity through a private LAN to the storage nodes in the cluster. The number of

access nodes is determined by the amount of throughput users require from the cluster. If a node is

configured solely as an “access node,” its disk space cannot be used to store data objects. This

configuration is generally found in older installations of CAS. Storage and retrieval requests are sent

to the access node via the customer’s LAN. Dual-role nodes provide both storage and access node

capabilities. This node configuration is more typical than a pure access node configuration.

Posted 24th May 2012 by vipulvajpayee


2nd May 2012

Cifs troubleshooting:

Cifs client access

The flow of communication from a CIFS client to a storage appliance in a multiprotocol environment is as follows:

1. The PC requests access to the data.

2. The storage appliance checks with the DC for authentication.

3. The DC replies: authenticated or guest.

4. Guest access is denied unless cifs.guest_account is set.

5. The storage appliance maps the NT account to a UNIX username.

6. The storage appliance compares the NT account info with the share ACL.

7. The storage appliance compares the NT account info with the file ACL, or the UNIX username with the file permissions.

8. If the user has access to the share and the file, the storage appliance grants access.

Phase 1: The PC requests access to the data.

Potential issues:

1. Network failed or slow.

2. Client not authenticated to the DC.

3. Client not able to find the storage appliance.

Below are the commands by which you can figure it out:

Filer> ifstat

Filer> netdiag

Filer> ping the client (by IP or by DNS name/hostname)

Client> tracert

Filer> nbtstat (if using WINS (Windows Internet Name Service))

Phase 2: The storage appliance checks with the DC for authentication.

Potential issue:

Domain controller communication and trusts across multiple domains can fail.

Cmd:

Filer> cifs testdc

Filer> options cifs.trace_login on (note: to prevent a deluge of console and log messages, NetApp recommends toggling this option off after troubleshooting is complete)

Phase 3: The DC replies: authenticated or guest.

Potential issue:

The authentication result is not what was expected; you need to check the details of the mapping.

Cmd:

Filer> wcc -s username


Filer> options cifs.trace_login on

Phase 4: Guest access is denied unless cifs.guest_account is set.

Potential issue:

Guest access is denied.

Cmd:

Filer> options cifs.guest_account

Phase 5: The storage appliance maps the NT account to a UNIX username.

Potential issue:

The account does not map, or the UNIX username does not exist.

Cmd:

Check the /etc/passwd file

Check the /etc/usermap.cfg file

NIS info (if using NIS)

filer> options nis.group_update_schedule

Phase 6: The storage appliance compares the NT account info with the share ACL.

Potential issue:

The user does not have access to the share.

Cmd:

Filer> cifs shares

Client> Computer Management (Windows 2000)

Phase 7: The storage appliance compares the NT account info with the file ACL, or the UNIX username with the file permissions.

Potential issue:

The user does not have access to the file.

Cmd:

Filer> qtree status (check the security style)

Win client> Explorer

Unix client> ls -l
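As a compact cheat sheet, the per-phase checks above boil down to a handful of console commands; the user, share and volume names here are placeholders.

Filer> ifstat -a                     # phase 1: interface statistics
Filer> cifs testdc                   # phase 2: test domain controller connectivity
Filer> options cifs.trace_login on   # phases 2-3: log authentication details (turn off afterwards)
Filer> wcc -s DOMAIN\fred            # phases 3/5: show the NT-to-UNIX name mapping for a user
Filer> cifs shares data              # phase 6: show the share and its share-level ACL
Filer> qtree status vol1             # phase 7: check the security style of the qtrees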

Posted 2nd May 2012 by vipulvajpayee


24th April 2012

3PAR

As we know, virtualization is catching on in the market very fast, and because of that, hardware is getting virtualized; so the different vendors are also adapting their products and technology to the virtual environment in order to survive in the market. The server market is very much affected by virtualization, and to some extent the storage market is getting affected by it as well.

In almost every vendor or partner event I have seen a lot of discussion about virtualization, and vendors show how their new or existing products are capable of adopting the virtual environment. As we all know, the EMC and NetApp products have already adopted the virtual environment and protected their market value, and now 3PAR is also working hard to adopt the virtual environment.

So today I will be discussing 3PAR storage and their solution for this virtual environment. We will see what 3PAR has in its pocket for customers so that they can sell their storage in virtual environments.

3PAR storage has two major product lines: the InServ T-Class and F-Class storage arrays. As we all know, 3PAR was acquired by HP.

Based on the InSpire architecture, 3PAR storage arrays are widely installed at medium to large enterprise customers. InSpire provides a highly scalable, yet tremendously flexible, storage architecture that fits quite well into virtual server environments. Fine-grained virtualization divides physical disks into granular "chunklets", which can be dynamically assigned (and later re-assigned) to virtual volumes of different quality of service (QoS) levels. This enables 3PAR to efficiently satisfy varying user-defined performance, availability and cost levels, and to adapt quickly and non-disruptively to evolving application workloads.

One of the good 3PAR storage intelligence technologies that sets 3PAR apart from other storage vendors is "THIN PERSISTENCE".

Thin Persistence helps users reclaim the unused space resulting from deleted data within a volume. This is really good, intelligent work by 3PAR: as I have seen with NetApp and other storage vendors, once some data is added to a thin volume, that much space is taken from the aggregate or storage pool, but when we delete data from the thin volume that amount of space does not get added back to the aggregate or storage pool. That is exactly what Thin Persistence does: it helps users reclaim the unused space from deleted data within thin volumes.

Another good technology of 3PAR storage is Thin Copy Reclamation. When a copy of a volume needs to be made, using either 3PAR Virtual Copy (non-duplicative copy-on-write snapshots) or 3PAR Remote Copy (thin-provisioning-aware data replication for DR purposes), 3PAR Thin Copy Reclamation recovers any unused space, such as that associated with deleted data, during the copy operation. This has an especially big impact in streamlining copies of fat volumes, which tend to have a lot of wasted space that would otherwise be replicated in each successive copy.

The other interesting technology of 3PAR is Thin Conversion, which applies a fat-to-thin conversion capability during the migration process. All of 3PAR's thin technologies take advantage of hardware-based functionality built into the 3PAR Gen3 ASIC to ensure that the "capacity thinning" operations take place without impacting application performance or user response times.

This is where 3PAR Recovery Manager for VMware vSphere comes in. Packaged in the InForm software along with the 3PAR Management Plug-In for VMware vCenter, 3PAR Recovery Manager enables users to create hundreds of space-efficient, point-in-time snapshots online in an InServ array, without disrupting the performance of other running VMs. The product allows users to snap and recover everything from VM images to specific virtual server files and directories.

Posted 24th April 2012 by vipulvajpayee


17th April 2012

Fpolicy

Fpolicy: FPolicy is an infrastructure component of Data ONTAP that enables partner applications

connected to your storage systems to monitor and set file access permissions.

FPolicy determines how the storage system handles requests from individual client systems for

operations such as create, open, rename, and delete. The storage system maintains a set of properties

for FPolicy, including the policy name and whether that policy is active. You can set these properties for

FPolicy using the storage system console commands.

File Screening on NAS

Run the following commands to enable file screening on the NAS box, to prevent copying of EXE, JPG, MP3, MP4, PST, AVI, and DAT files.

1. Create the file screening policy

fpolicy create <Policy Name> <Policy Type>

E.g. fpolicy create techm screen

2. Add the extensions for scan

fpolicy ext[ension] {exc[lude]|inc[lude]} [add|remove|set|reset|show] <PolicyName> [<ext>[,

<ext>]]

E.g. fpolicy ext inc add techm jpg,exe,dat

3. fpolicy options <PolicyName> required [on|off]

E.g. fpolicy options techm required on

4. Enable the policy

fpolicy enable <PolicyName> [-f]

E.g. fpolicy enable techm -f

5. Enable the File screening Monitor when users try to write files to the NAS.


fpolicy mon[itor] [add|remove|set] <PolicyName> [-p {nfs|cifs}] -f

op_spec[,op_spec,...,op_spec]

E.g. fpolicy mon add techm -p cifs -f write

After applying all the above commands, you can see the results using the command below:

fpolicy show techm

What is Serverless FPolicy, why would I want it and how does it work?

· Normally a file policy has an external server to administer the policy

A typical sequence would be to create a policy, configure it, then set up an fpolicy server to

administer the policy. As user requests come to the filer, those that fit the criteria of the

policy cause the filer to notify the FPolicy server. For example, a quotas policy would cause

the filer to notify the FPolicy server when a user did something that reserved or freed disk

space.

But, as its name suggests, Serverless FPolicy involves creating a policy with the expectation

of not connecting a server to administer the policy.

· When would someone use Serverless FPolicy?

Serverless FPolicy is used as a "poor man's" file blocking policy. It may not have features or

flexibility but it costs nothing and has excellent performance. If you simply want to prevent

users from soaking up disk space with their MP3 music files for example, Serverless FPolicy

may be perfect for you.

· How does it work?

Conceptually, the policy is created and configured. Then the policy's option required is

turned on. Because user access requires an FPolicy server to validate their request, and

because there is no server, 100% of the user requests which fall under this policy will be

rejected.

· Can you give me an example showing how to set it up?

Let's say you want to prevent users from putting MP3 files onto the filer. Note that this

example only works for CIFS users because NFS does not do a formal "file create" operation.

First, create the policy.

filer> fpolicy create MP3Blocker screen

Now configure the policy. Set the extension list to "MP3". Set the operations monitored to

"create" and "rename". This will block both creation of MP3 files and the more sneaky

method of copying the MP3 under a different name and then renaming it once it is in place.

Set the "required" option and enable the policy. Optionally, you can restrict the policy to

certain volumes.

filer> fpolicy ext inc set mp3blocker mp3

Page 62: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 62/104

filer> fpolicy monitor set mp3blocker -p cifs create,rename

filer> fpolicy options mp3blocker required on

filer> fpolicy volume include set mp3blocker vol0,vol1

filer> fpolicy enable mp3blocker -f

· Any further useful pointers you can give?

o Note that the fpolicy monitor command was provided initially in Ontap 7.1

o In older releases it is not so simple to set the list of operations controlled by a policy.

Basically, you'll need to go into "advanced" mode on the filer console and directly insert

values into the filer's registry.

(http://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb10098)]. But note that

"registry set" is not supported for vfilers so you are just plain out of luck using the

registry-hack procedure to set up a Serverless FPolicy for a vfiler.

Fpolicy flow chart:


Posted 17th April 2012 by vipulvajpayee


11th April 2012

NetApp FAS 2240 Storage showing PCM FAULT Amber LED

ISSUE:

1. PCM Fault LED is showing Amber

Symptom:

An amber LED is glowing on the NetApp 2240 storage chassis, and the PCM Fault LED is glowing on the rear side of the respective controller.

Step By Step Resolution

1. At the Data ONTAP CLI prompt, issue the "halt -s" command. Please note that this will cause an HA takeover in an HA configuration.

2. Wait for the controller to shut down.

3. On the console port (not RSH, SSH, etc.), issue a "^G" (Ctrl-G) to switch the console to the SP.

4. Log in to the SP.

5. At the SP prompt, issue the "system power on" command.

6. Issue a "^D" (Ctrl-D) to get back to the Data ONTAP console.

7. Once the BIOS / Loader have booted, issue the “boot_ontap” command.

8. Issue Giveback after Takeover.

Note: For HA configurations, the partner will also need to go through the same process above.

I hope this information will be very helpful for whoever visits my blog, and that it will save you a lot of time, which I wasted on calling the NetApp technical center.

Posted 11th April 2012 by vipulvajpayee


11th April 2012

Some interesting cmd of netapp.

mkfile: this cmd is used to make a file on NetApp; we can create a file on any volume of NetApp with this cmd.

Below is an example of this cmd.

vipul1*> mkfile 10kb /vol/vipul/vipultree/uk

vipul1*> ls /vol/vipul/vipultree

uk

coverletter.txt

Data Migration Clariion to VNX using SAN Copy.pdf

Data Migration Clariion to VNX using SAN Copy1.pdf

Data Migration Clariion to VNX using SAN Copy2.pdf

rdfile: the rdfile cmd is used to read the contents of a file, so with this cmd we can see the contents of any file in any volume.

For example:

vipul1*> rdfile /etc/uk

Vipul

dd: this cmd is used to copy the contents of one file to another; this cmd can be used in case your ndmpcopy is not working.

For example:

vipul1*> dd if=/vol/vol0/etc/uk of=/vol/vol0/etc/vk

vipul1*> rdfile /etc/vk

Vipul

ls: this cmd is used to list the contents of a directory.

for example.

vipul1*> ls /vol/vipul/vipultree

uk

coverletter.txt

Data Migration Clariion to VNX using SAN Copy.pdf

Data Migration Clariion to VNX using SAN Copy1.pdf

Some interesting cmd of netapp

Page 66: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 66/104

Data Migration Clariion to VNX using SAN Copy2.pdf

mv: this cmd is used to rename or move a file; this works within the same volume but not between different volumes.

for example.

vipul1*> mv /vol/vol0/etc/vk /vol/vol0/etc/uk

vm_stat: gives the output of the WAFL space allocation; it gives stats of WAFL pages and buffers.

for example.

vipul1*> vm_stat

System

Total Pages in System: 130816

Total Pages Allocated: 130567

Total Free Pages: 249

Non-WAFL Free Pages: 0

WAFL Free Pages: 249

WAFL

Pages From WAFL: 8867

Pages Returned To WAFL: 2668

Failures while stealing from WAFL: 0

Times Pages stolen immediately: 8867

Free Pages in WAFL: 7427

Free buffers in WAFL: 74278

WAFL recycled bufs: 3661

Sleep/Wakes

Times thread slept for pages: 60

Times woken up for pages: 60

Times PTE is alloced while sleeping: 0

Hseg

<8k <16k <64k <512k <2MB <8MB <16MB big chunks bytes

alloc 0 237 167 107 2 0 0 1 514 43069440

active 0 3 0 0 0 0 1 0 4 9359360

backup 0 0 0 0 0 0 0 0 0 0

Buffers MemoryPool

1 portmap

1 portmap

rm: this cmd is used to delete a file from the qtree.

For example.

vipul1*> ls /vol/vipul/vipultree

uk

coverletter.txt

vipul1*> rm /vol/vipul/vipultree/uk

vipul1*> ls /vol/vipul/vipultree

coverletter.txt

filersio: this cmd is used for testing purposes; you can run this cmd to test your filer's performance and to see if there is any issue.

For example.

vipul1*> filersio asyncio_active 50 -r 50 4 0 10m 60 5 /vol/vol0/filersio.test

- create -print_stats 5

filersio: workload initiated asynchronously. Results will be displayed on the

console after completion

vipul1*> filersio: starting workload asyncio_active, instance 0

Read I/Os  Avg. read latency(ms)  Max. read latency(ms)  Write I/Os  Avg. write latency(ms)  Max. write latency(ms)
16898      0                      149                    16926       1                       821
8610       0                      22                     8571        3                       2641
5910       0                      966                    5715        5                       2760
11449      0                      17                     11431       2                       2500
11368      0                      65                     11426       2                       2321
Wed Apr 11 18:00:00 IST [kern.uptime.filer:info]: 6:00pm up 2:41 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 0 FCP ops, 98 iSCSI ops
14116      0                      18                     13952       1                       1151
8363       0                      11                     8580        2                       2699
18068      0                      31                     17934       1                       2780
10279      0                      24                     10292       2                       1180
5690       0                      15                     5653        1                       1399

Statistics for active_active model, instance 0

Running for 61s

Total read latency(ms) 31531

Read I/Os 113608

Avg. read IOPS 1862

Avg. read latency(ms) 0

Max read latency(ms) 966

Total write latency(ms) 275993

Write I/Os 113392

Avg. write IOPS 1858

Avg. write latency(ms) 2

Max write latency(ms) 3450

filersio: instance 0: workload completed successfully

hammer: this cmd again is a good cmd for testing the performance of your filer, but it utilizes a lot of CPU power. It actually hammers the filer and records its performance. This cmd should only be run under the guidance of a NetApp expert; don't run it casually, because it is a dangerous cmd and can cause a panic on your filer.

For example:

vipul1*> hammer

usage: hammer [abort|pause|restart|status|

[-f]<# Runs><fileName><# BlocksInFile> (<# Runs> == -1 runs hammer forever)|

fill <writeSize> (use all available disk space)]

vipul1*> hammer -f 5 /vol/vol0/hammer.txt 400

vipul1*> Wed Apr 11 18:08:18 IST [blacksmith:warning]: blacksmith #0: Starting work.

Wed Apr 11 18:08:25 IST [blacksmith:info]: blacksmith #0: No errors detected. Stopping work

getXXbyYY: this cmd is a very useful cmd for finding out information about users, hosts, etc. It is used to pull this information from the filer; just look at its sub-commands and you will see how useful this cmd is.

vipul1*> getXXbyYY help

usage: getXXbyYY <sub-command> <name>

Where sub-command is one of

gethostbyname_r - Resolves host name to IP address from configured DNS server, same as nslookup

gethostbyaddr_r - Retrieves IP address for host name from configured DNS server, same as reverse

Page 68: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 68/104

lookup

netgrp - Checks group membership for given host from LDAP/Files/NIS

getspwbyname_r - Displays user information using shadow file

getpwbyname_r - Displays user information including encrypted password from LDAP/Files/NIS

getpwbyuid_r - Same as above however you provide uid in this command rather than user name

getgrbyname - Displays group name and gid from LDAP/Files/NIS

getgrbygid - Same as above however you provide gid in this command rather than group name

getgrlist - Shows given user's gid from LDAP/Files/NIS

For more information, try 'man na_getXXbyYY'

vipul1*> getXXbyYY gethostbyname_r root

host entry for root not found: Host not found (authoritative)

vipul1*> getXXbyYY gethostbyname_r vipul

name: vipul

aliases:

IPv4 addresses: 192.168.1.14

All the above cmds will only run in diag mode or advanced mode; they will not run in normal mode.

I hope this blog will help you play with some of the NetApp hidden cmds.

Posted 11th April 2012 by vipulvajpayee


9th April 2012

Fractional reserve:

Fractional reserve is a volume option that reserves space inside the volume for Snapshot overwrites. By default it is 100%, but when the Snapshot autodelete functionality is used, NetApp recommends setting the fractional reserve to 0. As soon as the first Snapshot copy is created, the fractional reserve is created automatically, and it starts being used only when the whole volume space gets filled; this reserved space is used only when the volume is 100% full. The amount of reserved space can be seen with the -r option of the df cmd.
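For example, on a hypothetical volume:

filer> vol options flexvol01 fractional_reserve 0   # set fractional reserve to 0 (with snap autodelete configured)
filer> df -r /vol/flexvol01                         # the "reserved" column shows the space held back for overwrites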


Snap reserve:

Snap reserve is the space reserved for storing Snapshot copies: the Snapshot copies also require space to be stored, so they use the snap reserve space, which by default is 20% of the volume space. As the snap reserve space gets filled, Snapshots automatically start using space from the volume. So we can say the snap reserve is really a logical separation of space within the volume.
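For example (volume name hypothetical):

filer> snap reserve flexvol01        # show the current snap reserve percentage
filer> snap reserve flexvol01 20     # set it to the 20% default
filer> df /vol/flexvol01             # df lists the volume and its .snapshot area separately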

Space reclamation:

Space reclamation is the process of reclaiming the freed space of a LUN on the storage side. For example, suppose we have filled a 100 GB LUN with 50 GB of data: the LUN usage seen on the host side is 50%, and on the storage side it also shows 50% utilized. But when we delete data from the LUN, say all 50 GB of it, the utilization on the host side will show 0% while the storage end will still show 50% utilized. So we use space reclamation to reclaim the free space on the storage side. SnapDrive does a good job of reclaiming space back from the storage end.

Lun reservation:

LUN reservation (not to be confused with SCSI2 or 3 logical unit locking reservations) determines

when space for the LUN is reserved or allocated from the volume. With reservations enabled (default)

the space is subtracted from the volume total when the LUN is created. For example, if a 20GB LUN is

created in a volume having 80GB of free space, the free space will drop to 60GB at the time

the LUN is created even though no writes have been performed to the LUN. If reservations are

disabled, space is first taken out of the volume as writes to the LUN are performed. If the 20GB LUN

was created without LUN space reservation enabled, the free space in the volume would remain at

80GB and would only go down as data was written to the LUN.
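A short sketch of the 20GB example above (the volume, LUN paths and OS type are hypothetical):

filer> lun create -s 20g -t windows /vol/vol1/lun_thick              # reserved LUN: 20GB leaves the volume free space immediately
filer> lun create -s 20g -t windows -o noreserve /vol/vol1/lun_thin  # non-reserved LUN: space is consumed only as data is written
filer> lun set reservation /vol/vol1/lun_thin                        # display (or change) the reservation setting of a LUN
filer> df -g /vol/vol1                                               # compare the volume's free space before and after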

Posted 9th April 2012 by vipulvajpayee


3rd April 2012

iSCSI

iSCSI is a protocol that runs on top of standard TCP/IP networks. iSCSI uses Ethernet cabling to communicate with the hosts, so it is cheaper than the FC protocol, because FC cables are costlier than Ethernet cables.

In iSCSI you should be clear about the terms initiator and target: you should know what the initiator is and what the target is.


Initiators and targets

The initiator is the side that initiates the conversation between your host computer and the storage device, so the Ethernet port of the host acts as the initiator. The target is the side that accepts the requests, so the storage system's Ethernet ports are the target ports.

IQN

One more thing to understand is the IQN. Each iSCSI node (initiator or target) has its own IQN: the iSCSI initiator service on the host automatically creates an IQN, and the iSCSI target on the storage has its own IQN; so if you change the hostname of the storage, the IQN of that storage may change. The conclusion is that iSCSI initiators and targets have their own IQNs, and they are unique.

DataDomain or Domain

In a basic iSCSI SAN, a storage array advertises its SCSI LUNs to the network (the targets), and clients run an iSCSI driver (the initiators) that looks for those LUNs. In a larger setup with, say, fifty or more clients or storage devices or both, you probably don't want every client to see every storage device. It makes sense to block off what each host can see and which storage devices they have the potential of using. This is accomplished by registering the names of the initiators and targets in a central location, and then pairing them into groups. A logical grouping, called a data domain, partitions the registered initiators and targets into more manageable groups.
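On a NetApp filer, a related day-to-day mechanism is LUN masking with initiator groups (igroups): you register a host's initiator IQN in an igroup and map LUNs to it, so each host sees only its own LUNs. A minimal sketch with hypothetical names:

filer> iscsi nodename                                                         # the filer's own target IQN
filer> igroup create -i -t windows ig_host1 iqn.1991-05.com.microsoft:host1   # register the host's initiator IQN
filer> lun map /vol/vol1/lun0 ig_host1                                        # only members of ig_host1 can see this LUN
filer> igroup show                                                            # verify the grouping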

Posted 3rd April 2012 by vipulvajpayee


2nd April 2012

Setting up a Brocade Switch.

First, log in: the default username is admin and the default password of the Brocade switch is password.

Step 1: Name the switch with the "switchName" cmd.

Step 2: Set the Domain ID.

To set the domain ID you need to first disable the switch; when you disable the switch, all the ports that were showing a green light will turn to an amber light.

The cmd to disable the switch is "switchDisable".

Then enter the "configure" cmd to configure the switch.

When you enter the configure cmd you will be asked a number of questions: answer "yes" at "Fabric parameters", leave the rest at their defaults, and set the domain ID in the domain ID section; for example, if you need to set the domain ID to 1, put 1 in the domain ID field.

Step 3: Enable the switch with its new domain ID.

Enter the "switchEnable" cmd after you have finished configuring the switch; after you enter the switchEnable cmd the switch will reboot.

Then, after the reboot, enter the "logout" cmd to log out from the switch.
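Put together, the console session is roughly the sketch below; the switch name and domain ID are just examples.

switch:admin> switchName "sansw01"    # step 1: name the switch
switch:admin> switchDisable           # step 2: disable the switch before changing fabric parameters
switch:admin> configure               # answer "yes" at Fabric parameters and set the Domain value (e.g. 1)
switch:admin> switchEnable            # step 3: re-enable the switch with its new domain ID
switch:admin> switchShow              # verify the switch state and domain
switch:admin> logout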

Posted 2nd April 2012 by vipulvajpayee


2nd April 2012

How to create Zone in Brocade switch.

Terminology

HBA - Host Bus Adapter, which in this case refers to the Fibre Channel card. In LAN networking, it's analogous to an Ethernet card.

WWN - World Wide Name, a unique 8-byte number identifying the HBA. In Ethernet networking, it's analogous to the MAC address.

FC Zone - Fibre Channel Zone, a partitioned subset of the fabric. Members of a zone are allowed to communicate with each other, but devices are not allowed to communicate across zones. An FC Zone is loosely analogous to a VLAN.

Steps to Zone a Brocade Switch

1. Plug the FC connector into an open port on the switch.

2. Log in to the server and verify the HBA connection. It should see the switch but not the storage device.

3. Log in to the Brocade switch GUI interface. You'll need Java enabled on your browser.

4. Check the Brocade switch port.

1. On the visual depiction of the switch, click on the port where you plugged in the FC connector.

2. The Port Administration Services screen should pop up. You'll need to enable the pop-up.

3. Verify that the Port Status is "Online". Note the port number.

4. Close the Port Administration Services screen.


5. Find the WWN of your new device.

1. Navigate back to the original GUI page.

2. Select Zone Admin, an icon on the bottom left of the screen. It looks like two squares and a rectangle.

3. Expand Ports & Attaching Devices under the Member Selection List.

4. Expand the appropriate port number. Note the attached WWN.

6. Create a new alias for this device.

1. Click the New Alias button.

2. Follow the menu instructions.

7. Add the appropriate WWN to the alias.

1. Select your new device name from the Name drop-down menu.

2. Expand the WWNs under the Member Selection List.

3. Highlight the appropriate WWN.

4. Select Add Member.

8. Add the alias to the appropriate zone.

1. Select the Zone tab.

2. Select the appropriate zone from the Name drop-down menu.

3. Select the appropriate alias from the Member Selection List.

4. Click Add Member.

9. Ensure that the zone is in the zone config on the Zone Config tab.

10. Save your changes by selecting Zoning Actions -> Enable Config.

11. Log back in to the server to verify. It should now see the storage devices.
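The same zoning can also be done from the switch CLI instead of the Java GUI; a hedged sketch with made-up alias, zone, config names and WWPN values:

switch:admin> alicreate "host1_hba0", "10:00:00:00:c9:12:34:56"        # alias for the new HBA's WWPN
switch:admin> zonecreate "z_host1_netapp", "host1_hba0; netapp_fc0a"   # zone pairing the host alias with the storage port alias
switch:admin> cfgadd "prod_cfg", "z_host1_netapp"                      # add the zone to the existing zone config
switch:admin> cfgsave                                                  # save the zoning database
switch:admin> cfgenable "prod_cfg"                                     # activate (equivalent to Enable Config in the GUI)
switch:admin> zoneshow                                                 # verify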

Posted 2nd April 2012 by vipulvajpayee


2nd April 2012

Snap vault

SnapVault is a heterogeneous disk-to-disk backup solution for NetApp filers and systems with other OSes (Solaris, HP-UX, AIX, Windows, and Linux). In the event of data loss or corruption on a filer, backed-up data can be restored from the SnapVault secondary storage system with less downtime and less of the uncertainty associated with conventional tape backup and restore operations. Snapshot technology is used for the SnapVault operation.

SnapVault can be set up between two NetApp filers, or from UNIX/Windows servers to a NetApp filer. When doing SnapVault from a server to a filer, we need to install the SnapVault agent on the server; the SnapVault agent is called OSSV (Open Systems SnapVault).

SnapVault technology works on a SnapVault client and SnapVault server model: the client is the system whose data is to be backed up, and the SnapVault server is where the client data gets backed up. SnapVault works on Snapshot technology: first a baseline transfer happens, and then incremental backups happen. A Snapshot of the data gets backed up, and restoring the backed-up data is also simple; we just mount the backup volume via NFS or CIFS and then copy the data.

SnapVault requires two licenses: one for the primary site and one for the secondary site.

Steps to configure SnapVault between two NetApp storage systems:

Step 1. Add the license on primary filer and secondary filer.

Filer1> license add xxxxxxx

Filer2> license add xxxxxxx

Step 2. Enable the snapvault on the primary filer and do the entry on the primary filer of secondary

filer.

Filer1> options snapvault.enable on

Filer1> options snapvault.access host=filer2

Step 3. Enable the snapvault on the secondary filer and do the entry on the secondary filer of primary

filer.

Filer2> options snapvault.enable on

Filer2> options snapvault.access host=filer1

Now let the destination volume on filer2, where all the backups are kept, be named vipuldest, and let the source volume on filer1, whose backup is to be taken, be named vipulsource; so the destination path is filer2:/vol/vipuldest and the source path is filer1:/vol/vipulsource/qtree1.

Step 4. We need to disable the snapshot schedule on the destination volume. Snap vault will manage

the destination snapshot schedule.

Filer2> snap sched vipuldest 0 0 0

Step 5. Do the initial baseline backup

Filer2> snapvault start -S filer1:/vol/vipulsource/qtree1 filer2:/vol/vipuldest/qtree1

Step 6. Create the schedule for the SnapVault backup on the source and destination filers.

On the source we will create fewer retention schedules, and on the destination we can create more retention schedules.

On the source we will create 2 hourly, 2 daily and 2 weekly copies, and on the destination we will create 6 hourly, 14 daily and 6 weekly copies.

Note: the snapshot name should be prefixed by “sv_”

Filer1> snapvault snap sched vipulsource sv_hourly 2@0-22

Filer1> snapvault snap sched vipulsource sv_daily 2@23

Filer1> snapvault snap sched vipulsource sv_weekly 2@21@sun

Step 7. Make the schedule on the destinations.

Filer2> snapvault snap sched vipuldest sv_hourly 6@0-22

Filer2> snapvault snap sched vipuldest sv_daily 14@23@sun-fri

Filer2> snapvault snap sched vipuldest sv_weekly 6@23@sun

To check the status, use the snapvault status cmd on either the source or the destination.
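Two more commands worth knowing, sketched against the same example paths (run snapvault restore from the primary; syntax details may vary by Data ONTAP release):

Filer2> snapvault status                                  # show each qtree relationship, its state and lag
Filer2> snapvault update /vol/vipuldest/qtree1            # trigger a manual transfer outside the schedule
Filer1> snapvault restore -S filer2:/vol/vipuldest/qtree1 /vol/vipulsource/qtree1   # restore a qtree back to the primary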

Posted 2nd April 2012 by vipulvajpayee


9th March 2012

Brocade FOS Upgradation Steps

1. Back up the existing switch configuration to the local server with the "configupload" cmd.

For Example:

switch1:admin> configupload

Server Name or IP Address [host]: 1.2.3.4

User Name [user]: matty

File Name [config.txt]: switch1_config.txt

Protocol (RSHD or FTP) [rshd]: ftp

Password:

upload complete

For a download (restore), use configdownload.

2. Tag all the fibre cables and unplug them from the switch.

3. Download the FOS version 6.1.0c from the Brocade site.

4. Open a PuTTY session to the switch.

5. Enter the version cmd to check the current version.

6. Enter the nsShow cmd to check the entire device directly connected to the switch.

7. Enter the nsAllShow cmd to check the entire connected device to the fabric.

8. Enter the fabricshow cmd to check all switches in the fabric

9. Enter the firmwareshow cmd to check the primary and secondary partitions.

10. Enter the Firmwaredownload cmd to download the new version.

For example:

switch:admin> firmwaredownload

Server Name or IP Address: 10.22.127.127

FTP User Name: JohnDoe

File Name: /release.plist

FTP Password:

You can run firmwaredownloadstatus to get the status

Brocade FOS Up gradation Steps

Page 75: Netapp and Others

1/21/13 vipulvajpayee blog

vipulvajpayeestorage.blogspot.com 75/104

of this command.

This command will cause the switch to reset and will

require that existing telnet, secure telnet or SSH

sessions be restarted.

Do you want to continue [Y]: y

11. Enter the firmwaredownloadstatus cmd to check the current status of the upgrade process.

12. Enter the firmwareshow cmd to check the primary and secondary partition version.

13. Plugin all the fiber cable and check the connectivity.

14. Enter the nsShow cmd to check the entire device directly connected to the switch.

15. Enter the nsAllShow cmd to check the entire connected device to the fabric.

16. Enter the fabricshow cmd to check all switches in the fabric.

Backing up Brocade switch configurations

Brocade switches have become one of the most widely deployed components in most Storage Area Networks (SANs). One thing that has led to Brocade's success is their robust CLI, which allows you to view and modify almost every aspect of the switch. This includes zoning configurations, SNMP attributes, domain IDs, switch names and network addresses, etc. All of this configuration information is necessary for the switch to function properly, and it should be periodically backed up to allow speedy recovery when disaster hits.

Each Brocade switch comes with the "configUpload" and "configDownload" commands to back up a switch configuration to a remote system, or to restore a configuration from a remote system. configUpload has two modes of operation: interactive mode and automatic mode. To use the interactive mode to upload a config from a switch named switch1 to an FTP server with the IP address 1.2.3.4, configUpload can be run to walk you through backing up the configuration, as shown in step 1 above.

After the configuration is uploaded, you will have a text file with your switch's configuration on the remote server:

$ ls -l sw*
-rw-r--r-- 1 matty other 7342 Jul 7 09:15 switch1_config.txt

To restore a configuration, you can use the configDownload command. Both of these commands allow the parameters to be passed as arguments, so they are ideal for automation (there is a backup script on the Brocade support site that can be used to automate configuration backups).

Testing and Restoring the Firmware version.

1. Enter the firmwaredownload -s cmd and set the auto-commit option to "no".

For example:

switch:admin> firmwaredownload -s

Server Name or IP Address: 10.32.220.100

FTP User Name: william

File Name: /pub/v5.1.0/release.plist

FTP Password:

Do Auto-Commit after Reboot [Y]: n

Reboot system after download [N]: y

Firmware is being downloaded to the switch. This step may take up to 30 minutes.

Checking system settings for firmwaredownload...

2. Restore the older version with the "firmwarerestore" cmd.

3. Then enter "firmwarecommit" to commit the older version.
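As a minimal sketch of that test-and-decide flow (the "#" annotations are explanatory comments, not CLI input):

switch:admin> firmwareshow       # new code on the primary partition, old code on the secondary
switch:admin> firmwarerestore    # if the new code misbehaves, swap back to the old version (the switch reboots)
switch:admin> firmwarecommit     # if the new code is good, commit it to the secondary partition as well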

Checks to be done after the upgrade activity on the switch side.

firmwareShow: Displays the current firmware level on the switch. For SilkWorm directors this command displays the firmware loaded on both partitions (primary and secondary) for both CPs. Brocade recommends that you maintain the same firmware level on both partitions of each CP within the SilkWorm director.

nsShow: (Optional) Displays all devices directly connected to the switch that have logged into the Name Server. Make sure the number of attached devices after the firmware download is exactly the same as the number of attached devices prior to the firmware download.

nsAllShow: (Optional) Displays all devices connected to the fabric. Make sure the number of attached devices after the firmware download is exactly the same as the number of attached devices prior to the firmware download.

fabricShow: (Optional) Displays all switches in the fabric. Make sure the number of switches in the fabric after the firmware download is exactly the same as the number of switches prior to the firmware download.

Posted 9th March 2012 by vipulvajpayee


10th February 2012

Some Basic Information of fabric switches …

Types of Fabric architecture

1. Cascade

2. Mesh

Partial Mesh

Full Mesh

3. Ring

4. Core-edge

Port types

N: node port; connects to an N or F port using a point-to-point link.

F: fabric port; point-to-point attachment to an N port.

E: expansion port; connects to another FC switch using an ISL (inter-switch link).

FL: fabric loop port; connects to NL ports through an FC-AL (arbitrated loop).

G: generic port; can function as any port type.

L: loop port; capable of arbitrated loop functions.

Where these ports are:

The port on a host or storage filer that connects to the fabric is called an N port.

The port on a switch that connects to a host or storage filer is called an F port.

An E port connects one switch to another switch.

G ports are unused ports; when a device is plugged in, they automatically configure themselves as F or E ports.

Name Server

The name server is a fabric service that manages all the port information: it maps the WWPN of each port to the WWN of its node, and keeps track of the address of each port and which device that port belongs to.

Zoning

Two types of Zoning

Port-based zoning

WWN-based (node or port name) zoning


The most preferred type of zoning is WWPN-based zoning (worldwide port name based zoning), because it is more flexible.
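As a minimal sketch of WWPN-based zoning on a Brocade switch (the zone name, configuration name, and WWPNs below are hypothetical), the basic flow is: create a zone from the host and storage WWPNs, add it to a configuration, save, and enable it.

switch:admin> zonecreate "zone_host1_filer1", "21:00:00:1b:32:aa:bb:cc; 50:0a:09:82:00:01:d7:40"
switch:admin> cfgcreate "cfg_fabric_a", "zone_host1_filer1"
switch:admin> cfgsave
switch:admin> cfgenable "cfg_fabric_a"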

Posted 10th February 2012 by vipulvajpayee


10th February 2012

NETWORKER

As we know, backup to tape is very slow, so most IT managers are looking for software that can increase backup performance, so that they can back up data quickly and restore it with good speed. Symantec is the leading company selling backup solutions.

Now EMC is also giving Symantec a tough fight with backup products such as NetWorker, Avamar, and Data Domain.

I don't know much about backup and don't have much hands-on experience with any backup software, but as I was going through the NetWorker white paper I thought of discussing it and writing something about it.

According to surveys done by several IT product companies, many companies are going to spend money on backup solutions in 2012, which means backup software sales are going to increase this year.

Now let's talk about the NetWorker solution from EMC.

Networker is a backup and recovery solution.

Below is the NetWorker definition given by EMC and the ESG group.

EMC NetWorker is a well-known and trusted backup and recovery solution that centralizes,

automates, and accelerates data protection across the IT environment. It enables organizations to

leverage a common platform for backup and recovery of heterogeneous data while keeping business

applications online. Networker operates in diverse computing environments including multiple

operating systems; SAN, NAS, and DAS disk storage environments; tape drives and libraries; and

cloud storage. It protects critical business applications including databases, messaging environments,

ERP systems, content management systems, and virtual server environments.

Now EMC networker software comes integrated with some of good EMC software such as Data

Domain, DPA (data protection advisor), VADP (vmware vstorage API for data protection),

DDBoost(Data Domain Boost).

Let's see each feature's functionality and definition (these definitions are given by EMC and the ESG group).


EMC NetWorker Clone-controlled replication with Data Domain Boost. A key feature available

with the integration of NetWorker and DD Boost is clone-controlled replication. Through the NetWorker

GUI, administrators can create, control, monitor, and catalog backup clones using network-efficient

Data Domain Replicator software. NetWorker also enables administrators to move backup images to a

central location where they can be cloned to tape, consolidating tape operations. With NetWorker

wizard-based clone-controlled replication, administrators can schedule Data Domain Replicator

operations, track save sets, set retention policies, monitor the local and remote replicas available for

recovery, and schedule cloning automatically. It also takes advantage of Data Domain’s deduplication,

compression, and high-speed replication to reduce data amounts and speed cloning resulting in

improved performance and reduced network bandwidth requirements.

EMC Data Protection Advisor. DPA provides unified monitoring, analysis, alerting, and reporting

across the data protection environment. It collects information about data protection automatically to

inform IT decisions and help administrators correct problems and meet SLAs. The software's single,

integrated view brings simplicity to a complex environment, reduces risk, and helps IT work more

effectively. DPA takes volumes of disparate data and turns it into actionable knowledge, enabling

organizations to reduce costs by more efficiently managing people, processes, and equipment.

Integration with VMware vStorage APIs for Data Protection (VADP). NetWorker supports VADP,

VMware’s recommended off-host protection mechanism that replaces VMware Consolidated Backup

(VCB). VADP improves performance by eliminating temporary storage of snapshots and enabling

support for Change Block Tracking (CBT) as well as improving network utilization and reducing

management overhead. NetWorker communicates with VMware vCenter to auto-discover and display

a visual map of the virtual environment, streamlining administrative tasks dramatically.

EMC Data Domain. Data Domain systems deduplicate data inline during the backup process.

Deduplication reduces the amount of disk storage needed to retain and protect data by ratios of 10-

30x and greater, making disk a cost-effective alternative to tape. Deduplicated data can be stored

onsite for immediate restores enabling longer-term retention on disk. NetWorker not only can use Data

Domain systems as disk targets, but also can leverage Data Domain Boost (DD Boost) software to

achieve faster and more efficient data protection. DD Boost increases performance by distributing

portions of the deduplication process to the NetWorker storage nodes and/or application modules so

that only unique, compressed data segments are sent to the Data Domain system.

DD Boost also provides visibility into Data Domain system information, and it enables NetWorker to

control replication between multiple Data Domain systems while maintaining a single point of

management for tracking all backups and duplicate copies.

As we know, in today's world data keeps growing at a continuous rate, and managing and backing up that data gets more complicated day by day. Even after virtualization came into the picture, it saved us from buying more hardware but, on the other hand, made backing up and managing data more complicated.

I hope NetWorker is a good solution for the above problem.

We are all well aware that after buying a product, the next thing that comes into the picture is support. Based on my knowledge and some customer reviews, I can say that Symantec is not good at providing support; they keep you waiting a long time for proper solutions. On the other hand, EMC is very good at support; they don't make customers wait long for help.

Want to know more about NetWorker and how customers feel after using this product?

Please visit http://www.enterprisestrategygroup.com/2012/01/field-audit-emc-networker/


In the above link you can find a good lab report on the NetWorker software by the ESG group, as well as reports from customers who have deployed this software in their large environments.


Posted 10th February 2012 by vipulvajpayee


Anonymous 30 July 2012 14:43

I disagree. Networker is the worst of all the backup software I have ever used. The menus are confusing, options are in three places, and it has a complete lack of reporting. Jobs start and hang with NO logging and NO reason why. There is a complete lack of reporting (why would I want to know my staging completed successfully?).

Avoid it at all costs.


8th February 2012

pNFS Overview

"Parallel storage based on pNFS is the next evolution beyond clustered NFS storage and the best way for the industry to solve storage and I/O performance bottlenecks. Panasas was the first to identify the need for a production grade, standard parallel file system and has unprecedented experience in deploying commercial parallel storage solutions."

Introduction

High-performance data centers have been aggressively moving toward parallel technologies like clustered computing and multi-core processors. While this increased use of parallelism [http://www.panasas.com/blog/parallelism-goes-mainstream] overcomes the vast majority of computational bottlenecks, it shifts the performance bottlenecks to the storage I/O system. To ensure that compute clusters deliver the maximum performance, storage systems must be optimized for parallelism. The industry standard Network Attached Storage (NAS) architecture has serious performance bottlenecks and management challenges when implemented in conjunction with large scale, high performance compute clusters.

Panasas® ActiveStor™ [http://www.panasas.com/products/activestor] parallel storage takes a very different approach by allowing compute clients to read and write directly to the storage, entirely eliminating filer head bottlenecks and allowing single file system capacity and performance to scale linearly to extreme levels using a proprietary protocol called DirectFlow® [http://www.panasas.com/products/panfs/network-protocols]. Panasas has actively shared its core knowledge with a consortium of storage industry technology leaders to create an industry standard protocol which will eventually replace the need for DirectFlow. This protocol, called parallel NFS (pNFS), is now an optional extension of the NFS v4.1 standard.

NFS Challenges

In order to understand how pNFS works, it is first necessary to understand what takes place in a typical NFS architecture when a client attempts to access a file. Traditional NFS architecture consists of a filer head placed in front of disk drives and exporting a file system via NFS. When large numbers of clients want to access the data, or if the data set grows too large, the NFS server quickly becomes the bottleneck and significantly impacts system performance, because the NFS server sits in the data path between the client computer and the physical storage devices.

pNFS Performance

pNFS removes the performance bottleneck in traditional NAS systems by allowing the compute clients to read and write data directly and in parallel, to and from the physical storage devices. The NFS server is used only to control metadata and coordinate access, allowing incredibly fast access to very large data sets from many clients.

When a client wants to access a file it first queries the metadata server, which provides it with a map of where to find the data and with credentials regarding its rights to read, modify, and write the data. Once the client has those two components, it communicates directly to the storage devices when accessing the data. With traditional NFS every bit of data flows through the NFS server; with pNFS the NFS server is removed from the primary data path, allowing free and fast access to data. All the advantages of NFS are maintained, but bottlenecks are removed and data can be accessed in parallel, allowing very fast throughput rates; system capacity can be easily scaled without impacting overall performance.
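As a minimal illustration on the client side (assuming a Linux client with NFS v4.1 support and a pNFS-capable server; the server name and export path are hypothetical), the export is simply mounted with NFS version 4.1, and the client negotiates pNFS layouts automatically when the server offers them:

# mount with NFS v4.1 so the client can negotiate pNFS layouts
mount -t nfs -o vers=4.1 nfsserver:/export /mnt/pnfs

# verify the negotiated version (look for vers=4.1 in the output)
mount | grep /mnt/pnfs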


pNFS eliminates the performance bottleneck of traditional NAS

The Future for pNFS

It is anticipated that pNFS will begin to be widely deployed in standard Linux distributions by 2012. The HPC market will be the first market to adopt the pNFS standard as it provides substantial performance benefits, especially for the technical compute market. However, simply supporting pNFS will not guarantee the high performance that Panasas currently delivers on its ActiveStor storage systems with DirectFlow. When it comes to matching the pNFS protocol with the back-end storage architecture and delivering the most performance, the Object layout in pNFS has many advantages, and Panasas will be ideally situated to continue to deliver the highest parallel performance in the industry.

Posted 8th February 2012 by vipulvajpayee


8th February 2012

MULTI PROTOCOL FILE SYSTEM

MPFS: In computing MPFS - Multi Protocol File System is a multi-path network filesystem

[http://en.wikipedia.org/wiki/NAS] technology developed by EMC. MPFS is intended to allow hundreds to

thousands of client computer nodes to access shared computer data with higher performance than

conventional NAS [http://en.wikipedia.org/wiki/NAS] file sharing protocols such as NFS

[http://en.wikipedia.org/wiki/Network_File_System_%28protocol%29] .

Application

MPFS technology is intended for HPC (High Performance Computing) environments. In these

environments many computer nodes require concurrent access to data sets. This technology can be

used to store and access data for grid computing [http://en.wikipedia.org/wiki/Grid_computing] , where the

individual computing power of many systems is combined to perform a single process. Example uses

include processing geological data, voice recognition datasets, and modal processing. Virtualized

computing environments will also benefit from high performance shared storage.

Benefits:

MPFS provides a 3-4X performance increase over conventional NAS.
No modifications to the client application need to be made to leverage this technology.
Files can be shared between NAS clients with or without NFS simultaneously.
NAS data can be accessed at speeds limited only by the storage device.
Reduced file system and protocol overhead.

How it works

MPFS consists of an agent on the client system and a compatible NAS storage system. The client agent splits the data and metadata for the file being requested. This is done using FMP (File Mapping Protocol): requests for the data and its location are sent over conventional NFS to the NAS system, while the data itself is sent and retrieved directly from the storage device via iSCSI [http://en.wikipedia.org/wiki/ISCSI] or Fibre Channel [http://en.wikipedia.org/wiki/Fibre_Channel]. Retrieving the data directly from the storage device increases performance by eliminating the file system and protocol overhead associated with NFS or CIFS.

Posted 8th February 2012 by vipulvajpayee


4th February 2012

Deduplication

Deduplication is a technology to control the rate of data growth. The average UNIX or Windows disk volume has thousands or even millions of duplicate data objects, which consume lots of valuable space. By eliminating redundant data objects and referencing just the original object, a huge benefit is obtained through storage space efficiencies.

A question frequently asked by customers is how much deduplicated data is supported by their system.

MAXIMUM FLEXIBLE VOLUME SIZE

The maximum flexible volume size limitation for deduplication varies based on the platform (this

number depends primarily on the amount of system memory). When this limit is reached, writes to the

volume fail just as they would with any other volume after it is full. This could be important to consider

if the flexible volumes are ever moved to a different platform with a smaller maximum flexible volume

size. Table 5 shows the maximum usable flexible volume size limits (including any snap reserve space)

for the different NetApp storage system platforms. For versions of Data ONTAP prior to 7.3.1, if a

volume ever gets larger than this limit and is later shrunk to a smaller size, deduplication cannot be

enabled on that volume


The maximum shared data limit per volume for deduplication is 16TB, regardless of the platform type.

Once this limit is reached, there is no more deduplication of data in the volume, but writes to the

volume continue to work successfully until the volume is completely full.

Table 6 shows the maximum total data limit per deduplicated volume for each platform. This is the

maximum amount of data that can be stored in a deduplicated volume. This limit is equal to the

maximum volume size plus the maximum shared data limit. For example, in an R200 system that can

have a deduplicated volume of up to 4TB in size, 20TB of data can be stored; that is 4TB + 16TB = 20

TB.

The next important question asked by customers is how much actual space saving they will get by running deduplication.

There is a tool called SSET (Space Savings Estimation Tool); it can calculate and report the actual space saving for a data set, but it works on at most 2TB of data. You can download this tool from now.netapp.com.

1. Volume deduplication overhead - for each volume with deduplication enabled, up to 2% of the

logical amount of data written to that volume will be required in order to store volume dedupe

metadata.


2. Aggregate deduplication overhead - for each aggregate that contains any volumes with dedupe

enabled, up to 4% of the logical amount of data contained in all of those volumes with dedupe enabled

will be required in order to store the aggregate dedupe metadata. For example, if 100GB of data is to

be deduplicated within a single volume, then there should be 2GB worth of available space within the

volume and 4GB of space available within the aggregate. As a second example, consider a 2TB

aggregate with 4 volumes each 400GB’s in size within the aggregate where three volumes are to be

deduplicated, with 100GB of data, 200GB of data and 300GB of data respectively. The volumes will

need 2GB, 4GB, and 6GB of space within the respective volumes; and, the aggregate will need a total

of 24GB ((4% of 100GB) + (4% of 200GB) + (4%of 300GB) = 4+8+12 = 24GB) of space available

within the aggregate. Note: The amount of space required for deduplication metadata is dependent

on the amount of data being deduplicated within the volumes, and not the size of the volumes or the

aggregate.

For deduplication to provide the most benefit when used in conjunction with Snapshot

copies, consider the following best practices:

1. Run deduplication before creating new Snapshot copies.

2. Remove unnecessary Snapshot copies maintained in deduplicated volumes.

3. If possible, reduce the retention time of Snapshot copies maintained in deduplicated volumes.

4. Schedule deduplication only after significant new data has been written to the volume.

5. Configure appropriate reserve space for the Snapshot copies.

6. If the space used by Snapshot copies grows to more than 100%, it causes df -s to report incorrect results, because some space from the active file system is being taken away by Snapshot, and therefore actual savings from deduplication aren't reported.

Some Facts About Deduplication.

Deduplication consumes system resources and can alter the data layout on disk. Due to the

application’s I/O pattern and the effect of deduplication on the data layout, the read and write I/O

performance can vary. The space savings and the performance impact depend on the application and

the data contents.

NetApp recommends that the performance impact due to deduplication be carefully considered and

measured in a test setup and taken into sizing considerations before deploying deduplication in

performance-sensitive solutions. For information about the impact of deduplication on other

applications, contact the specialists at NetApp for their advice and test results of your particular

application with deduplication.

If there is a small amount of new data, run deduplication infrequently, because there’s no benefit in

running it frequently in such a case, and it consumes system resources. How often you run it depends

on the rate of change of the data in the flexible volume.

The more concurrent deduplication processes you’re running, the more system resources are

consumed.

Given the previous two items, the best option is to do one of the following:

- Use the auto mode so that deduplication runs only when significant additional data has been written

to each flexible volume. (This tends to naturally spread out when deduplication runs.)

- Stagger the deduplication schedule for the flexible volumes so that it runs on alternative days,

reducing the possibility of running too many concurrent sessions.

- Run deduplication manually.

If Snapshot copies are required, run deduplication before creating them to minimize the amount of

data before the data gets locked in to the copies. (Make sure that deduplication has completed before

creating the copy.) If a Snapshot copy is created on a flexible volume before deduplication has a

chance to complete on that flexible volume, this could result in lower space savings.

For deduplication to run properly, you need to leave some free space for the deduplication

metadata.
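As a minimal sketch of the 7-Mode commands involved (the volume name and schedule are hypothetical, and the "#" annotations are explanatory comments rather than CLI input; check the Data ONTAP documentation for your release):

filer> sis on /vol/vol_data                    # enable deduplication on the volume
filer> sis config -s sun-sat@23 /vol/vol_data  # optional: schedule a nightly run at 23:00
filer> sis start -s /vol/vol_data              # scan and deduplicate the existing data
filer> sis status /vol/vol_data                # monitor progress
filer> df -s /vol/vol_data                     # report the space saved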

IMPACT ON THE SYSTEM DURING THE DEDUPLICATION PROCESS

The deduplication operation runs as a low-priority background process on the system. However, it can

still affect the performance of user I/O and other applications running on the system. The number of

deduplication processes that are running and the phase that each process is running in can cause

performance impacts to other applications running on the system (up to eight deduplication processes can actively run at any time on a system). Here are some observations about running deduplication on a FAS3050 system: With eight deduplication processes running, and no other processes running,

deduplication uses 15% of the CPU in its least invasive phase, and nearly all of the available CPU in

its most invasive phase. When one deduplication process is running, there is a 0% to 15%

performance degradation on other applications. With eight deduplication processes running, there

may be a 15% to more than 50% performance penalty on other applications running on the system.


Posted 4th February 2012 by vipulvajpayee


23rd January 2012

Initial configuration of NetApp filer

Hello friends, as I told you, here is the separate blog on how to install and configure a new NetApp storage system.

Once you power on the storage system it will start the boot process and prompt you to press Ctrl-C for the maintenance mode; don't press it. The system will continue booting, and during booting it will ask for the initial configuration data that you need to enter. I will discuss all of these items below.

Steps

1. Please enter the new hostname .

You can name this host whatever you wish (for example, host1).

2. Do you want to configure interface groups?

You can type either y or n at this prompt.

If you type “yes” Then you are Prompted to enter additional configuration information for each of the

interface groups.

These prompts are:

• Number of interface groups to configure.

• Name of interface group..


• Is interface_group_name a single [s], multi [m] or a lacp

[l] interface group?

• Number of links for interface_group_name.

• Name of link for interface_group_name.

If you have additional links, you should also enter their names here.

• IP address for interface_group_name.

• Netmask for interface_group_name.

• Should interface group interface_group_name take over a

partner interface group during failover?

• Media type for interface_group_name.

If you typed "no", you are directed to the next prompt.

3. Please enter the IP address for Network Interface e0a

Enter the correct IP address for the network interface that connects the storage system to your

network (for example, 192.168.1.1).

4. Please enter the netmask for Network Interface e0a.

After entering the IP address, you need to enter the netmask for your network (for example,

255.255.255.0).

5. Should interface e0a take over a partner IP address during failover?

Type either “y” or “n” at this prompt.

If you type "yes", you are prompted to enter the address or interface name to be taken over by e0a.

Note: If you type y, you must already have purchased a license for controller failover to enable this function.

If you type "no", you are directed to the next prompt.

6. Please enter media type for e0a (100tx-fd, tp-fd, 100tx, tp, auto

(10/100/1000))

Enter the media type that this interface should use.

7. Please enter flow control for e0a {none, receive, send, full} [full]

Enter the flow control option that this interface should use.

Setting up your storage system for using native disk shelves

8. Do you want e0a to support jumbo frames? [n]

Specify whether you want this interface to support jumbo frames.

9. Continue to enter network parameter values for each network interface when prompted.

10. Would you like to continue setup through the Web interface?

If you enter “yes”

Continue setup with the Setup Wizard in a Web browser.

If you type “no”

Continue to use the command-line interface.

Proceed to the next step.

11. Please enter the name or IP address of the default gateway.

Enter the primary gateway that is used to route outbound network traffic.

13. Please enter the name or IP address for administrative host.

The administration host is given root access to the storage system's /etc files for system

administration.

To allow /etc root access to all NFS clients enter RETURN below.

Attention: If you change the name or IP address of an administration host on a storage system

that has already been set up and configured, the /etc/exports files will be overwritten on system

reboot.

14. Please enter the IP address for (name of admin host).

Enter the IP address of the administration host you specified earlier (for example, 192.175.4.1).

Note: The name listed here is the name of the host entered in the previous step.

15. Please enter timezone

GMT is the default setting. Select a valid value for your time zone and enter it here.

16. Where is the filer located?

This is the actual physical location where the storage system resides (for example, Bldg. 4, Floor

2, Room 216) .


17. What language will be used for multiprotocol files?

Enter the language.

18. Enter the root directory for HTTP files

This is the root directory for the files that the storage system will serve through HTTP or HTTPS.

19. Do you want to run DNS resolver?

If you type y at this prompt, you need the DNS domain name and associated IP address.

20. Do you want to run NIS client?

If you type y at this prompt, you will be prompted to enter the name of the NIS domain and the

NIS servers.

When you have finished with the NIS prompts, you see an advisory message regarding

AutoSupport and you are prompted to continue.

21. Would you like to configure the BMC LAN interface ?

If you have a BMC installed in your system and you want to use it, type y at the prompt and enter

the BMC values you collected.

22. Would you like to configure the RLM LAN interface ?

If you have an RLM installed in your system and you want to use it, type y at the prompt and

enter the RLM values you collected.

23. Do you want to configure the Shelf Alternate Control Path Management

interface for SAS shelves ?

If you are planning to attach DS4243 disk shelves to your system, type y at the prompt and enter

the ACP values you collected.

24. Setting the administrative (root) password for new_system_name...

New password:

Retype new password:

Enter the new root password.

25. When setup is complete, to transfer the information you have entered to the storage system, enter the following command, as directed by the prompt on the screen:

reboot

Attention: If you do not enter reboot, the information you entered does not take effect and is lost.
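After the reboot, the values entered during setup are written to the filer's configuration files. As a minimal sketch (the hostname and addresses are hypothetical, and the exact contents depend on the options you chose), the resulting /etc/rc on a 7-Mode system looks roughly like this:

hostname host1
ifconfig e0a 192.168.1.1 netmask 255.255.255.0 mediatype auto flowcontrol full
route add default 192.168.1.254 1
routed on
options dns.domainname example.com
options dns.enable on
savecore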

Posted 23rd January 2012 by vipulvajpayee


23rd January 2012

Tech refresh from FAS2020 to FAS2040

Hello friends, today I want to write about a project I got: a NetApp tech refresh from a FAS2020 to a FAS2040 with one DS4243 disk shelf, along with the mistakes I made and the things I learned from them. I want to share it so you can all learn from those mistakes.

The task: I needed to replace the FAS2020, fully populated with its internal disks, with a dual-controller FAS2040, and then add one new half-populated DS4243 disk shelf. I wanted to move all the internal disks from the FAS2020 (single controller) to the FAS2040 (dual controller), then move all the volumes from one controller to the other; there were three volumes with LUNs that I needed to remap.

There was one aggregate named aggr0 with 4 volumes, 3 of which contained LUNs, and as I said all the volumes were 97% full. I mention that the volumes were 97% full because I really faced a lot of problems while migrating them from one filer to another.

I was told to give all the FAS2020 internal disks to one FAS2040 controller and the new disk shelf's disks to the other controller.

I hope you all understood the task (friends, my English is not so good, so I am really sorry if any of you are not able to understand what I wrote).

Solution: I will explain everything phase by phase.

Phase 1: First I thought of upgrading the FAS2020 to a Data ONTAP version equivalent to the FAS2040's, so I upgraded ONTAP from 7.3.3 to 7.3.6. After upgrading I checked that everything was fine and then halted the system.

Phase 2: I unplugged the cabling of the FAS2020, unmounted the FAS2020 from the rack, then mounted the new FAS2040 and the DS4243 disk shelf. I removed all the disks from the FAS2020 and inserted them into the FAS2040.

Then I cabled the FAS2040 and the disk shelf: first power on the disk shelf, set its shelf ID, and reboot it.

Then I booted the FAS2040 into maintenance mode and assigned all the FAS2020 internal disks to one of the FAS2040 controllers. I hope you know how to assign disks to a new controller; in any case the command (from maintenance mode) is:

disk reassign -s <old_system_id> -d <new_system_id>

So now all the old FAS2020 disks were assigned to the new FAS2040. I then halted the system to leave maintenance mode and booted it again, and all the configuration came across as it was: all the volumes and aggregates were detected by the new system, and the hostname also came across unchanged.

Then we attached the new disk shelf to the FAS2040, assigned all the new disks to the other controller, and configured the new controller as in a normal initial configuration. I hope you all know how to do the initial configuration; I don't want to repeat it here, as I will write a new blog on how to do the initial configuration of a new filer and will mention each and every option.

So, coming back: I did the initial configuration of the new filer and installed the licenses.
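As a minimal sketch of the maintenance-mode reassignment and the checks afterwards (the system IDs are placeholders, the "#" annotations are comments rather than CLI input, and the exact syntax should be confirmed for your ONTAP release):

*> disk show -v                               # note the old system ID that owns the disks
*> disk reassign -s 0151234567 -d 0159876543  # hand the disks to the new controller
*> halt                                       # leave maintenance mode and boot normally

filer> aggr status                            # aggregates from the old filer should be online
filer> vol status                             # volumes should be present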

Phase 3: Data migration phase.

I created all 4 volumes of the same size on the new filer and created the SnapMirror relationships between the two filers, because we were doing the data migration with SnapMirror.

But SnapMirror was not working; it was giving the error "The snapmirror is misconfigured or the source volume may be busy". We were not able to understand this error, so what we did was change "options snapmirror.access legacy" to "options snapmirror.access host=<IP address>": the source filer's IP address on the destination filer, and the destination filer's IP address on the source filer.
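As a minimal sketch of that setup in 7-Mode (the filer names, volume names, and IP address are hypothetical; the access option is normally only required on the source filer):

source> options snapmirror.access host=192.168.1.20    # allow the destination filer
dest>   vol restrict dst_vol                            # the destination volume must be restricted
dest>   snapmirror initialize -S src_filer:src_vol dst_filer:dst_vol
dest>   snapmirror status                               # watch the transfer progress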


After doing this we ran SnapMirror again, and the transfer of one volume, which was 2 GB, completed in no time.

Then we tried the SnapMirror of the 2nd volume, which was 343 GB, and again we got the same error we had been getting before.

Here we struggled a lot to figure out the problem, but we found it later and resolved it. As I mentioned before, the volumes were 97% full, and there was almost no space left in the aggregate either. Because of that, the volume was not able to take its initial snapshot (that may have been the problem).

So what we did was execute one command for that volume:

vol options /vol/vol_name fractional_reserve 0

After running that command for the volume, we again tried to manually take a snapshot of the volume, and the snapshot got created. So we created the SnapMirror relationship again, and this time the transfer started. We did the same thing for the other volume, the SnapMirror of the 3rd volume started as well, and so all 4 volumes got transferred from one filer to the other.

Phase 4: Mapping the transferred volumes and LUNs.

As I told you, there were LUNs in three of the volumes, so we needed to map those LUNs back. These LUNs were given to VMware, i.e. they were datastores in VMware.

We broke the SnapMirror relationships, took the source volumes offline, and brought the destination volumes online. Then we created the igroup and mapped the LUNs to it, as sketched below.
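A minimal sketch of that remapping in 7-Mode (the igroup name, WWPNs, LUN path, and LUN ID are hypothetical; the "#" annotation is a comment, not CLI input):

dest> snapmirror break dst_vol
dest> igroup create -f -t vmware esx_igroup 21:00:00:1b:32:aa:bb:cc 21:00:00:1b:32:dd:ee:ff
dest> lun map /vol/dst_vol/lun0 esx_igroup 0
dest> lun show -m                             # verify the mapping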

But when we tried to rescan for those disks from the VMware side, we were not able to find them (again a new problem arose).

Then we checked the FCP connections with the command "fcadmin config"; it showed that both ports were down, so we checked whether the FC cabling was done properly and tried to bring the ports up manually.

That gave an error to start the FCP service, so we tried to start the FCP service, but the service would not start. We then checked the licenses on both filers; they were different, so we made the licenses the same on both filers (cluster was enabled).

We tried to start the service again and then got the error that FCP is misconfigured and the fcp cfmode is misconfigured on its partner.

Then we checked "fcp show cfmode" and found that one filer was in "single_image" mode and the other in "standby" mode, so we went to the advanced mode and changed the setting by typing "fcp set cfmode single_image" on the filer that was set to "standby" mode.

After setting that, we started the FCP service again, it started, the ports came online, and the LUNs became visible to VMware.

Thanks, the problem got resolved and the project was completed successfully.

Posted 23rd January 2012 by vipulvajpayee


17th January 2012

Procedure to download & upgrade the NetApp disk firmware

Please find below the procedure to download the latest disk firmware from the NetApp NOW site:

1. Log in to the NetApp NOW site, "now.netapp.com".

2. Go to the Download tab.

3. Under the Download tab you will find the Firmware tab.

4. Under the Firmware tab you will find the Disk Drive & Firmware Matrix tab; click on that tab.

Procedure to upgrade the Disk firmware.

Note 1: Schedule the disk firmware update during times of minimal usage of the filer as this

activity is intrusive to the normal processing of disk I/O.

Note 2: Since updating the disk firmware involves spinning the drive down and back up

again, it is possible that a volume can become offline if upgrading 2 or more of the same

drive types in the same RAID group. This is because when you upgrade disk firmware, you

are upgrading the disk firmware of all the same drive types in parallel. When encountering

a situation like this, it is best to schedule a maintenance window to upgrade the disk

firmware for all disks.

Option #1 - Upgrade the Disk Firmware in the Background

1. Check or set the raid.background_disk_fw_update.enable option to on.

To check: options raid.background_disk_fw_update.enable

To set: options raid.background_disk_fw_update.enable on

2. Place the new disk firmware File into the /etc/disk_fw directory

The system will recognize the new available version and will non-disruptively upgrade all the disks

requiring a firmware update to that new version in the background.

Option #2 – Upgrade the disk firmware during a system reboot

1. Check or set the raid.background_disk_fw_update.enable option to off.

To check: options raid.background_disk_fw_update.enable

To set: options raid.background_disk_fw_update.enable off


2. Place the new disk firmware File into the /etc/disk_fw directory

Schedule a time to perform a system reboot. Note that the reboot will take longer as the reboot

process will be suspended while the disk firmware is upgraded. Once that is complete the

system will continue booting into Data ONTAP.

Option #3 – Upgrade the disk firmware manually

1. Check or set the raid.background_disk_fw_update.enable option to off.

To check: options raid.background_disk_fw_update.enable

To set: options raid.background_disk_fw_update.enable off

2. Place the new disk firmware file into the /etc/disk_fw directory

3. Issue the disk_fw_update command

The above are the three procedures for upgrading the disk firmware.
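As a minimal sketch of Option #3 on a 7-Mode filer (copy the downloaded firmware file into /etc/disk_fw on the root volume first, for example over the NFS/CIFS admin share; the "#" annotations are comments, not CLI input):

filer> options raid.background_disk_fw_update.enable off
filer> disk_fw_update                 # updates all disks that need the new firmware
filer> sysconfig -a                   # verify the disk firmware revisions afterwards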

Posted 17th January 2012 by vipulvajpayee


17th January 2012

Manually Failover activity in NetApp Metro cluster Environment

Today I want to write about manually performing the takeover and giveback activity in a NetApp MetroCluster environment.

In a MetroCluster environment, a site takeover does not work just by issuing the cf takeover command.

Takeover process:

We need to manually fail the ISL links.

We need to give the "cf forcetakeover -d" command.

Giveback process.

aggr status –r : Validate that you can access the remote storage. If remote shelves don’t show up,

check connectivity

partner : Go into partner mode on the surviving node.

aggr status –r: Determine which aggregates are at the surviving site and which aggregates are at

the disaster site by entering the command at the left.

Aggregates at the disaster site show plexes that are in a failed state with an out-of-date status.

Aggregates at the Surviving site show the plex online.

Note : If aggregates at the disaster site are online, take them offline by entering the following

command for each online aggregate: aggr offline disaster_aggr (disaster_aggr is the name of the

aggregate at the disaster site).

Note :( An error message appears if the aggregate is already offline.)

Recreate the mirrored aggregates by entering the following command for each aggregate that was

split: “aggr mirror aggr_name -v disaster_aggr” (aggr_name is the aggregate on the surviving

site’s node.disaster_aggr is the aggregate on the disaster site’s node. The aggr_name aggregate

rejoins the disaster_aggr aggregate to reestablish the MetroCluster configuration. Caution: Make sure

that resynchronization is complete on each aggregate before attempting the following step).

Partner (Return to the command prompt of the remote node).

Cf giveback (The node at the disaster site reboots).

Step by step Procedure

Description

To test Disaster Recovery, you must restrict access to the disaster site node to prevent the node from resuming service. If you do not, you risk the possibility of data corruption.

Procedure

Access to the disaster site node can be restricted in the following ways:

Turn off the power to the disaster site node, or use "manual fencing" (disconnect the VI interconnects and Fibre Channel cables).

However, both of these solutions require physical access to the disaster site node, which is not always possible (or practical) for testing purposes. Proceed with the steps below for "fencing" the fabric MetroCluster without power loss, to test Disaster Recovery without physical access.

Note: Site A is the takeover site. Site B is the disaster site.

Takeover procedure

1. Stop the ISL connections between sites.

Connect to both fabric MetroCluster switches at site A and block all ISL ports. First retrieve the ISL port number:

SITEA02:admin> switchshow
switchName:     SITEA02
switchType:     34.0
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   2
switchId:       fffc02
switchWwn:      10:00:00:05:1e:05:ca:b1
zoning:         OFF
switchBeacon:   OFF

Area Port Media Speed State     Proto
=====================================
  0    0   id    N4   Online    F-Port 21:00:00:1b:32:1f:ff:66
  1    1   id    N4   Online    F-Port 50:0a:09:82:00:01:d7:40
  2    2   id    N4   Online    F-Port 50:0a:09:80:00:01:d7:40
  3    3   id    N4   No_Light
  4    4   id    N4   No_Light
  5    5   id    N2   Online    L-Port 28 public
  6    6   id    N2   Online    L-Port 28 public
  7    7   id    N2   Online    L-Port 28 public
  8    8   id    N4   Online    LE E-Port 10:00:00:05:1e:05:d0:39 "SITEB02" (downstream)
  9    9   id    N4   No_Light
 10   10   id    N4   No_Light
 11   11   id    N4   No_Light
 12   12   id    N4   No_Light
 13   13   id    N2   Online    L-Port 28 public
 14   14   id    N2   Online    L-Port 28 public
 15   15   id    N4   No_Light

Check fabric before blocking the ISL port.

SITEA02:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-------------------------------------------------------------------------
1: fffc01   10:00:00:05:1e:05:d0:39   44.55.104.20    0.0.0.0      "SITEB02"
2: fffc02   10:00:00:05:1e:05:ca:b1   44.55.104.10    0.0.0.0     >"SITEA02"

The Fabric has 2 switches

Disable the ISL port.

SITEA02:admin> portdisable 8

Check the split of the fabric.

SITEA02:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-----------------------------------------------------------------------
2: fffc02   10:00:00:05:1e:05:ca:b1   44.55.104.10    0.0.0.0     >"SITEA02"
            10:00:00:05:1e:05:d2:90   44.55.104.11    0.0.0.0     >"SITEA03"

Do the same thing on the second switch.

SITEA03:admin> switchshow
switchName:     SITEA03
switchType:     34.0
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   4
switchId:       fffc04
switchWwn:      10:00:00:05:1e:05:d2:90
zoning:         OFF
switchBeacon:   OFF

Area Port Media Speed State     Proto
=====================================
  0    0   id    N4   Online    F-Port 21:01:00:1b:32:3f:ff:66
  1    1   id    N4   Online    F-Port 50:0a:09:83:00:01:d7:40
  2    2   id    N4   Online    F-Port 50:0a:09:81:00:01:d7:40
  3    3   id    N4   No_Light
  4    4   id    N4   No_Light
  5    5   id    N2   Online    L-Port 28 public
  6    6   id    N2   Online    L-Port 28 public
  7    7   id    N2   Online    L-Port 28 public
  8    8   id    N4   Online    LE E-Port 10:00:00:05:1e:05:d1:c3 "SITEB03" (downstream)
  9    9   id    N4   No_Light
 10   10   id    N4   No_Light
 11   11   id    N4   No_Light
 12   12   id    N4   No_Light
 13   13   id    N2   Online    L-Port 28 public
 14   14   id    N2   Online    L-Port 28 public
 15   15   id    N4   No_Light

SITEA03:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-----------------------------------------------------------------------
3: fffc03   10:00:00:05:1e:05:d1:c3   44.55.104.21    0.0.0.0      "SITEB03"
4: fffc04   10:00:00:05:1e:05:d2:90   44.55.104.11    0.0.0.0     >"SITEA03"

The Fabric has 2 switches

SITEA03:admin> portdisable 8
SITEA03:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-----------------------------------------------------------------------
4: fffc04   10:00:00:05:1e:05:d2:90   44.55.104.11    0.0.0.0     >"SITEA03"

Check the NetApp controller console for disks missing.

Tue Feb 5 16:21:37 CET [NetAppSiteA: raid.config.spare.disk.missing:info]: Spare Disk SITEB03:6.23 Shelf 1 Bay 7 [NETAPP X276_FAL9E288F10 NA02] S/N [DH07P7803V7L] is missing.

2. Check that all aggregates are split.

NetAppSiteA> aggr status -r
Aggregate aggr0 (online, raid_dp, mirror degraded) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device        HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)    Phys (MB/blks)
      --------- ------        -------------------------------   ----------------  ----------------
      dparity   SITEA03:5.16  0b   1    0  FC:B  0   FCAL 10000 272000/557056000  280104/573653840
      parity    SITEA02:5.32  0c   2    0  FC:A  0   FCAL 10000 272000/557056000  280104/573653840
      data      SITEA03:6.16  0d   1    0  FC:B  0   FCAL 10000 272000/557056000  280104/573653840

  Plex /aggr0/plex1 (offline, failed, inactive, pool1)
    RAID group /aggr0/plex1/rg0 (partial)

      RAID Disk Device        HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)    Phys (MB/blks)
      --------- ------        -------------------------------   ----------------  ----------------
      dparity   FAILED                                          N/A               272000/557056000
      parity    FAILED                                          N/A               272000/557056000
      data      FAILED                                          N/A               272000/557056000
      Raid group is missing 3 disks.

NetAppSiteB> aggr status -r
Aggregate aggr0 (online, raid_dp, mirror degraded) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device         HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)    Phys (MB/blks)
      --------- ------         -------------------------------   ----------------  ----------------
      dparity   SITEB03:13.17  0d   1    1  FC:B  0   FCAL 10000 272000/557056000  280104/573653840
      parity    SITEB03:13.32  0b   2    0  FC:B  0   FCAL 10000 272000/557056000  280104/573653840
      data      SITEB02:14.16  0a   1    0  FC:A  0   FCAL 10000 272000/557056000  280104/573653840

  Plex /aggr0/plex1 (offline, failed, inactive, pool1)
    RAID group /aggr0/plex1/rg0 (partial)

      RAID Disk Device        HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)    Phys (MB/blks)
      --------- ------        -------------------------------   ----------------  ----------------
      dparity   FAILED                                          N/A               72000/557056000
      parity    FAILED                                          N/A               72000/557056000
      data      FAILED                                          N/A               72000/557056000
      Raid group is missing 3 disks.

3. Connect to the Remote LAN Management (RLM) console on site B. Stop and power off the NetApp controller.

NetAppSiteB> halt

Boot Loader version 1.2.3
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002-2006 NetApp Inc.

CPU Type: Dual Core AMD Opteron(tm) Processor 265
LOADER>

Power off the NetApp controller.

LOADER> Ctrl-d
RLM NetAppSiteB> system power off
This will cause a dirty shutdown of your appliance. Continue? [y/n]

RLM NetAppSiteB> system power status
Power supply 1 status:
   Present: yes
   Turned on by Agent: no
   Output power: no
   Input power: yes
   Fault: no
Power supply 2 status:
   Present: yes
   Turned on by Agent: no
   Output power: no
   Input power: yes
   Fault: no

4. Now you can test Disaster Recovery.

NetAppSiteA> cf forcetakeover -d
----
NetAppSiteA(takeover)>

NetAppSiteA(takeover)> aggr status -v
           Aggr State      Status            Options
          aggr0 online     raid_dp, aggr     root, diskroot, nosnap=off,
                           mirror degraded   raidtype=raid_dp, raidsize=16,
                                             ignore_inconsistent=off,
                                             snapmirrored=off, resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

                Plex /aggr0/plex1: offline, failed, inactive

NetAppSiteB/NetAppSiteA> aggr status -v
           Aggr State      Status            Options
          aggr0 online     raid_dp, aggr     root, diskroot, nosnap=off,
                                             raidtype=raid_dp, raidsize=16,
                                             ignore_inconsistent=off,
                                             snapmirrored=off, resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex1: online, normal, active
                    RAID group /aggr0/plex1/rg0: normal

Giveback procedure

5. After testing Disaster Recovery, unblock all ISL ports.

SITEA03:admin> portenable 8

Wait awhile (Fabric initialization)

SITEA03:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-----------------------------------------------------------------------
3: fffc03   10:00:00:05:1e:05:d1:c3   44.55.104.21    0.0.0.0      "SITEB03"
4: fffc04   10:00:00:05:1e:05:d2:90   44.55.104.11    0.0.0.0     >"SITEA03"

The Fabric has 2 switches

SITEA02:admin> portenable 8

Wait awhile (Fabric initialization)

SITEA02:admin> fabricshow
Switch ID   Worldwide Name            Enet IP Addr    FC IP Addr   Name
-------------------------------------------------------------------------
1: fffc01   10:00:00:05:1e:05:d0:39   44.55.104.20    0.0.0.0      "SITEB02"
2: fffc02   10:00:00:05:1e:05:ca:b1   44.55.104.10    0.0.0.0     >"SITEA02"

The Fabric has 2 switches

6. Synchronize all aggregates.

NetAppSiteB/NetAppSiteA> aggr status -v
           Aggr State      Status            Options
       aggr0(1) failed     raid_dp, aggr     diskroot, raidtype=raid_dp,
                           out-of-date       raidsize=16, resyncsnaptime=60,
                                             lost_write_protect=off
                Volumes:

                Plex /aggr0(1)/plex0: offline, normal, out-of-date
                    RAID group /aggr0(1)/plex0/rg0: normal

                Plex /aggr0(1)/plex1: offline, failed, out-of-date

          aggr0 online     raid_dp, aggr     root, diskroot, nosnap=off,
                                             raidtype=raid_dp, raidsize=16,
                                             ignore_inconsistent=off,
                                             snapmirrored=off, resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex1: online, normal, active
                    RAID group /aggr0/plex1/rg0: normal

Launch aggregate mirror for each one.

NetAppSiteB/NetAppSiteA> aggr mirror aggr0 -v aggr0(1)

Wait awhile for all aggregates to synchronize.

[NetAppSiteB/NetAppSiteA: raid.mirror.resync.done:notice]: /aggr0: resynchronization completed in 0:03.36

NetAppSiteB/NetAppSiteA> aggr mirror aggr0 -v aggr0(1)
           Aggr State      Status            Options
          aggr0 online     raid_dp, aggr     root, diskroot, nosnap=off,
                           mirrored          raidtype=raid_dp, raidsize=16,
                                             ignore_inconsistent=off,
                                             snapmirrored=off, resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on
                Volumes: vol0

                Plex /aggr0/plex1: online, normal, active
                    RAID group /aggr0/plex1/rg0: normal

                Plex /aggr0/plex3: online, normal, active
                    RAID group /aggr0/plex3/rg0: normal

7. After re-synchronization is done, power on and boot the NetApp controller on site B.

RLM NetAppSiteB> system power on
RLM NetAppSiteB> system console
Type Ctrl-D to exit.

Boot Loader version 1.2.3
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002-2006 NetApp Inc.

NetApp Release 7.2.3: Sat Oct 20 17:27:02 PDT 2007
Copyright (c) 1992-2007 NetApp, Inc.
Starting boot on Tue Feb 5 15:37:40 GMT 2008
Tue Feb 5 15:38:31 GMT [ses.giveback.wait:info]: Enclosure Services will be unavailable while waiting for giveback.
Press Ctrl-C for Maintenance menu to release disks.
Waiting for giveback

8. On site A, execute cf giveback

NetAppSiteA(takeover)> cf status
NetAppSiteA has taken over NetAppSiteB.
NetAppSiteB is ready for giveback.

NetAppSiteA(takeover)> cf giveback
please make sure you have rejoined your aggr before giveback.
Do you wish to continue [y/n] ?? y

NetAppSiteA> cf status
Tue Feb 5 16:41:00 CET [NetAppSiteA: monitor.globalStatus.ok:info]: The system's global status is normal.
Cluster enabled, NetAppSiteB is up.

Posted 17th January 2012 by vipulvajpayee


12th January 2012

Some Netapp terms.. [http://unixfoo.blogspot.com/2008/11/some-netapp-terms.html]

Snapshot - is a technology that helps to create point-in-time copies of file systems, which you can use to protect data, from a single-file backup to a complete disaster recovery. A Snapshot copy can be taken on an up-and-running file system and completes in less than a second, regardless of volume size or level of activity on the NetApp storage.

Snapmirror - is data replication software provided by NetApp. It mirrors and replicates data across a global network at high speed. SnapMirror can be synchronous or asynchronous and can be performed at the qtree or volume level.

Snapvault - is a heterogeneous disk-to-disk backup solution for NetApp filers and systems with other operating systems (Solaris, HP-UX, AIX, Windows, and Linux). In the event of data loss or corruption on a filer, backed-up data can be restored from the SnapVault secondary storage system with less downtime and less of the uncertainty associated with conventional tape backup and restore operations.

Flexclone - is a technology based on NetApp Snapshot. It creates true clones of volumes, instantly replicating the data, without requiring additional storage space. FlexClone works on data volumes and on FCP and iSCSI LUNs.

Flexcache - is a caching layer that can store cached copies of FlexVol volumes across data centers. FlexCache automatically replicates and serves hot data sets anywhere in your infrastructure using local caching volumes.

Deduplication - is a technology to control the rate of data growth. The average UNIX or Windows disk volume contains thousands or even millions of duplicate data objects. As data is created, distributed, backed up, and archived, duplicate data objects are stored unabated across all storage tiers. The end result is inefficient utilization of data storage resources. By eliminating redundant data objects and referencing just the original object, a huge benefit is obtained through storage space efficiencies, which results in a cost benefit and a management benefit.

Autosupport - the NetApp OS (Data ONTAP) is capable of sending automated notifications to NetApp Customer Support and/or to other designated addressees in certain situations. The notification is called an autosupport, and it contains useful information that helps them recognize and solve problems quickly and proactively.
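
The relevant 7-mode options look like this (a sketch; the mail host and recipient address are placeholders):

filer> options autosupport.enable on
filer> options autosupport.mailhost smtp.example.com
filer> options autosupport.to storage-team@example.com
filer> options autosupport.doit test_message     (trigger a test autosupport on demand)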


Operations Manager - the web-based UI of DataFabric Manager. It is used for day-to-day monitoring, alerting, and reporting on the storage and NetApp infrastructure.

Posted 12th January 2012 by vipulvajpayee


12th January 2012

Important Applications of Isilon Storage.

Hello Friends, today I want to share some knowledge about Isilon storage. While going through some study material on Isilon storage, I found some of its applications quite interesting.

Isilon is a storage company; it was recently acquired by EMC, a well-known name in the storage world. The Isilon storage box runs on the "OneFS" operating system.

The major applications of OneFS are given below.

1. SmartPools

2. InsightIQ

3. SmartConnect

4. SnapshotIQ

5. SyncIQ

6. SmartQuotas

7. SmartLock

I will explain each application so that you can all understand them clearly.

SmartPools: enables storage administrators to aggregate and consolidate applications, providing workflow isolation, higher utilization, and independent scalability, with a single point of management.

Features

• Align data automatically with proper storage tier

• Manage data through a single point

• Set file-level granular policies

Key Benefits

• Easily deploy a single file system to span multiple tiers of performance and capacity

• Automatically adapt storage to business data and application workflows over time


• Align the business value of data with optimal storage performance and cost.

InsightIQ: provides performance and file system analytics to optimize applications, correlate workflow and network events, and accurately forecast future storage needs.

Features

• Scalable virtual appliance

• Rich visualizations and interactions

• Elaborate drill-downs and breakouts

Key Benefits

• Performance introspection

• File system analytics

• Gain actionable insight into detailed performance and capacity trends

• Optimize workflows and applications

• Reduce inefficiencies and costs

SyncIQ: delivers extremely fast replication of files across multiple Isilon IQ clusters over the WAN and LAN. It is the industry's only policy-based file replication system designed exclusively for scale-out storage, combining a rich set of policies for creating and scheduling storage replication jobs with innovative bandwidth throttling and cluster utilization capabilities. Replication policies can be set at the cluster, directory, or file level, and replication jobs can be conveniently scheduled or run on demand, maximizing network and storage resource efficiency and availability.

SnapshotIQ: a data protection feature of Isilon's OneFS operating system. It reliably distributes an unlimited number of snapshots across multiple, highly available Isilon IQ clustered storage nodes.

Features

• Unlimited snapshots

• Snapshots of changed files only

• Integrated with Windows ShadowCopy service

Key Benefits

• Data protection

• Simple management

• Efficiency

SmartQuotas: allows administrators to control and limit storage usage across their organization, and to provision a single pool of Isilon clustered storage to best meet the organization's unique storage challenges.

Features

• Notification and scheduled reporting

• Thin provisioning

• Configurability

Key Benefits

• Simplicity

• Scalability

• Flexibility

• Efficiency


SmartConnect: enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources.

Features

• Client connection load balancing

• Dynamic NFS failover and failback

• Performance rebalance

Key Benefits

• Optimize performance

• High availability

• Scalability

SmartLock: provides policy-based retention and protection against accidental deletion. Because SmartLock is a software-based approach to a Write Once Read Many (WORM) solution, you can store SmartLock-protected data alongside other data types with no effect on performance or availability, and without the added cost of purchasing and maintaining specialty WORM-capable hardware.

Features

• Stored at the directory level

• Set default retention times once

• Tight integration with Isilon OneFS

Key Benefits

• Protect critical data

• Guarantee retention

• Efficient and cost-effective

Posted 12th January 2012 by vipulvajpayee


11th January 2012

My Experience of Installing the FC Disk Shelf (DS14mk2) for a FAS3020 Filer.

Hello Friends, I want to share my experience of installing a disk shelf for one of my customers. It was my first disk-shelf call, I completely messed it up, and I have a really bad memory of it, so I want to share the lessons with you all.

How to find out whether your filer is using hardware-based or software-based disk ownership.

1. Check the autosupport message of the filer; there, below the disk information, you will see SANOWN reported as disabled if the filer uses hardware-based ownership.

2. If you did not check the autosupport message, you can run commands such as disk show -v or disk show -o. If these commands do not work, your filer is using hardware-based disk ownership (see the sketch below).
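
As a minimal sketch (assuming a 7-mode filer whose hostname is filer1; the name is a placeholder), the check looks like this:

filer1> disk show -v              (on software-based ownership, lists every disk with its owner)
filer1> disk show -o filer1       (lists only the disks owned by filer1)
filer1> storage show disk -p      (works under either scheme and shows the primary/secondary path of each disk)

If disk show reports that it is not supported on this system, the filer is using hardware-based disk ownership, where ownership is decided by the cabling alone.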

What to keep in mind with hardware-based disk ownership.

1. If you have a single filer, connecting the new disk shelf is not a big problem; you just need to understand the loop cabling.

2. If your filer is in a cluster, you have to be clear about which filer you want to give ownership of the newly added disks.

3. Once you are clear about that, connect the FC cable from the shelf's A module (ESH) to the filer that you want to own the disks.

4. This is because, in hardware-based disk ownership, the filer connected to the shelf's A module gets ownership of the newly added disks (when the shelf forms its own, separate loop).

5. Also keep in mind that if you add the new disk shelf to an existing loop, ownership of the new disks goes to the filer that already owns that loop.

6. Finally, keep in mind that once ownership has been given to a filer in the cluster, it is not possible to change it without rebooting (in hardware-based disk ownership). The sketch below shows how to verify the new loop after cabling.
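
After cabling, a quick way to confirm that the new shelf landed on the loop you intended (a sketch, run on the filer that should own the disks; the hostname is a placeholder):

filer1> fcadmin device_map        (shows which shelves and disks sit on each FC loop/adapter)
filer1> sysconfig -r              (shows the spare and assigned disks per RAID group)
filer1> storage show disk -p      (confirms the primary and secondary paths to every disk)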

Posted 11th January 2012 by vipulvajpayee
