NetApp downgrade firmware

Downgrading firmware on a NetApp SAN

If you have just reclaimed a shelf from a NetApp SAN that you would like to use with an older head, you will notice that the drives are not recognized. This is because Data ONTAP automatically upgrades drive firmware when you plug the drives into an updated shelf, but it will not downgrade, or even correctly recognize, drives that come from a higher firmware revision.

Downgrading drives while keeping the contents is actually impossible.

Actually downgrading the firmware on NetApp SAN drives

Chances are that you don’t really need to downgrade the firmware on the drives, and you can just skip to the next section.

If you are sure you need to downgrade the drives, here are the basic steps:

  • Get a Linux box with a QLogic HBA and cables that can attach to the shelf holding the drives you want to downgrade
  • Make sure only the drives that you want to downgrade are in the shelf
  • Make sure the proper Disk Qualification Package is on the filer (if not, download it as a zip file from http://now.netapp.com/NOW/download/tools/diskqual/ and extract it to the /etc directory of the NetApp)
  • Download the current disk firmware from http://now.netapp.com/NOW/download/tools/diskfw/
  • Get the right firmware for your disk (the new one you just downloaded, or an older one if you need to downgrade); the old firmware is already on the root volume of the NetApp
  • Use the proper firmware update tool from your drive manufacturer to flash the firmware from the Linux box (see the sketch below)
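
The exact flashing procedure depends on the drive vendor, so treat the following as a rough sketch rather than the definitive method. Assuming the drives show up as SCSI devices through the QLogic HBA and the sg3_utils package is installed, a firmware image (the .LOD file name below is only an example) could be pushed with sg_write_buffer:

# list the drives the HBA can see and pick the matching /dev/sg device
lsscsi -g
# send the firmware image to the drive and save it
# (mode 0x07 = "download microcode with offsets and save")
sg_write_buffer --mode=dmc_offs_save --in=X279_FIRMWARE.LOD /dev/sg3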

Wiping labels on NetApp SAN drives

If you simply can’t get the old filer head to recognize the drives because they carry newer labels, the only viable way to get the drives working is to reconnect the shelf to the filer head that is running the newer firmware and erase the labels from there.

Erasing labels on NetApp SAN drives

  • Boot into maintenance mode (CTRL+C at boot and then Option 5)
  • list the drives: label summary
  • erase the labels: label wipe 4.23 where 4.23 is the drive number to wipe
  • exit maintenance mode: halt
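
From the maintenance-mode prompt, the whole sequence looks like this (4.23 is the example drive number from the list above; repeat the wipe for each affected drive):

*> label summary
*> label wipe 4.23
*> halt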

Chances are that this will still not allow the older filer to see the drives properly. The next step always works: zero the drives.

Zeroing spares on NetApp filer

  • Boot into maintenance mode (CTRL+C at boot and then Option 5)
  • list the drives: label summary
  • force the drives to become spares: label makespare 4.23 where 4.23 is the drive number
  • exit maintenance mode and boot:
    > halt
    ok boot
  • zero the spare drives: disk zero spares
  • remove the shelf or the drives from the new filer, and you can now put them back into the old filer, as they will be recognized just fine.
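
Zeroing large drives can take hours. While it runs, sysconfig -r (or vol status -s) should list the spare drives and show the zeroing progress for each one:

> sysconfig -r
> vol status -s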

For more information about our SAN support, look at NetApp SAN.


NetApp route add default gateway

NetApp SAN default gateway setup

Data ONTAP is a FreeBSD-based operating system built by NetApp; however, most of its command line interface differs from the usual FreeBSD commands. When a new NetApp installation is performed, or a NetApp migration is needed, the IP address typically needs to be changed, as well as the default gateway. The first step before changing the network configuration is to check the current configuration and capture it, in case you need to back out of the migration. The following paragraphs show how to check the existing configuration and how to set the new gateway.

Show NetApp SAN network config

To print the current network config, run:
ifconfig -a

To set a new network IP, run:
ifconfig e0 192.168.1.2 netmask 255.255.255.0

Where e0 is your network interface name, and 192.168.1.2 is the new IP of the NetApp.

Show NetApp SAN route config

To print the current routes, run:
route -ns

Setup NetApp SAN default route

Delete NetApp SAN current default route

route delete default

Add NetApp SAN new default route

route add 0.0.0.0 IP_OF_DEFAULT_GW 1
For example, if the default gateway is 192.168.1.1:
route add 0.0.0.0 192.168.1.1 1
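
Changes made this way only affect the running system; on Data ONTAP 7-mode the persistent network settings live in /etc/rc on the root volume, so the new address and gateway should be reflected there as well. A minimal sketch of the relevant /etc/rc lines, reusing the interface and addresses from the examples above (the hostname is made up):

hostname filer1
ifconfig e0 192.168.1.2 netmask 255.255.255.0
route add 0.0.0.0 192.168.1.1 1
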
For more information about our SAN support, look at NetApp SAN.


NetApp Automation for DB2 9.7 (or Oracle)

Problem

You have one or more NetApp storage systems (F960 or later series), running Data ONTAP® 7G (or later). You would like to take advantage of the snapshot capabilities to facilitate the database backup process. However, you don’t want to use the default root login for the automated logins, nor do you want to use the insecure rsh, as these options would violate corporate security policies (especially if you have a compliance commitment to ISO 27002, PCI or HIPAA).

Solution

Create a restricted user that has only login access and the ability to manage snapshots:

  • Set up ssh on the filer: secureadmin setup ssh (it is recommended that you select long keys when asked: 1024 and 768 for ssh v1, which shouldn’t be enabled anyway, and 2048 for ssh2).
  • Start ssh on the filer: secureadmin enable ssh2 (at this point you should be able to log in to the filer with ssh as root with your admin password)
  • Create group / role / user:
    useradmin user add snapuser -g Users
    useradmin role add snaps -c "Snapshot Manager" -a cli-snap*,login-ssh,login-telnet
    useradmin group add cli-snapshot-group -r snaps
    useradmin user modify snapuser -f -g cli-snapshot-group
    useradmin user list snapuser

    The last command allows you to check your work, and the output should look like:
    Name: snapuser
    Info:
    Rid: 131075
    Groups: cli-snapshot-group
    Full Name:
    Allowed Capabilities: cli-snap*,login-ssh,login-telnet
    Password min/max age in days: 0/4294967295
    Status: enabled
  • Put your public keys in the authorized keys file on the filer, /etc/sshd/snapuser/.ssh/authorized_keys (typically you do that by mounting the filer root volume on one of your AIX boxes; any OS that can mount the root volume should work).
  • At this point you are ready to test by logging in via ssh to the snapuser account. Keep in mind that before you can successfully log in as snapuser, you may have to log out of any existing session on the NetApp first.
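
Once the key-based login works, the DB2 (or Oracle) backup scripts can drive snapshots non-interactively over ssh. A minimal sketch, assuming the filer answers as filer1 and the database volume is called dbvol (both names are made up):

ssh snapuser@filer1 snap create dbvol db2_backup_$(date +%Y%m%d)
ssh snapuser@filer1 snap list dbvol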


NFS Locking Problem on NetApp-connected IBM Domino Server 8.5

Problem

One of our IBM Domino servers all of a sudden decided that it couldn’t get an exclusive lock on its database files any longer. The databases happened to be on a NetApp head, and even after rebooting the server the locking problem persisted. In other words, Domino’s NFS locking was failing. As a result Domino wouldn’t start, and would write errors to the Domino startup log similar to:

“Directory Assistance failed opening Primary Domino Directory names.nsf, error: This database is currently in use by another process”

Solution

It turns out that the NetApp lock table is keyed on the client name, not the IP address. As a result, a lock from domino12.domain.com isn’t the same as a lock from domino12. To make matters worse, a Red Hat Linux machine might present itself either way depending on its configuration (even the order of the short name vs. the long name in the hosts file matters).
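
A quick way to see which name the Domino box will present is to check its hostname and the ordering in /etc/hosts (the address and names below are made up for illustration):

# on the Red Hat Domino server
hostname
grep domino12 /etc/hosts
# example /etc/hosts entry; with the FQDN listed first the client is more
# likely to register its locks as domino12.domain.com rather than domino12
192.168.1.50   domino12.domain.com   domino12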

How to confirm the problem? On the NetApp, list the locks with

lock status -f

Now that you know the client name under which the lock shows up, you can clear the lock with

priv set advanced
sm_mon -l clientname
priv set

Finally, you can check that the lock is gone with:

lock status -f


iSCSI target on NetApp for AIX 6.1 and VIOS 2.1

Problem

iSCSI is much more flexible and easier to manage than a fibre channel (FC) infrastructure. There is a performance penalty of course, but in many cases it is well worth the management benefits. This article shows how to set up the NetApp as the iSCSI target and the AIX or VIOS host as the initiator for the iSCSI LUNs.

Solution

Setting up the NetApp

iscsi nodename
igroup create -i -t aix aixgroup iqn.1992-08.com.ibm:aix-rtp.00000000
lun create -s 200g -t aix /vol/iscsi/aix1
lun map /vol/iscsi/aix1 aixgroup
lun show -g aixgroup
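
If the iscsi service isn’t licensed and running yet, iscsi start brings it up first; a couple of sanity checks after mapping the LUN:

iscsi start
igroup show aixgroup
lun show -m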

Setting up the AIX host

# check the AIX level and make sure the iSCSI software initiator filesets are installed
oslevel -r
lslpp -l | grep -i iscsi
# point the software initiator at the NetApp target (IP, port 3260, target IQN)
echo "10.10.1.10 3260 iqn.1992-08.com.netapp:sn.00000000" > /etc/iscsi/targets
# check the initiator device and set its initiator name to match the igroup on the filer
lsdev -C | grep iscsi
lsattr -El iscsi0
chdev -l iscsi0 -a initiator_name=iqn.1992-08.com.ibm:aix-rtp.00000000
# list the disks, rescan the iSCSI adapter, and list again: the mapped LUN should appear as a new hdisk
lsdev -Cc disk
cfgmgr -l iscsi0
lsdev -Cc disk
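
From here the new hdisk behaves like any other AIX disk; for example, to put it into a volume group (hdisk2 and the volume group name are just illustrative, the actual hdisk number will vary):

lspv
mkvg -y iscsivg hdisk2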


NetApp bonded VLAN configuration

Problem

To maximize the benefit from the multi-port adapters on a NetApp, it is best to bond the ports together (some vendors refer to this as “trunk groups”). Then over the new bonded trunk, the various networks can be assigned as VLANs, maximizing the network throughput for each LAN the NetApp needs to communicate with.

Solution

In this example, I will show how to bond two interfaces together, and create three VLANs:

vif create multi vif0 e9a e9b
vlan create vif0 200
vlan add vif0 201 202
ifconfig vif0-200 10.10.0.25 netmask 255.255.0.0
ifconfig vif0-201 10.20.0.25 netmask 255.255.0.0
ifconfig vif0-202 10.30.0.25 netmask 255.255.0.0

These commands take effect immediately but are not saved across a reboot; to make the configuration persistent, the same vif, vlan and ifconfig commands also need to go into /etc/rc on the root volume (see the sketch below).
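
A minimal sketch of the matching /etc/rc entries for this example (adjust interface names and addresses to your environment):

vif create multi vif0 e9a e9b
vlan create vif0 200 201 202
ifconfig vif0-200 10.10.0.25 netmask 255.255.0.0
ifconfig vif0-201 10.20.0.25 netmask 255.255.0.0
ifconfig vif0-202 10.30.0.25 netmask 255.255.0.0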
