Wednesday, March 11, 2009

Introduction

This document is intended to supplement the existing documentation on how to configure, test, and monitor a disk heartbeat device and network in HACMP/ES V5.x. This feature is new in V5.1, and it provides another alternative for non-IP based heartbeats. The intent of this document is to provide step-by-step directions, since the HACMP V5.1 publications are currently sketchy on this subject. It will hopefully also clear up several misconceptions that have been brought to my attention.

This example consists of a two-node cluster (nodes GT40 & SL55) with shared ESS vpath devices. If more than two nodes exist in your cluster, you will need N non-IP heartbeat networks, where N is the number of nodes in the cluster (i.e., a three-node cluster requires 3 non-IP heartbeat networks). This creates a heartbeat ring.

It’s worth noting that one should not confuse concurrent volume groups with concurrent resource groups, and that there is also a difference between concurrent volume groups and enhanced concurrent volume groups. A concurrent resource group is one that may be active on more than one node at a time. A concurrent volume group shares that characteristic: it may be active on more than one node at a time. The same is true of an enhanced concurrent VG; however, when an enhanced concurrent VG is used in a non-concurrent resource group, it may be active on multiple nodes without a SCSI reserve residing on the disk, yet its data is normally accessed by only one system at a time.


Pre-Reqs

In this document, it is assumed that the shared storage devices have already been made available and configured to AIX, and that the proper levels of RSCT and HACMP are installed. Since disk heartbeat utilizes enhanced concurrent volume groups, it is also necessary to make sure that bos.clvm.enh is installed. It is not normally installed as part of an HACMP installation via the installp command.
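A quick way to check for the required filesets on each node (the RSCT and HACMP fileset names shown are the usual ones; verify against your own install media):

lslpp -l bos.clvm.enh            # enhanced concurrent LVM support
lslpp -l rsct.basic.rte          # RSCT level
lslpp -l cluster.es.server.rte   # HACMP/ES level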

Disk Heartbeat Details

This feature provides the ability to use existing shared disks, regardless of disk type, to provide a heartbeat path similar to a serial network. A benefit is that one need not dedicate the integrated serial ports to HACMP heartbeats (if supported on the subject systems) or purchase an 8-port asynchronous adapter.

This feature utilizes a special area on the disk previously reserved for “Concurrent Capable” volume groups (traditionally only on SSA disks). Since AIX 5.2 dropped support for SSA concurrent volume groups, this area is available for use. This also means that the disk chosen for the disk heartbeat can be part of a data volume group. (Note Performance Concerns below.)

The disk heart beating code went into version 2.2.1.30 of RSCT, and some recommended APARs bring that to 2.2.1.31. If you have that level installed, along with HACMP 5.1, you can use disk heart beating. The relevant file to look for is /usr/sbin/rsct/bin/hats_diskhb_nim. Though the feature is supported mainly through RSCT, we recommend AIX 5.2 when utilizing disk heartbeat.
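To confirm the RSCT level and that the disk heartbeat NIM binary is present, run something like the following on each node:

lslpp -l rsct.basic.rte                    # should report 2.2.1.30 or later
ls -l /usr/sbin/rsct/bin/hats_diskhb_nim   # the diskhb network interface module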

To use disk heartbeats, no node can issue a SCSI reserve for the disk. This is because both nodes using it for heart beating must be able to read and write to that disk. It is sufficient that the disk be in an enhanced concurrent volume group to meet this requirement. (It should also be possible to use a disk that is in no volume group for disk heart beating. RSCT certainly won't care; but HACMP SMIT panels may not be particularly helpful in setting this up.)

Now, in HACMP 5.1 with AIX 5.1, enhanced concurrent mode volume groups can be used only in concurrent (or "online on all available nodes") resource groups. This means that disk heart beating is useful only to people running concurrent configurations, or who can allocate such a volume group/disk (which is certainly possible, though perhaps an expensive approach). In other words, at HACMP 5.1 and AIX 5.1, typical HACMP clusters (with a server and idle standby) will require an additional concurrent resource group with a disk in an enhanced concurrent VG dedicated for heartbeat use.

At AIX 5.2, disk heartbeats can exist on an enhanced concurrent VG that resides in a non-concurrent resource group. At AIX 5.2, one may also use the fast disk takeover feature in non-concurrent resource groups with enhanced concurrent volume groups. With HACMP 5.1 and AIX 5.2, enhanced concurrent mode volume groups can be used in serial access configurations for fast disk takeover, along with disk heart beating. (AIX 5.2 requires RSCT 2.3.1.0 or later.) That is, the facility becomes usable to the average customer, without commitment of additional resources, since disk heart beating can occur on a volume group used for ordinary filesystem and logical volume activity.

Performance Concerns with Disk Heart Beating

Most modern disks take somewhere around 15 milliseconds to service an I/O request, which means that they can't do much more than 60 seeks per second. The sectors used for disk heart beating are part of the VGDA, which is at the outer edge of the disk and may not be near the application data. This means that every disk heart beat will require a seek. Disk heart beating will typically (with the default parameters) require four (4) seeks per second; that is, each of the two nodes will write to the disk and read from it once per second, for a total of 4 IOPS. So, if possible, select a disk as the heart beat path that does not normally do more than about 50 seeks per second. The filemon tool can be used to monitor the seek activity on a disk, as in the example below.
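A minimal filemon sketch for sampling physical volume activity for a minute (the output file name is arbitrary):

filemon -o /tmp/fmon.out -O pv    # start tracing physical volume I/O
sleep 60
trcstop                           # stop tracing and write the report
# Review the "Detailed Physical Volume Stats" section of /tmp/fmon.out for seek counts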

In cases where a disk must be used for heart beating that already has a high seek rate, it may be necessary to change the heart beat timing parameters to prevent long write delays from being seen as a failure.

The above cautions as stated apply to JBOD configurations, and should be modified based on the technology of the disk subsystem:

  • If the disk used for heart beating is in a controller that provides large amounts of cache - such as the ESS - the number of seeks per second can be much larger
  • If the disk used for heart beating is part of a RAID set without a caching front end controller, the disk may be able to support fewer seeks, due to the extra activity required by RAID operations

Pros & Cons of using Disk Heart Beating

Pros:

1. No additional hardware needed.

2. Easier to span greater distances.

3. No loss in usable storage space and can use existing data volume groups.

4. Uses enhanced concurrent VGs, which also allow for fast disk takeover.

Cons:

1. Must be aware of the devices diskhb uses and administer devices properly*

2. The forced-down option of stopping cluster services is lost because of enhanced concurrent VG usage.

*I have had a customer delete all their disk definitions and run cfgmgr again to clean up number holes in their device definition list. When they did, the device names obviously did not come back in the same order as before. The diskhb device assigned to HACMP was no longer valid, as a different device was configured using the old device name and it was not part of an enhanced concurrent VG. Hence diskhb no longer worked, and since the customer did not monitor their cluster either, they were unaware that the disk heartbeat was no longer working.


Configuring Disk Heartbeat

As mentioned previously, disk heartbeat utilizes enhanced concurrent volume groups. If starting with a new configuration of disks, you will want to create enhanced concurrent volume groups, either manually or by utilizing C-SPOC. My example uses C-SPOC, which is the best practice here.


If you plan to use an existing volume group for disk heartbeats that is not enhanced concurrent, then you will have to convert it using the chvg command. We recommend that the VG be active on only one node, and that the application not be running, when making this change. Run chvg -C vgname to change the VG to enhanced concurrent mode, vary it off, then run importvg -L vgname on the other node to make it aware that the VG is now enhanced concurrent capable (a sketch follows below). If using this method, you can skip to the “Creating Disk Heartbeat Devices and Network” section of this document.
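A minimal sketch of that conversion, assuming an existing VG named datavg seen as hdisk3 on the second node (names are illustrative):

# On the node where datavg is active (application stopped):
chvg -C datavg          # convert to enhanced concurrent mode
varyoffvg datavg
# On the other node, refresh its definition of the VG:
importvg -L datavg hdisk3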


Disk and VG Preparation

To be able to use C-SPOC successfully, it is required that some basic IP-based topology already exists, and that the storage devices have their PVIDs in both systems’ ODMs. This can be verified by running lspv on each system. If a PVID does not exist on each system, it is necessary to run chdev -l <device> -a pv=yes on each system. This will allow C-SPOC to match up the device(s) as known shared storage devices.

In this example, vpath0 on GT40 is the same virtual disk as vpath3 on SL55.
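For example, to assign the PVIDs using the device names from this cluster:

chdev -l vpath0 -a pv=yes     # on GT40
chdev -l vpath3 -a pv=yes     # on SL55
lspv                          # the same PVID should now show on both nodes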

Use C-SPOC to create an Enhanced Concurrent volume group. In the following example, since vpath devices are being used, the following smit screen paths were used.

smitty cl_admin → HACMP Concurrent Logical Volume Management → Concurrent Volume Groups → Create a Concurrent Volume Group with Data Path Devices and press Enter

Choose the appropriate nodes, and then choose the appropriate shared storage devices based on PVIDs (vpath0 and vpath3 in this example). Choose a name for the VG (enhconcvg in this example) and the desired PP size, make sure that Enhanced Concurrent Mode is set to true, and press Enter. This will create the shared enhanced concurrent VG needed for our disk heartbeat.


It’s a good idea to verify via lspv once this has completed, to make sure the device and VG are shown appropriately, as follows:

GT40# lspv
vpath0          000a7f5af78e0cf4    enhconcvg

SL55# lspv
vpath3          000a7f5af78e0cf4    enhconcvg


Creating Disk Heartbeat Devices and Network

There are two different ways to do this. Since we have already created the enhanced concurrent VG, we can use the discovery method (1) and let HACMP find it for us, or we can do this manually via the Pre-Defined Devices method (2). Following is an example of each.

1) Creating via Discovery Method (see Note)

smitty hacmp → Extended Configuration → Discover HACMP-related Information from Configured Nodes → press Enter

This will run automatically and create a clip_config file that contains the information it has discovered. Once completed, go back to the Extended Configuration menu and choose:

Extended Topology Configuration → Configure HACMP Communication Interfaces/Devices → Add Communication Interfaces/Devices → Add Discovered Communication Interface and Devices → Communication Devices → Choose appropriate devices (ex. vpath0 and vpath3)

Select Point-to-Point Pair of Discovered Communication Devices to Add

Move cursor to desired item and press F7. Use arrow keys to scroll.
ONE OR MORE items can be selected.
Press Enter AFTER making all selections.

  #  Node        Device    Device Path     Pvid
  >  nodeGT40    vpath0    /dev/vpath0     000a7f5af78
  >  nodeSL55    vpath3    /dev/vpath3     000a7f5af78


Note:
Base HACMP 5.1 appears to have a problem when using this Discovered Devices method. If you get the error "ERROR: Invalid node name 000a7f5af78e0cf4", then you will need APAR IY51594; until it is applied, you will have to create the devices via the Pre-Defined Devices method. Once corrected, this method can be completed as shown.

2) Creating via Pre-Defined Devices Method

When using this method, it is necessary to create a diskhb network first, then assign the disk-node pair devices to the network. Create the diskhb network as follows:


smitty hacmp → Extended Configuration → Extended Topology Configuration → Configure HACMP Networks → Add a Network to the HACMP cluster → choose diskhb → Enter desired network name (ex. disknet1) and press Enter


smitty hacmp → Extended Configuration → Extended Topology Configuration → Configure HACMP Communication Interfaces/Devices → Add Communication Interfaces/Devices → Add Pre-Defined Communication Interfaces and Devices → Communication Devices → Choose your diskhb Network Name

Add a Communication Device

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                          [Entry Fields]
* Device Name                            [GT40_hboverdisk]
* Network Type                            diskhb
* Network Name                            disknet1
* Device Path                            [/dev/vpath0]
* Node Name                              [GT40]


For Device Name, enter a unique name of your choosing. It will show up in your topology under this name, much as serial heartbeats and ttys have in the past.

For the Device Path, you want to put in /dev/<devicename> (ex. /dev/vpath0). Then choose the corresponding node for this device and device name (ex. GT40), and press Enter.

You will repeat this process for the other node (ex. SL55) and the other device (vpath3). This will complete both devices for the diskhb network.
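Optionally, you can list the cluster topology afterwards to confirm that both diskhb devices were added (output format varies by HACMP level):

/usr/es/sbin/cluster/utilities/cllsif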


Testing Disk Heartbeat Connectivity

Once the device and network definitions have been created, it is a good idea to test them and make sure communications are working properly. Note that if the volume group is varied on in normal mode on one of the nodes, the test will probably not work.
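A quick sanity check before testing, using the example VG name:

lsvg -o                  # enhconcvg should not appear in the list of varied-on VGs
# If it is varied on, vary it off before testing:
varyoffvg enhconcvg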

/usr/sbin/rsct/bin/dhb_read is used to test the validity of a diskhb connection. The usage of dhb_read is as follows:

dhb_read -p devicename        # dump diskhb sector contents
dhb_read -p devicename -r     # receive data over diskhb network
dhb_read -p devicename -t     # transmit data over diskhb network



To test that disknet1, in the example configuration, can communicate from nodeB (ex. SL55) to nodeA (ex. GT40), you would run the following commands:


On nodeA, enter:

dhb_read -p rvpath0 -r

On nodeB, enter:

dhb_read -p rvpath3 -t


Note that the device name is the raw device, as designated by the “r” preceding the device name.

If the link from nodeB to nodeA is operational, both nodes will display:

Link operating normally.

You can run this again and swap which node transmits and which one receives, as shown below. To make the network active, it is necessary to synchronize the cluster. Since the volume group has not yet been added to the resource group, we will wait and synchronize once instead of twice.
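Repeating the test in the opposite direction with the same example devices:

On nodeA, enter:

dhb_read -p rvpath0 -t

On nodeB, enter:

dhb_read -p rvpath3 -r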



Add Shared Disk as a Shared Resource

In most cases you will have your diskhb device on a shared data VG. It is necessary to add that VG to your resource group and synchronize the cluster.

smitty hacmp → Extended Configuration → Extended Resource Configuration → Extended Resource Group Configuration → Change/Show Resources and Attributes for a Resource Group → press Enter

Choose the appropriate resource group, enter the new vg (enhconcvg) into the volume group list and press Enter.

Return to the top of the Extended Configuration menu and synchronize the cluster.


Monitor Disk Heartbeat

Once the cluster is up and running, you can monitor the activity of the disk heartbeats (actually, of all heartbeats) via lssrc -ls topsvcs. An example of the output follows:

Subsystem         Group            PID     Status
 topsvcs           topsvcs         32108   active

Network Name   Indx Defd Mbrs St   Adapter ID      Group ID
disknet1       [ 3]    2    2  S   255.255.10.0    255.255.10.1
disknet1       [ 3]  rvpath3       0x86cd1b02      0x86cd1b4f
HB Interval = 2 secs. Sensitivity = 4 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 229 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 217 ICMP 0 Dropped: 0
NIM's PID: 28724

Be aware that there is a grace period before heartbeat processing starts, normally around 60 seconds. If you run this command quickly after starting the cluster, you may not see anything at all until heartbeat processing starts once the grace period has elapsed.
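For example, to re-check the disknet1 counters after allowing for the grace period (grep -p is the AIX paragraph-grep flag):

sleep 60
lssrc -ls topsvcs | grep -p disknet1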