Tanti Technology


Tuesday, 22 October 2013

Setup a Two-Node Cluster with HACMP



Contents

  1. Introduction
  2. Setup and Preparation
     o Storage setup
     o Network setup
  3. Installation
  4. Cluster Topology Configuration
  5. Resource Group Configuration
  Appendix
     A. Failover Test
     B. Disk Heartbeat Check
     C. Useful Commands
     D. »clstat« and »snmp«

1. Introduction

This article describes how to set up a two-node cluster with IBM's standard cluster solution for AIX. Although the name changed to PowerHA with version 5.5 and to PowerHA SystemMirror with version 7, IBM's cluster solution is still widely known as HACMP. This article refers to version 5.5.

2. Setup and Preparation

Storage setup

The reason why we create a cluster is to make an application highly available. Therefore we need storage from two independent sites (read: storage from two different datacenters). In this article we have two sites: Datacenter1 and Datacenter2. Each filesystem will be mirrored over the two sites, and all storage has to be visible on both nodes.
In addition we need two (very small) LUNs for disk heartbeat. A LUN size of 512 MB to 1 GB is sufficient.

Network setup

In our setup we have two nodes: barney and shakira. Each node needs a boot address (used only for cluster intercommunication) and a persistent address equal to its hostname; in addition we need a service address for the application. All cluster addresses have to be present in the /etc/hosts file on both nodes:
node1+node2# vi /etc/hosts
#### HACMP
# Boot address
172.18.1.4      barneyboot
172.18.1.6      shakiraboot
# Service/Cluster address
10.111.111.70   haservice1
# Node/Persistent address
10.111.111.4    barney
10.111.111.6    shakira
####
Don't use hyphens (-) and underscores (_) in IP labels here.
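A quick sanity check that all labels resolve identically on both nodes might look like this (the loop simply resolves every label from the example above):
node1+node2# for l in barneyboot shakiraboot haservice1 barney shakira; do host $l; done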

3. Installation

Installation of Prerequisite Filesets

Some filesets that HACMP needs are typically not part of a standard AIX installation. Check for the following (a quick lslpp check is shown after the list):
  • bos.net.nfs.server
  • bos.clvm
  • rsct.compat.basic.hacmp
  • rsct.compat.clients.hacmp
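For example, lslpp lists the installed levels and complains about any fileset that is not installed (the levels on your system will differ):
node1+node2# lslpp -L bos.net.nfs.server "bos.clvm*" rsct.compat.basic.hacmp rsct.compat.clients.hacmp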
If any of them is missing, install it now:
node1+node2# smitty install_latest
  |   bos.net.nfs                                                        ALL |
  |    + 6.1.1.0  Network File System Server                                 |
  | >  + 6.1.4.0  Network File System Server                                 |
 
  |   bos.clvm                                                           ALL |
  |    + 6.1.1.1  Enhanced Concurrent Logical Volume Manager                 |
  |    + 6.1.4.0  Enhanced Concurrent Logical Volume Manager                 |
  | >  + 6.1.4.2  Enhanced Concurrent Logical Volume Manager                 |
 
  |   rsct.compat.basic                                                  ALL |
  |    + 2.5.4.0  RSCT Event Management Basic Function                       |
  | >  + 2.5.4.0  RSCT Event Management Basic Function (HACMP/ES Support)    |
  |    + 2.5.4.0  RSCT Event Management Basic Function (PSSP Support)        |
 
  |   rsct.compat.clients                                                ALL |
  |    + 2.5.4.0  RSCT Event Management Client Function                      |
  | >  + 2.5.4.0  RSCT Event Management Client Function (HACMP/ES Support)   |
  |    + 2.5.4.0  RSCT Event Management Client Function (PSSP Support)       |

Installation of HACMP Filesets

Put the HACMP filesets and the update filesets somewhere where you can access them from both nodes and create a .toc file there with inutoc.
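For example (the directory is just the placeholder used in the listing below):
node1# cd /path/to/bffs
node1# inutoc .
Then install the filesets on both cluster nodes: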
node1+node2# cd /path/to/bffs
node1+node2# smitty install_latest
  | > cluster.es.client                                                  ALL |
  |    + 5.5.0.0  ES Client Libraries                                        |
  |    + 5.5.0.4  ES Client Libraries                                        |
  |    + 5.5.0.0  ES Client Runtime                                          |
  |    + 5.5.0.5  ES Client Runtime                                          |
  |    + 5.5.0.0  ES Client Utilities                                        |
  |    + 5.5.0.5  ES Client Utilities                                        |
  |    + 5.5.0.0  ES Communication Infrastructure                            |
  |    + 5.5.0.5  ES Communication Infrastructure                            |
  |    + 5.5.0.0  Web based Smit                                             |
  |    + 5.5.0.5  Web based Smit                                             |
  | > cluster.es.server                                                  ALL |
  |    + 5.5.0.0  ES Base Server Runtime                                     |
  |    + 5.5.0.6  ES Base Server Runtime                                     |
  |    + 5.5.0.0  ES Server Diags                                            |
  |    + 5.5.0.5  ES Server Diags                                            |
  |    + 5.5.0.0  ES Server Events                                           |
  |    + 5.5.0.6  ES Server Events                                           |
  |    + 5.5.0.0  ES Server Utilities                                        |
  |    + 5.5.0.6  ES Server Utilities                                        |
  |    + 5.5.0.0  ES Cluster Simulator                                       |
  |    + 5.5.0.4  ES Cluster Simulator                                       |
  |    + 5.5.0.0  ES Cluster Test Tool                                       |
  |    + 5.5.0.3  ES Cluster Test Tool                                       |
  |    + 5.5.0.0  ES Two-Node Configuration Assistant                        |
  | > cluster.es.cfs                                                     ALL |
  |    + 5.5.0.0  ES Cluster File System Support                             |
  |    + 5.5.0.4  ES Cluster File System Support                             |
  | > cluster.es.nfs                                                     ALL |
  |    + 5.5.0.0  ES NFS Support                                             |
  |    + 5.5.0.1  ES NFS Support                                             |
  | > cluster.es.cspoc                                                   ALL |
  |    + 5.5.0.0  ES CSPOC Commands                                          |
  |    + 5.5.0.6  ES CSPOC Commands                                          |
  |    + 5.5.0.0  ES CSPOC Runtime Commands                                  |
  |    + 5.5.0.5  ES CSPOC Runtime Commands                                  |
  |    + 5.5.0.0  ES CSPOC dsh                                               |
  | > cluster.license                                                    ALL |
  |    + 5.5.0.0  HACMP Electronic License                                   |
  | > cluster.man.en_US.es                                               ALL |
  |    + 5.5.0.0  ES Man Pages - U.S. English                                |
  |    + 5.5.0.1  ES Man Pages - U.S. English                                |
Note: The above fileset list already includes the HACMP update filesets for SP6. If you installed HACMP from a base CD it is strongly recommended to update HACMP with the latest fixes - base versions of HACMP are not known for being extensively tested.
node1+node2# cd /path/to/update
node1+node2# smitty update_all
The nodes have to be rebooted now.
node1+node2# shutdown -Fr

4. Cluster Topology Configuration

Basically the cluster configuration has to be done on only one of our nodes; only the initial definition and startup has to be done on both nodes. Please mind the command prompt in the commands below - it indicates whether a step has to be done on one node or on both nodes.

Define the Cluster

The first step is to define a cluster. This means nothing more than just defining the name of our cluster:
node1+node2# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure an HACMP Cluster
         -> Add/Change/Show an HACMP Cluster
 
                        Add/Change/Show an HACMP Cluster
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Cluster Name                                       [Cluster1]
 
  NOTE: HACMP must be RESTARTED
  on all nodes in order for change to take effect
We follow the advice and restart all cluster-related services:
node1+node2# stopsrc -g cluster
0513-044 The clstrmgrES Subsystem was requested to stop.
node1+node2# stopsrc -s clcomdES      
0513-044 The clcomdES Subsystem was requested to stop.
node1+node2# startsrc -s clcomdES
0513-059 The clcomdES Subsystem has been started. Subsystem PID is 618753.
node1+node2# startsrc -g cluster
0513-059 The clinfoES Subsystem has been started. Subsystem PID is 618534.
0513-059 The clstrmgrES Subsystem has been started. Subsystem PID is 577620.
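To verify that the daemons came back up you can list the cluster subsystem group (a quick check, not part of the original procedure); both clstrmgrES and clinfoES should be reported as active:
node1+node2# lssrc -g cluster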

Define the Cluster Nodes

All steps so far were done on both nodes. From now on, however, we work on only one of our nodes.
We add the first node to our cluster:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure HACMP Nodes
         -> Add a Node to the HACMP Cluster
 
                        Add a Node to the HACMP Cluster
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Node Name                                          [barney]
  Communication Path to Node                         [barneyboot]           +
and now the second one:
                        Add a Node to the HACMP Cluster
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
 
* Node Name                                          [shakira]
  Communication Path to Node                         [shakiraboot]          +

Define Cluster Sites

We don't really use cluster sites in this example setup, but it makes sense to define them anyway: they give you the possibility to label your storage, and we will use these labels later when we create the application filesystems.
First site:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure HACMP Sites
         -> Add a Site
 
                                   Add a Site
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Site Name                                          [Datacenter1]          +
* Site Nodes                                          barney                +
* Dominance                                          [Yes]                  +
* Backup Communications                              [none]                  
Second site:
 
                                   Add a Site
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Site Name                                          [Datacenter2]          +
* Site Nodes                                          shakira               +
* Dominance                                          [No]                   +
* Backup Communications                              [none]                 +
The home node of our service shall be barney - that's why we set the Dominance to Yes for barney and to No for shakira.

Define a Cluster Network

Before we start with the network configuration we let HACMP try to discover the topology. Automatic discovery does not always work, but it's worth a try.
node1# smitty hacmp
-> Extended Configuration
   -> Discover HACMP-related Information from Configured Nodes
The network topology is used by HACMP for the heartbeat. First we configure heartbeat over ethernet:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure HACMP Networks
         -> Add a Network to the HACMP Cluster
 
  +--------------------------------------------------------------------------+
  |                          Select a Network Type                           |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   # Discovery last performed: (January 30 10:02)                         |
  |   # Discovered IP-based Network Types                                    |
  |   ether                                                                  |
  |                                                                          |
  |   # Discovered Serial Device Types                                       |
  |   rs232                                                                  |
  |                                                                          |
  |   # Pre-defined IP-based Network Types                                   |
  |   XD_data                                                                |
  |   XD_ip                                                                  |
  |   atm                                                                    |
  |   ether                                                                  |
  |   fddi                                                                   |
  |   hps                                                                    |
  |   ib                                                                     |
  |   token                                                                  |
  |                                                                          |
  |   # Pre-defined Serial Device Types                                      |
  |   XD_rs232                                                               |
  |   diskhb                                                                 |
  |   rs232                                                                  |
  |   tmscsi                                                                 |
  |   tmssa                                                                  |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
If you trust the automatic discovery select ether under "Discovered IP-based Network Types" - if not, select ether under "Pre-defined IP-based Network Types". The latter always works, so it might be the better choice. In the next screen put in the correct netmask and activate the use of IP aliases for IP takeover:
                  Add an IP-Based Network to the HACMP Cluster
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
* Network Name                                       [net_ether_01]
* Network Type                                        ether
* Netmask                                            [255.255.255.0]         +
* Enable IP Address Takeover via IP Aliases          [Yes]                   +
  IP Address Offset for Heartbeating over IP Aliases []

Add a Communication Interface for Heartbeat

Based on the network definition we define the boot addresses as communication interface:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure HACMP Communication Interfaces/Devices
         -> Add Communication Interfaces/Devices
            -> Add Pre-defined Communication Interfaces and Devices
               -> Communication Interfaces
 
  +--------------------------------------------------------------------------+
  |                          Select a Network Name                           |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   ALL                                                                    |
  |   net_ether_01                                                           |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
Select net_ether_01 and fill the empty fields:
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* IP Label/Address                                   [barneyboot]              +
* Network Type                                        ether
* Network Name                                        net_ether_01
* Node Name                                          [barney]                  +
  Network Interface                                  [en8]
Do the same for the second node:
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* IP Label/Address                                   [shakiraboot]              +
* Network Type                                        ether
* Network Name                                        net_ether_01
* Node Name                                          [shakira]                  +
  Network Interface                                  [en8]
The network topology is setup now - time to synchronize the cluster:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Verification and Synchronization
 
                     HACMP Verification and Synchronization
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
* Verify, Synchronize or Both                        [Both]                  +
* Automatically correct errors found during          [No]                    +
  verification?
 
* Force synchronization if verification fails?       [No]                    +
* Verify changes only?                               [No]                    +
* Logging                                            [Standard]              +

Add Persistent IP Addresses

We want to have the IPs belonging to the hostnames of our two nodes to be persistent:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Topology Configuration
      -> Configure HACMP Persistent Node IP Labels/Addresses
         -> Add a Persistent Node IP Label
 
                     Add a Persistent Node IP Label/Address
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
  
                                                        [Entry Fields]
* Node Name                                            barney
* Network Name                                       [net_ether_01]          +
* Node IP Label/Address                              [barney]                +
  Prefix Length                                      []                       #
We do the same for shakira.
One thing is still missing: a default route. Since the persistent IP is defined within HACMP there is no default route defined in the ODM. After a reboot, however, the system comes up with its boot and persistent addresses. So we define a default route on both nodes:
node1+node2# chdev -l inet0 -a route=net,-hopcount,0,,0,10.111.111.1 -P
This sets 10.111.111.1 as the default gateway. We will activate the route later with the cluster start. In normal operation you don't have to touch the default route anymore.
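Because of the -P flag the route is only written to the ODM for now. You can verify the ODM entry right away and the active routing table later, once the cluster is up (a quick check, not part of the original procedure):
node1+node2# lsattr -El inet0 -a route
node1+node2# netstat -rn | grep default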

Storage Configuration

First we assign a PVID to every LUN we want to use for HACMP; afterwards we make the new PVIDs visible on the other node.
node1# chdev -l hdisk1 -a pv=yes
hdisk1 changed
node1# chdev -l hdisk2 -a pv=yes
hdisk2 changed
      :
      :
On node2 we have to remove the hdisks first and run cfgmgr again. Afterwards we see the same PVIDs as on node1:
node2# rmdev -dl hdisk1
hdisk1 deleted
      :
      :
node2# cfgmgr
node2# lspv
hdisk0          00c722bc389f170f                    rootvg          active
hdisk1          00f6418384f345d0                    None
hdisk2          00f6418384f34621                    None
hdisk3          00f6418384f3466c                    None
hdisk4          00f6418384f346b0                    None
hdisk5          00f6418384f346f2                    None
hdisk6          00f6418384f44fca                    None
hdisk7          00f6418384f45015                    None
hdisk8          00f6418384f45054                    None
hdisk9          00f6418384f4508f                    None
hdisk10         00f6418384f450ca                    None
hdisk11         00f6418384f34739                    None
hdisk12         00f6418384f450ff                    None
and we run the automatic discovery again:
node1# smitty hacmp
-> Extended Configuration
   -> Discover HACMP-related Information from Configured Nodes
Now we connect the LUNs to our cluster sites. For every LUN do the following:
node1# smitty hacmp
-> System Management (C-SPOC)
   -> HACMP Physical Volume Management
      -> Configure Disk/Site Locations for Cross-Site LVM Mirroring
         -> Add Disk/Site Definition for Cross-Site LVM Mirroring
 
   +--------------------------------------------------------------------------+
   |                                Site Names                                |
   |                                                                          |
   | Move cursor to desired item and press Enter.                             |
   |                                                                          |
   |   Datacenter1                                                            |
   |   Datacenter2                                                            |
   |                                                                          |
   | F1=Help                 F2=Refresh              F3=Cancel                |
   | F8=Image                F10=Exit                Enter=Do                 |
   | /=Find                  n=Find Next                                      |
   +--------------------------------------------------------------------------+
Select site Datacenter1 and select all LUNs located there in the next screen:
             Add Disk/Site Definition for Cross-Site LVM Mirroring
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Site Name                                           Datacenter1
* Disks PVID                                                                 +
Pressing F4 in the "Disks PVID" field gives you a list of all LUNs configured for HACMP - select the ones for site Datacenter1:
 
   +--------------------------------------------------------------------------+
   |                                Disks PVID                                |
   |                                                                          |
   | Move cursor to desired item and press F7.                                |
   |     ONE OR MORE items can be selected.                                   |
   | Press Enter AFTER making all selections.                                 |
   |                                                                          |
   | > 00f6418384f345d0 ( hdisk1 on all selected nodes )                      |
   | > 00f6418384f34621 ( hdisk2 on all selected nodes )                      |
   | > 00f6418384f3466c ( hdisk3 on all selected nodes )                      |
   | > 00f6418384f346b0 ( hdisk4 on all selected nodes )                      |
   | > 00f6418384f346f2 ( hdisk5 on all selected nodes )                      |
   | > 00f6418384f34739 ( hdisk11 on all selected nodes )                     |
   |   00f6418384f44fca ( hdisk6 on all selected nodes )                      |
   |   00f6418384f45015 ( hdisk7 on all selected nodes )                      |
   |   00f6418384f45054 ( hdisk8 on all selected nodes )                      |
   |   00f6418384f4508f ( hdisk9 on all selected nodes )                      |
   |   00f6418384f450ca ( hdisk10 on all selected nodes )                     |
   |   00f6418384f450ff ( hdisk12 on all selected nodes )                     |
   |                                                                          |
   | F1=Help                 F2=Refresh              F3=Cancel                |
   | F7=Select               F8=Image                F10=Exit                 |
   | Enter=Do                /=Find                  n=Find Next              |
   +--------------------------------------------------------------------------+
We repeat the procedure for the LUNs located in site Datacenter2.

Disk Heartbeat

Two of our LUNs are dedicated to disk heartbeat. Typically you use small LUN sizes here. If you're not sure which LUNs are the heartbeat LUNs, check their size with "bootinfo -s hdiskN".
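bootinfo -s reports the disk size in MB, so for a 1 GB heartbeat LUN you would expect output like this (hdisk11 is one of the heartbeat LUNs in this setup):
node1# bootinfo -s hdisk11
1024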
To protect the LUNs for disk heartbeat we create volume groups for them - a separate VG for each LUN:
node1# smitty hacmp
-> System Management (C-SPOC)
   -> HACMP Concurrent Logical Volume Management
      -> Concurrent Volume Groups
         -> Create a Concurrent Volume Group
 
   +--------------------------------------------------------------------------+
   |                                Node Names                                |
   |                                                                          |
   | Move cursor to desired item and press F7.                                |
   |     ONE OR MORE items can be selected.                                   |
   | Press Enter AFTER making all selections.                                 |
   |                                                                          |
   | > barney                                                                 |
   | > shakira                                                                |
   |                                                                          |
   | F1=Help                 F2=Refresh              F3=Cancel                |
   | F7=Select               F8=Image                F10=Exit                 |
   | Enter=Do                /=Find                  n=Find Next              |
   +--------------------------------------------------------------------------+
Select both nodes.
   +--------------------------------------------------------------------------+
   |                          Physical Volume Names                           |
   |                                                                          |
   | Move cursor to desired item and press F7.                                |
   |     ONE OR MORE items can be selected.                                   |
   | Press Enter AFTER making all selections.                                 |
   |                                                                          |
   |   00f6418384f345d0 ( hdisk1 on all selected nodes )                      |
   |   00f6418384f34621 ( hdisk2 on all selected nodes )                      |
   |   00f6418384f3466c ( hdisk3 on all selected nodes )                      |
   |   00f6418384f346b0 ( hdisk4 on all selected nodes )                      |
   |   00f6418384f346f2 ( hdisk5 on all selected nodes )                      |
   |   00f6418384f34739 ( hdisk11 on all selected nodes )                     |
   |   00f6418384f44fca ( hdisk6 on all selected nodes )                      |
   |   00f6418384f45015 ( hdisk7 on all selected nodes )                      |
   |   00f6418384f45054 ( hdisk8 on all selected nodes )                      |
   |   00f6418384f4508f ( hdisk9 on all selected nodes )                      |
   |   00f6418384f450ca ( hdisk10 on all selected nodes )                     |
   |   00f6418384f450ff ( hdisk12 on all selected nodes )                     |
   |                                                                          |
   | F1=Help                 F2=Refresh              F3=Cancel                |
   | F7=Select               F8=Image                F10=Exit                 |
   | Enter=Do                /=Find                  n=Find Next              |
   +--------------------------------------------------------------------------+
We select the small LUN from Datacenter1 and fill the empty fields in the next screen:
            Create a Concurrent Volume Group with Data Path Devices
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
  Node Names                                          barney, shakira
  PVID                                                00f6418384f34739
  VOLUME GROUP name                                  [hacmp_hb1]
  Physical partition SIZE in megabytes                4                      +
  Volume group MAJOR NUMBER                          [38]                     #
  Enhanced Concurrent Mode                            true                   +
  Enable Cross-Site LVM Mirroring Verification        false                  +
 
  Warning:
  Changing the volume group major number may result
  in the command being unable to execute
  successfully on a node that does not have the
  major number currently available.  Please check
  for a commonly available major number on all nodes|
  before changing this setting.
The same procedure has to be done for the second disk heartbeat LUN. We call the second volume group "hacmp_hb2".
Before we go on with the disk heartbeat configuration we let HACMP discover first...
node1# smitty hacmp
-> Extended Configuration
   -> Discover HACMP-related Information from Configured Nodes
Now we are ready to configure the disk heartbeat:
node1# smitty hacmp
-> Extended Topology Configuration
   -> Configure HACMP Communication Interfaces/Devices
      -> Add Communication Interfaces/Devices
         -> Add Discovered Communication Interface and Devices
            -> Communication Devices
 
  +--------------------------------------------------------------------------+
  |  Select Point-to-Point Pair of Discovered Communication Devices to Add   |
  |                                                                          |
  | Move cursor to desired item and press F7. Use arrow keys to scroll.      |
  |     ONE OR MORE items can be selected.                                   |
  | Press Enter AFTER making all selections.                                 |
  |                                                                          |
  |   # Node                     Device   Device Path    Pvid                |
  |     barney                   tty0     /dev/tty0                          |
  |     shakira                  tty0     /dev/tty0                          |
  | >   barney                   hdisk11  /dev/hdisk11   00f6418384f34739    |
  |     barney                   hdisk12  /dev/hdisk12   00f6418384f450ff    |
  | >   shakira                  hdisk11  /dev/hdisk11   00f6418384f34739    |
  |     shakira                  hdisk12  /dev/hdisk12   00f6418384f450ff    |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F7=Select               F8=Image                F10=Exit                 |
  | Enter=Do                /=Find                  n=Find Next              |
  +--------------------------------------------------------------------------+
We choose the first pair of disks and repeat the procedure for the second pair.

5. Resource Group Configuration

Before we actually define a resource group we prepare all the resources we need:

Application Volume Groups

The first resource we need is a high available application volume group:
node1# smitty hacmp
-> System Management (C-SPOC)
   -> HACMP Logical Volume Management
      -> Shared Volume Groups
         -> Create a Shared Volume Group with Data Path Devices
 
  +--------------------------------------------------------------------------+
  |                                Node Names                                |
  |                                                                          |
  | Move cursor to desired item and press F7.                                |
  |     ONE OR MORE items can be selected.                                   |
  | Press Enter AFTER making all selections.                                 |
  |                                                                          |
  | > barney                                                                 |
  | > shakira                                                                |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F7=Select               F8=Image                F10=Exit                 |
  | Enter=Do                /=Find                  n=Find Next              |
  +--------------------------------------------------------------------------+
Select both nodes as shown in the screen above and select the hdisks you need in the next screen. Choose one set of disks from Datacenter1 and one set of disks from Datacenter2. Unfortunately in this screen the location is not indicated. In this example we just select all available disks:
 
  +--------------------------------------------------------------------------+
  |                          Physical Volume Names                           |
  |                                                                          |
  | Move cursor to desired item and press F7.                                |
  |     ONE OR MORE items can be selected.                                   |
  | Press Enter AFTER making all selections.                                 |
  |                                                                          |
  | > 00f6418384f345d0 ( hdisk1 on all selected nodes )                      |
  | > 00f6418384f34621 ( hdisk2 on all selected nodes )                      |
  | > 00f6418384f3466c ( hdisk3 on all selected nodes )                      |
  | > 00f6418384f346b0 ( hdisk4 on all selected nodes )                      |
  | > 00f6418384f346f2 ( hdisk5 on all selected nodes )                      |
  | > 00f6418384f44fca ( hdisk6 on all selected nodes )                      |
  | > 00f6418384f45015 ( hdisk7 on all selected nodes )                      |
  | > 00f6418384f45054 ( hdisk8 on all selected nodes )                      |
  | > 00f6418384f4508f ( hdisk9 on all selected nodes )                      |
  | > 00f6418384f450ca ( hdisk10 on all selected nodes )                     |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F7=Select               F8=Image                F10=Exit                 |
  | Enter=Do                /=Find                  n=Find Next              |
  +--------------------------------------------------------------------------+
The next screen asks for the type of volume group. These days scalable VGs seem to be the best choice:
  +--------------------------------------------------------------------------+
  |                            Volume Group Type                             |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   Legacy                                                                 |
  |   Original                                                               |
  |   Big                                                                    |
  |   Scalable                                                               |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
After selecting disks and VG type we choose a name for the volume group:
              Create a Shared Volume Group with Data Path Devices
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
  Node Names                                          barney, shakira
  PVID                                                00f6418384f345d0 00f6>
  VOLUME GROUP name                                  [appl01vg]
  Physical partition SIZE in megabytes                128                    +
  Volume group MAJOR NUMBER                          [42]                     #
  Enable Cross-Site LVM Mirroring Verification        true                   +
 
  Warning:
  Changing the volume group major number may result
  in the command being unable to execute
  successfully on a node that does not have the
  major number currently available.  Please check
  for a commonly available major number on all nodes
  before changing this setting.
After confirming with Enter we are done with the VG and can go on with the application server.

Application Server

For the application servers we first need application start and stop scripts. The scripts are usually provided by the application owners and should meet at least two conditions (a minimal sketch follows below):
  • It should be no problem to run these scripts multiple times in succession.
  • Particularly the stop script should be robust, i.e. it should really be able to stop the application. If HACMP cannot unmount the filesystems, a manual takeover (aka resource group move) will fail.
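The following is only a minimal ksh sketch of such a pair of scripts. The application name, the appctl command, and the paths are placeholders and have to be replaced by whatever really starts and stops your application:
node1# cat /etc/hacmp/start_srv01
#!/bin/ksh
# Hypothetical start script for app_srv01 - adapt command and paths.
# Safe to run repeatedly: do nothing if the application is already up.
if /path/to/app/bin/appctl status >/dev/null 2>&1; then
    exit 0
fi
/path/to/app/bin/appctl start
exit 0
node1# cat /etc/hacmp/stop_srv01
#!/bin/ksh
# Hypothetical stop script - it must really bring the application down,
# otherwise HACMP cannot unmount the filesystems and a takeover fails.
/path/to/app/bin/appctl stop
sleep 10
# Kill any leftover processes that still hold files open in the
# application filesystem (mount point taken from this example).
fuser -kc /appl01/fs01 >/dev/null 2>&1
exit 0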
Once the scripts are in place we can configure the application server:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Resource Configuration
      -> HACMP Extended Resources Configuration
         -> Configure HACMP Applications
            -> Configure HACMP Application Servers
               -> Add an Application Server
 
                             Add Application Server
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
* Server Name                                        [app_srv01]
* Start Script                                       [/etc/hacmp/start_srv01]
* Stop Script                                        [/etc/hacmp/stop_srv01]
  Application Monitor Name(s)                                                +
In the above example the start/stop scripts are stored in a folder /etc/hacmp. But you can place them anywhere in the local filesystem tree. Don't place them on shared filesystems! Since the scripts are local we have to copy them over to the other node:
node1# scp -rp /etc/hacmp node2:/etc/
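Also make sure the scripts are executable on both nodes (just a sanity check, not part of the original procedure):
node1+node2# chmod 755 /etc/hacmp/start_srv01 /etc/hacmp/stop_srv01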

Cluster Service Address(es)

The cluster service address is the IP address that clients use to connect to the application. Therefore a service address moves with the resource group. You can define more than one service address per resource group. In this example we define only one service address.
Remember: we already defined the service address in /etc/hosts with the initial network setup.
node1# smitty hacmp
-> Extended Configuration
   -> Extended Resource Configuration
      -> HACMP Extended Resources Configuration
         -> Configure HACMP Service IP Labels/Addresses
            -> Add a Service IP Label/Address
 
  +--------------------------------------------------------------------------+
  |                  Select a Service IP Label/Address type                  |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   Configurable on Multiple Nodes                                         |
  |   Bound to a Single Node                                                 |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
As said before, the service address needs to move with the application - so we select "Configurable on Multiple Nodes" here.
 
    Add a Service IP Label/Address configurable on Multiple Nodes (extended)
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* IP Label/Address                                    haservice1               +
* Network Name                                        net_ether_01
  Alternate HW Address to accompany IP Label/Address []
Now that we have all resources in place we can finally define the resource group(s).

Define Resource Group(s)

node1# smitty hacmp
-> Extended Configuration
   -> Extended Resource Configuration
      -> HACMP Extended Resource Group Configuration
         -> Add a Resource Group
 
                        Add a Resource Group (extended)
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Resource Group Name                                [RG_01]
 
  Inter-Site Management Policy                       [ignore]                +
* Participating Nodes from Primary Site              [barney]                +
  Participating Nodes from Secondary Site            [shakira]               +
 
  Startup Policy                                      Online On Home Node O> +
  Fallover Policy                                     Fallover To Next Prio> +
  Fallback Policy                                     Never Fallback         +
In this panel we initially define the name of the resource group (RG_01 here). The policy definitions at the bottom are typical for two-node clusters, but you could choose different values here. For HACMP insiders: the above setup is the classic cascading setup.
Time again to let HACMP collect information:
node1# smitty hacmp
-> Extended Configuration
   -> Discover HACMP-related Information from Configured Nodes
Now we want to adjust some parameters of our resource group:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Resource Configuration
      -> HACMP Extended Resource Group Configuration 
         -> Change/Show Resources and Attributes for a Resource Group
 
  +--------------------------------------------------------------------------+
  |        Change/Show Resources and Attributes for a Resource Group         |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   RG_01                                                                  |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
      Change/Show All Resources and Attributes for a Custom Resource Group
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
[TOP]                                                   [Entry Fields]
  Resource Group Name                                 RG_01
  Inter-site Management Policy                        ignore
  Participating Nodes from Primary Site               barney
  Participating Nodes from Secondary Site             shakira
 
  Startup Policy                                      Online On Home Node O>
  Fallover Policy                                     Fallover To Next Prio>
  Fallback Policy                                     Never Fallback
 
  Service IP Labels/Addresses                        [haservice1]            +
  Application Servers                                [app_srv01]             +
 
  Volume Groups                                      [appl01vg]              +
  Use forced varyon of volume groups, if necessary    true                   +
  Automatically Import Volume Groups                  false                  +
 
  Filesystems (empty is ALL for VGs specified)       []                      +
  Filesystems Consistency Check                       logredo                +
  Filesystems Recovery Method                         sequential             +
  Filesystems mounted before IP configured            false                  +
  Filesystems/Directories to Export                  []                      +
                                                                             +
  Filesystems/Directories to NFS Mount               []
  Network For NFS Mount                              []                      +
[MORE...10]
In the above smit panel we assign our service address and the application server we just created (see Application Server above) and set the varyon policy to forced.
Finally we synchronize the cluster to the other node:
node1# smitty hacmp
-> Extended Configuration
   -> Extended Verification and Synchronization
 
                     HACMP Verification and Synchronization
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Verify, Synchronize or Both                        [Both]                  +
* Automatically correct errors found during          [No]                    +
  verification?
 
* Force synchronization if verification fails?       [No]                    +
* Verify changes only?                               [No]                    +
* Logging                                            [Standard]              +
At this point the cluster is synchronized and in a consistent state. Both nodes have the same information about the cluster setup.

Create LVs and Filesystems for Applications

We want to use CSPOC to create the application filesystems. In order to make use of CSPOC we first start hacmp on both nodes:
node1+node2# smitty clstart
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Start now, on system restart or both                now                    +
  Start Cluster Services on these nodes              [barney]¹               +
  BROADCAST message at startup?                       true                   +
  Startup Cluster Information Daemon?                 True                   +
  Reacquire resources after forced down ?             false                  +
  Ignore verification errors?                         false                  +
  Automatically correct errors found during           Interactively          +
  cluster start?
To activate the route we defined earlier (see Add Persistent IP Addresses above) we issue the command
node1+node2# mkdev -l inet0

¹ shakira on the other node.
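After smitty clstart it can take a minute or two until the cluster reaches a stable state. One way to check is the long listing of the cluster manager subsystem, which should eventually report a state of ST_STABLE (a quick check, not part of the original procedure):
node1+node2# lssrc -ls clstrmgrES | grep -i state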
Once the cluster is up we go on with creating LVs and filesystems. If you don't want to use inline jfs2 logs, a log device has to be created first (if you don't do this, a log LV called loglv00 will be created automatically with the first filesystem). The procedure to create a log LV is the same as for a regular logical volume, with two exceptions:
  • Use jfs2log as Logical volume TYPE
  • Don't forget to format the jfs2log:
node1# logform /dev/applvg01_jfs2log
logform: destroy /dev/rapplvg01_jfs2log (y)?y
Refer to the next section on how to create the LV applvg01_jfs2log and remember to set the right Logical volume TYPE.
Now we are ready to create the application filesystems. The example below shows how to create one filesystem. Repeat the steps until all filesystems are set up. Remember to create a jfs2log for each volume group first (if you don't use inline logs).
node1# smitty hacmp
-> System Management (C-SPOC)
   -> HACMP Logical Volume Management
      -> Shared Logical Volumes
         -> Add a Shared Logical Volume
 
  +--------------------------------------------------------------------------+
  |                        Shared Volume Group Names                         |
  |                                                                          |
  | Move cursor to desired item and press Enter. Use arrow keys to scroll.   |
  |                                                                          |
  |   #Resource Group                         Volume Group                   |
  |    RG_01                                  appl01vg                       |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
Select a pair of disks, one from each site, and mark them with F7:
 
  +--------------------------------------------------------------------------+
  |                          Physical Volume Names                           |
  |                                                                          |
  | Move cursor to desired item and press F7.                                |
  |     ONE OR MORE items can be selected.                                   |
  | Press Enter AFTER making all selections.                                 |
  |                                                                          |
  |   Auto-select                                                            |
  | > barney hdisk1          Datacenter1                                     |
  |   barney hdisk2          Datacenter1                                     |
  |   barney hdisk3          Datacenter1                                     |
  |   barney hdisk4          Datacenter1                                     |
  |   barney hdisk5          Datacenter1                                     |
  | > barney hdisk6          Datacenter2                                     |
  |   barney hdisk7          Datacenter2                                     |
  |   barney hdisk8          Datacenter2                                     |
  |   barney hdisk9          Datacenter2                                     |
  |   barney hdisk10         Datacenter2                                     |
  |                                                                          |
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F7=Select               F8=Image                F10=Exit                 |
  |                                                                          |
  +--------------------------------------------------------------------------+
Warning: Don't use Auto-select here - although we assigned LUNs to sites it's not guaranteed that CSPOC selects LUNs from different sites!
                          Add a Shared Logical Volume
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
  Resource Group Name                                 RG_01
  VOLUME GROUP name                                   appl01vg
  Reference node                                      barney
* Number of LOGICAL PARTITIONS                       [80]                     #
  PHYSICAL VOLUME names                               hdisk1  hdisk6
  Logical volume NAME                                [applv01]
  Logical volume TYPE                                [jfs2]                  +
  POSITION on physical volume                         middle                 +
  RANGE of physical volumes                           minimum                +
  MAXIMUM NUMBER of PHYSICAL VOLUMES                 []                       #
    to use for allocation
  Number of COPIES of each logical                    2                      +
    partition
  Mirror Write Consistency?                           active                 +
  Allocate each logical partition copy                strict                 +
   on a SEPARATE physical volume?
 
  RELOCATE the logical volume during reorganization?  yes                    +
  Logical volume LABEL                               []
  MAXIMUM NUMBER of LOGICAL PARTITIONS               [512]
  Enable BAD BLOCK relocation?                        no                     +
  SCHEDULING POLICY for reading/writing               parallel               +
    logical partition copies
  Enable WRITE VERIFY?                                no                     +
  Stripe Size?                                       [Not Striped]           +
On the just created LV we create a filesystem:
node1# smitty hacmp
-> System Management (C-SPOC)
   -> HACMP Logical Volume Management
      -> Shared File Systems
         -> Enhanced Journaled File Systems
            -> Add an Enhanced Journaled File System on a Previously Defined Logical Volume
 
  +--------------------------------------------------------------------------+
  |                           Logical Volume Names                           |
  |                                                                          |
  | Move cursor to desired item and press Enter.                             |
  |                                                                          |
  |   applv01  barney,shakira                                                |
  |                                                                          |  
  | F1=Help                 F2=Refresh              F3=Cancel                |
  | F8=Image                F10=Exit                Enter=Do                 |
  | /=Find                  n=Find Next                                      |
  +--------------------------------------------------------------------------+
 
  Add an Enhanced Journaled File System on a Previously Defined Logical Volume
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
  Node Names                                          barney,shakira
  LOGICAL VOLUME name                                 applv01
* MOUNT POINT                                        [/appl01/fs01]
  PERMISSIONS                                         read/write             +
  Mount OPTIONS                                      []                      +
  Block Size (bytes)                                  4096                   +
  Inline Log?                                         no                     +
  Inline Log size (MBytes)                           []                       #
Repeat the steps until all filesystems are set up. Our cluster is now ready for use.


Appendix


A. Failover Test

A cluster failover test is typically done in three or four phases:

1. Manual Failover

The manual failover is the most important test for a cluster configuration. This test can be invoked on one node by
node1# smitty clstop
 
                               Stop Cluster Services
 
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
 
                                                        [Entry Fields]
* Stop now, on system restart or both                 now                    +
  Stop Cluster Services on these nodes               [barney]                +
  BROADCAST cluster shutdown?                         true                   +
* Select an Action on Resource Groups                 Move Resource Groups   +
When stopping cluster services on node 1 the first thing executed is the application stop script. It brings down the application, and HACMP then unmounts all application filesystems. If your application stop script is not able to stop all application processes, some filesystems can't be unmounted and the failover fails.
When all resources are down on node 1, HACMP starts to bring up all resources on node 2. The application start script is the last thing HACMP does.
Check that your application is working properly and that all clients can connect. If so the first phase of the failover test is completed.

2. Manual Failback

Switch the resources back to the home node. Again check if everything is fine.
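If cluster services are already running on both nodes again, the resource group can also be moved back explicitly with clRGmove instead of another stop/start cycle (a sketch using the names from this example; see also Useful Commands below):
node1# /usr/sbin/cluster/utilities/clRGmove -g RG_01 -n barney -m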

3. Automatic Failover

This test simulates a hardware failure on the active node. The easiest way to simulate it is to issue the command
node1# halt -q
on the active node. Check that everything will be brought up on node 2.
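On the surviving node you can verify that the resource group has come online, for example with clRGinfo (see Useful Commands below):
node2# /usr/sbin/cluster/utilities/clRGinfo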

4. Partial Hardware Failure

Sometimes only a component fails. Maybe a network switch fails or a storage system becomes unavailable. Test these scenarios to make sure that HACMP is correctly setup - and only starts a failover if needed. These tests also check if your VGs are correctly mirrored over two sites.

B. Disk Heartbeat Check

This is an example of how to check the disk heartbeat. On the first node we set the heartbeat disk to receive mode:
node1# /usr/sbin/rsct/bin/dhb_read -p /dev/hdisk11  -r
DHB CLASSIC MODE
First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
       Test byte offset: 64512
 
Receive Mode:
Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally
and on the other node we set the same disk to transmit mode...
node2# /usr/sbin/rsct/bin/dhb_read -p /dev/hdisk11 -t
DHB CLASSIC MODE
First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
       Test byte offset: 64512
 
Transmit Mode:
Magic number = 0x87654321
Detected remote utility in receive mode.  Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally
The last line in the above output indicates that the disk heartbeat is working properly.

C. Useful Commands

This is only a brief and selective list of commands that might be useful when working with HACMP:
  • Which node owns a resource group?
 # /usr/sbin/cluster/utilities/clRGinfo
 -----------------------------------------------------------------------------
 Group Name  State    Node
 -----------------------------------------------------------------------------
 RG_01       ONLINE   barney
             OFFLINE  shakira
  • Move a resource group to another node (resource group and target node are placeholders)
 # /usr/sbin/cluster/utilities/clRGmove -g <resource_group> -n <target_node> -m
  • Stop cluster services (on the current node)
 # smitty clstop
  • Start cluster services (on the current node)
 # smitty clstart
  • Overview of the cluster state
 # /usr/sbin/cluster/clstat -a
  • List the cluster log files
 # /usr/sbin/cluster/utilities/cllistlogs
 /var/hacmp/log/hacmp.out
 /var/hacmp/log/hacmp.out.1
 /var/hacmp/log/hacmp.out.2

D. »clstat« and »snmp«

clstat and cldump rely on SNMP to be configured properly. If cldump fails with a message like this:
 
  cldump: Waiting for the Cluster SMUX peer (clstrmgrES)
  to stabilize.............
  Unable to communicate with the Cluster SMUX Peer Daemon
 
then /etc/snmpdv3.conf has to be fixed by adding a line
 
  VACM_VIEW defaultView        1.3.6.1.4.1.2.3.1.2.1.5    - included -
 
snmpd has to be restarted:

# stopsrc -s snmpd
# startsrc -s snmpd
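Afterwards cldump should work again; assuming the same /usr/sbin/cluster path used elsewhere in this article, a quick re-check would be:
# /usr/sbin/cluster/utilities/cldump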
