HACMP: Cluster Commandline
Contents
Dynamic Reconfiguration (CSPOC)
Resource Group Management
Cluster Information
Cluster State
1. PowerHA Version
There is no specific command to get the PowerHA version. However, the version of the cluster.es.server.rte fileset reflects the PowerHA version:
# lslpp -Lqc cluster.es.server.rte | cut -d: -f3
6.1.0.3
If you don't trust the above method to determine the PowerHA version, you can also ask the cluster manager:
# lssrc -ls clstrmgrES | egrep '^local node vrmf|^cluster fix level'
local node vrmf is 6103
cluster fix level is "3"
Note: With Version 6.1 Service Pack 10 a new command has been introduced to show the PowerHA version: halevel -s.
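A quick example of the new command (assuming it lives under /usr/es/sbin/cluster/utilities; the output shown is illustrative for a 6.1 SP10 system, the exact format may differ):
# /usr/es/sbin/cluster/utilities/halevel -s
6.1.0 SP10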
2. Program Paths
The paths to the cluster commands are not included in the default PATH. It makes sense to extend the default PATH to include the cluster directories:
# export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc
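If you want this to survive a logout you could, for example, append the same export to root's profile (root's home directory defaults to / on AIX; adapt this to your environment):
# echo 'export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc' >> /.profile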
Dynamic Reconfiguration (CSPOC)
1. »cl_« versus »cli_« Commands
Most of the commands in this section are CSPOC commands found under /usr/es/sbin/cluster/sbin. They start with »cl_« followed by a well-known AIX LVM command. As a word of warning: these are used by the CSPOC SMIT panels and are not intended to be used directly from the command line. However, many administrators grabbed them from SMIT via F6 and use them directly from the command line for their daily business. They happen to work.
However, with HACMP 5.5 SP1 IBM introduced an "official" command line interface to CSPOC. You find these commands under /usr/es/sbin/cluster/cspoc. In contrast to the CSPOC commands used by SMIT, the official commands start with »cli_« (mind the i) followed by a well-known AIX LVM command. The official CLI commands have been introduced to provide a framework for batch scripts. Since these commands are intended to be used outside SMIT they may be the safer choice than the »cl_« commands.
The table below shows the most important LVM commands and their corresponding CSPOC commands:
AIX command   | SMIT/CSPOC command          | "official" CLI command
(/usr/sbin)   | (/usr/es/sbin/cluster/sbin) | (/usr/es/sbin/cluster/cspoc)
--------------|-----------------------------|-----------------------------
chfs          | cl_chfs                     | cli_chfs
chlv          | cl_chlv                     | cli_chlv
chvg          | cl_chvg                     | cli_chvg
crfs          | cl_crfs                     | cli_crfs
extendlv      | cl_extendlv                 | cli_extendlv
extendvg      | cl_extendvg                 | cli_extendvg
mirrorvg      | cl_mirrorvg                 | cli_mirrorvg
mklv          | cl_mklv                     | cli_mklv
mklvcopy      | cl_mklvcopy                 | cli_mklvcopy
mkvg          | cl_mkvg                     | cli_mkvg
reducevg      | cl_reducevg                 | cli_reducevg
rmfs          | cl_rmfs                     | cli_rmfs
rmlv          | cl_rmlv                     | cli_rmlv
rmlvcopy      | cl_rmlvcopy                 | cli_rmlvcopy
syncvg        | cl_syncvg                   | cli_syncvg
unmirrorvg    | cl_unmirrorvg               | cli_unmirrorvg
The syntax of the commands in one row of the above table is similar but not identical. For more information refer to IBM's PowerHA for AIX Cookbook, Chapter 7.4.6.
2. Extend a Volume Group
One or more PVs can be added to an existing volume group. Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node with the "-R" switch:
nodeA# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...
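Afterwards you can verify on the reference node that the new PVs really made it into the volume group, e.g. with the standard AIX command:
nodeA# lsvg -p VolumeGroup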
3. Reduce a Volume Group
nodeA# /usr/es/sbin/cluster/sbin/cl_reducevg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...
Again a reference node has to be specified.
4. Add a Filesystem to an Existing VG
nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -c'2' -a'e' -e'x' -u'2' -s's' VolumeGroup LPs hdiskA hdiskB
It is recommended to use the narrowest upper bound possible to keep the mirror consistent after a filesystem extension.
You could also use a map file to tell the command how to set up your LV:
nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -m MapFile VolumeGroup LPs
nodeA# /usr/es/sbin/cluster/cspoc/cli_chlv -e x -u'2' -c'2' LVName
The format of the map file is:
hdiskA:PP1
hdiskB:PP1
:
hdiskC:PP2
hdiskD:PP2
First put in all mappings for mirror copy 1, then add all mappings for mirror copy 2. The number of entries per mirror copy has to equal the number of LPs given at the command line. Be careful!
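A purely hypothetical example: for an LV with 3 LPs mirrored over hdisk4/hdisk5 (copy 1) and hdisk6/hdisk7 (copy 2) the map file could look like this (disk names and PP numbers are made up):
hdisk4:110
hdisk4:111
hdisk5:200
hdisk6:110
hdisk6:111
hdisk7:200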
Then we create the filesystem on top of the newly created LV:
nodeA# /usr/es/sbin/cluster/sbin/cl_crfs -cspoc -n'nodeA,nodeB' -v jfs2 -d'LVName' -m'/mountpoint' -p'rw' -a agblksize='4096' -a'logname=INLINE'
The CSPOC command automatically sets mount=false in /etc/filesystems and mounts the new filesystem right away (in case the resource group is online).
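You can verify the result with standard AIX tools; lsfs shows the new filesystem, and the mount attribute set by CSPOC is visible in /etc/filesystems (the mount point is just the placeholder from above):
nodeA# lsfs /mountpoint
nodeA# grep -p '/mountpoint:' /etc/filesystems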
5. Increase a Filesystem
If there are still enough free PPs in the existing volume group, the filesystem can be extended with a standard AIX command:
nodeA# chfs -a size=512G /mountpoint
Be sure that superstrictness is set and that the upper bound is set correctly.
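Both settings can be checked with lslv on the node owning the VG; look at the "UPPER BOUND" and "EACH LP COPY ON A SEPARATE PV ?" fields:
nodeA# lslv LVName | grep -E 'UPPER BOUND|SEPARATE PV'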
In case you need to add new disks to the VG (see 1. Extend a Volume Group above) in order to be able to extend the filesystem, you need to extend the underlying LV first:
nodeA# /usr/es/sbin/cluster/sbin/cl_extendlv -R'node' -u'8' -m'MapFile' LVName LPs
nodeA# chfs -a size=512G /mountpoint
The upper bound has to be adapted to the new number of PVs. The map file must only contain the additional mappings. You might need to increase the maximum number of LPs for the LV first:
nodeA# /usr/es/sbin/cluster/sbin/cl_chlv -x'2048' LVName
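Whether that is necessary at all can be seen from the "MAX LPs" field in the lslv output:
nodeA# lslv LVName | grep 'MAX LPs'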
6. Mirror a Logical Volume
# /usr/es/sbin/cluster/sbin/cl_mklvcopy -R'NODE' -e'x' -u'1' -s's' LVName 2 hdiskA hdiskB hdisk...
Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node. You only need to set superstrictness and the upper bound if they are not already set.
It is also possible to use a map file (format as shown above) to control the exact mirror location:
# /usr/es/sbin/cluster/sbin/cl_mklvcopy -m'/root/LVNAME.map' LVName 2
With map files a reference node cannot be specified, so be sure you work on the right node!
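One way to make sure of that is to check that the volume group is actually varied on (active) on the node you are working on:
# lsvg -o | grep VolumeGroup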
7. Remove a Logical Volume Mirror
# /usr/es/sbin/cluster/sbin/cl_rmlvcopy -R'NODE' LVName 1 hdiskA hdiskB hdisk...
Again a reference node has to be specified.
8. Synchronize a Logical Volume Mirror
# /usr/es/sbin/cluster/cspoc/cli_syncvg -P 4 -v VGNAME
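Whether a resync is needed (and whether it has completed) can be seen from the stale partition counters in the standard lsvg output:
# lsvg VGNAME | grep -i stale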
9. Remove a Logical Volume
# /usr/es/sbin/cluster/sbin/cl_rmlv 'LVName'
Before you can use this command the logical volume has to be brought into the closed state, i.e. the filesystem has to be unmounted.
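A minimal sketch of that preparation on the node owning the VG (mount point and names are just placeholders): unmount the filesystem and confirm the LV shows up as closed:
# umount /mountpoint
# lsvg -l VolumeGroup | grep LVName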
Resource Group Management
1. List all Defined Resource Groups
# /usr/es/sbin/cluster/utilities/cllsgrp
RG1
RG2
2. Move a Resource Group to another Node
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -m
It is also possible to move multiple resource groups in one go:
# /usr/es/sbin/cluster/utilities/clRGmove -g "RG1,RG2,RG3" -n NODE -m
3. Bring a Resource Group Down
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -d
4. Bring a Resource Group Up
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -u
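Whichever of the above operations you run, the result can be verified with clRGinfo (see the Cluster State section below):
# /usr/es/sbin/cluster/utilities/clRGinfo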
Cluster Information
1. Where Can I Find the Log Files?
Historically the cluster's main log file could be found under "/tmp/hacmp.out". Nowadays the location is configurable. If you don't know where to look, run:
# /usr/es/sbin/cluster/utilities/cllistlogs
/var/hacmp/log/hacmp.out
/var/hacmp/log/hacmp.out.1
/var/hacmp/log/hacmp.out.2
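During failover testing it is often handy to follow the most recent of these logs live, for example:
# tail -f /var/hacmp/log/hacmp.out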
2. Where Can I Find the Application Start/Stop Scripts?
# /usr/es/sbin/cluster/utilities/cllsserv
AppSrv1 /etc/cluster/start_appsrv1 /etc/cluster/stop_appsrv1
AppSrv2 /etc/cluster/start_appsrv2 /etc/cluster/stop_appsrv2
3. Show the Configuration of a Particular Resource Group
# /usr/es/sbin/cluster/utilities/clshowres -g RG
If you are interested in the configuration of all resource groups you can use clshowres without any option:
# /usr/es/sbin/cluster/utilities/clshowres
4. Cluster IP Configuration
# /usr/es/sbin/cluster/utilities/cllsif
Cluster State
1. Cluster State
The most widely known tool to check the cluster state is probably clstat:
# /usr/es/sbin/cluster/clstat -a
The switch "-a" forces clstat to run in terminal mode rather to than open an X window.
The same information can be obtained with
# /usr/es/sbin/cluster/utilities/cldump
2. The Cluster Manager
If for whatever reason SNMP does not allow you to use clstat or cldump, you can still ask the cluster manager about the state of your cluster:
# lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.101 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 53haes_r610 11/16/10 06:18:14"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 11
local node vrmf is 6103
cluster fix level is "3"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 1 NodeName - barney
PgSpFree = 16743822 PvPctBusy = 3 PctTotalTimeIdle = 90.121875
DNP Values for NodeId - 2 NodeName - betty
PgSpFree = 16746872 PvPctBusy = 0 PctTotalTimeIdle = 97.221894
3. Where are the Resources Currently Active?
# /usr/es/sbin/cluster/utilities/clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
RES_GRP_01 ONLINE barney
OFFLINE betty
RES_GRP_02 ONLINE betty
OFFLINE barney