Tanti Technology

Bangalore, Karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments, with 4.5+ years of experience in AIX system administration. This site is meant to help system administrators in their day-to-day activities. Your comments on posts are welcome. This blog is all about the IBM AIX flavour of Unix and is aimed at system admins who use AIX in their work life, as well as newcomers who want to get certified in AIX administration. It will be updated frequently to help system admins and other new learners. DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any data loss or damage caused by trying any of the commands/methods mentioned in this blog. You use the commands/methods/scripts at your own responsibility. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.

Thursday, 14 November 2013

Fixing a broken fileset issue in HACMP

We updated HACMP from 5.3 to 5.5 and are now seeing the following from these
commands:

# lppchk -v ==> The 5.3 versions of three HACMP filesets show up as "broken"
cluster.es.cspoc.cmds
cluster.es.cspoc.dsh
cluster.es.cspoc.rte

# lslpp -l | grep cluster.es.cspoc ==> Only 5.5 versions show up

We first tar up the ODM as a backup:
# cd /
# tar -cvf /tmp/odm.tar ./etc/objrepos ./usr/lib/objrepos
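
Should anything go wrong with the odmdelete steps further down, the saved copy can be restored from that archive (a minimal sketch; this simply extracts the saved ODM files back over the current ones, so use it only as a last resort):

# cd /
# tar -xvf /tmp/odm.tar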


The cluster filesets were upgraded to HACMP 5.5, but the install gave
messages that the following filesets are broken:

cluster.es.cspoc.* 5.3


# export ODMDIR=/usr/lib/objrepos
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp

# lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  bos.txt.bib.data 4.1.0.0                (not installed; requisite fileset)
  cluster.es.cspoc.cmds 5.3.0.3           (BROKEN)
  cluster.es.cspoc.dsh 5.3.0.0            (BROKEN)
  cluster.es.cspoc.rte 5.3.0.3            (BROKEN)
# export ODMDIR=/usr/lib/objrepos
# odmget -q "lpp_name=cluster.es.cspoc.cmds and rel=3" product

product:
        lpp_name = "cluster.es.cspoc.cmds"
        comp_id = "5765-F6200"
        update = 0
        cp_flag = 273
        fesn = ""
        name = "cluster.es.cspoc"
        state = 10
        ver = 5
        rel = 3
        mod = 0
        fix = 0
        ptf = ""
        media = 3
        sceded_by = ""
        fixinfo = ""
        prereq = "*coreq cluster.es.cspoc.rte 5.3.0.0\n\
"
        description = "ES CSPOC Commands"
        supersedes = ""

product:
        lpp_name = "cluster.es.cspoc.cmds"
        comp_id = "5765-F6200"
        update = 1
        cp_flag = 289
        fesn = ""
        name = "cluster.es.cspoc"
        state = 7
        ver = 5
        rel = 3
        mod = 0
        fix = 3
        ptf = ""
        media = 3
        sceded_by = ""
        fixinfo = ""
        prereq = "*ifreq cluster.es.cspoc.rte (5.3.0.0) 5.3.0.1\n\
*ifreq cluster.es.server.diag (5.3.0.0) 5.3.0.1\n\
*ifreq cluster.es.server.rte (5.3.0.0) 5.3.0.1\n\
"
        description = "ES CSPOC Commands"
        supersedes = ""
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp

lpp:
        name = "cluster.es.cspoc.cmds"
        size = 0
        state = 7
        cp_flag = 273
        group = ""
        magic_letter = "I"
        ver = 5
        rel = 3
        mod = 0
        fix = 0
        description = "ES CSPOC Commands"
        lpp_id = 611

# odmdelete -q lpp_id=611 -o lpp
# odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
2 objects deleted
# odmdelete -q lpp_id=611 -o lpp
1 objects deleted
# odmdelete -q lpp_id=611 -o inventory
199 objects deleted
# odmdelete -q lpp_id=611 -o history
4 objects deleted

We can clean up the lppchk -v "BROKEN" entries by doing the following:

Getting the lpp_id's:
# export ODMDIR=/usr/lib/objrepos
# odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp | grep lpp_id
        lpp_id = 611
# odmget -q "name=cluster.es.cspoc.dsh and rel=3" lpp | grep lpp_id
        lpp_id = 604
# odmget -q "name=cluster.es.cspoc.rte and rel=3" lpp | grep lpp_id
        lpp_id = 610

Deleting the 5.3 entries:
# export ODMDIR=/usr/lib/objrepos
# odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
# odmdelete -q lpp_id=611 -o lpp
# odmdelete -q lpp_id=611 -o inventory
# odmdelete -q lpp_id=611 -o history
# odmdelete -q "lpp_name=cluster.es.cspoc.dsh and rel=3" -o product
# odmdelete -q lpp_id=604 -o lpp
# odmdelete -q lpp_id=604 -o inventory
# odmdelete -q lpp_id=604 -o history
# odmdelete -q "lpp_name=cluster.es.cspoc.rte and rel=3" -o product
# odmdelete -q lpp_id=610 -o lpp
# odmdelete -q lpp_id=610 -o inventory
# odmdelete -q lpp_id=610 -o history
# export ODMDIR=/etc/objrepos

That will leave you with this:
# lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  bos.txt.bib.data 4.1.0.0                (not installed; requisite fileset)

For that entry to go away, you'll need to install that fileset from Volume 1
of your AIX installation media.

--------------------------

I followed your procedure and got the following results.  It appears I
don't end up with "bos.txt.bib.data 4.1.0.0" needing to be installed.  I
did notice two of the inventory commands deleting large numbers of
objects and would like to know if that is a potential issue.  Everything
else looks great.
 

oxxxxxxx:/te/root> export ODMDIR=/usr/lib/objrepos
oxxxxxxx:/te/root> lppchk -v
lppchk:  The following filesets need to be installed or corrected to bring
         the system to a consistent state:

  cluster.es.cspoc.cmds 5.3.0.3           (BROKEN)
  cluster.es.cspoc.dsh 5.3.0.0            (BROKEN)
  cluster.es.cspoc.rte 5.3.0.3            (BROKEN)

oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.cmds and rel=3" lpp | grep lpp_id
        lpp_id = 611
oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.dsh and rel=3" lpp | grep lpp_id
        lpp_id = 604
oxxxxxxx:/te/root> odmget -q "name=cluster.es.cspoc.rte and rel=3" lpp | grep lpp_id
        lpp_id = 610

oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.cmds and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o inventory
0518-307 odmdelete: 199 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=611 -o history
0518-307 odmdelete: 4 objects deleted.
oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.dsh and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o inventory
0518-307 odmdelete: 3 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=604 -o history
0518-307 odmdelete: 2 objects deleted.
oxxxxxxx:/te/root> odmdelete -q "lpp_name=cluster.es.cspoc.rte and rel=3" -o product
0518-307 odmdelete: 0 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o lpp
0518-307 odmdelete: 1 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o inventory
0518-307 odmdelete: 53 objects deleted.
oxxxxxxx:/te/root> odmdelete -q lpp_id=610 -o history
0518-307 odmdelete: 4 objects deleted.
oxxxxxxx:/te/root> export ODMDIR=/etc/objrepos
oxxxxxxx:/te/root> lppchk -v
oxxxxxxx:/te/root>

config_too_long error in HACMP

config_too_long is an indication that event processing takes longer than
the configured expected time for event execution. If the execution
time of an event exceeds a preset timer, the message config_too_long is
logged periodically. There are several reasons why a config_too_long
event may be logged:

1. A large number of resources are processed during the event. In this
case, the event processing time may be within expected tolerances, hence
normal. To avoid being unnecessarily alerted by these messages, you may
extend the configured time until warning as follows:

smitty hacmp
 Extended Configuration
   Extended Event Configuration
     Change/Show Time Until Warning

2. Event processing is slow due to performance degradation or errors in
other components. Performance would need to be analyzed step by step.

3. Event processing hangs

In cases 1 and 2, config_too_long messages will occur intermittently,
interleaved with output from the event scripts on the nodes where events are
run. In case 3, config_too_long messages will occur without any further
logging of event processing.
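
To see whether these warnings are actually being logged on a node, a quick check of the event log is usually enough (a minimal sketch; the hacmp.out location varies by HACMP level, /tmp/hacmp.out on many 5.x systems):

# grep -c config_too_long /tmp/hacmp.out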


HACMP verification and Synchronization


A few points about HACMP verification and synchronization that I think some people have doubts about.

Verifying and synchronizing your HACMP cluster assures you that all resources used by HACMP are configured appropriately and that rules regarding resource ownership and resource takeover are in agreement across all nodes. You should verify and synchronize your cluster configuration after making any change within a cluster, for example any change to the hardware, operating system, node configuration, or cluster configuration.

Whenever you configure, reconfigure, or update a cluster, run the cluster verification procedure to ensure that all nodes agree on the cluster topology, network configuration, and the ownership and takeover of HACMP resources. If the verification succeeds, the configuration can be synchronized. Synchronization takes effect immediately on an active cluster. A dynamic reconfiguration event is run and the changes are committed to the active cluster.


Note :
 If you are using the SMIT Initialization and Standard Configuration path, synchronization automatically  follows a successful verification. If you are using the Extended Configuration path, you have more options for types of verification. If you are using the Problem Determination Tools path, you can choose whether to synchronize or not.

Typically, verification output is logged to /var/hacmp/clverify/clverify.log.
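
While a verification is running, it can be handy to watch that log in a second session (a simple sketch using the path above):

# tail -f /var/hacmp/clverify/clverify.log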



Running Cluster Verification
After making a change to the cluster, you can perform cluster verification in several ways.

These methods include:

Automatic verification:
 You can automatically verify your cluster:
       Each time you start cluster services on a node
       Each time a node rejoins the cluster
       Every 24 hours.

       By default, automatic verification is enabled to run at midnight.


Manual verification:
 Using the SMIT interface,
       you can either verify the complete configuration,
       or only the changes made since the last time the utility was run.

       Typically, you should run verification whenever you add or change anything in your
       cluster configuration. For detailed instructions, see Verifying the HACMP configuration
       using SMIT.
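
       For reference, a manual verification and synchronization is typically
       started from SMIT roughly as follows (menu wording may differ slightly
       between HACMP levels, so treat this as a sketch):

       smitty hacmp
         Extended Configuration
           Extended Verification and Synchronization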

Automatic Verification:
 You can disable automatic verification during cluster startup under
 Extended Configuration >> Extended Cluster Service Settings, but do not do this unless advised.


Understanding the Verification Process

The phases of the verification and synchronization process are as follows:

Verification
Snapshot (optional)
Synchronization.


Phase one: Verification
During the verification process, the default system configuration directory (DCD) is compared
with the active configuration directory (ACD). On an inactive cluster node, the verification
process compares the local DCD across all nodes. On an active cluster node, verification
propagates a copy of the active configuration to the joining nodes.

If a node that was previously synchronized has a DCD that does not match the ACD of an already
active cluster node, the ACD of the active node is propagated to the joining node. This new information
does not replace the DCD of the joining node; it is stored in a temporary directory for the purpose
of running verification against it.

HACMP displays progress indicators as the verification is performed.

Note: When you attempt to start a node that has an invalid cluster configuration, HACMP transfers a
valid configuration database data structure to it, which may consume 1-2 MB of disk space. If the
verification phase fails, cluster services will not start.

Phase two: (Optional) Snapshot
A snapshot is only taken if a node's request to start requires an updated configuration. During the
snapshot phase of verification, HACMP records the current cluster configuration to a snapshot file
for backup purposes. HACMP names this snapshot file according to the date of the snapshot and the
name of the cluster. Only one snapshot is created per day. If a snapshot file exists and its filename
contains the current date, it will not be overwritten.

This snapshot is written to the /usr/es/sbin/cluster/snapshots/ directory.

The snapshot filename uses the syntax MM-DD-YYYY-ClusterName-autosnap.odm. For example, a snapshot
taken on April 2, 2006 on a cluster hacluster01 would be named
/usr/es/sbin/cluster/snapshots/04-02-2006-hacluster01-autosnap.odm.
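
To see which automatic snapshots already exist, a quick listing of the directory mentioned above is enough:

# ls -lt /usr/es/sbin/cluster/snapshots/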

Phase three: Synchronization
During the synchronization phase of verification, HACMP propagates information to all cluster nodes.
For an inactive cluster node, the DCD is propagated to the DCD of the other nodes. For an active
cluster node, the ACD is propagated to the DCD.

If the process succeeds, all nodes are synchronized and cluster services start. If synchronization
fails, cluster services do not start and HACMP issues an error.


Conditions that can trigger Corrective Action:

https://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.hacmp.admngd/ha_admin_trigger_corrective.htm

This topic discusses conditions that can trigger a corrective action.

HACMP shared volume group time stamps are not up-to-date on a node
If the shared volume group time stamp file does not exist on a node, or the time stamp files do not match on all nodes, the corrective action ensures that all nodes have the latest up-to-date VGDA time stamp for the volume group and imports the volume group on all cluster nodes where the shared volume group was out of sync with the latest volume group changes. The corrective action ensures that volume groups whose definitions have changed will be properly imported on a node that does not have the latest definition.

The /etc/hosts file on a node does not contain all HACMP-managed IP addresses
If an IP label is missing, the corrective action modifies the file to add the entry and saves a copy of the old version to /etc/hosts.date. If a backup file already exists for that day, no additional backups are made for that day.

Verification does the following:

If the /etc/hosts entry exists but is commented out, verification adds a new entry; comment lines are ignored.
If the label specified in the HACMP Configuration does not exist in /etc/hosts , but the IP address is defined in /etc/hosts, the label is added to the existing /etc/hosts entry. If the label is different between /etc/hosts and the HACMP configuration, then verification reports a different error message; no corrective action is taken.
If the entry does not exist, meaning both the IP address and the label are missing from /etc/hosts, then the entry is added. This corrective action takes place on a node-by-node basis. If different nodes report different IP labels for the same IP address, verification catches these cases and reports an error. However, this error is unrelated to this corrective action. Inconsistent definitions of an IP label defined to HACMP are not corrected.

SSA concurrent volume groups need unique SSA node numbers
If verification finds that the SSA node numbers are not unique, the corrective action changes the number of one of the nodes where the number is not unique. See the Installation Guide for more information on SSA configuration.

A file system is not created on a node, although disks are available
If a file system has not been created on one of the cluster nodes, but the volume group is available, the corrective action creates the mount point and file system. The file system must be part of a resource group for this action to take place. In addition, the following conditions must be met:

This is a shared volume group.
The volume group must already exist on at least one node.
One or more node(s) that participate in the resource group where the file system is defined must already have the file system created.
The file system must already exist within the logical volume on the volume group in such a way that simply re-importing that volume group would acquire the necessary file system information.
The mount point directory must already exist on the node where the file system does not exist.
The corrective action handles only those mount points that are on a shared volume group, such that exporting and re-importing of the volume group will acquire the missing file systems available on that volume group. The volume group is varied off on the remote node(s), or the cluster is down and the volume group is then varied off if it is currently varied on, prior to executing this corrective action.

If Mount All File Systems is specified in the resource group, the node with the latest time stamp is used to compare the list of file systems that exists on that node with other nodes in the cluster. If any node is missing a file system, then HACMP imports the file system.

Disks are available, but the volume group has not been imported to a node
If the disks are available but the volume group has not been imported to a node that participates in a resource group where the volume group is defined, then the corrective action imports the volume group.

The corrective action gets the information regarding the disks and the volume group major number from a node that already has the volume group available. If the major number is unavailable on a node, the next available number is used.

The corrective action is only performed under the following conditions:

The cluster is down.
The volume group is varied off if it is currently varied on.
The volume group is defined as a resource in a resource group.
The major number and associated PVIDS for the disks can be acquired from a cluster node that participates in the resource group where the volume group is defined.
Note: This functionality will not turn off the auto varyon flag if the volume group has the attribute set. A separate corrective action handles auto varyon.

Shared volume groups configured as part of an HACMP resource group have their automatic varyon attribute set to Yes.
If verification finds that a shared volume group inadvertently has the auto varyon attribute set to Yes on any node, the corrective action automatically sets the attribute to No on that node.

Required /etc/services entries are missing on a node.
If a required entry is commented out, missing, or invalid in /etc/services on a node, the corrective action adds it. Required entries are:

Name            Port  Protocol
topsvcs         6178  udp
grpsvcs         6179  udp
clinfo_deadman  6176  udp
clcomd          6191  tcp
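
A quick way to confirm these entries are present (and not commented out) on a node, separate from the corrective action itself:

# grep -E "topsvcs|grpsvcs|clinfo_deadman|clcomd" /etc/services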

Required HACMP snmpd entries are missing on a node
If a required entry is commented out, missing, or invalid on a node, the corrective action adds it.

Note: The default version of the snmpd.conf file for AIX® is snmpdv3.conf.
In /etc/snmpdv3.conf or /etc/snmpd.conf, the required HACMP snmpd entry is:

smux   1.3.6.1.4.1.2.3.1.2.1.5   clsmuxpd_password # HACMP/ES for AIX clsmuxpd
In /etc/snmpd.peers, the required HACMP snmpd entry is:

clsmuxpd   1.3.6.1.4.1.2.3.1.2.1.5 "clsmuxpd_password" # HACMP/ES for AIX clsmuxpd
If changes are required to the /etc/snmpd.peers or snmpd[v3].conf file, HACMP creates a backup of the original file. A copy of the pre-existing version is saved prior to making modifications in the file /etc/snmpd.{peers | conf}.date. If a backup has already been made of the original file, then no additional backups are made.

HACMP makes one backup per day for each snmpd configuration file. As a result, running verification a number of times in one day only produces one backup file for each file modified. If no configuration files are changed, HACMP does not make a backup.
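
To check the current entries manually on a node (a sketch; only the files that exist on your system will produce output):

# grep clsmuxpd /etc/snmpdv3.conf /etc/snmpd.conf /etc/snmpd.peers 2>/dev/null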

Required RSCT network options settings
HACMP requires that the nonlocsrcroute, ipsrcroutesend, ipsrcrouterecv, and ipsrcrouteforward network options be set to 1; these are set by RSCT's topsvcs startup script. The corrective action run on inactive cluster nodes ensures these options are not disabled and are set correctly.

Required HACMP network options setting
The corrective action ensures that the value of each of the following network options is consistent across all nodes in a running cluster (out-of-sync setting on any node is corrected):

tcp_pmtu_discover
udp_pmtu_discover
ipignoreredirects

Required routerevalidate network option setting
Changing hardware and IP addresses within HACMP changes and deletes routes. Because AIX caches routes, setting the routerevalidate network option is required as follows:

no -o routerevalidate=1
This setting ensures the maintenance of communication between cluster nodes. Verification run with corrective action automatically adjusts this setting for nodes in a running cluster.
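
To review the current values of the network options mentioned above on a node, a read-only check with the no command is enough:

# no -a | egrep "nonlocsrcroute|ipsrcroute|pmtu_discover|ipignoreredirects|routerevalidate"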

Note: No corrective actions take place during a dynamic reconfiguration event.
Corrective actions when using IPv6
If you configure an IPv6 address, the verification process can perform two more corrective actions:

Neighbor discovery (ND). Network interfaces must support this protocol, which is specific to IPv6. The underlying network interface card is checked for compatibility with ND, and the ND-related daemons will be started.

Configuration of link local (LL) addresses. A special link local address is required for every network interface that will be used with IPv6 addresses. If an LL address is not present, the autoconf6 program will be run to configure one.

Creating a volume group outside of CSPOC in HACMP

Creating a volume group outside of C-SPOC is not the recommended
method, and running importvg -L will certainly result in unpredictable
behavior if the volume group does not already exist on the node
you're running the importvg on.
importvg -L is used to update a host about changes made to a volume
group, not to create a new volume group.
Finally, since this appears to be a VIO client on at least one node,
per the previous update by Jesse, the volume groups should be in
enhanced concurrent mode for fast disk takeover (HACMP does not know
how to break reserves on a VIO client, so it requires that no reserves
be placed). In that environment, you should never have to run
varyonvg -bu / importvg -L for LVM updates, as gsclvmd will ensure
the updates are propagated across the cluster. For a new VG, which
shouldn't be varied on after it is created, a simple importvg -y
will suffice; once it's added to the resource group and the cluster is
synced, it will be brought online correctly (see the sketch below).
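
For a new VG created outside C-SPOC, a minimal sketch of making it known to the other node (the VG and disk names here are only examples):

# varyoffvg newdatavg                  # on the node where it was created, if it is varied on
# importvg -y newdatavg -n hdiskX      # on the other node; -n prevents it from being varied on

Once the VG is added to the resource group and the cluster is synchronized, HACMP will bring it online on the appropriate node.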


Unable to create concurrent VG using C-SPOC

We were trying to configure a diskhb network. We assigned disks and
PVIDs on both nodes, but when we tried to create a concurrent VG using
C-SPOC we couldn't see the disks.

lslpp -l bos.clvm.enh => Missing (this fileset is required for enhanced concurrent volume groups)

Even if the VG is created as concurrent-capable, if we manually vary it on,
it will be varied on as non-concurrent. To vary it on as concurrent,
vary it off, add it to an RG, and then synchronize and start cluster
services.

But for creating a diskhb network we don't really need to vary on the VG.
Hence, vary off the VG on both nodes and just create the diskhb network (see the sketch below).
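
A minimal sketch, assuming the VG is called diskhbvg:

# varyoffvg diskhbvg      # run on each node where the VG is currently varied on

With the VG varied off on both nodes, the diskhb network/device can then be defined as usual.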


I created diskhbvg as a concurrent VG, so I varied it on to check whether it
really is concurrent, and it looks like it is not, so how do I fix it?
Also, how do I create datavg as a concurrent VG, which was failing when we
tried it that day?




root@xxxxx-yyyy-zzzzz:/usr/es/sbin/cluster/utilities>lsvg datavg

VOLUME GROUP:       datavg                   VG IDENTIFIER:  00c7b77d00004c0000000126e7ed7801
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1292 (82688 megabytes)
MAX LVs:            256                      FREE PPs:       1292 (82688 megabytes)
LVs:                0                        USED PPs:       0 (0 megabytes)
OPEN LVs:           0                        QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable

lssrc -ls clstrmgrES => ST_INIT

Only after starting cluster services will the VG be
varied on as concurrent.

Cluster services were started, and lssrc -ls clstrmgrES => ST_STABLE. But
the customer still saw the VG in non-concurrent mode. We found that he
didn't vary off the VG before starting cluster services. We advised the
customer to bring the RG offline and then bring it online again.

Now the VG is in concurrent mode.
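
A quick way to confirm this on the node (lsvg needs the VG varied on, which it is once the RG is online):

# lsvg datavg | grep -i concurrent

The output should now show the VG Mode as Concurrent.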

Creating a new shared VG in HACMP

This is about creating a new shared VG
and adding it to an RG while the cluster is running.

The following steps can be used to create a VG and then add it to an
existing RG.

Create VG
     smitty hacmp --> System Management --> HACMP Logical Volume
     Management --> Shared Volume Groups -->Create a Shared Volume Group
     -->select all the nodes --> select new luns/disk --> give new VG
     name,set PP size --> press Enter.

Add VG to a RG

     smitty hacmp --> Extended Configuration --> Extended Resource
     Configuration --> HACMP Extended Resource Group Configuration -->
     Change/Show Resources and Attributes for a Resource Group -->
     select RG -->In Volume Groups add the new VG--> press Enter.

After performing this action, run a sync/verify once and wait for it to
fully complete before doing anything else (check that the cluster is stable).

I need to get a snap -e from the system and run it through the decoder
just to confirm we don't have any bugs at this level, but looking at
our database it seems fine.



The process is very straightforward.

1. Create a new VG using the following menu

This creates an enhanced-capable VG

#smitty cl_admin
  HACMP Concurrent Logical Volume Management

Select both nodes using the F7 key (or the participating nodes),
add the required disks and hit Enter.

Add the new VG to the required resource group:

smitty hacmp >> Extended Configuration
                  Extended Resource Configuration
                     HACMP Extended Resource Group Configuration
                       Change/Show Resources and Attributes for a Resource Group
Important:

Run a sync/verify on the cluster and wait for this to complete and the
cluster to become stable before performing further commands.

Check using the clstat or cldump command:

/usr/es/sbin/cluster/utilities/cldump

To add LVs and filesystems to this new VG, use the HACMP Logical Volume
Management menu within the C-SPOC menus.



A normal (not concurrent) shared volume group had a failed disk, which was unmirrored and removed from the VG outside the C-SPOC utility.
Later the disk was replaced, re-added to the VG and the LVs re-mirrored. All of this was done on the primary node where the VG was active. [This was performed outside the C-SPOC utility because, using C-SPOC, we were unable to unmirror.]
Since the VG changes were done outside the C-SPOC (Cluster Single Point of Control) utility, the changes were not synced across to the secondary node. Therefore there was a risk that, if a failure occurred and the resource group failed over to the secondary node, it might fail to activate there because the VGDA on the secondary node was not in sync.

Solution [manual update of the VGDA on the secondary node] [there is no need for any downtime of any resource group or node]:
a) Take your system (both nodes) and cluster information.
b) Ensure the new replacement disk is also seen on the secondary node (run cfgmgr). Grep for the PVID in the lspv output.
c) Unlock the VG (release the SCSI reserve on the VG/disks) on the primary node:
"varyonvg -bu datavg"
d) Run importvg -L to detect the changes on the secondary node (see the quick check after these steps for verifying the major number):
importvg -L datavg hdiskX     [hdiskX is any disk of the datavg]
(If this command displays an error, you can perform the steps below instead.)
or
exportvg datavg                            [on secondary node]
importvg -V 41 -y datavg -n -F hdisk20     [on secondary node]
41 is the major number of datavg; it should be the same major number as specified on the primary node.
-n: tells importvg not to vary on the VG (very important).  -F: fast check of the VGDA areas.
hdisk20: is one of the disks of datavg.
e) Run "varyonvg datavg" on the primary node to re-impose locking/reserves on the VG.

Important: The above steps are applicable only to a normal shared volume group and not to a concurrent volume group.

Ravi was working on this issue; using this approach he successfully resolved it.







root@dccccccc:/> lsvg -l dcccccccvg
dcccccccvg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
hadbaslv            jfs        8     16    2    open/syncd    /hadbas
loglv00             jfslog     1     2     2    open/syncd    N/A
hagfen1lv           jfs        4     8     2    open/syncd    /hagfen1
habfen1lv           jfs        4     8     2    open/syncd    /habfen1
hafenc1lv           jfs        4     8     2    open/syncd    /hafenc1
hagws01lv           jfs        128   256   2    open/syncd    /hagws01
habws01lv           jfs        128   256   2    open/syncd    /habws01
db2_repllv          jfs        128   256   2    open/syncd    /db2/db2_repl
haigb01lv           jfs        64    128   2    open/syncd    /haigb01
db2_tables01lv      jfs        160   320   2    open/syncd    /db2/db2_tables01
db2_tables02lv      jfs        96    192   2    open/syncd    /db2/db2_tables02
db2_indexes01lv     jfs        128   256   2    open/syncd    /db2/db2_indexes01
db2_indexes02lv     jfs        64    128   2    open/syncd    /db2/db2_indexes02
db2_logslv          jfs        128   256   2    open/syncd    /db2/db2_logs
db2_archivelv       jfs        288   576   2    open/syncd    /db2/db2_archive
db2_tmpsp01lv       jfs        192   384   2    open/syncd    /db2/db2_tempspace01
db2_backuplv        jfs        2020  4040  24   open/syncd    /db2/db2_backup
tsmshrlv            jfs        1     2     2    open/syncd    /ha_mnt1/tsmshr
db2_auditlv         jfs        64    64    1    open/syncd    /db2/db2_audit
root@dccccccc:/> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/>

root@dccccccc:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@dccccccc:/usr/sbin/cluster/utilities>
root@dccccccc:/usr/sbin/cluster/utilities> ./clshowres

Resource Group Name                          udb_rg
Node Relationship                            cascading
Site Relationship                            ignore
Participating Node Name(s)                   dccccccc deeeeeee
Node Priority
Service IP Label                             dccccccc
Filesystems                                  ALL
Filesystems Consistency Check                fsck
Filesystems Recovery Method                  sequential
Filesystems/Directories to be exported
Filesystems to be NFS mounted
Network For NFS Mount
Volume Groups                                dcccccccvg
Concurrent Volume Groups
Use forced varyon for volume groups, if necessaryfalse
Disks
GMD Replicated Resources
PPRC Replicated Resources
AIX Connections Services
AIX Fast Connect Services
Shared Tape Resources
Application Servers                          udb_app
Highly Available Communication Links
Primary Workload Manager Class
Secondary Workload Manager Class
Delayed Fallback Timer
Miscellaneous Data
Automatically Import Volume Groups           false
Inactive Takeover                            false
Cascading Without Fallback                   false
SSA Disk Fencing                             false
Filesystems mounted before IP configured     true


Run Time Parameters:

Node Name                                    dccccccc
Debug Level                                  high
Format for hacmp.out                         Standard

Node Name                                    deeeeeee
Debug Level                                  high
Format for hacmp.out                         Standard

root@dccccccc:/usr/sbin/cluster/utilities>

root@dccccccc:/usr/sbin/cluster/utilities> ./cllsserv
libodm: The specified search criteria is incorrectly formed.
        Make sure the criteria contains only valid descriptor names and
        the search values are correct.

Application server [] does not exist.
root@dccccccc:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>
root@deeeeeee:/> lsvg
rootvg
dcccccccvg
root@deeeeeee:/> lsvg dcccccccvg
0516-010 : Volume group must be varied on; use varyonvg command.
root@deeeeeee:/>


root@deeeeeee:/usr/sbin/cluster> cd uti*
root@deeeeeee:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities> ./clRGinfo
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities>
root@dccccccc:/usr/sbin/cluster/utilities> lsvg -o
dcccccccvg
rootvg
root@dccccccc:/usr/sbin/cluster/utilities> varyonvg -bu dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities>


importvg -L vg0001


importvg -L dcccccccvg hdisk20



root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> importvg -L dcccccccvg hdisk20
0516-304 getlvodm: Unable to find device id 0004047ad46dd30f in the Device
        Configuration Database.
0516-304 : Unable to find device id 0004047ad46dd30f0000000000000000 in the Device
        Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk20.
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> importvg -L dcccccccvg hdisk20
0516-304 getlvodm: Unable to find device id 0004047ad46dd30f in the Device
        Configuration Database.
0516-304 : Unable to find device id 0004047ad46dd30f0000000000000000 in the Device
        Configuration Database.
0516-780 importvg: Unable to import volume group from hdisk20.
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> ls -l /dev/dcccccccvg
crw-r-----   1 root     system       41,  0 Jul 20 02:43 /dev/dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities>



root@deeeeeee:/usr/sbin/cluster/utilities> ls -l /dev/dcccccccvg
crw-r-----   1 root     system       41,  0 Oct 28 14:37 /dev/dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>


importvg -V 41 -y dcccccccvg -n -F hdisk20


root@deeeeeee:/usr/sbin/cluster/utilities> lsvg -o
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg -o
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> exportvg dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lsvg
rootvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk26         0004047a8abfee03                    None
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> cfgmgr -l ssar
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk26         0004047a8abfee03                    None
hdisk27         none                                None
root@deeeeeee:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> ifconfig -a
en0: flags=5e080863,c0
        inet 10.1.50.83 netmask 0xffff0000 broadcast 10.1.255.255
en1: flags=5e080863,c0
        inet 9.23.219.215 netmask 0xffffff00 broadcast 9.23.219.255
en3: flags=5e080863,c0
        inet 192.168.121.14 netmask 0xffffff00 broadcast 192.168.121.255
lo0: flags=e08084b
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en0
alias4                    IPv4 Alias including Subnet Mask           True
alias6                    IPv6 Alias including Prefix Length         True
arp           on          Address Resolution Protocol (ARP)          True
authority                 Authorized Users                           True
broadcast                 Broadcast Address                          True
mtu           1500        Maximum IP Packet Size for This Device     True
netaddr       10.1.50.83  Internet Address                           True
netaddr6                  IPv6 Internet Address                      True
netmask       255.255.0.0 Subnet Mask                                True
prefixlen                 Prefix Length for IPv6 Internet Address    True
remmtu        576         Maximum IP Packet Size for REMOTE Networks True
rfc1323                   Enable/Disable TCP RFC 1323 Window Scaling True
security      none        Security Level                             True
state         up          Current Interface Status                   True
tcp_mssdflt               Set TCP Maximum Segment Size               True
tcp_nodelay               Enable/Disable TCP_NODELAY Option          True
tcp_recvspace             Set Socket Buffer Space for Receiving      True
tcp_sendspace             Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en3
alias4                       IPv4 Alias including Subnet Mask           True
alias6                       IPv6 Alias including Prefix Length         True
arp           on             Address Resolution Protocol (ARP)          True
authority                    Authorized Users                           True
broadcast                    Broadcast Address                          True
mtu           1500           Maximum IP Packet Size for This Device     True
netaddr       192.168.121.14 Internet Address                           True
netaddr6                     IPv6 Internet Address                      True
netmask       255.255.255.0  Subnet Mask                                True
prefixlen                    Prefix Length for IPv6 Internet Address    True
remmtu        576            Maximum IP Packet Size for REMOTE Networks True
rfc1323                      Enable/Disable TCP RFC 1323 Window Scaling True
security      none           Security Level                             True
state         up             Current Interface Status                   True
tcp_mssdflt                  Set TCP Maximum Segment Size               True
tcp_nodelay                  Enable/Disable TCP_NODELAY Option          True
tcp_recvspace                Set Socket Buffer Space for Receiving      True
tcp_sendspace                Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities> lsattr -El en1
alias4                      IPv4 Alias including Subnet Mask           True
alias6                      IPv6 Alias including Prefix Length         True
arp           on            Address Resolution Protocol (ARP)          True
authority                   Authorized Users                           True
broadcast                   Broadcast Address                          True
mtu           1500          Maximum IP Packet Size for This Device     True
netaddr       9.23.219.215  Internet Address                           True
netaddr6                    IPv6 Internet Address                      True
netmask       255.255.255.0 Subnet Mask                                True
prefixlen                   Prefix Length for IPv6 Internet Address    True
remmtu        576           Maximum IP Packet Size for REMOTE Networks True
rfc1323                     Enable/Disable TCP RFC 1323 Window Scaling True
security      none          Security Level                             True
state         up            Current Interface Status                   True
tcp_mssdflt                 Set TCP Maximum Segment Size               True
tcp_nodelay                 Enable/Disable TCP_NODELAY Option          True
tcp_recvspace               Set Socket Buffer Space for Receiving      True
tcp_sendspace               Set Socket Buffer Space for Sending        True
root@deeeeeee:/usr/sbin/cluster/utilities>

root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    None
hdisk3          0004047a5bbc52c1                    None
hdisk4          0004047a5bbc65da                    None
hdisk5          0004047a5bbc79ad                    None
hdisk6          0004047a5bbc8bed                    None
hdisk7          0004047a5bbc9ee6                    None
hdisk8          0004047a5bbcb0f7                    None
hdisk9          0004047a5bbcc38d                    None
hdisk10         0004047a5bbcd6ed                    None
hdisk11         0004047a5bbce7d9                    None
hdisk12         0004047a5bbcf9df                    None
hdisk13         0004047a5bbd0c49                    None
hdisk14         0004047a5bbd1cac                    None
hdisk15         0004047a5bbd2fde                    None
hdisk16         0004047a5bbd4259                    None
hdisk17         0004047a5bbd5742                    None
hdisk18         0004047a5bbd6bcd                    None
hdisk19         0004047a5bbd7932                    None
hdisk20         0004047a5bbd8068                    None
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    None
hdisk24         0004047a5bbd9d73                    None
hdisk25         0004047a5bbda4bf                    None
hdisk27         0004047ad46dd30f                    None
hdisk26         0004047a8abfee03                    None
root@deeeeeee:/usr/sbin/cluster/utilities> lspv|grep 0004047ad46dd30f
hdisk27         0004047ad46dd30f                    None
root@deeeeeee:/usr/sbin/cluster/utilities>

root@deeeeeee:/usr/sbin/cluster/utilities> importvg -V 41 -y dcccccccvg -n -F hdisk20
dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk21         0004047a5bbd879d                    None
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk27         0004047ad46dd30f                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>


root@dccccccc:/usr/sbin/cluster/utilities> lspv
hdisk0          0004047ae34c2b1e                    rootvg          active
hdisk1          0004047ae325c810                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg     active
hdisk3          0004047a5bbc52c1                    dcccccccvg     active
hdisk4          0004047a5bbc65da                    dcccccccvg     active
hdisk5          0004047a5bbc79ad                    dcccccccvg     active
hdisk6          0004047a5bbc8bed                    dcccccccvg     active
hdisk7          0004047a5bbc9ee6                    dcccccccvg     active
hdisk8          0004047a5bbcb0f7                    dcccccccvg     active
hdisk9          0004047a5bbcc38d                    dcccccccvg     active
hdisk10         0004047a5bbcd6ed                    dcccccccvg     active
hdisk11         0004047a5bbce7d9                    dcccccccvg     active
hdisk12         0004047a5bbcf9df                    dcccccccvg     active
hdisk13         0004047a5bbd0c49                    dcccccccvg     active
hdisk14         0004047a5bbd1cac                    dcccccccvg     active
hdisk15         0004047a5bbd2fde                    dcccccccvg     active
hdisk16         0004047a5bbd4259                    dcccccccvg     active
hdisk17         0004047a5bbd5742                    dcccccccvg     active
hdisk18         0004047a5bbd6bcd                    dcccccccvg     active
hdisk19         0004047a5bbd7932                    dcccccccvg     active
hdisk20         0004047a5bbd8068                    dcccccccvg     active
hdisk23         0004047a5bbd9622                    dcccccccvg     active
hdisk24         0004047a5bbd9d73                    dcccccccvg     active
hdisk25         0004047a5bbda4bf                    dcccccccvg     active
hdisk26         0004047a8abfee03                    dcccccccvg     active
hdisk21         0004047ad46dd30f                    dcccccccvg     active
root@dccccccc:/usr/sbin/cluster/utilities> lspv|wc -l
      26
root@dccccccc:/usr/sbin/cluster/utilities>
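
Before touching anything on the peer node, it helps to compare the PVID-to-VG mapping of both nodes; the entry that exists on only one side is the candidate for cleanup. A minimal sketch, assuming ssh (or rsh) trust between the nodes and the hostnames used in this transcript:

# lspv | awk '{print $2, $3}' | sort > /tmp/pvids.dccccccc
# ssh deeeeeee "lspv | awk '{print \$2, \$3}' | sort" > /tmp/pvids.deeeeeee
# diff /tmp/pvids.dccccccc /tmp/pvids.deeeeeee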


root@deeeeeee:/usr/sbin/cluster/utilities> lspv|wc -l
      27
root@deeeeeee:/usr/sbin/cluster/utilities> lspv hdisk21
0516-320 : Physical volume hdisk21 is not assigned to
        a volume group.
root@deeeeeee:/usr/sbin/cluster/utilities> rmdev -dl hdisk21
hdisk21 deleted
root@deeeeeee:/usr/sbin/cluster/utilities> lspv
hdisk0          0007986ae34ab7c2                    rootvg          active
hdisk1          0007986ae326c405                    rootvg          active
hdisk2          0004047a5bbbb462                    dcccccccvg
hdisk3          0004047a5bbc52c1                    dcccccccvg
hdisk4          0004047a5bbc65da                    dcccccccvg
hdisk5          0004047a5bbc79ad                    dcccccccvg
hdisk6          0004047a5bbc8bed                    dcccccccvg
hdisk7          0004047a5bbc9ee6                    dcccccccvg
hdisk8          0004047a5bbcb0f7                    dcccccccvg
hdisk9          0004047a5bbcc38d                    dcccccccvg
hdisk10         0004047a5bbcd6ed                    dcccccccvg
hdisk11         0004047a5bbce7d9                    dcccccccvg
hdisk12         0004047a5bbcf9df                    dcccccccvg
hdisk13         0004047a5bbd0c49                    dcccccccvg
hdisk14         0004047a5bbd1cac                    dcccccccvg
hdisk15         0004047a5bbd2fde                    dcccccccvg
hdisk16         0004047a5bbd4259                    dcccccccvg
hdisk17         0004047a5bbd5742                    dcccccccvg
hdisk18         0004047a5bbd6bcd                    dcccccccvg
hdisk19         0004047a5bbd7932                    dcccccccvg
hdisk20         0004047a5bbd8068                    dcccccccvg
hdisk23         0004047a5bbd9622                    dcccccccvg
hdisk24         0004047a5bbd9d73                    dcccccccvg
hdisk25         0004047a5bbda4bf                    dcccccccvg
hdisk27         0004047ad46dd30f                    dcccccccvg
hdisk26         0004047a8abfee03                    dcccccccvg
root@deeeeeee:/usr/sbin/cluster/utilities>
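
After removing the stale device, both nodes should agree on the set of PVIDs that belong to the shared volume group (the hdisk names themselves may still differ between nodes, as hdisk21 on dccccccc / hdisk27 on deeeeeee do here). One way to compare is to run the following on each node and match the output by eye or by checksum:

# lspv | awk '$3 == "dcccccccvg" {print $2}' | sort | cksum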


root@dccccccc:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@dccccccc:/usr/sbin/cluster/utilities>


root@deeeeeee:/usr/sbin/cluster/utilities> ./clfindres
-----------------------------------------------------------------------------
Group Name     Type       State      Location
-----------------------------------------------------------------------------
udb_rg         cascading  ONLINE     dccccccc
                          OFFLINE    deeeeeee

root@deeeeeee:/usr/sbin/cluster/utilities>
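
clfindres confirms that the resource group udb_rg is online on dccccccc and offline on deeeeeee, so the cleanup above was done on the standby side. On HACMP 5.5 the same information is also available from clRGinfo, and the cluster manager state can be checked through the SRC; a minimal sketch (paths as shipped with HACMP/ES, adjust if your install differs):

# /usr/es/sbin/cluster/utilities/clRGinfo
# lssrc -ls clstrmgrES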


root@dccccccc:/usr/sbin/cluster/utilities> lsvg
rootvg
dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities> chvg -a n dcccccccvg
root@dccccccc:/usr/sbin/cluster/utilities> lsvg dcccccccvg
VOLUME GROUP:       dcccccccvg              VG IDENTIFIER:  0004047a00004c00000000fb5e7fe19e
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      13008 (208128 megabytes)
MAX LVs:            256                      FREE PPs:       5852 (93632 megabytes)
LVs:                19                       USED PPs:       7156 (114496 megabytes)
OPEN LVs:           19                       QUORUM:         13
TOTAL PVs:          24                       VG DESCRIPTORS: 24
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         24                       AUTO ON:        no
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size:           128 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
root@dccccccc:/usr/sbin/cluster/utilities>
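
For a shared HACMP volume group, auto-varyon should stay off so that only the cluster, and not the node's own boot processing, activates the VG; that is what chvg -a n does above. Auto-varyon is a node-local ODM setting, so the standby node needs the same treatment whenever the VG is varied on there. A minimal check that the change took effect:

# lsvg dcccccccvg | grep "AUTO ON"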

