Tanti Technology

Bangalore, karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments. I have 4.5+ years of experience in AIX system administration. This site aims to help system administrators in their day-to-day activities, and your comments on posts are welcome. This blog is all about the IBM AIX UNIX flavour. It is meant for system admins who use AIX in their work life, and it can also be used by newbies who want to get certified in AIX administration. The blog will be updated frequently to help system admins and other new learners. DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any type of data loss or damage caused by trying any of the commands/methods mentioned in this blog. You use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.

Sunday 17 November 2013

HARDWARE MANAGEMENT CONSOLE (HMC) EXPLAINED

The HMC (Hardware Management Console) is a technology created by IBM that provides a standard interface for configuring and operating logical partitions (LPARs, also known as virtualized systems) and for managing SMP (symmetric multiprocessing) systems such as IBM System i/z/p and IBM Power Systems.

Basically, the HMC is a customized Linux system blended with Java and many other graphical components. As per Wikipedia: "The HMC is a Linux kernel using Busybox to provide the base utilities and X Window using the Fluxbox window manager to provide graphical logins. The HMC also utilizes Java applications to provide additional functionality."

AIX admins like me are very fond of the HMC in day-to-day operations. The HMC provides features that enable a system administrator to manage the configuration and operation of partitions in a system, as well as to monitor the system for hardware problems. Physically, it consists of a 32-bit Intel-based desktop PC with a DVD-RAM drive.

The connection of the HMC to different managed systems is shown in the diagram below.

What does the HMC do?

  •     Creates and maintains a multiple-partitioned environment
  •     Displays a virtual operating system session terminal for each partition
  •     Displays virtual operator panel values for each partition
  •     Detects, reports, and stores changes in hardware conditions
  •     Powers managed systems on and off
  •     Powers logical partitions on and off (see the example commands after this list)
  •     Boots systems in maintenance mode and performs dump restarts
  •     Acts as a service focal point for service representatives to determine an appropriate service strategy and enables the Service Agent Call-Home capability
  •     Activates additional resources on demand (CoD, Capacity on Demand)
  •     Performs DLPAR (dynamic LPAR) operations
  •     Performs firmware upgrades on managed systems
  •     Provides remote management of managed systems
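
Several of these operations can also be driven from the HMC command line with chsysstate. A minimal sketch, assuming placeholder names for the managed system, LPAR and profile:

# Power a managed system on or off
chsysstate -m <managed_system> -r sys -o on
chsysstate -m <managed_system> -r sys -o off

# Activate an LPAR with a given profile, or shut it down immediately
chsysstate -m <managed_system> -r lpar -o on -n <lpar_name> -f <profile_name>
chsysstate -m <managed_system> -r lpar -o shutdown -n <lpar_name> --immed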

HMC Facts:

  • A single HMC can manage multiple physical frames (managed systems).
  • You can't open more than one virtual console for a given LPAR at a time (see the console example after this list).
  • If your HMC is down, nothing happens to your managed systems and their LPARs; they keep operating as usual. The only limitation is that you can't manage them if something happens.
  • There is no direct root login. By default you get the hscroot user (you need to engage IBM support to get the root password).
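
Related to the single-console limitation above, a stuck session can be cleared from the HMC command line before a new one is opened. A minimal sketch, assuming placeholder names for the managed system and LPAR:

# Open a virtual console for an LPAR (only one session per LPAR at a time)
mkvterm -m <managed_system> -p <lpar_name>

# If a console is already open (or hung) elsewhere, force-close it and reopen
rmvterm -m <managed_system> -p <lpar_name>
mkvterm -m <managed_system> -p <lpar_name>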

HMC Operating Modes:

You can operate the HMC in two modes:

  1. Command Line Interface (CLI)
  2. Graphical Interface
Each method has its own merits. The graphical interface gives you a clear view and lets you operate the managed systems even with minimal knowledge of the HMC.

Using the CLI, on the other hand, you can retrieve information very quickly with commands and scripts, as the examples below illustrate.
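
For instance, a few read-only CLI commands give a quick overview of what the HMC manages (a sketch; <managed_system> is a placeholder):

# List all managed systems and their states
lssyscfg -r sys -F name,state

# List the LPARs of one managed system with their IDs and states
lssyscfg -m <managed_system> -r lpar -F name,lpar_id,state

# Show system-level processor and memory resources of a managed system
lshwres -m <managed_system> -r proc --level sys
lshwres -m <managed_system> -r mem --level sys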

The figure below shows the HMC graphical interface.

HMC Version Evolution:

  • HMC V7, for POWER5, POWER6 and POWER7 models
    • HMC V7R7.2.0 (Initial support for Power 710, Power 720, Power 730, Power 740 and Power 795 models)
    • HMC V7R7.1.0 (Initial support for POWER7)
    • HMC V7R3.5.0 (released Oct. 30, 2009)
    • HMC V7R3.4.0
    • HMC V7R3.3.0
    • HMC V7R3.2.0
    • HMC V7R3.1.0 (Initial support for POWER6 models)
  • HMC V6
    • HMC V6R1.3
    • HMC V6R1.2
  • 5.2.1
  • 5.1.0
  • 4.5.0
  • 4.4.0
  • 4.3.1
  • 4.2.1
  • 4.2.0, for POWER5 models
  • 4.1.x
  • 3.x, for POWER4 models

 RMC (Resource Monitoring and Control) & Association with HMC:

RMC is a distributed framework and architecture that allows the HMC to communicate with a managed logical partition. For example, the IBM.DRM daemon needs to run on the LPAR in order to perform DLPAR operations on that LPAR through the HMC.

The RMC daemons on the LPARs and on the HMC communicate with each other over the external network, not through the service processor; this means both sides must have access to the same external network for the RMC-related commands to work.

In order for RMC to work, port 657 udp/tcp must be open in both directions between the HMC public interface and the LPAR.
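
A quick way to confirm this on the LPAR side is to check that RMC is bound to port 657 (a sketch; the exact output varies):

# On the AIX LPAR: look for the RMC port (657) in the socket list
netstat -an | grep 657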

The RMC daemons are part of the Reliable, Scalable Cluster Technology (RSCT) and are controlled by the System Resource Controller (SRC). These daemons run in all LPARs and communicate with equivalent RMC daemons running on the HMC. The daemons start automatically when the operating system starts and synchronize with the HMC RMC daemons.

Note: Apart from rebooting, there is no way to stop and start the RMC daemons on the HMC!
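
So if RMC on the HMC itself ever appears hung, the practical option is to reboot the HMC, for example:

# Reboot the HMC (-r = reboot, -t = delay in minutes or "now")
hmcshutdown -r -t now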

Things to check at the HMC:

- checking the status of the managed nodes: /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc  (you must be root on the HMC)

- checking connection between HMC and LPAR:
hscroot@umhmc1:~> lspartition -dlpar
<#0> Partition:<2, aix10.domain.com, 10.10.50.18>
       Active:<1>, OS:<...>, DCaps:<0x4f9f>, CmdCaps:<0x1b, 0x1b>, PinnedMem:<1452>
<#1> Partition:<4, aix20.domain.com, 10.10.50.71>
       Active:<0>, OS:<...>, DCaps:<0x0>, CmdCaps:<0x1b, 0x1b>, PinnedMem:<656>

For correct DLPAR function:
- the partition must be listed with the correct IP address of the LPAR,
- the active value (Active:...) must be higher than zero,
- the DCaps value (DCaps:...) must be higher than 0x0.

(The first entry shows a DLPAR-capable LPAR; the second entry is a non-working LPAR.)

----------------------------------------

Things to check at the LPAR:

- checking the status of the managed nodes: /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc

- Checking RMC status:

# lssrc -a | grep rsct
 ctrmc            rsct             8847376      active         <== the RMC subsystem itself
 IBM.DRM          rsct_rm          6684802      active         <== executes the DLPAR commands on the partition
 IBM.DMSRM        rsct_rm          7929940      active         <== tracks the status of partitions
 IBM.ServiceRM    rsct_rm          10223780     active
 IBM.CSMAgentRM   rsct_rm          4915254      active         <== handles the handshake between the partition and the HMC
 ctcas            rsct                          inoperative    <== used for security verification
 IBM.ERRM         rsct_rm                       inoperative
 IBM.AuditRM      rsct_rm                       inoperative
 IBM.LPRM         rsct_rm                       inoperative
 IBM.HostRM       rsct_rm                       inoperative    <== obtains OS information

You will see some subsystems active and some inoperative (the key one for DLPAR is IBM.DRM).
- Stopping and starting RMC without erasing configuration:

# /usr/sbin/rsct/bin/rmcctrl -z   <== it stops the daemons
# /usr/sbin/rsct/bin/rmcctrl -A   <== adds entry to /etc/inittab and it starts the daemons
# /usr/sbin/rsct/bin/rmcctrl -p   <== enables the daemons for remote client connections

(This is the correct method to stop and start RMC without erasing the configuration.)
Do not use stopsrc and startsrc for these daemons; use the rmcctrl commands instead!

recfgct: deletes the RMC database, does a discovery, and recreates the RMC configuration

# /usr/sbin/rsct/install/bin/recfgct (Wait several minutes)
# lssrc -a | grep rsct

(If you see IBM.DRM active, then you have probably resolved the issue)

Getting Information about LPARs & the HMC either way:

Make a note: in order to work with these commands, the RSCT daemons must be running on the servers; in other words, make sure RMC communication between the HMC and the LPAR is working.

1) Getting the HMC IP information from an LPAR:

You can find out which HMC (or HMCs) the managed system (frame) hosting your LPAR is connected to by using the "lsrsrc" command.

Command: finding the HMC IP address
 (lsrsrc IBM.ManagementServer (or lsrsrc IBM.MCP on AIX 7))
$ lsrsrc IBM.ManagementServer

Resource Persistent Attributes for IBM.ManagementServer
resource 1:
Name             = "192.168.1.2"
Hostname         = "192.168.1.2"
ManagerType      = "HMC"
LocalHostname    = "ldap1-en1"
ClusterTM        = "9078-160"
ClusterSNum      = ""
ActivePeerDomain = ""
NodeNameList     = {"lpar1"}
So in this case, the HMC IP address is 192.168.1.2.

2) Get Managed System & LPAR information:

The script below gives full details about the frames and LPARs and their allocated CPU and memory.

Script to get Frame & LPAR information
# List every managed system that is in the "Operating" state
for system in `lssyscfg -r sys -F "name,state" | sort | grep ",Operating" | sed 's/,Operating//'`; do
  echo $system
  echo "    LPAR            CPU    VCPU   MEM    OS"
  # For each LPAR, read its default profile and report the desired CPU, memory and OS level
  for lpar in `lssyscfg -m $system -r lpar -F "name" | sort`; do
     default_prof=`lssyscfg -r lpar -m $system --filter "lpar_names=$lpar" -F default_profile`
     procs=`lssyscfg -r prof -m $system --filter "profile_names=$default_prof,lpar_names=$lpar" -F desired_proc_units`
     vcpu=`lssyscfg -r prof -m $system --filter "profile_names=$default_prof,lpar_names=$lpar" -F desired_procs`
     mem=`lssyscfg -r prof -m $system --filter "profile_names=$default_prof,lpar_names=$lpar" -F desired_mem`
     os=`lssyscfg -r lpar -m $system --filter "lpar_names=$lpar" -F os_version`
     printf "    %-15s %-6s %-6s %-6s %-30s\n" $lpar $procs $vcpu $mem "$os"
  done
done


Generally, people think there is no way to run scripts on the HMC, but it is possible: use the "rnvi" command to create a script file, e.g. "rnvi -f hmcscriptfile".

To run the script, use the "source" command, for example "source hmcscriptfile". This runs the script in your current shell.

Here "hmcscriptfile" is the script name; run it as shown below and you will see output like the following.

How to run script & o/p
hscroot@umhmc1:~> source hmcscriptfile
p570_frame5_ms
    LPAR            CPU    VCPU   MEM    OS
    umlpar1         0.1    3      512    AIX 6.1 6100-07-05-1228       
    umlpar2         0.1    3      512    AIX 6.1 6100-07-05-1228       
    umlpar3         0.1    3      512    Unknown                       
    linux1          0.1    3      512    Unknown                       
    vio1            0.2    2      512    VIOS 2.2.1.4                  
    vio2            0.1    2      352    Unknown       

How to backup and restore the Virtual I/O Server

This document describes different methods to back up and restore the Virtual I/O Server.

Backing up the Virtual I/O Server

There are 4 different ways to back up and restore the Virtual I/O Server, as illustrated in the following table.

Backup method                 Restore method
To tape                       From bootable tape
To DVD                        From bootable DVD
To remote file system         From the HMC using the NIMoL facility and installios
To remote file system         From an AIX NIM server

Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, perform the following steps:
  1. Check the status and the name of the tape/DVD drive
lsdev | grep rmt (for tape)
lsdev | grep cd (for DVD)
  2. If it is Available, back up the Virtual I/O Server with one of the following commands
backupios -tape rmt#
backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, then the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.
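
As a concrete sketch (the device names /dev/rmt0 and /dev/cd0 are assumptions; check yours with lsdev first):

$ lsdev | grep rmt
$ backupios -tape /dev/rmt0

$ lsdev | grep cd
$ backupios -cd /dev/cd0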

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the necessary resources to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and the SPOT resource.

The NFS export should allow root access to the Virtual I/O Server, otherwise the backup will fail with permission errors.

To back up the Virtual I/O Server to a file system, perform the following steps:
  1. Create a mount directory where the backup file will be written
mkdir /backup_dir
  2. Mount the exported remote directory on the directory created in step 1
mount server:/exported_dir /backup_dir
  3. Back up the Virtual I/O Server with the following command
backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.

The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.

Procedure to modify the target_disk_data in the bosinst.data:
  1. Extract the bosinst.data from the nim_resources.tar
tar -xvf nim_resources.tar ./bosinst.data
  2. The following is an example of the target_disk_data stanza of the bosinst.data generated by backupios.
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =
  3. Fill in the value of HDISKNAME with the name of the disk to which you want to restore
  4. Put the modified bosinst.data back into the nim_resources.tar image
tar -uvf nim_resources.tar ./bosinst.data

If you don't remember on which disk your Virtual I/O Server was previously installed, you can also view the original bosinst.data and look at the target_disk_data stanza.
Use the following steps:
  1. Extract the bosinst.data from the nim_resources.tar
tar -xvf nim_resources.tar ./bosinst.data
  2. Extract the mksysb from the nim_resources.tar
tar -xvf nim_resources.tar ./5300-00_mksysb
  3. Extract the original bosinst.data from the mksysb
restore -xvf ./5300-00_mksysb ./var/adm/ras/bosinst.data
  4. View the original target_disk_data
grep -p target_disk_data ./var/adm/ras/bosinst.data
           The above command displays something like the following:

target_disk_data:                                    
PVID = 00c5951e63449cd9                          
PHYSICAL_LOCATION = U7879.001.DQDXYTF-P1-T14-L4-L0
CONNECTION = scsi1//5,0                          
LOCATION = 0A-08-00-5,0                          
SIZE_MB = 140000                                 
HDISKNAME = hdisk0  
  5. Replace ONLY the target_disk_data stanza in the ./bosinst.data with the original one
  6. Add the modified file to the nim_resources.tar
tar -uvf nim_resources.tar ./bosinst.data

Backing up the Virtual I/O Server to a remote file system by creating a mksysb image


You could also restore the Virtual I/O Server from a NIM server. One of the ways to restore from a NIM server is from the mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a NIM server from a mksysb image, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a file system, perform the following steps:

  1. Create a mount directory where the backup file will be written
mkdir /backup_dir
  2. Mount the exported remote directory on the directory just created
mount NIM_server:/exported_dir /backup_dir
  3. Back up the Virtual I/O Server with the following command
backupios -file /backup_dir/filename.mksysb -mksysb

Restoring the Virtual I/O Server


As there are 4 different ways to back up the Virtual I/O Server, there are also 4 ways to restore it.

Restoring from a tape or DVD


To restore the Virtual I/O Server from tape or DVD, follow these steps:

  1. Specify that the Virtual I/O Server partition boots from the tape or DVD by using the bootlist command (see the sketch after these steps) or by altering the boot list in the SMS menu.
  2. Insert the tape/DVD into the drive.
  3. From the SMS menu, select to install from the tape/DVD drive.
  4. Follow the installation steps according to the system prompts.
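
For step 1, a minimal sketch of setting the boot list from the still-running Virtual I/O Server (via oem_setup_env to get a root shell; the device names are assumptions):

$ oem_setup_env
# bootlist -m normal rmt0 hdisk0     <== boot from tape first, then fall back to the rootvg disk
# bootlist -m normal -o              <== display the boot list to verify the change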

Restoring the Virtual I/O Server from a remote file system using a nim_resources.tar file

To restore the Virtual I/O Server from a nim_resources.tar image in a file system, perform the following steps:
  1. Run the installios command without any flags from the HMC command line.
a)      Select the managed system where you want to restore your Virtual I/O Server from the objects of type "managed system" found by the installios command.
b)      Select the VIOS partition where you want to restore your system from the objects of type "virtual I/O server partition" found.
c)      Select the profile from the objects of type "profile" found.
d)      Enter the source of the installation images [/dev/cdrom]: server:/exported_dir
e)      Enter the client's intended IP address: <IP address of the VIOS>
f)      Enter the client's intended subnet mask: <subnet of the VIOS>
g)      Enter the client's gateway: <default gateway of the VIOS>
h)      Enter the client's speed [100]: <network speed>
i)      Enter the client's duplex [full]: <network duplex>
j)      Would you like to configure the client's network after the installation [yes]/no?
k)      Select the Ethernet adapter used for the installation from the objects of type "ethernet adapters" found.
  2. When the restoration is finished, open a virtual terminal connection (for example, using telnet) to the Virtual I/O Server that you restored. Some additional user input might be required.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.

Restoring the Virtual I/O Server from a remote file system using a mksysb image

To restore the Virtual I/O Server from a mksysb image in a file system using NIM, complete the following tasks:

  1. Define the mksysb file as a NIM object by running the nim command.
nim -o define -t mksysb -a server=master -a location=/export/ios_backup/filename.mksysb objectname
objectname is the name by which NIM registers and recognizes the mksysb file.
  2. Define a SPOT resource for the mksysb file by running the nim command.
nim -o define -t spot -a server=master -a location=/export/ios_backup/SPOT -a source=objectname SPOTname
SPOTname is the name of the SPOT resource for the mksysb file.
  3. Install the Virtual I/O Server from the mksysb file using the smit command (a command-line alternative is sketched after these steps).
smit nim_bosinst
The following entry fields must be filled in:
"Installation type" => mksysb
"Mksysb" => the objectname chosen in step 1
"Spot" => the SPOTname chosen in step 2
  4. Start the Virtual I/O Server logical partition.
a)      On the HMC, right-click the partition to open the menu.
b)      Click Activate. The Activate Partition menu opens with a selection of partition profiles. Be sure the correct profile is highlighted.
c)      Select the Open a terminal window or console session check box to open a virtual terminal (vterm) window.
d)     Click (Advanced...) to open the advanced options menu.
e)      For the Boot mode, select SMS.
f)       Click OK to close the advanced options menu.
g)      Click OK. A vterm window opens for the partition.
h)      In the vterm window, select Setup Remote IPL (Initial Program Load).
i)        Select the network adapter that will be used for the installation.
j)        Select IP Parameters.
k)      Enter the client IP address, server IP address, and gateway IP address. Optionally, you can enter the subnet mask. After you have entered these values, press Esc to return to the Network Parameters menu.
l)        Select Ping Test to ensure that the network parameters are properly configured. Press Esc twice to return to the Main Menu.
m)    From the Main Menu, select Boot Options.
n)      Select Install/Boot Device.
o)      Select Network.
p)      Select the network adapter whose remote IPL settings you previously configured.
q)      When prompted for Normal or Service mode, select Normal.
r)       When asked if you want to exit, select Yes.
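
As a command-line alternative to smit nim_bosinst in step 3, the same BOS installation can be initiated directly on the NIM master (a sketch; objectname, SPOTname and the client machine name vios_client are placeholders that must already exist in your NIM configuration):

# Allocate the mksysb and SPOT resources and start the BOS install for the VIOS client
nim -o bos_inst -a source=mksysb -a mksysb=objectname -a spot=SPOTname -a accept_licenses=yes -a boot_client=no vios_client

With boot_client=no the client is not rebooted automatically, so it still has to be network-booted manually, which is exactly what step 4 does from SMS.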

Integrated Virtualization Manager (IVM) Consideration


If your Virtual I/O Server is managed by IVM, you need to back up the partition profile data for the management partition and its clients before backing up your system, because although IVM is integrated with the Virtual I/O Server, the LPAR profiles are not saved by the backupios command.

There are two ways to perform this backup:
From the IVM Web Interface
1)      From the Service Management menu, click Backup/Restore
2)      Select the Partition Configuration Backup/Restore tab
3)      Click Generate a backup

From the Virtual I/O Server CLI
1)      Run the following command
bkprofdata -o backup

Both methods generate a file named profile.bak containing the LPAR configuration information. When using the Web interface, the default path for the file is /home/padmin; if you perform the backup from the CLI, the default path is /var/adm/lpm. This path can be changed using the -l flag. Only ONE such file can be present on the system, so each time bkprofdata is issued or the Generate a Backup button is pressed, the file is overwritten.

To restore the LPAR profiles you can use either the GUI or the CLI.

From the IVM Web Interface
1)      From the Service Management menu, click Backup/Restore
2)      Select the Partition Configuration Backup/Restore tab
3)      Click Restore Partition Configuration

From the Virtual I/O Server CLI
1)      Run the following command
rstprofdata -l 1 -f /home/padmin/profile.bak

It is not possible to restore a single partition profile. In order to restore the LPAR profiles, none of the LPAR profiles included in profile.bak may already be defined in the IVM.

Troubleshooting

Error during information gathering

If, after you specify the managed system and the profile, the HMC is not able to find a network adapter:
  1. Check whether the profile has a physical network adapter assigned
  2. Check whether there is a hardware conflict with another running partition
  3. Check that the status of the LPAR is correct (it must be Not Activated)

Error during NIMOL initialization

  1. nimol_config ERROR: error from command /bin/mount <remoteNFS> /mnt/nimol
mount: <remoteNFS> failed, reason given by server: Permission denied
The remote file system is probably not exported correctly.
  2. nimol_config ERROR: Cannot find the resource SPOT in /mnt/nimol.
You have probably specified an NFS export which doesn't contain a valid nim_resources.tar, or the nim_resources.tar is a valid file but doesn't have valid permissions for "others".

Error during lpar_netboot

In case the LPAR fails to power on:
  1. Check whether there is a hardware conflict with another running partition
  2. Check that the status of the LPAR is correct (it must be Not Activated)
In case of a bootp failure, if the NIMOL initialization was successful:
  1. Check whether there is a valid route between the HMC and the LPAR
  2. Check that you entered valid information during the initial phase

Error during BOS install phase

There is probably a problem with the disk used for the installation:
  1. Open a vterm and check whether the system is asking you to select a different disk
  2. Power off the LPAR, modify the profile to use another storage unit, and restart the installation

Setup SEA Failover on Dual VIO Servers



How do I set up SEA failover on dual VIO servers (VIOS)?

This document describes some general concepts related to Shared Ethernet Adapter (SEA), and the procedure to configure SEA Failover.

Answer

Shared Ethernet Adapter

A Shared Ethernet Adapter can be used to connect a physical network to a virtual Ethernet network. It provides the ability for several client partitions to share one physical adapter.

SEA can only be configured on the Virtual I/O server (VIOS) and requires the POWER Hypervisor and Advanced POWER Virtualization feature. The SEA, hosted on the Virtual I/O server, acts as a Layer-2 bridge between the internal and external network.

Restrictions with Configuring SEA Failover 

  • It can only be hosted on the VIOS and not on the client partition.
  • A VIOS running the Integrated Virtualization Manager (IVM) cannot implement SEA Failover because only a single VIOS can be configured on a P5/P6 system with IVM.
  • SEA Failover was introduced with Fixpack 7 (Virtual I/O server version 1.2), so both Virtual I/O Servers need to be at this minimum level.
Requirements for Configuring SEA Failover

  • One SEA on one VIOS acts as the primary (active) adapter and the second SEA on the second VIOS acts as a backup (standby) adapter.
  • Each SEA must have at least one virtual Ethernet adapter with the "Access external network" flag (previously known as the "trunk" flag) checked. This enables the SEA to provide bridging functionality between the two VIO servers.
  • This adapter on both SEAs has the same PVID but a different priority value.
  • An SEA in ha_mode (failover mode) might have more than one trunk adapter, in which case all of them should have the same priority value.
  • The priority value defines which of the two SEAs will be the primary and which will be the backup. The lower the priority value, the higher the priority, e.g. an adapter with priority 1 will have the highest priority.
  • An additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs and must be specified in each SEA when configured in ha_mode.
  • The purpose of this control channel is to communicate between the two SEA adapters to determine when a fail over should take place.
NOTE: If the SEA Failover will be using etherchannel as the physical device, configure the switch ports for etherchannel PRIOR to configuring the etherchannel device on the VIO server. Failure to follow this sequence may result in a network storm.

PROCEDURE:

 

1. Create a virtual adapter to be used in the SEA adapter on VIOS1. EX: (ent2).

To configure a virtual Ethernet adapter via Dynamic Logical Partition (DLPAR) for a running logical partition using HMC V7R3, follow these steps:

Note: a DLPAR operation requires the partition to be on the network.
  • 1. In the navigation panel, open Systems Management, open Servers, and click on the managed system on which the logical partition is located.
  • 2. In the contents panel, select the VIOS on which you want to configure the virtual Ethernet adapter, click on the Tasks button -> choose Dynamic Logical Partitioning -> Virtual Adapters.
  • 3. Click Actions -> Create -> Ethernet Adapter.
  • 4. Enter the slot number for the virtual Ethernet adapter into Adapter ID.
  • 5. Enter the Port Virtual LAN ID (PVID) for the virtual Ethernet adapter into VLAN ID. The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID.  Note: Give the virtual adapter a unique VLAN ID (PVID): "1"
  • 6. Select IEEE 802.1 compatible adapter if you want to configure the virtual Ethernet adapter to communicate over multiple virtual LANs. If you leave this option unchecked and you want this partition to connect to multiple virtual networks, then you must create multiple virtual adapters by creating additional virtual LAN IDs.
  • 7. Check the box "access external network".
  • 8. Give the virtual adapter a low trunk priority. EX: "1"
  • 9. Click OK.
NOTE !!
 
After you have finished, access any existing partition profiles for the logical partition and add the same virtual Ethernet adapters to those partition profiles. The dynamic virtual Ethernet adapter will be lost if you shut down the logical partition and then activate it using a partition profile that does not contain the new virtual Ethernet adapter.
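
Alternatively to editing each profile by hand, the current (running) configuration, including the DLPAR-added adapter, can usually be saved back into a profile from the HMC command line. A minimal sketch, assuming placeholder names:

# Save the partition's current configuration into the named profile
mksyscfg -r prof -m <managed_system> -o save -p <vios_partition> -n <profile_name> --force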

2. Create another virtual adapter to be used as a Control Channel on VIOS1. EX: (ent3)
  • a. Give this new virtual adapter another unique VLAN ID (PVID). EX: "99"
  • b. Do NOT check the box "access external network". 
  • c. Shutdown, Activate VIOS1 or run cfgdev from VIOS command line if created with DLPAR.
3. Create SEA on VIO server 1 with failover attribute:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3

Note: The defaultid value of the SEA should be the Port VLAN ID (PVID) of the default trunk adapter, in case there are more than one trunk adapters configured in the SEA.
4. (OPTIONAL) Assign an ip address to SEA on VIOS1:

$ mktcpip -hostname vio1 -interface en4 -inetaddr 9.3.5.136 -netmask 255.255.255.0 -gateway 9.3.5.41 -nsrvaddr 9.3.4.2 -nsrvdomain itsc.austin.ibm.com -start

5. Create a virtual adapter to be used in the SEA adapter on VIOS2. EX: (ent2)
  • a. Give the virtual adapter the same VLAN ID (PVID) as VIOS1. EX: "1" . 
  • b. Check the box "access external network". 
  • c. Give the virtual adapter a higher trunk priority. EX: "2"
6. Create another virtual adapter to be used as a Control Channel on VIOS2. EX: (ent3):
  • a. Give this new virtual adapter the same unique VLAN ID (PVID) as the control channel on VIOS1. EX: "99"
  • b. Do NOT check the box "access external network". 
  • c. Shutdown, Activate VIOS2 or run cfgdev from VIOS command line if created with DLPAR.
7. Create SEA on VIOS2 with failover attribute:


$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3

8. (OPTIONAL) Assign an ip address to SEA on VIOS2:

$ mktcpip -hostname vio2 -interface en4 -inetaddr 9.3.5.137 -netmask 255.255.255.0 -gateway 9.3.5.41 -nsrvaddr 9.3.4.2 -nsrvdomain itsc.austin.ibm.com -start
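
Once both SEAs are created, their failover attributes and current state can be verified from each VIOS (a sketch; ent4 is an assumed name for the SEA device created by mkvdev):

$ lsdev -dev ent4 -attr                  <== confirm ha_mode=auto and ctl_chan=ent3 are set
$ entstat -all ent4 | grep -i state      <== should show PRIMARY on VIOS1 and BACKUP on VIOS2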


Manual SEA Failover On VIO server:


Scenario 1: ha_mode change

$ lsdev -type adapter
or 
$ oem_setup_env 
# lsdev -Cc adapter |grep ent --> Note which ent is the SEA 
# entstat -d entX | grep State --> Check for the state (PRIMARY, or BACKUP)

Set ha_mode to standby on primary VIOS with chdev command:

# chdev -l entX -a ha_mode=standby     <== entX is SEA
or
$ chdev -dev entX  -attr ha_mode=standby

Reset it back to auto and the SEA should fail back to the primary VIOS:

# chdev -l entX -a ha_mode=auto        <== entX is SEA 
or
$ chdev -dev entX  -attr ha_mode=auto

Scenario 2: Primary VIOS Shutdown

  • Reboot the primary VIOS for fail over to backup SEA adapter.
  • When the primary VIOS is up again, it should fail back to the primary SEA adapter.
Scenario 3: Primary VIOS Error

  • Deactivate primary VIOS from the HMC for fail over to backup SEA adapter.
  • Activate the primary VIOS for the fail back to the primary SEA adapter again.
Scenario 4: Physical Link Failure

  • Unplug the cable of the physical ethernet adapter on primary VIOS for the failover to the backup VIOS.
  • Replug the cable of the physical ethernet adapter on primary VIOS for the failback to the primary VIOS.
Scenario 5: Reverse Boot Sequence

  • Shut down both the VIO servers.
  • Activate the VIOS with backup SEA until the adapter becomes active.
  • Activate the VIOS with primary SEA. The configuration should fail back to the primary SEA.