Tanti Technology


Saturday 9 November 2013

HACMP Installation on AIX 6.1


  1. Check for the required AIX BOS components (a quick check loop is sketched after these commands):

    # lslpp -l bos.adt.lib
    # lslpp -l bos.adt.libm
    # lslpp -l bos.adt.syscalls
    # lslpp -l bos.net.tcp.client
    # lslpp -l bos.net.tcp.server
    # lslpp -l bos.rte.SRC
    # lslpp -l bos.rte.libc
    # lslpp -l bos.rte.libcfg
    # lslpp -l bos.rte.libcur
    # lslpp -l bos.rte.libpthreads
    # lslpp -l bos.rte.odm
    # lslpp -l bos.lvm.rte
    # lslpp -l bos.clvm.enh
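
A quick way to run all of these checks at once is a small loop like the sketch below; the fileset list is simply the one above, and any fileset that lslpp reports as not installed is printed:

    # for f in bos.adt.lib bos.adt.libm bos.adt.syscalls bos.net.tcp.client bos.net.tcp.server \
               bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libcur bos.rte.libpthreads \
               bos.rte.odm bos.lvm.rte bos.clvm.enh
      do
          lslpp -l $f >/dev/null 2>&1 || echo "$f is NOT installed"
      done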

  2. If any of the above filesets are missing, install them from the AIX installation media:

    # installp -acgXYd . bos.adt.libm
    # installp -acgXYd . bos.adt.syscalls
    # installp -acgXYd . bos.clvm.enh
    # installp -acgXYd . bos.net.nfs.server (required for NFS)
    # installp -acgXYd . bos.data

  3. Run the bosboot command to rebuild the boot image on each boot disk (an optional boot list check follows these commands):

    # bosboot -ad /dev/hdisk0
    # bosboot -ad /dev/hdisk1
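
Optionally, confirm that both disks are still present in the normal boot list after rebuilding the boot images. This verification is not part of the original procedure; bootlist is a standard AIX command:

    # bootlist -m normal -o    -> should list hdisk0 and hdisk1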

  4. Install the RSCT filesets from the HACMP installation images.

Install the RSCT filesets from the HACMP installation image if they were not installed during the AIX installation (a quick way to check the installed RSCT level is sketched below).

Note: Install the higher version of RSCT if one is available on the AIX installation media.
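
Before installing the cluster filesets, the currently installed RSCT level can be checked with lslpp; this is an optional sketch using a fileset name pattern:

    # lslpp -l "rsct.*"    -> lists the installed RSCT filesets and their levels
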
  5. Install the HACMP cluster filesets.

Note: Installation may fail if the required version of RSCT is not installed.

Go to the PowerHA installation binary location.

Install the cluster filesets either through SMIT or with installp (a sample installp invocation follows the fileset list below).

cluster.adt.es
cluster.assist.license
cluster.doc.en_US.assist
cluster.doc.en_US.es
cluster.doc.en_US.glvm
cluster.doc.en_US.pprc
cluster.es.assist
cluster.es.cfs
cluster.es.cgpprc
cluster.es.client
cluster.es.cspoc
cluster.es.ercmf 

cluster.es.nfs
cluster.es.plugins
cluster.es.pprc
cluster.es.spprc
cluster.es.server
cluster.es.svcpprc
cluster.es.worksheets
cluster.license
cluster.man.en_US.es
cluster.msg.en_US.cspoc
cluster.xd.glvm
cluster.xd.license
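
If installp is used instead of SMIT, an invocation along the following lines installs the selected filesets from the current directory. This is only a sketch; the directory path is a placeholder, and the fileset selection depends on which PowerHA features are licensed and required:

    # cd /path/to/powerha/images    -> hypothetical location of the installation images
    # installp -acgXYd . cluster.es.server cluster.es.client cluster.es.cspoc cluster.license
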
  6. Reboot the nodes after installation:

    # shutdown -Fr

  7. Update the /etc/hosts file on both nodes:

    ###### Entries Required for HACMP #####
    # Persistent IP
    10.112.75.48 wisapp1per
    10.112.75.49 wisord1per

    # Boot IP
    10.2.2.77 wisapp1p wisapp1p_bt # Boot IP for wisapp1p
    10.2.2.78 wisord1p wisord1p_bt # Boot IP for wisord1p

    # Standby IP
    10.2.3.77 wisapp1p_stby # Standby IP for wisapp1p
    10.2.3.78 wisord1p_stby # Standby IP for wisord1p

    # Service IP
    10.112.75.74 wisappsvc

  8. Assign the IP addresses to both nodes as shown below (an example of setting a boot IP from the command line follows):

    wisapp1p
    en0 10.2.2.77
    en1 10.2.3.77

    wisord1p
    en0 10.2.2.78
    en1 10.2.3.78
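
The boot addresses can be assigned through smitty mktcpip or directly from the command line. A minimal sketch for the first node follows; the netmask shown is an assumption and must match your environment:

    # mktcpip -h wisapp1p -a 10.2.2.77 -m 255.255.255.0 -i en0
    # chdev -l en1 -a netaddr=10.2.3.77 -a netmask=255.255.255.0 -a state=up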

How to configure a disk heartbeat network in HACMP

Ensure that the disks to be used for disk heartbeating are assigned and configured to each
cluster node. Enter:

lspv -> ensure that a PVID is assigned to the disk on each cluster node

If a PVID is not assigned, run one of the following commands:

chdev -l hdisk# -a pv=yes
OR
chdev -l vpath# -a pv=yes (if SDD is installed)


1. Create an enhanced concurrent mode volume group on the disk or disks in question using SMIT. Enter:

smitty hacmp
Select:
        System Management (C-SPOC)
            HACMP Concurrent Logical Volume Management
                Concurrent Volume Groups
                    Create a Concurrent Volume Group (with Datapath Devices, if applicable)

Press F7 to select each cluster node. Select the PVID of the disk to be added to the Volume
Group. Enter the Volume Group Name, Desired Physical Partition Size, and major number.
Enhanced Concurrent Mode should be set to True.
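
Once created, the volume group mode can be confirmed from the command line on any node; this is an optional check, and the volume group name used here is only an example:

lsvg diskhbvg | grep -i concurrent    -> should report the VG as Enhanced-Capable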


2. Create a diskhb network. Enter:

smitty hacmp
Select:
        Extended Configuration
            Extended Topology Configuration
                Configure HACMP Networks
                    Add a Network to the HACMP cluster

Choose diskhb as the network type. Enter the network name or accept the default.


3. Add each disk-node pair to the diskhb network. Enter:

smitty hacmp
Select:
      Extended Configuration
          Extended Topology Configuration
              Configure HACMP Communication Interfaces/Devices
                  Add Communication Interfaces/Devices
                      Add Pre-Defined Communication Interfaces and Devices
                          Communication Devices

Choose your diskhb Network Name. For Devices Name, enter a unique name; For device
path, enter /dev/vpath# or /dev/hdisk#; For nodename, enter the node on which this device
resides.

Repeat this step for every node in the cluster.


4. Verify communication across the disk heartbeat network.

Run the following command on the first node to put it in Receive Mode:

/usr/sbin/rsct/bin/dhb_read -p hdisk# -r  (replace hdisk# with rvpath# if using SDD)

The following should be displayed:

  Receive Mode:
  Waiting for Response . . .


Run the following command on a different node to put it in Transmit Mode:

/usr/sbin/rsct/bin/dhb_read -p hdisk# -t   (replace hdisk# with rvpath# if using SDD)
If communication is successful, the following should be displayed:

  Link operating normally.

AIX Files Modified by HACMP


The following AIX files are modified to support HACMP. They are not distributed with HACMP.

/etc/hosts

The cluster event scripts use the /etc/hosts file for name resolution. All cluster node IP interfaces must be added to this file on each node.
If you delete service IP labels from the cluster configuration using SMIT, we recommend that you remove them from /etc/hosts too. This reduces the possibility of having conflicting entries if the labels are reused with different addresses in a future configuration.
Note that DNS and NIS are disabled during HACMP-related name resolution. This is why HACMP IP addresses must be maintained locally.
HACMP may modify this file to ensure that all nodes have the necessary information in their /etc/hosts file, for proper HACMP operations.

/etc/inittab

During installation, the following entry is made to the /etc/inittab file to start the Cluster Communication Daemon at boot:
clcomdES:2:once:startsrc -s clcomdES >/dev/console 2>&1
The /etc/inittab file is modified in each of the following cases:
  •  HACMP is configured for IP address takeover
  •  The Start at System Restart option is chosen on the SMIT Start Cluster Services panel
  •  Concurrent Logical Volume Manager (CLVM) is installed with HACMP 5.2.

Modifications to the /etc/inittab File due to IP Address Takeover

The following entry is added to the /etc/inittab file for HACMP network startup with IP address takeover:
harc:2:wait:/usr/es/sbin/cluster/etc/harc.net # HACMP network startup  
When IP address takeover is enabled, the system edits /etc/inittab to change the rc.tcpip and inet-dependent entries from run level "2" (the default multi-user level) to run level "a". Entries that have run level "a" are processed only when the telinit command is executed specifying that run level.

Modifications to the /etc/inittab File due to System Boot

The /etc/inittab file is used by the init process to control the startup of processes at boot time. The following line is added to /etc/inittab during the HACMP install:
clcomdES:2:once:startsrc -s clcomdES >/dev/console 2>&1  
This entry starts the Cluster Communications Daemon (clcomd) at boot.
The following entry is added to the /etc/inittab file if the Start at system restart option is chosen on the SMIT Start Cluster Services panel:
hacmp:2:wait:/usr/es/sbin/cluster/etc/rc.cluster -boot > /dev/console 2>&1 # Bring up Cluster
When the system boots, the /etc/inittab file calls the /usr/es/sbin/cluster/etc/rc.cluster script to start HACMP.
Because the inet daemons must not be started until after HACMP-controlled interfaces have swapped to their service address, HACMP also adds the following entry to the end of the /etc/inittab file to indicate that /etc/inittab processing has completed:
clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit
# HACMP for AIX: these must be the last entries in run level "a" in inittab!
pst_clinit:a:wait:/bin/echo Created /usr/es/sbin/cluster/.telinit > /dev/console
# HACMP for AIX: these must be the last entries in run level "a" in inittab!
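
The entries that HACMP places in /etc/inittab can be inspected with the standard AIX lsitab command; this is an optional verification, not something HACMP requires:

lsitab clcomdES    -> shows the Cluster Communications Daemon entry
lsitab clinit      -> shows the run level "a" marker entry
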
See Chapter 8: Starting and Stopping Cluster Services, for more information about the files involved in starting and stopping HACMP.

/etc/rc.net

The /etc/rc.net file is called by cfgmgr to configure and start TCP/IP during the boot process. It sets hostname, default gateway and static routes. The following entry is added at the beginning of the file for a node on which IP address takeover is enabled:
# HACMP for AIX 
# HACMP for AIX These lines added by HACMP for AIX software 
[ "$1" = "-boot" ] && shift || { ifconfig 1o0 127.0.0.1 up; exit 0; }
#HACMP for AIX 
# HACMP for AIX  
The entry prevents cfgmgr from reconfiguring boot and service addresses while HACMP is running.

/etc/services

The /etc/services file defines the sockets and protocols used for network services on a system. The ports and protocols used by the HACMP components are defined here.
#clinfo_deadman    6176/tcp
#clm_keepalive     6255/udp
#clm_pts           6200/tcp
#clsmuxpd          6270/tcp
#clm_lkm           6150/tcp
#clm_smux          6175/tcp
#godm              6177/tcp
#topsvcs           6178/udp
#grpsvcs           6179/udp
#emsvcs            6180/udp
#clver             6190/tcp
#clcomd            6191/tcp
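
Whether these entries are present on a given node can be checked quickly with grep; this is only an informal verification sketch:

grep -E 'clcomd|clsmuxpd|godm|topsvcs|grpsvcs|emsvcs' /etc/services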

/etc/snmpd.conf

Note: The version of snmpd.conf depends on whether you are using AIX 5L v5.1 or v5.2. The default version for v5.2 is snmpdv3.conf.
The SNMP daemon reads the /etc/snmpd.conf configuration file when it starts up and when a refresh or kill -1 signal is issued. This file specifies the community names and associated access privileges and views, hosts for trap notification, logging attributes, snmpd-specific parameter configurations, and SMUX configurations for the snmpd daemon. The HACMP installation process adds the clsmuxpd password to this file. The following entry is added to the end of the file to include the HACMP MIB managed by the clsmuxpd:
smux  1.3.6.1.4.1.2.3.1.2.1.5  "clsmuxpd_password" # HACMP clsmuxpd  
HACMP supports SNMP community names other than "public." If the default SNMP community name has been changed in /etc/snmpd.conf to something different from the default of "public," HACMP will still function correctly. The SNMP community name used by HACMP is the first name found that is not "private" or "system" using the lssrc -ls snmpd command.
The Clinfo service also gets the SNMP community name in the same manner. The Clinfo service supports the -c option for specifying the SNMP community name, but its use is not required. Use of the -c option is considered a security risk because a ps command could reveal the SNMP community name. If it is important to keep the SNMP community name protected, change permissions on /tmp/hacmp.out, /etc/snmpd.conf, /smit.log, and /usr/tmp/snmpd.log so that they are not world readable.
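
For example, restricting world read access on those files could look like the following sketch (run as root on each node and adjust the file list to your environment):

chmod o-r /tmp/hacmp.out /etc/snmpd.conf /smit.log /usr/tmp/snmpd.log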

/etc/snmpd.peers

The /etc/snmpd.peers file configures snmpd SMUX peers. The HACMP install process adds the following entry to include the clsmuxpd:
clsmuxpd 1.3.6.1.4.1.2.3.1.2.1.5  "clsmuxpd_password" # HACMP clsmuxpd  

/etc/syslog.conf

The /etc/syslog.conf file is used to control output of the syslogd daemon, which logs system messages. During the install process HACMP adds entries to this file that direct the output from HACMP-related problems to certain files.
# example: 
# "mail messages, at debug or higher, go to Log file. File must exist." 
# "all facilities, at debug and higher, go to console" 
# "all facilities, at crit or higher, go to all users" 
#  mail.debug           /usr/spool/mqueue/syslog 
#  *.debug              /dev/console 
#  *.crit                       * 
# HACMP Critical Messages from HACMP 
local0.crit /dev/console 
# HACMP Informational Messages from HACMP 
local0.info /usr/es/adm/cluster.log 
# HACMP Messages from Cluster Scripts 
user.notice /usr/es/adm/cluster.log  
The /etc/syslog.conf file should be identical on all cluster nodes.
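
After changing /etc/syslog.conf on a node, syslogd must re-read its configuration; this is standard AIX practice rather than an HACMP-specific step:

refresh -s syslogd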

/etc/trcfmt

The /etc/trcfmt file is the template file for the system trace logging and report utility, trcrpt. The install process adds HACMP tracing to the trace format file. HACMP tracing applies to the following daemons: clstrmgr, clinfo, and clsmuxpd.

/var/spool/cron/crontab/root

The /var/spool/cron/crontab/root file contains commands needed for basic system control. The install process adds HACMP logfile rotation to the file.

Maintaining an HACMP Cluster


The following maintenance tasks for an HACMP system are described in detail in subsequent chapters:
  •  Starting and stopping cluster services
  •  Managing shared LVM and CLVM components
  •  Managing the cluster topology
  •  Managing cluster resources
  •  Managing cluster resource groups
  •  Managing users, groups and security in a cluster
  •  Saving and restoring an HACMP configuration

Limitations & Considerations of LPAR management server


If you use an LPAR as a CSM management server, consider the following limitations. Note that these limitations apply only if the CSM management server is an LPAR and not a separate physical machine:
  1. The CSM management server can be brought down inadvertently by a user on the HMC who deactivates the LPAR. Even if a user does not have access to the CSM management server, a user with access to the HMC can power off the management server or move resources such as CPU or I/O from the LPAR.
  2. If the firmware needs to be upgraded, the LPAR management server might also go down when the system is quiesced. However, bringing the CEC back up returns the system to normal.
  3. There is no direct manual hardware control of the CSM management server. You must use the HMC for power control of the management server.
  4. An LPAR management server cannot have an attached display. This limitation can affect the performance of your CSM GUIs.
  5. In machines such as the p690, you can assign a CD-ROM drive to one LPAR on the CEC (the management server LPAR).
  6. Do not define an LPAR management server as a managed node.
  7. A cluster that is installed and configured can still function even if the management server goes down. For example, cluster applications can continue to run, and nodes in the cluster can be rebooted. However, tasks including monitoring, automated responses for detecting problems in the cluster, and scheduled file and software updates cannot occur while the management server is down.
  8. If the cluster contains a 9076 SP Node or 7026 server, you cannot define an LPAR management server for the cluster.

HACMP Scripts


The following scripts are supplied with the HACMP software.

STARTUP AND SHUTDOWN SCRIPTS

Each of the following scripts is involved in starting and stopping the HACMP software.

/usr/es/sbin/cluster/utilities/clstart

The /usr/es/sbin/cluster/utilities/clstart script, which is called by the /usr/es/sbin/cluster/utilities/rc.cluster script, invokes the AIX System Resource Controller (SRC) facility to start the cluster daemons. The clstart script starts HACMP with the options currently specified on the Start Cluster Services SMIT panel.
There is a corresponding C-SPOC version of this script that starts cluster services on each cluster node. The /usr/es/sbin/cluster/sbin/cl_clstart script calls the HACMP clstart script.
At cluster startup, clstart looks for the file /etc/rc.shutdown. The system file /etc/rc.shutdown can be configured to run user-specified commands during processing of the AIX /usr/sbin/shutdown command.
Newer versions of the AIX /usr/sbin/shutdown command automatically call HACMP's /usr/es/sbin/cluster/etc/rc.shutdown, and subsequently call the existing /etc/rc.shutdown (if it exists).
Older versions of the AIX /usr/sbin/shutdown command do not have this capability. In this case, HACMP manipulates the /etc/rc.shutdown script so that both /usr/es/sbin/cluster/etc/rc.shutdown and the existing /etc/rc.shutdown (if it exists) are run. Since HACMP needs to stop cluster services before the shutdown command is run, on cluster startup rc.cluster replaces any user-supplied /etc/rc.shutdown file with the HACMP version. The user version is saved and is called by the HACMP version prior to its own processing. When cluster services are stopped, the clstop command restores the user's version of rc.shutdown.

/usr/es/sbin/cluster/utilities/clstop

The /usr/es/sbin/cluster/utilities/clstop script, which is called by the SMIT Stop Cluster Services panel, invokes the SRC facility to stop the cluster daemons with the options specified on the Stop Cluster Services panel.
There is a corresponding C-SPOC version of this script that stops cluster services on each cluster node. The /usr/es/sbin/cluster/sbin/cl_clstop script calls the HACMP clstop script.
Also see the notes on /etc/rc.shutdown in the section on clstart above for more information.

/usr/es/sbin/cluster/utilities/clexit.rc

If the SRC detects that the clstrmgr daemon has exited abnormally, it executes the /usr/es/sbin/cluster/utilities/clexit.rc script to halt the system. If the SRC detects that any other HACMP daemon has exited abnormally, it executes the clexit.rc script to stop these processes, but does not halt the node.
You can change the default behavior of the clexit.rc script by configuring the /usr/es/sbin/cluster/etc/hacmp.term file to be called when the HACMP cluster services terminate abnormally. You can customize the hacmp.term file so that HACMP will take actions specific to your installation. See the file for full information.

/usr/es/sbin/cluster/etc/rc.cluster

If the Start at system restart option is chosen on the Start Cluster Services SMIT panel, the /usr/es/sbin/cluster/etc/rc.cluster script is called by the /etc/inittab file to start HACMP. The /usr/es/sbin/cluster/etc/rc.cluster script does some necessary initialization and then calls the /usr/es/sbin/cluster/utilities/clstart script to start HACMP.
The /usr/es/sbin/cluster/etc/rc.cluster script is also used to start the clinfo daemon on a client.
There is a corresponding C-SPOC version of this script that starts cluster services on each cluster node. The /usr/es/sbin/cluster/sbin/cl_rc.cluster script calls the HACMP rc.cluster script.
See the man page for rc.cluster for more information.

/etc/rc.net

The /etc/rc.net script is called by the /usr/es/sbin/cluster/etc/rc.cluster script to configure and start the TCP/IP interfaces and to set the required network options. The /etc/rc.net script is used in the boot process to retrieve interface information from the ODM and to configure all defined interfaces. If IP address takeover is configured, the /etc/rc.net script is called from the /usr/es/sbin/cluster/etc/rc.cluster script at cluster startup instead of during the boot process.

AIX HACMP Startup Procedure

Purpose:

This document describes the steps to start and stop an IBM HACMP cluster.

Startup Procedure:

Phase1: Powering on the servers and bringing up AIX.

Caution: Do NOT power on both the servers simultaneously.

BEGINNING of phase1:

1) Power on all peripherals. If any of the peripherals are already on, leave them as they are.

2) Power ON node1 server.

3) Wait till the Common Desktop Environment (CDE) screen is displayed in the monitor display of this server.

4) Power ON node2 server.

5) Wait till the Common Desktop Environment (CDE) screen is displayed in the monitor display of this server.

END of phase1


Phase2: Bringing up cluster (HACMP):

Caution: Do not log in through CDE now. Throughout this procedure, use only command-line login.

BEGINNING of phase2:

1) Log in as root.

2) Check to see that the cluster is not running:
a)
# lssrc -g cluster

Subsystem Group PID Status


b)
# netstat -i


c)
# lsvg -o

rootvg


3) Check to see if the network is working:
a)
# ping node2_boot
# ping node2_stdby
# ping node2_prv



4) Start HACMP:
# smitty clstart

Start Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
* Start now, on system restart or both now

BROADCAST message at startup? true
Startup Cluster Lock Services? true
Startup Cluster Information Daemon? true


[[ The top left corner shows "Running"; wait until "OK" is displayed ]]
Press Esc+0 to exit to the prompt.



To see the progress of cluster startup:

# tail -f /tmp/hacmp.out


When no more updates appear, the cluster is fully up on this node.

5) Check if cluster is up and normal:
a)
# lssrc -g cluster

Subsystem Group PID Status
clstrmgrES cluster 12762 active
clsmuxpdES cluster 14672 active
clinfoES cluster 15570 active


b)
# netstat -i


c)
# lsvg -o



othervg
rootvg


END of phase2:
Cluster is ready.
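
As an additional check, the cluster manager state can be queried directly; this is optional, and the exact output varies by HACMP level:

# lssrc -ls clstrmgrES    -> detailed cluster manager status
# /usr/es/sbin/cluster/clstat    -> interactive cluster status display (requires clinfo to be running)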

VIOS frequently asked questions


1) What is the Virtual I/O Server?
 The Virtual I/O Server is an appliance that provides virtual storage and shared Ethernet adapter capability to client logical partitions on POWER5 systems. It allows a physical adapter with attached disks on the Virtual I/O Server partition to be shared by one or more partitions, enabling clients to consolidate and potentially minimize the number of physical adapters required.
2) Is there a VIOS website?
Yes. The VIOS website contains links to documentation, hints and tips, and VIOS updates and fixes.
3) What documentation is available for the VIOS?
The VIOS documentation can be found online in InfoCenter.
4) What is NPIV?
N_Port ID Virtualization (NPIV) is a standardized method for virtualizing a physical fibre channel port. An NPIV-capable fibre channel HBA can have multiple N_Ports, each with a unique identity. NPIV, coupled with the Virtual I/O Server (VIOS) adapter sharing capabilities, allows a physical fibre channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER logical partitions (LPARs) to have virtual fibre channel HBAs, each with a dedicated worldwide port name. Each virtual fibre channel HBA has a unique SAN identity similar to that of a dedicated physical HBA.
The minimum adapter firmware level required for the 8 Gigabit Dual Port Fibre Channel adapter, feature code 5735, to support NPIV is 110304. You can obtain this image from http://www-933.ibm.com/support/fixcentral/
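
On the VIOS itself, NPIV-capable ports can be listed from the restricted shell; this is an informal check rather than part of the FAQ:

$ lsnports    (the fabric column indicates whether the attached switch port supports NPIV)
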
5) What is virtual SCSI (VSCSI)?
Virtual SCSI is based on a client and server relationship. The Virtual I/O Server owns the physical resources and acts as server, or target, device. Physical adapters with attached disks on the Virtual I/O Server partition may be shared by one or more partitions. These partitions contain a virtual SCSI client adapter that sees these virtual devices as standard SCSI compliant devices and LUNs.
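
As an illustration of this client/server relationship, a physical backing device is typically mapped to a client from the VIOS command line along the following lines; hdisk5, vhost0, and vtscsi0 are hypothetical names:

$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi0
$ lsmap -vadapter vhost0    (shows the virtual SCSI mappings for that server adapter)
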
6) What is the shared Ethernet adapter (SEA)?
A shared Ethernet adapter is a bridge between a physical Ethernet adapter or link aggregation and one or more virtual Ethernet adapters on the Virtual I/O Server. A shared Ethernet adapter enables logical partitions on the virtual Ethernet to share access to the physical Ethernet and communicate with stand-alone servers and logical partitions on other systems. The shared Ethernet adapter provides this access by connecting the internal VLANs with the VLANs on the external switches.
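
A shared Ethernet adapter is usually created on the VIOS with mkvdev; this sketch assumes ent0 is the physical adapter and ent2 the virtual trunk adapter, which will differ on your system:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
$ lsmap -all -net    (lists the SEA and its physical/virtual adapters)
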
7) What physical storage can be attached to the VIOS?
See http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html for supported storage and configurations.
8) What client operating systems support attachment to the VIOS?
1.     AIX 5.3 and AIX 6.1 TL 2
2.     SUSE LINUX Enterprise Server 9 for POWER
3.     Red Hat Enterprise Linux AS for POWER Version 3(update 2 or newer)
4.     Red Hat Enterprise Linux AS for POWER Version 4
5.     IBM i
9) What solutions can be supported using virtual devices and the VIOS?
Virtual SCSI disk devices are standard SCSI compliant devices that support all mandatory SCSI commands. Solutions that have special requirements at the device level should consult the IBM Solutions team to determine if the device meets your requirements.
The VIOS datasheet includes some information on VSCSI solutions.
10) Can SCSI LUNs be moved between the physical and virtual environment as is?
That is, given a physical SCSI device (i.e., a LUN) with user data on it that resides in a SAN environment, can this device be allocated to a VIOS, provisioned to a client partition, and used by the client as is?
No, this is not supported at this time. The device cannot be used as is; virtual SCSI devices are new devices when created, and the data must be put onto them after creation. This typically requires some type of backup of the data in the physical SAN environment with a restoration of the data onto the virtual disk.
11) In the context of virtual I/O, what do the terms server, hosting, client, and hosted partition mean?
The terms server and hosting partition are synonymous, as are client and hosted. The server/hosting partition(s) own the physical resources and facilitate the sharing of those resources among the client/hosted partition(s).
12) Do AIX, Linux, and IBM i all provide Virtual I/O Servers?
The Linux and IBM i operating systems do provide various virtual I/O server/hosting features (virtual SCSI, Ethernet bridging, etc.). AIX does not provide virtual I/O server/hosting capabilities. There is only one product named the Virtual I/O Server. It is a single-function appliance that provides I/O resources to client partitions and does not support general-purpose applications.
13) The VIOS appears to have some similarities with AIX. Can you explain?
The VIOS is not AIX. The VIOS is a critical resource, and as such the product was originally based on a version of the AIX operating system to build on a very mature and robust foundation. The VIOS provides a generic command line interface for management. Some of the commands in the VIOS CLI may have common names with AIX and Linux commands; these command names were chosen only because they were generic, and the flags and parameters will differ. While some of the VIOS commands may drop the user into an AIX-like environment, this environment is only supported for installing and setting up certain software packages (typically software for managing storage devices; see the VIOS's Terms and Conditions). Any other tasks performed in this environment are not supported. While the VIOS will continue to support its current user interfaces going forward, the underlying operating system may change at any time.
14) What is the purpose of the oem_setup_env CLI command?
The sole purpose of the oem_setup_env VIOS CLI command is for ease in installing and setting up certain software packages for the VIOS. See the VIOS datasheet for a list of supported VIOS software solutions.
15) What type of performance can I expect from the VSCSI devices?
Please see the section titled "Planning for Virtual SCSI Sizing Considerations" in the VIOS online pubs in InfoCenter.
16) How do I size the VIOS and client partitions?
The VIOS online pubs in InfoCenter include sections on sizing for both Virtual SCSI and SEA. For the shared Ethernet adapter, please see the section titled "Planning for shared Ethernet adapters".
In addition, the WorkLoad Estimator Tool is being upgraded to accommodate virtual I/O and the VIOS.
17) Why can't AIX VSCSI MPIO devices do load balancing?
Typical multipathing solutions provide two key functions: failover and load balancing. MPIO for VSCSI devices does provide failover protection. The benefit of load balancing is less obvious in this environment. Typically, load balancing allows the even distribution of I/O requests across multiple HBAs of finite resource. Load balancing for VSCSI devices would mean distributing the I/O workload between multiple Virtual I/O Servers. Since the resources allocated to a given VIOS can be increased to handle larger workloads, load balancing seems to have limited benefit.
18) What is APV (Advanced Power Virtualization)?
The Advanced POWER Virtualization feature is a package that enables and manages the virtual I/O environment on POWER5 systems. The main technologies include:
  • Virtual I/O Server
    - Virtual SCSI Server
    - Shared Ethernet Adapter
  • Micro-Partitioning technology
  • Partition Load Manager
The primary benefit of Advanced POWER Virtualization is to increase overall utilization of system resources by allowing only the required amount of processor and I/O resource needed by each partition to be used.
19) What are some of the restrictions and limitations in the VIOS environment?
  • Logical volumes used as virtual disks must be less than 1 TB in size.
  • Logical volumes on the VIOS used as virtual disks cannot be mirrored, striped, or have bad block relocation enabled.
  • Virtual SCSI supports certain Fibre Channel, parallel SCSI, and SCSI RAID devices as backing devices.
  • Virtual SCSI does not impose any software limitations on the number of supported adapters. A maximum of 256 virtual slots can be assigned to a single partition. Every virtual slot that is created requires resources in order to be instantiated. Therefore, the resources allocated to the Virtual I/O Server limit the number of virtual adapters that can be configured.
  • The SCSI protocol defines mandatory and optional commands. While virtual SCSI supports all of the mandatory commands, some optional commands may not be supported at this time.
  • The Virtual I/O Server is a dedicated partition to be used only for VIOS operations. No other applications can be run in the Virtual I/O Server partition.
  • Future considerations for VSCSI devices: The VIOS uses several methods to uniquely identify a disk for use as a virtual SCSI disk (a quick way to inspect these identifiers is sketched at the end of this section); they are:
    • Unique device identifier(UDID)
    • IEEE volume identifier
    • Physical volume identifier(PVID)
Each of these methods may result in different data formats on the disk. The preferred disk identification method for virtual disks is the use of UDIDs.

MPIO uses the UDID method.

Most non-MPIO disk storage multi-pathing software products use the PVID method instead of the UDID method. Because of the different data format associated with the PVID method, customers with non-MPIO environments should be aware that certain future actions performed in the VIOS LPAR may require data migration, that is, some type of backup and restore of the attached disks. These actions may include, but are not limited to the following:
  • Conversion from a Non-MPIO environment to MPIO
  • Conversion from the PVID to the UDID method of disk identification
  • Removal and rediscovery of the Disk Storage ODM entries
  • Updating non-MPIO multi-pathing software under certain circumstances
  • Possible future enhancements to VIO
  • Due in part to the differences in disk format as described above, VIO is currently supported for new disk installations only
  • Considerations when implementing shared Ethernet adapters:
    • Only Ethernet adapters can be shared. Other types of network adapters cannot be shared.
    • IP forwarding is not supported on the Virtual I/O Server.
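
On recent VIOS levels, the disk identifiers discussed above (PVID, UDID, IEEE) can be inspected for a given backing device with chkdev; this is an optional check, the device name is a placeholder, and chkdev may not exist on older VIOS releases:

$ chkdev -dev hdisk5 -verbose    (reports the identifiers known for the disk)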