Tanti Technology

Bangalore, Karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments, with 4.5+ years of experience in AIX system administration. This site is intended to help system administrators in their day-to-day activities; your comments on posts are welcome. This blog is all about the IBM AIX flavour of UNIX. It is aimed at system admins who use AIX in their work, and it can also be used by newbies who want to get certified in AIX administration. The blog will be updated frequently to help system admins and other new learners. DISCLAIMER: The blog owner takes no responsibility of any kind for any data loss or damage caused by trying any of the commands/methods mentioned in this blog; you use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.

Thursday 21 November 2013

General Parallel File System (GPFS)

IBM General Parallel File System (GPFS)
IBM General Parallel File System (GPFS) is a scalable, high-performance, shared-disk clustered file system and storage management infrastructure for AIX®, Linux® and Windows, developed by IBM. It provides efficient storage management for big data applications.
Like some other cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of clusters. It can be used with AIX 5L clusters, Linux clusters, or a heterogeneous cluster of AIX and Linux nodes. In addition to providing filesystem storage capabilities, GPFS provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote GPFS clusters.
GPFS has been available on AIX since 1998 and on Linux since 2001, and is offered as part of the IBM System Cluster 1350.
Versions of GPFS:
GPFS 3.2, September 2007
GPFS 3.2.1-2, April 2008
GPFS 3.2.1-4, July 2008
GPFS 3.1
GPFS 2.3.0-29
Architecture:
GPFS provides high performance by allowing data to be accessed over multiple computers at once. Most existing file systems are designed for a single server environment, and adding more file servers does not improve performance. GPFS provides higher input/output performance by “striping” blocks of data from individual files over multiple disks, and reading and writing these blocks in parallel. Other features provided by GPFS include high availability, support for heterogeneous clusters, disaster recovery, security, DMAPI, HSM and ILM.
GPFS File System:
A GPFS file system is built from a collection of disks which contain the file system data and metadata. A file system can be built from a single disk or contain thousands of disks, storing petabytes of data. A GPFS cluster can contain up to 256 mounted file systems. There is no limit placed upon the number of simultaneously opened files within a single file system. As an example, current GPFS customers are using single file systems up to 2 PB in size and others containing tens of millions of files.
Application interfaces:
Applications can access files through standard UNIX® file system interfaces or through enhanced interfaces available for parallel programs. Parallel and distributed applications can be scheduled on GPFS clusters to take advantage of the shared access architecture. This makes GPFS a key component in many grid-based solutions. Parallel applications can concurrently read or update a common file from multiple nodes in the cluster. GPFS maintains the coherency and consistency of the file system using sophisticated byte-level locking, token (lock) management, and logging. In addition to the standard interfaces, GPFS provides a unique set of extended interfaces which can be used to provide high performance for applications with demanding data access patterns. These extended interfaces are more efficient for traversing a file system, for example, and provide more features than the standard POSIX interfaces.
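As a simple illustration of the shared-access model (a sketch only; it assumes a GPFS file system is already mounted at /GPFS on both nodes, as built later in this post), a file written on one node is immediately visible on the other:
node1# dd if=/dev/zero of=/GPFS/shared_test bs=1024k count=10
node2# ls -l /GPFS/shared_test
node2# cat /GPFS/shared_test > /dev/null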
Performance and scalability:
GPFS provides unparalleled performance especially for larger data objects and excellent performance for large aggregates of smaller objects. GPFS achieves high performance I/O by:
• Striping data across multiple disks attached to multiple nodes.
• Efficient client side caching.
• Supporting a large block size, configurable by the administrator, to fit I/O requirements (see the example below).
• Utilizing advanced algorithms that improve read-ahead and write-behind file functions.
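For instance, the block size is chosen when a file system is created (with the -B option of mmcrfs, as shown later in this post) and can be displayed afterwards; a minimal sketch using the file system name fs1 that is created later:
# mmlsfs fs1 -B
This reports the block size attribute of fs1, which for the 64 KB file system built in Step 9 would show 65536.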
Why Choose GPFS:
GPFS is highly scalable: (2000+ nodes)
  1. Symmetric, scalable software architecture
  2. Distributed metadata management
  3. Allows for incremental scaling of system (nodes, disk space) with ease
GPFS is high performance file system:
  1. Large block size (tunable) support with wide striping (across nodes and disks)
  2. Parallel access to files from multiple nodes
  3. Thorough token refinement (sector range) and token management
  4. Efficient deep prefetching: read ahead, write behind
  5. Recognize access patterns (adaptable mechanism)
  6. Highly multithreaded daemon
  7. Data shipping mode for MPI-IO and other applications
GPFS is highly available and fault tolerant:
  1. Data protection mechanisms include journaling, replication, mirroring, shadowing (these are standard file system techniques)
  2. Heartbeat mechanism to recover from multiple disk, node, connectivity failures
  3. Recovery software mechanisms implemented in all layers
GPFS is in fact transparent to most applications, therefore virtually any application can work with GPFS as though it were using a local file system. There are some restrictions, though, which must be understood to make sure that your application is able to deliver the expected results when using GPFS (application concurrency mechanisms, application locking characteristics, etc.).
Install and configure a GPFS cluster on AIX
  1. Verify the system environment
  2. Create a GPFS cluster
  3. Define NSDs
  4. Create a GPFS file system
GPFS minimum requirements
  1. Two AIX 6.1 or 7.1 operating systems (LPARs)
  2. At least 4 hdisks
  3. GPFS 3.4 software with the latest PTFs, including these filesets:
  4. gpfs.base
  5. gpfs.docs.data
  6. gpfs.msg.en_US
Note: The AIX installation is very similar to the Linux installation; AIX LPP packages replace the Linux RPMs, and some of the administrative commands are different.
Step 1: Verify Environment
  1. Verify nodes properly installed
    1. Check that the operating system level is supported
      On each node run oslevel (see the example below)
      Check the GPFS documentation for the supported OS levels
    2. Is the installed OS level supported by GPFS? Yes No
    3. Is there a specific GPFS patch level required for the installed OS? Yes No
    4. If so what patch level is required? ___________
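For example, oslevel with the -s flag shows the full AIX level including the technology level and service pack (the output shown here is illustrative and will differ on your system):
# oslevel -s
7100-01-05-1228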
  2. Verify nodes configured properly on the network(s)
    1. Write the name of Node1: ____________
    2. Write the name of Node2: ____________
    3. From node1, ping node2
    4. From node2, ping node1 (see the example below)
      If the pings fail, resolve the issue before continuing.
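For example (node names are the lab placeholders used above):
node1# ping -c 3 node2
node2# ping -c 3 node1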
  3. Verify node-to-node ssh communications (For this lab you will use ssh and scp for secure remote commands/copy)
    1. On each node create an ssh key. To do this use the ssh-keygen command; if you don't specify a blank passphrase with -N, then you need to press Enter each time you are prompted, to create a key with no passphrase, until you are returned to a prompt. The result should look something like this:
# ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory ‘/.ssh’.
Your identification has been saved in /.ssh/id_rsa.
Your public key has been saved in /.ssh/id_rsa.pub.
The key fingerprint is:
7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61
root@node1
    2. On node1 copy the $HOME/.ssh/id_rsa.pub file to $HOME/.ssh/authorized_keys
# cp $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys
    3. From node1 copy the $HOME/.ssh/id_rsa.pub file from node2 to /tmp/id_rsa.pub
# scp node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub
    4. Add the public key from node2 to the authorized_keys file on node1
# cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys
    5. Copy the authorized_keys file from node1 to node2
# scp $HOME/.ssh/authorized_keys node2:/.ssh/authorized_keys
    6. To test your ssh configuration, ssh as root from node1 to node1, node1 to node2, node2 to node1, and node2 to node2 until you are no longer prompted for a password or for addition to the known_hosts file.
    node1# ssh node1 date
    node1# ssh node2 date
    node2# ssh node1 date
    node2# ssh node2 date
    7. Suppress ssh banners by creating a .hushlogin file in the root home directory
# touch $HOME/.hushlogin
  4. Verify the disks are available to the system
    For this lab you should have 4 disks available for use, hdiskw-hdiskz.
    1. Use lspv to verify the disks exist
    2. Ensure you see 4 unused disks besides the existing rootvg disks and/or other volume groups, as in the sample output below.
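Disks that do not yet belong to a volume group show none/None in the lspv output; a representative listing looks something like this (disk names and PVIDs are illustrative):
# lspv
hdisk0          00c478de09a4ebb5                    rootvg          active
hdisk1          none                                None
hdisk2          none                                None
hdisk3          none                                None
hdisk4          none                                None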
Step 2: Install the GPFS software
On node1
  1. Locate the GPFS software in /yourdir/GPFS/base/
# cd /yourdir/GPFS/base/
  2. Run the inutoc command to create the table of contents, if not done already
# inutoc .
  3. Install the base GPFS code using the installp command
# installp -aXY -d/yourdir/GPFS/base all
  4. Locate the latest GPFS updates in /yourdir/GPFS/fixes/
# cd /yourdir/GPFS/fixes/
  5. Run the inutoc command to create the table of contents, if not done already
# inutoc .
  6. Install the GPFS PTF updates using the installp command
# installp -aXY -d/yourdir/GPFS/fixes all
  7. Repeat steps 1-6 on node2. On node1 and node2, confirm GPFS is installed using the lslpp command
# lslpp -L gpfs.\*
The output should look similar to this:
Fileset                      Level  State Type  Description (Uninstaller)
--------------------------------------------------------------------------
gpfs.base                  3.4.0.11    A    F    GPFS File Manager
gpfs.docs.data             3.4.0.4     A    F    GPFS Server Manpages and Documentation
gpfs.gnr                   3.4.0.2     A    F    GPFS Native RAID
gpfs.msg.en_US             3.4.0.11    A    F    GPFS Server Messages U.S. English
Note1: Exact versions of GPFS may vary from this example; the important part is that the base, docs and msg filesets are present.
Note2: The gpfs.gnr fileset is used by the Power 775 HPC cluster only.
  8. Confirm the GPFS binaries are in your $PATH using the mmlscluster command
# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed.  Examine previous error messages to determine cause.
Note: The path to the GPFS binaries is /usr/lpp/mmfs/bin
Step 3: Create the GPFS cluster
For this exercise the cluster is initially created with a single node. When creating the cluster make node1 the primary configuration server and give node1 the designations quorum and manager. Use ssh and scp as the remote shell and remote file copy commands.
*Primary Configuration server (node1): __________
*Verify fully qualified path to ssh and scp: ssh path__________
scp path_____________
  1. Use the mmcrcluster command to create the cluster
# mmcrcluster -N node1:manager-quorum -p node1 -r /usr/bin/ssh -R /usr/bin/scp
Thu Mar 1 09:04:33 CST 2012: mmcrcluster: Processing node node1
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
  2. Run the mmlscluster command again to see that the cluster was created
# mmlscluster
=====================================================================================
| Warning:                                                                    |
|   This cluster contains nodes that do not have a proper GPFS license        |
|   designation.  This violates the terms of the GPFS licensing agreement.    |
|   Use the mmchlicense command and assign the appropriate GPFS licenses      |
|   to each of the nodes in the cluster.  For more information about GPFS     |
|   license designation, see the Concepts, Planning, and Installation Guide.  |
===============================================================================
GPFS cluster information
========================
GPFS cluster name:         node1.ibm.com
GPFS cluster id:           13882390374179224464
GPFS UID domain:           node1.ibm.com
Remote shell command:      /usr/bin/ssh
Remote file copy command:  /usr/bin/scp
GPFS cluster configuration servers:
———————————–
1. Primary server:    node1.ibm.com
2. Secondary server:  (none)
Node Daemon node name            IP address       Admin node name             Designation
———————————————————————————————–
1  node1.lab.ibm.com          10.0.0.1         node1.ibm.com               quorum-manager
  3. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
# mmchlicense server --accept -N node1
The following nodes will be designated as possessing GPFS server licenses:
node1.ibm.com
mmchlicense: Command successfully completed
Step 4: Start GPFS and verify the status of all nodes
  1. Start GPFS on all the nodes in the GPFS cluster using the mmstartup command
# mmstartup -a
  2. Check the status of the cluster using the mmgetstate command
# mmgetstate -a
Node number Node name GPFS state
——————————————
1 node1 active
Step 5: Add the second node to the cluster
  1. On node1, use the mmaddnode command to add node2 to the cluster
# mmaddnode -N node2
  2. Confirm the node was added to the cluster using the mmlscluster command
# mmlscluster
  3. Use the mmchcluster command to set node2 as the secondary configuration server
# mmchcluster -s node2
  4. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
# mmchlicense server --accept -N node2
  5. Start node2 using the mmstartup command
# mmstartup -N node2
  6. Use the mmgetstate command to verify that both nodes are in the active state
# mmgetstate -a
Step 6: Collect information about the cluster
Now we will take a moment to check a few things about the cluster. Examine the cluster configuration using the mmlscluster command.
  1. What is the cluster name? ______________________
  2. What is the IP address of node2? _____________________
  3. What date was this version of GPFS “Built”? ________________
    Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
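The GPFS daemon records its version and build information in this log when it starts, so one way to find the build date is to search the log for that banner (the exact wording of the line varies by GPFS level):
# grep -i built /var/adm/ras/mmfs.log.latest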
Step 7: Create NSDs
You will use the 4 hdisks.
•Each disk will store both data and metadata
•The storage pool column is left blank (no storage pools are assigned at this time)
•The NSD server field (ServerList) is left blank (both nodes have direct access to the shared LUNs)
  1. On node1 create the directory /yourdir/data
  2. Create a disk descriptor file /yourdir/data/diskdesc.txt using the format:
#DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
hdiskw:::dataAndMetadata::nsd1:
hdiskx:::dataAndMetadata::nsd2:
hdisky:::dataAndMetadata::nsd3:
hdiskz:::dataAndMetadata::nsd4:
Note: hdisk numbers will vary per system.
  3. Create a backup copy of the disk descriptor file, /yourdir/data/diskdesc_bak.txt (mmcrnsd rewrites the original descriptor file, so keep an unmodified copy)
# cp /yourdir/data/diskdesc.txt /yourdir/data/diskdesc_bak.txt
  4. Create the NSDs using the mmcrnsd command
# mmcrnsd -F /yourdir/data/diskdesc.txt
Step 8: Collect information about the NSDs
Now collect some information about the NSDs you have created.
  1. Examine the NSD configuration using the mmlsnsd command
    1. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?) associated with an NSD? _______
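For reference, a minimal sketch of one way to display the mapping between NSDs and their local device names (check the mmlsnsd man page for your GPFS level to confirm which flag you want):
# mmlsnsd -m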
Step 9: Create a file system
Now that there is a GPFS cluster and some NSDs available, you can create a file system. In this section we will create a file system with these characteristics:
•Set the file system block size to 64 KB
•Mount the file system at /GPFS
  1. Create the file system using the mmcrfs command
# mmcrfs /GPFS fs1 -F diskdesc.txt -B 64k
  2. Verify the file system was created correctly using the mmlsfs command
# mmlsfs fs1
Is the file system automatically mounted when GPFS starts? _________________
  3. Mount the file system using the mmmount command
# mmmount all -a
  4. Verify the file system is mounted using the df command
# df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4            65536      6508   91%     3375    64% /
/dev/hd2          1769472    465416   74%    35508    24% /usr
/dev/hd9var        131072     75660   43%      620     4% /var
/dev/hd3           196608    192864    2%       37     1% /tmp
/dev/hd1            65536     65144    1%       13     1% /home
/proc                   -         -    -         -     -  /proc
/dev/hd10opt       327680     47572   86%     7766    41% /opt
/dev/fs1        398929107 398929000    1%        1     1% /GPFS
  5. Use the mmdf command to get information on the file system.
# mmdf fs1
How many inodes are currently used in the file system? ______________


Logical partitions

Creating logical partitions
To create a logical partition, begin in the HMC workplace window. Select Systems Management -> Servers and then select the name of the server. This action takes you to the view shown in Figure

Expand Configuration and then expand Create Logical partition to open the area to create an AIX, Linux, or Virtual I/O Server partition.
Creating an AIX or a Linux partition
Note: The options and window views for creating a Virtual I/O Server (VIO Server) partition are the same as those that we present in this section. Thus, we do not document the steps for the VIO Server.
To create an AIX or a Linux partition, follow these steps:
1. Select Configuration → Create Logical partition → AIX or Linux to open the window shown in Figure. Here you can set the partition ID and specify the partition’s name. Then, select Next.
Figure: Create an AIX partition
2. Enter a profile name for this partition and click Next (Figure). You can then create a partition with either shared or dedicated processors on your server.
Figure: Create an AIX partition
Configuring a shared processor partition
This section describes how to create a partition with a shared processor.
To configure a shared processor partition:
  1. Select Shared and then select Next, as shown in Figure
Figure: Create a shared processor partition
2. Specify the processing units for the partition as well as any settings for virtual processors, as shown in Figure. The sections that immediately follow this figure discuss the settings in this figure in detail.
3. After you have entered data in each of the fields (or accepted the defaults), select Next and then proceed to “Setting partition memory”.
Figure: Shared partition settings
Processing Settings area
In the Processing Settings area, you must specify the minimum number of processors that you want the shared processor partition to acquire, the desired amount, and the maximum upper limit allowed for the partition.
The values in each field can range anywhere between 0.1 and the total number of processors in the managed server, in increments of one tenth of a processor.
Each field defines the following information:
  • Minimum processing units
The absolute minimum number of processing units required from the shared processing pool for this partition to become active. If the number in this field is not available from the shared processing pool, this partition cannot be activated. This value has a direct bearing on dynamic logical partitioning (DLPAR), as the minimum processing units value represents the smallest value of processors the partition can have as the result of a DLPAR deallocation.
  • Desired processing units
This number has to be greater than or equal to the amount set in Minimum processing units, and represents the number of processing units requested above the minimum amount. If the minimum is set to 2.3 and the desired to 4.1, then the partition can become active with any number of processing units between 2.3 and 4.1, depending on what is available from the shared processing pool. When a partition is activated, it requests processing units starting at the desired value and steps down in increments of 0.1 of a processor until it reaches the minimum value. If the minimum is not met, the partition does not become active. Desired processing units only governs the possible number of processing units a partition can become active with. If the partition is made uncapped, then the hypervisor can let the partition exceed its desired value depending on how great the peak need is and what is available from the shared processing pool.
  • Maximum processing units
This setting represents the absolute maximum number of processors this partition can own at any given time, and must be equal to or greater than the Desired processing units. This value has a direct bearing on dynamic logical partitioning (DLPAR), as the maximum processing units value represents the largest value of processors the partition can have as the result of a DLPAR allocation. Furthermore, while this value affects DLPAR allocation, it does not affect the processor allocation handled by the hypervisor for idle processor allocation during processing peaks.

Note: Whether your partition is capped or uncapped, the minimum value for Maximum processing units is equal to the value specified for Desired processing units.
Uncapped option
The Uncapped option represents whether you want the HMC to consider the partition capped or uncapped. Whether a partition is capped or uncapped, when it is activated it takes on a processor value equal to a number somewhere between the minimum and desired processing units, depending on what is available from the shared resource pool. However, if a partition is capped, it can gain processing power only through a DLPAR allocation and otherwise stays at the value given to it at the time of activation. If the partition is uncapped, it can exceed the value set in Desired processing units and take the number of processing units from the shared processor pool that it needs. This is not seen from the HMC view of the partition, but you can check the number of processors owned by the partition from the operating system level with the appropriate commands. The weight field defaults to 128 and can range from 0 to 255. Setting this number below 128 decreases a partition's priority for processor allocation, and increasing it above 128, up to 255, increases a partition's priority for processor allocation. If all partitions are set to 128 (or another equivalent number), then all partitions have equal access to the shared processor pool. If a partition's uncapped weight is set to 0, then that partition is considered capped, and it never owns a number of processors greater than that specified in Desired processing units.
Virtual processors area
The values that are set in the Virtual processors area of this window govern how many processors are presented to the operating system of the partition. You must have a minimum of one virtual processor per actual processor, and you can have as many as 10 virtual processors per physical processing unit. As a general recommendation, a partition requires at least as many virtual processors as it has actual processors, and a partition should be configured with no more than twice the number of virtual processors as actual processors. Each field defines the following information:
  • Minimum virtual processors
Your partition must have at least one virtual processor for every part of a physical processor assigned to the partition. For example, if you have assigned 2.5 processing units to the partition, the minimum number of virtual processors is three. Furthermore, this value represents the lowest number of virtual processors that can be owned by this partition as the result of a DLPAR operation.
  • Desired virtual processors
The desired virtual processors value has to be greater than or equal to the value set in Minimum virtual processors, and as a general guideline about twice the amount set in Desired processing units. Performance with virtual processing can vary depending on the application, and you might need to experiment with the desired virtual processors value before you find the perfect value for this field and your implementation.
  • Maximum virtual processors
You can only have 10 virtual processors per processing unit. Therefore, you cannot assign a value greater than 10 times the Maximum processing units value as set in “Processing Settings area” on page 232. It is recommended, though not required, to set this number to twice the value entered in Maximum processing units.
Finally, this value represents the maximum number of virtual processors that this partition can have as the result of a DLPAR operation. 

Note: The desired virtual processors value, along with the resources available in the shared resource pool, is the only value that can set an effective limit on the amount of resources that can be utilized by an uncapped partition.

Note: Regardless of the number of processors in the server or the processing units owned by the partition, there is an absolute upper limit of 64 virtual processors per partition with the HMC V7 software.
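As a worked example of these sizing rules (the numbers are illustrative and are not taken from the figures in this post):
  • With 2.5 desired processing units, the partition needs at least 3 virtual processors (one for each whole or partial processing unit), and about 5 (2 x 2.5) is the general recommendation for Desired virtual processors.
  • With 4.0 maximum processing units, Maximum virtual processors can be at most 40 (10 x 4.0), and twice the maximum, 8 (2 x 4.0), is the recommended setting; both are well under the 64 virtual processor per-partition limit.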
Configuring a dedicated processor partition
This section describes how to create a partition with a dedicated processor. If you want to create a partition with a shared processor, refer to “Configuring a shared processor partition” on page 231.
To configure a dedicated processor partition:
  1. Select Dedicated and then select Next, as shown in Figure
Figure: Create dedicated processor partition
2. Specify the number of minimum, desired, and maximum processors for the partition, as shown in Figure.
Figure: Processor settings with dedicated processors
3. After you have entered the values for the fields, select Next.

Setting partition memory
Now, you need to set the partition memory, as shown in Figure.
Figure:  Set partition memory

The minimum, desired, and maximum settings are similar to their processor counterparts:
  • Minimum memory
Represents the absolute minimum amount of memory required to make the partition active. If the amount of memory specified under minimum is not available on the managed server, then the partition cannot become active.
  • Desired memory
Specifies the amount of memory beyond the minimum that can be allocated to the partition. If the minimum is set at 256 MB and the desired is set at 4 GB, then the partition in question can become active with anywhere between 256 MB and 4 GB.
  • Maximum memory
Represents the absolute maximum amount of memory for this partition, and it can be a value greater than or equal to the number specified in Desired memory. If set at the same amount as desired, then the partition is considered capped, and if this number is equal to the total amount of memory in the server then this partition is considered uncapped. After you have made your memory selections, select Next.
Configuring physical I/O
On the I/O window, as shown in Figure, you can select I/O resources for the partition to own. After you have made your selections in this window, select Next.
 Configuring virtual resources
If you have adapters assigned to the Virtual I/O server (as explained in Chapter 9, “Virtual I/O” on page 259), you can create a virtual adapter share for your partition. Follow these steps:
1. Select Actions → Create → SCSI Adapter to create a virtual SCSI share. Alternatively, select Actions → Create Ethernet Adapter to create a shared Ethernet share.
 2. You can specify your server partition, get System VIOS info, and specify a tag for adapter identification, as shown in Figure. When you have entered all of the data, select OK.
Figure: Create virtual SCSI adapter
You are returned to the virtual adapters window as shown in Figure. When you are done creating all the virtual resources, select Next.


 Optional Settings window
On the Optional Settings window shown in Figure you can:
  • Enable connection monitoring
  • Start the partition with the managed system automatically
  • Enable redundant error path reporting
You can also specify one of the various boot modes that are available.
After you have made your selections in this window, click Next to continue.
Figure: Optional settings
 Enabling connection monitoring
Select this option to enable connection monitoring between the HMC and the logical partition that is associated with this partition profile. When connection monitoring is enabled, the Service Focal Point (SFP) application periodically tests the communications channel between this logical partition and the HMC. If the channel does not work, the SFP application generates a serviceable event in the SFP log. This ensures that the communications channel can carry service requests from the logical partition to the HMC when needed. If this option is not selected, the SFP application still collects service request information when there are issues on the managed system. This option only controls whether the SFP application automatically tests the connection and generates a serviceable event if the channel does not work. Clear this option if you do not want the SFP application to monitor the communications channel between the HMC and the logical partition associated with this partition profile.
Starting with managed system automatically
This option shows whether this partition profile sets the managed system to activate the logical partition that is associated with this partition profile automatically when you power on the managed system. When you power on a managed system, the managed system is set to activate certain logical partitions automatically. After these logical partitions are activated, you must activate any remaining logical partitions manually. When you activate this partition profile, the partition profile overwrites the current setting for this logical partition with this setting. If this option is selected, the partition profile sets the managed system to activate this logical partition automatically the next time the managed system is powered on. If this option is not selected, the partition profile sets the managed system so that you must activate this logical partition manually the next time the managed system is powered on.
Enabling redundant error path reporting
Select this option to enable the reporting of server common hardware errors from this logical partition to the HMC. The service processor is the primary path for reporting server common hardware errors to the HMC. Selecting this option allows you to set up redundant error reporting paths in addition to the error reporting path provided by the service processor. Server common hardware errors include errors in processors, memory, power subsystems, the service processor, the system unit vital product data (VPD), non-volatile random access memory (NVRAM), I/O unit bus transport (RIO and PCI), clustering hardware, and switch hardware. Server common hardware errors do not include errors in I/O processors (IOPs), I/O adapters (IOAs), or I/O device hardware. If this option is selected, this logical partition reports server common hardware errors and partition hardware errors to the HMC. If this option is not selected, this logical partition reports only partition hardware errors to the HMC. This option is available only if the server firmware allows you to enable redundant error path reporting (the Redundant Error Path Reporting Capable option on the Capabilities tab in Managed System Properties is True).
Boot modes
Select the default boot mode that is associated with this partition profile. When you activate this partition profile, the system uses this boot mode to start the operating system on the logical partition unless you specify otherwise when activating the partition profile. (The boot mode applies only to AIX, Linux, and Virtual I/O server logical partitions. This area is unavailable for i5/OS logical partitions.) Valid boot modes are as follows:
  • Normal
The logical partition starts up as normal. (This is the mode that you use to perform most everyday tasks.)
  • System Management Services (SMS)
The logical partition boots to the System Management Services (SMS) menu.
  • Diagnostic with default boot list (DIAG_DEFAULT)
The logical partition boots using the default boot list that is stored in the system firmware. This mode is normally used to boot customer diagnostics from the CD-ROM drive. Use this boot mode to run standalone diagnostics.
  • Diagnostic with stored boot list (DIAG_STORED)
The logical partition performs a service mode boot using the service mode boot list saved in NVRAM. Use this boot mode to run online diagnostics.
  • Open Firmware OK prompt (OPEN_FIRMWARE)
The logical partition boots to the open firmware prompt. This option is used by service personnel to obtain additional debug information.
Profile summary
When you arrive at the profile summary as shown in Figure 7-19, you can review your partition profile selections. If you see anything that you want to change, select Back to get to the appropriate window and to make changes. If you are satisfied with the data represented in the Profile Summary, select Finish to create your partition.
Figure: Profile summary
After you select Finish, the window shown in Figure is displayed for a few minutes. When this window goes away, go back to your main HMC view; the partition that you created is listed under the existing partitions on your managed server.

Figure: Partition creation status window
Virtual I/O
Virtual I/O provides the capability for a single physical I/O adapter and disk to be used by multiple logical partitions of the same server, allowing consolidation of I/O resources and minimizing the number of I/O adapters that are required.
Understanding Virtual I/O
Virtual I/O describes the ability to share physical I/O resources between partitions in the form of virtual adapter cards that are located in the managed system. Each logical partition typically requires one I/O slot for disk attachment and another I/O slot for network attachment. In the past, these I/O slot requirements would have been physical requirements. To overcome these physical limitations, I/O resources are shared with Virtual I/O. In the case of Virtual Ethernet, a physical Ethernet adapter is not required for communication between LPARs. Virtual SCSI provides the means to share I/O resources for SCSI storage devices.
POWER Hypervisor for Virtual I/O
The POWER Hypervisor™ provides the interconnection for the partitions. To use the functionalities of Virtual I/O, a partition uses a virtual adapter as shown in Figure. The POWER Hypervisor provides the partition with a view of an adapter that has the appearance of an I/O adapter, which might or might not correspond to a physical I/O adapter.

Figure: Role of POWER Hypervisor for Virtual I/O
Virtual I/O Server
The Virtual I/O Server can link the physical resources to the virtual resources. By this linking, it provides virtual storage and Shared Ethernet Adapter capability to client logical partitions on the system. It allows physical adapters with attached disks on the Virtual I/O Server to be shared by one or more client partitions.
Virtual I/O Server mainly provides two functions:
  • Serves virtual SCSI devices to clients,
  • Provides a Shared Ethernet Adapter for virtual Ethernet
Virtual I/O Server partitions are not intended to run applications or for general user logins. The Virtual I/O Server is installed in its own partition. The Virtual I/O Server partition is a special type of partition which is marked as such on the first window of the Create Logical Partition wizard. Currently the Virtual I/O Server is implemented as a customized AIX partition; however, the interface to the system is abstracted using a secure shell-based command line interface (CLI). When a partition is created as this type of partition, only the Virtual I/O Server software boot image will boot successfully when the partition is activated. The Virtual I/O Server should be properly configured with enough resources; the most important resource is processor capacity. If a Virtual I/O Server has to host a lot of resources for other partitions, you must ensure that enough processor power is available.
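As a brief illustration of that restricted CLI (a sketch only; it assumes you are logged in to the Virtual I/O Server partition as the padmin user, and output is omitted):
$ ioslevel
$ lsdev -virtual
The first command reports the Virtual I/O Server software level, and the second lists the virtual devices (virtual SCSI server adapters, virtual Ethernet adapters, and so on) defined in the partition.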
Virtual SCSI
Virtual SCSI is based on a client/server relationship. A Virtual I/O Server partition owns the physical resources, and logical client partitions access the virtual SCSI resources provided by the Virtual I/O Server partition. The Virtual I/O Server partition has physically attached I/O devices and exports one or more of these devices to other partitions as shown in Figure

Figure: Virtual SCSI overview
The client partition is a partition that has a virtual client adapter node defined in its device tree and relies on the Virtual I/O Server partition to provide access to one or more block interface devices. Virtual SCSI requires POWER5 or POWER6 hardware with the Advanced POWER Virtualization feature activated.
Client/server communications
In the Figure, the virtual SCSI adapters on the server and the client are connected through the hypervisor. The virtual SCSI adapter drivers (server and client) communicate control data through the hypervisor. When data is transferred from the backing storage to the client partition, it is transferred to and from the client's data buffer by the DMA controller on the physical adapter card using redirected SCSI Remote Direct Memory Access (RDMA) Protocol. This facility enables the Virtual I/O Server to securely target memory pages on the client to support virtual SCSI.
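On the Virtual I/O Server itself, this client/server relationship can be inspected from the padmin CLI; for example, lsmap lists each virtual SCSI server adapter (vhost device), the client partition it serves, and the backing devices exported through it (adapter and device names in the output depend on your configuration):
$ lsmap -all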
Adding a virtual SCSI server adapter
You can create the virtual adapters at two points: during installation of the Virtual I/O Server, or by adding them to an already existing Virtual I/O Server. In this chapter, we suppose that we have already created the Virtual I/O Server.
Before activating a server, you can add the virtual adapter using the Manage Profiles task. For an activated server, you can only do that through a dynamic LPAR operation if you want to use virtual adapters immediately. This procedure requires that the network is configured with a connection to the HMC to allow for dynamic LPAR.
Now, you can add the adapter through dynamic LPAR. To add the adapter:
1. Select the activated Virtual I/O Server partition in the HMC. Then click Virtual Adapters in the Dynamic Logical Partitioning section in the Task pane. The Virtual Adapters window opens.
2. Click Actions → Create → SCSI Adapter, as shown in Figure
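The same dynamic add can also be scripted from the HMC command line with the chhwres command. The sketch below is only an assumption-laden illustration: the managed system name, partition names, slot numbers, and the attribute keywords passed with -a are examples and should be checked against the chhwres documentation for your HMC level before use. It would be run as the hscroot user on the HMC:
chhwres -m MANAGED-SYSTEM -r virtualio --rsubtype scsi -o a -p vios1 -s 20 -a "adapter_type=server,remote_lpar_name=aixlpar1,remote_slot_num=3"
This would ask the HMC to dynamically add a virtual SCSI server adapter in slot 20 of the vios1 partition, paired with slot 3 of the hypothetical client partition aixlpar1.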