This article describes the virtualization capabilities of the IBM® POWER5™ servers, provides examples that apply equally to both pSeries® p5 and eServer™ OpenPower™ systems, and shows how to set up and use the IBM Virtual I/O Server (VIO Server). The VIO Server is currently based on a subset of AIX® 5.3 that includes additional packages and services. It is delivered with the optional IBM Advanced POWER Virtualization (APV) feature for pSeries p5 machines or the Advanced OpenPower Virtualization (AOPV) feature for OpenPower machines. As these two versions have identical features and functions, I refer to both as the VIO Server throughout this article. The VIO Server provides a complete, purpose-built virtual I/O environment, full support from IBM, and higher levels of performance.
Virtualization is a hot topic in the computing industry, with many widely different technologies and solutions being recommended, developed, and used. The POWER5-based machines have inherited know-how from the IBM mainframes to provide opportunities for a significant reduction in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features, firmware (also known as the Hypervisor), and hardware features to create efficient and flexible virtualization capabilities. Uniquely, these capabilities are offered across the entire server range -- from a powerful 64-way SMP (symmetric multiprocessor) machine down to a two-way, desk-side system. The key to this virtualization is the VIO Server.
This article:
- Explains the VIO Server concepts and how it works between logical partitions (LPARs) for disk access and networks.
- Covers the advantages of using a VIO Server and typical usage scenarios.
- Shows, by example, how to set up the IBM VIO server and VIO clients.
pSeries servers from IBM have, since October 2001, allowed a machine to be divided into LPARs, with each LPAR running a different OS image -- effectively a server within a server. You can achieve this by logically splitting up a large machine into smaller units with CPU, memory, and PCI adapter slot allocations.
The new POWER5 machines (pSeries p5 and OpenPower servers) can also run an LPAR with less than one whole CPU -- up to ten LPARs per CPU. So, for example, on a four CPU machine, 20 LPARs can easily be running. With each LPAR needing a minimum of one SCSI adapter for disk I/O and one Ethernet adapter for networking, the example of 20 LPARs would require the server to have at least 40 PCI adapters. This is where the VIO Server helps.
The VIO Server owns real PCI adapters (Ethernet, SCSI, or SAN), but lets other LPARs share them remotely using the built-in Hypervisor services. These other LPARs are called Virtual I/O client partitions (VIO client). And because they don't need real physical disks or real physical Ethernet adapters to run, they can be created quickly and cheaply.
There are different VIO Server implementations:
- Both APV and AOPV versions of the VIO Server are special-purpose, single-function appliances and are not intended to run general applications.
- The Linux VIO Server for pSeries p5 or OpenPower hardware first became available with the SUSE SLES 9 distribution. Unlike the APV and AOPV VIO Server, this is simply a copy of the Linux operating system, which means it can also run other central services such as NFS, network installation, DNS, an Apache Web site, or Samba. Some care should be taken that these functions do not interfere with the performance of the VIO service. This software is also available on the Debian Linux for POWER distribution.
There are different implementations for VIO clients. Actually, these are just the regular operating systems, but they include the device drivers for running as a VIO client.
- AIX 5.3 (only supported by the APV or AOPV VIO Server)
- Linux -- SUSE SLES 9
- Linux -- Red Hat EL 3 update 3 onwards and Red Hat EL 4
- Linux -- Debian for POWER
This article covers the VIO Server and the AIX and Linux VIO clients.
The VIO Server provides a virtual SCSI disk service, as shown in Figure 1 below.
Figure 1. Virtual SCSI disk service
Figure 1 shows a single VIO Server providing virtual SCSI services to multiple VIO client partitions. Each VIO client operates as if it had a dedicated SCSI device but, in fact, each client device is a disk partition (a logical volume) on the VIO Server. Alternatively, the VIO Server could use a complete disk (hdisk) for a client. The VIO Server and VIO client communicate using the internal pSeries Hypervisor firmware (PHYP) feature, which efficiently transfers disk I/O requests between the LPARs using a message-passing protocol.
In Figure 1 above, the VIO Server has a few disks that could be SCSI or fiber channel storage area network (SAN) disks. The disk subsystem hardware or a RAID5 SCSI adapter can provide data protection. The VIO clients use the VIO client device driver just as they would a regular local device disk to communicate with the matching server VIO device driver. Then the VIO Server actually does the disk transfers on behalf of the VIO client. There is a strict client/server relationship between the VIO client and the VIO Server.
The LPARs in the machine can use the virtual Ethernet switch service (in the Hypervisor) in a number of different ways.
- Case one: Internal only networks
- You can use the Virtual Ethernet to allow TCP/IP (Transmission Control Protocol/Internet Protocol) communication between the LPARs, as shown in Figure 2 below. This provides high-speed data transfer without any hardware adapters, starting at roughly one Gbit per second (it can be much higher, especially with larger block sizes); a minimal configuration sketch follows Figure 2. Figure 2 also shows that there is no client/server relationship between the LPARs -- all use the Virtual Ethernet as equals. There can be many Virtual Ethernets in one machine, where groups of LPARs can communicate only within the Virtual Ethernet they're connected to, allowing fast communication and complete security without buying additional Ethernet adapters, cables, hubs, or routers.
Figure 2. Virtual Ethernet -- Private/internal only networks
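As a minimal sketch of Case one, assuming two Linux LPARs whose Virtual Ethernet adapters appear as eth0 and a private 192.168.50.x address range chosen purely for this example, an internal-only network needs nothing more than ordinary interface configuration on each LPAR:

# On the first LPAR (addresses and the eth0 device name are assumptions for this sketch)
ifconfig eth0 192.168.50.1 netmask 255.255.255.0 up

# On the second LPAR
ifconfig eth0 192.168.50.2 netmask 255.255.255.0 up

# Check the internal-only link from the second LPAR
ping -c 3 192.168.50.1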
- Case two: Routing to a physical LAN
- One LPAR on the Virtual Ethernet can also communicate externally to other machines using a real physical network on behalf of all the LPARs. In this case, this special LPAR is used to route Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network (a sketch follows Figure 3). It works well, but it involves setting up TCP/IP routes between the two networks (internal and external) and can take time to set up. Figure 3 below shows one LPAR with a real physical Ethernet adapter providing standard network routing between the two Ethernets. Note that this does not use any VIO Server features.
Figure 3. Internal Virtual Ethernet with a bridge to the external LAN
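The routing in Case two is ordinary TCP/IP configuration rather than a VIO Server feature. A rough sketch, assuming the routing LPAR runs Linux with eth0 on the physical LAN, eth1 on the internal Virtual Ethernet, and a 192.168.100.x internal address range (all of these names and addresses are assumptions):

# On the routing LPAR: enable IP forwarding and address the Virtual Ethernet side
sysctl -w net.ipv4.ip_forward=1
ifconfig eth1 192.168.100.1 netmask 255.255.255.0 up

# On each internal-only LPAR: send external traffic via the routing LPAR
route add default gw 192.168.100.1

The external router also needs a route back to the internal network via the routing LPAR's physical address, which is part of why this option takes more time to set up than the SEA in Case three.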
- Case three: Shared Ethernet Adapter (SEA) to a physical LAN
- Here, the VIO Server is being used to bridge Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network so that all the LPARs appear as regular machines on the physical network. This is simple to set up and is the option used in the example in this article. In Figure 4, the VIO Server is being used to join the two networks using the SEA. Strictly speaking, the adapter is not shared. It's owned and controlled by the VIO Server; however, it also provides shared access to the real physical network.
Figure 4. Internal Virtual Ethernet with a SEA to the external LAN
- Case four: Bridging with virtual LANs (VLANs)
- This particular scenario is almost the same as Case three. The only difference is the number of VLANs within the machine using Virtual Ethernet. These are connected to VLANs on the external network with a bridging LPAR and a network router that supports VLAN. This complex scenario is beyond the scope of this article, but some hints are included and it's supported.
You can use a VIO Server in any number of scenarios. Below are five typical examples that would make good use of a VIO Server.
- Small machine with limited PCI slots
You have one set of internal SCSI disks, or you can split the SCSI disks into two 4-packs on the OpenPower 720 or p5-550. This gives you two LPARs (at most) using the internal disks, so you might run a VIO Server to support the other LPARs. For example, try a VIO Server (0.5 of a CPU) with four to six clients (0.1 to 1 CPU each). Typically, clients might be small -- four to 16 GB virtual SCSI disks and one Virtual Ethernet for the whole machine. Figure 5 shows multiple LPARs running on a single disk pack.
Figure 5. Multiple LPARs
- Mid-range machines with extra small workloads
This might be an eight or 16 CPU machine with large partitions for production use, but many system administrators also want a small number of extra LPARs. Rather than buy an extra machine, a VIO Server can easily host a half dozen smaller LPARs. For example, the larger production LPARs might each have one to four dedicated CPUs, dedicated disk I/O, and dedicated networks. The VIO Server is used for "bits and bobs" LPARs such as test, development, training, practice, new application trials, and so on. Typically, these VIO clients might have a couple of four GB to eight GB virtual SCSI disks and one or two Virtual Ethernets. In Figure 6, three large production LPARs are running (they would have dedicated disks and Ethernet) with a few extra small VIO clients and one VIO Server on the machine using spare capacity. This "spare" capacity could be demanded by the production LPARs during peaks in their workload.
Figure 6. Three large production LPARs
- Ranch or server farm style
Lots of small server consolidation workloads from smaller or older machines, or many small servers, are required, but they are unlikely to peak at the same time. The machine is to run lots of LPARs -- for example, 10 to 20 clients on a four-way machine, or many times that on larger machines. Each LPAR runs a small application with modest demands (0.2 or 0.5 of a CPU, up to 2 CPUs). This could be server consolidation or, for example, a collection of small Web servers where data isolation is important. The VIO Server has one or two CPUs and possibly RAID 5 SCSI disks or SAN disks. Typically, clients have one or more four GB virtual SCSI disks each, and different groups of LPARs might be placed on different Virtual Ethernets. Figure 7 shows dozens of VIO clients with a medium-sized VIO Server supporting them on what might be several disk packs.
Figure 7. Different groups of LPARs
- Serious I/O setup only once (to reduce setup and management)
The VIO Server has SAN disks connected by two to four Fibre Channel adapters and two Ethernet adapters running EtherChannel for redundancy and additional bandwidth. The VIO Server handles load balancing and failover, while the VIO clients have a much simpler disk and Ethernet setup. The VIO Server could have one to three CPUs, but the VIO clients are larger, too -- for example, one to eight CPUs running quite large applications. Typically, VIO clients could have hundreds of GB of virtual SCSI disks and many Virtual Ethernets. This complex setup is not covered in this article. Figure 8 shows two regular LPARs (they would have dedicated disks) and a fully configured, large VIO Server with multiple paths to disks and Ethernet, supporting some large VIO client LPARs.
Figure 8. Regular LPAR
- Serious with high availability backup
Same as above, but with a second VIO Server for availability and throughput. There is an argument that, for very high availability, you should spread your virtual SCSI and Virtual Ethernet access across two VIO Servers so that you can continue running if one VIO Server goes down. The counter argument is that the VIO Server is only running a few device drivers, and device drivers are extremely reliable; also, anything that could crash one VIO Server could crash the second one, too. Figure 9 shows that instead of using local physical device drivers, the VIO client uses the virtual resource device drivers to communicate with the VIO Server, which does the real I/O. Apart from the virtual VIO Server device drivers and the physical resource device drivers, there is very little code running on the VIO Server, so little can go wrong on the VIO Server side.
Figure 9. VIO Server
I'm not going to cover duplicated VIO Servers. Further details are in the Advanced POWER Virtualization on IBM p5 Servers Redbook.
This section describes the software, hardware, skills, and type of network you'll need.
Where do I get the VIO Server?
- For a pSeries p5 machine, the software is included in the APV feature. This runs AIX and Linux VIO clients.
- For an OpenPower machine, the software is included in the AOPV feature. This will only run the Linux VIO clients (AIX does not run on these machines).
You'll need:
- An OpenPower or pSeries p5 machine with spare resources:
- Some CPU resources -- can be less than one CPU
- Memory -- 512 MB per LPAR (if necessary, just 256 MB)
- Real Ethernet adapter
- Time with the CD drive -- unless network installation is preferred
- SCSI adapter and a SCSI disk -- could equally use a SAN disk
- The hardware virtualization feature activated, which is needed for LPAR and VIO Server features, but optional on some POWER5 machines.
- The VIO Server software on CD-ROM. Network Installation Manager (NIM) is possible too, but not covered here.
This article doesn't show you a screen-by-screen level of detail and each input field. It's assumed you already understand:
- Basic AIX systems administration such as installing from an mksysb image, configuring networks, and AIX-style volume group and logical volume terms.
- If you intend to use a Linux VIO client, then you also need:
  - Basic Linux systems administration such as installing an RPM (rpm -Uvh package.rpm), configuring an Ethernet network adapter (for example, ifconfig eth0 9.137.62.36 netmask 255.255.255.0), and managing a filesystem (mount /dev/sda5 /mnt). These tasks are identical to working on the Intel platform, and there are many books, training courses, and Internet materials covering the regular system administration commands and tasks.
  - How to install SUSE Linux in either text mode (on a dumb/ASCII screen) or with a VNC (Virtual Network Computing) session. Once you've installed Linux a couple of times, this becomes a routine task. For the VNC install, the extra boot prompt command is vnc=1 password=abc123 (the password here is six characters). The system prompts you for the other details.
- Hardware Management Console (HMC):
- How to install the HMC hardware and software
- How to set it up (It's assumed this has already been done.)
- How to use the HMC to create and start a simple LPAR and its profiles
- The internals of the pSeries p5 and OpenPower range of machines, such as the names of the adapter positions: Tn for internal adapters and Cn for real adapters in a PCI slot, where n is the slot number. You need to create the VIO Server LPAR with the right SCSI disk and Ethernet resources on the HMC, with and without the CD. Details are in the hardware manuals, Redbooks, or on the large sticker on the outside of the machine covers.
The VIO Server must be able to communicate directly with the HMC for advanced functions and error reporting. Since this is easily forgotten, I recommend a network like the one in Figure 10.
Figure 10. The network
Many sites also have other dedicated networks in addition to those shown in Figure 10 -- for example, a network for remote backup or a network dedicated to systems administration.
This section covers the steps and three extra common tasks to get started with the VIO Server.
- Step 1. Logical diagram of the example setup
- Step 2. Planning your setup
- Step 3. Create the VIO Server LPAR
- Step 4. Install the VIO Server
- Step 5. HMC defining the VIO Server -- Virtual Ethernet
- Step 6. HMC defining the VIO Server -- virtual SCSI
- Step 7. HMC create the VIO client LPARs
- Step 8. Clean up the HMC
- Step 9. VIO Server preparing for the clients
- Step 10. VIO client LPAR installations
- Step 11. *Backing up a VIO Server and VIO client
- Step 12. *Cloning a client
- Step 13. *Linux dynamic LPARs (DLPARs) and RAS
*These particular tasks are useful and recommended.
Please note that it takes longer to describe some of these steps than to actually carry them out!
Figure 11 shows the VIO Server LPAR and the two VIO client LPARs that are going to be set up for this article. It's a logical diagram of the example setup, which is explained in the rest of this article.
Figure 11. Logical diagram of the example setup
For simplicity, the VIO client LPARs are given Ethernet IP addresses within the address range of the regular physical Ethernet network in this computer room. The VIO Server bridges between physical and virtual networks, meaning that the client LPARs will appear like any other computer to users. This is the most likely option to be implemented and hides the Virtual Ethernet network completely from users in order to allow simple access to the client LPARs.
For the disks, let's use the internal SCSI adapter in the VIO Server and one disk. The first client's (Client X) virtual disk connects to a logical volume (disk partition) on the VIO Server. The second client's (Client Y) virtual disk is supported by a whole disk on the VIO Server. This shows all of the common types of setup -- the SEA network and disk partitions, plus allocating a whole disk. In practice, most people use a logical volume.
First, do some planning of the VIO Server and client logical partitions. Experience has shown that creating LPARs without some planning causes problems and can waste a lot of time. Table 1 shows the planning I've done for this example, which is an OpenPower 720. Except for the references to PCI slots like C3, T6, and T14, which are machine dependent, the details could apply to any pSeries p5 or OpenPower machine. In this example, the VIO client logical partitions are going to be Linux, but they could equally be running AIX. Notes are included where AIX VIO clients would be different.
Table 1. Planning
| | VIO Server | Client X | Client Y |
| --- | --- | --- | --- |
| Hostname | op34 | op36 | op37 |
| Ethernet adapter | C3 (bridging) | Virtual | Virtual |
| IP address | 9.137.62.34 | 9.137.62.36 | 9.137.62.37 |
| Port virtual LAN ID | 1 | 1 | 1 |
| Mask | 255.255.255.0 | 255.255.255.0 | 255.255.255.0 |
| Gateway | 9.137.62.1 | 9.137.62.1 | 9.137.62.1 |
| DNS | 9.137.62.2 | 9.137.62.2 | 9.137.62.2 |
| CD adapter | T6 for install only | T6 for install only | T6 for install only |
| SCSI adapter | T14 | Virtual | Virtual |
| Disk size | hdisk0 is 36 GB; hdisk1 is 36 GB for Client Y | 4 GB | 73 GB |
| Device on VIO Server | | lv00 | hdisk1 |
| Virtual SCSI adapters | Slot 3 for Client X; slot 4 for Client Y | Slot 3 to server slot 3 | Slot 3 to server slot 4 |
| Profile names | Normal; Normal with CD | Normal; Normal with CD | Normal; Normal with CD |
| CPU values: | | | |
| Dedicated/shared CPU | Shared | Shared | Shared |
| CPU desired | 0.4 | 0.3 | 0.3 |
| CPU min | 0.2 | 0.1 | 0.1 |
| CPU max | 1 | 2 | 2 |
| Virtual processors | 1 | 2 | 2 |
| Memory values: | | | |
| Memory | 512 MB | 2048 MB | 256 MB |
Next, you need to create the VIO Server LPAR. You do this on the HMC and create a special VIO Server LPAR, but initially with no extra virtualization features. You'll add the virtual features later. The only feature that is different from a regular Linux LPAR is the LPAR Partition Environment feature on the first panel of the Create Logical Partition Wizard. Here you must not select the AIX or Linux option, but must select VIO Server. See Figure 12 below.
Figure 12. Creating the VIO Server LPAR
Create the LPAR and the first profile as above, using the details in Table 1. (This article assumes you are familiar with the HMC and creating LPARs. If you are not, see Resources for documents that describe how to create LPARs.) I give the LPAR profile that is normally used the name "Normal". Further hints:
- A VIO Server LPAR can use dedicated CPUs, which is a good idea if you have plenty of CPUs or are expecting to do lots of I/O for many VIO client LPARs; it avoids any delay in starting the I/O on the real adapters. Dedicated CPUs are running the VIO Server all the time.
- A VIO Server LPAR can use shared CPUs, which is a good idea if you don't have whole CPUs that can be assigned. This also means unused CPU cycles are given back to the shared pool for other LPARs to use. If the machine becomes heavily loaded, it can introduce tiny delays in starting the I/O on real adapters. Shared CPU partitions are time-sliced onto the CPU, along with other LPARs. Setting the VIO Server partition to Uncapped and with a high weight is generally a good idea.
- A simple CPU rule of thumb: Assign to the VIO Server at least ten percent of the CPUs that are going to be used for the VIO Server and its client partitions. For example, if five CPUs in the shared pool are being used for both the VIO Server and the VIO clients, allocate 0.5 of a CPU to the VIO Server.
- A simple memory rule of thumb: Use 512 MB of memory.
- It's recommended to have an LPAR profile that includes the adapter connected to the CD drive, to make installing the IBM VIO Server from CD straightforward. Copy the Normal profile and rename it Normal with CD. Then change the new profile properties to include the CD SCSI adapter. This profile will be used to initially boot the LPAR with a DVD/CD drive for installing the VIO Server.
- If this is a new machine and you are the only user, installations go much faster if you assign the LPAR a whole CPU or more. If the LPAR is going to be assigned less than this in production, it can always be reduced later, but this simple trick might save you ten minutes per LPAR installation.
Now install the VIO Server into this partition using the recover-from-mksysb-image method. AIX systems administration experts will be familiar with this. The basic steps are:
- On the HMC in the Activate LPAR dialog, boot the LPAR into the System Management Services (SMS) menu by selecting both Open a Terminal and the Advanced button and then Boot Mode = SMS. Once in the SMS menus, choose the Boot Options and select Install/Boot. Then choose List all Devices and carefully select the CD-ROM drive, Normal Boot, and Yes to leave the SMS menus.
- Read the instructions carefully and, if free, elect to install the VIO Server on hdisk0. (This is assumed in the rest of the article.)
- Warning: If you want to use a whole disk for your VIO client, then you need to make sure the recovery of the VIO Server mksysb image is not spread across all the disks.
- Assuming you now have the VIO Server up and running, you can add and set up the VIO Server virtualization features.
If you have the DLPAR change software installed and working, it's possible to dynamically add Virtual Ethernet and virtual SCSI. In practice, I recommend you shut down your VIO Server and VIO client LPARs during this initial setup to make sure it works the first time. If you set up DLPAR later on, you can then experiment, but remember that DLPAR changes also have to be made identically in your LPAR profile if you want the same configuration the next time the LPAR is restarted.
In this article, I take a simple and ultra-safe approach: shut down the VIO Server, make the changes to the VIO Server profile, and restart it, to avoid any confusion and complications. So on the VIO Server, use shutdown.

If you make changes to an LPAR profile, the LPAR must be shut down and then restarted from the HMC to pick up those changes. If you use shutdown -restart in the VIO Server LPAR, then you'll have only the same resources that were available when the LPAR was previously started from the HMC. You need to completely stop the LPAR to get the new resources.
On the HMC, you can now define the Virtual Ethernet. First shut down the VIO Server LPAR (as root, run: shutdown). On the HMC, change your Normal profile properties by right-clicking the profile and selecting Properties. You also need to select the VIO tab. Select Ethernet at the bottom and click Create. By default, this will be allocated to slot number 2 and a Port virtual LAN ID of 1. Any LPARs with the same Port virtual LAN ID will be able to communicate with each other. This adapter is going to be used for the SEA, so that later you can log in to the VIO Server over the network; for that, set these two options:
- Select Trunk adapter.
- Leave the IEEE 802.1Q compatible adapter option unchecked -- this is only needed if you are using VLANs internally.
If you want different Virtual Ethernet LANs so that different groups of LPARs can communicate with each other, all you need to do is use different Port virtual LAN ID numbers. These complex configurations are not covered in this article.
In Figure 13, you should see the VIO Server in the lower half and VIO client in the top half. It shows that if the Port virtual LAN IDs are the same, then the LPARs can communicate. It also shows the additional settings for the VIO Server. (The trunk is selected and IEEE 802.1Q is not selected.) These additional settings are really for the bridging feature, as Virtual Ethernet does not really have a client/server relationship -- all LPARs are equal on the network.
Figure 13 also shows the Virtual Ethernet settings. At the bottom is the VIO Server (or any LPAR that will be doing the bridging to the real Ethernet adapter) and at the top is the VIO client, or any LPAR that only uses the Virtual Ethernet.
Figure 13. Virtual Ethernet settings
On the other Virtual Ethernet LPARs, you can use the ifconfig command to set up your network just as you would any network. If it's AIX, you can use smitty or websm. If it's SUSE, the YaST tool can be used. Whichever tool you select, it finds the Virtual Ethernet adapter just like any other. Figure 14 shows a SUSE example.
Figure 14. Non-bridging Virtual Ethernet LPARs
On the HMC, you can now define the two different virtual SCSI devices. These two types of virtual disks (a logical volume or a whole hdisk) appear identical on the HMC; only on the actual VIO Server LPAR are they set up any differently.
If not done already, shut down the VIO Server LPAR (as root, run: shutdown).
- On the HMC, select the VIO Server and change your Normal profile properties by right-clicking the profile and selecting Properties. You also need to select the Virtual I/O tab.
- Select SCSI at the bottom and click Create.
- This is the VIO Server, so select Adapter Type: Server.
- Select Any Remote Partition and Slot can connect. Ideally, this should name the specific LPAR and slot to eliminate the risk of a wrong connection between server and client, but at this point you have not created the client partition, so you can't name it yet. This is fixed up later; see the Clean up the HMC section for details.
- Select OK.
- Do this a second time for the second virtual SCSI adapter.
The client LPARs are going to use the VIO Server slots 3 and 4. Any further SCSI adapters are optional in this example. In practice, I typically set up a handful of extra virtual devices so they can be used in the future without stopping the VIO Server or having to make dynamic changes. Unused virtual adapters cost very little, so it's not a waste.
In Figure 15, you have the eventual configuration, showing how the VIO Server at the bottom and the client at the top both explicitly refer to each other to eliminate errors. You'll reach this configuration in Step 8 (Clean up the HMC). I have not covered it yet, but the client LPAR is, of course, shown here too.
Figure 15. Configuration of VIO Server and client
Now you can create the two VIO client LPARs for the two different types of virtual SCSI used in the example. It's assumed you already know the procedure to create a regular LPAR; this section covers additional things you need to consider.
This might be obvious, but you don't need real adapters for your disks or Ethernet connection because you're going to use virtual resources for these.
I recommend you install the client LPAR using CD because it's simple, so you'll want to have the CD SCSI adapter within your LPAR. Once installed, this can be removed from the LPAR profile.
It's recommended that you create two identical LPAR profiles -- one with and one without the CD. Once the client is installed, I use NFS to remotely mount a filesystem containing the AIX and Linux CDs, so you don't need the CD drive from then on to install additional LPP or RPM packages.
Add the Virtual Ethernet adapter on the VIO screen with the same Port virtual LAN ID, which is 1 in this example, but do not select the trunk or IEEE 802.1Q compatible adapter options.
Add the virtual SCSI adapter as follows:
- Set the Adapter Type: Client.
- Explicitly name the Remote Partition (the LPAR in which you have the VIO Server).
- Explicitly name the Remote Partition Virtual Slot Number, which is slot 3 for the first client LPAR (Client X) and slot 4 for the second client LPAR (Client Y).
Don't forget, you have two client LPARs to create with the two different SCSI Remote Partition virtual slot numbers but the same Virtual Ethernet Port virtual LAN ID.
Now that you've created the client LPARs, you can go back to the VIO Server LPAR and connect up the virtual SCSI adapters explicitly to their virtual client LPARs and slots. This ensures that only the right client LPAR connects to the right virtual SCSI disk. It's a safety precaution and worth doing.
On the HMC, highlight the VIO Server LPAR profile and bring up its properties. In the VIO tab, select each Server SCSI resource and the Properties button. You also need to set:
- The Only selected Remote Partition and Slot can connect option
- The correct Remote Partition name
- The correct Remote Partition Virtual Slot number
In this example, you have two virtual SCSI adapters on the VIO Server to "clean up". This is very easy to get wrong, and this is why I planned it in advance (see Table 1).
You now have all the connections set up for the VIO Server and virtual clients, but still have to connect the virtual SCSI disk to a piece of real disk space and the virtual and real Ethernets using the SEA. This is done on the VIO Server only as follows. First, start the VIO Server LPAR again from the HMC.
Once the VIO Server is running and assuming no network is set up, you need to find out the names of the virtual resources you have to work with using:
$ lsdev -virtual
name     status     description
ent2     Available  Virtual I/O Ethernet Adapter (l-lan)
vhost0   Available  Virtual SCSI Server Adapter
vhost1   Available  Virtual SCSI Server Adapter
vhost2   Available  Virtual SCSI Server Adapter
vhost3   Defined    Virtual SCSI Server Adapter
vsa0     Available  LPAR Virtual Serial Adapter
clientY  Available  Virtual Target Device - Logical Volume
clientZ  Available  Virtual Target Device - Logical Volume
To see the real adapters, use:
$ lsdev -type adapter
name     status     description
ent0     Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1     Available  2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2     Available  Virtual I/O Ethernet Adapter (l-lan)
ide0     Defined    ATA/IDE Controller Device
lai0     Defined    GXT135P Graphics Adapter
sisioa0  Defined    PCI-X Dual Channel U320 SCSI RAID Adapter
sisioa1  Available  PCI-X Dual Channel U320 SCSI RAID Adapter
. . .
If your real Ethernet adapter has more than one port, this can be confusing, since your Virtual Ethernet adapter will have a higher number. In the case of a two-port Ethernet card, the Virtual Ethernet adapter name might be ent2, as ent1 is the second port on the real adapter. Also, make sure you plug the Ethernet cable into the right port.
On your machine, the resource names might be slightly different, so be careful in following this example.
Create the SEA between the real and Virtual Ethernet adapters with the following command (the indented lines are comments explaining the options; do not type them):

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
    ent0       is the real Ethernet adapter
    ent2       is the Virtual Ethernet adapter from the HMC
    -default   names the default virtual adapter (there is only one, so this setup is simple)
    -defaultid is the default Port virtual LAN ID from the HMC, which is 1
This returned the below results:
ent3 Available
en3
et3
$
And created the SEA with a name of ent3. Take a look with:
$ lsdev -dev ent3
name  status     description
ent3  Available  Shared Ethernet Adapter
This new SEA adapter is used in the mktcpip command below.
Now program the TCP/IP details onto the SEA adapter. This command is used instead of smitty (which is not available on the VIO Server) or ifconfig (not allowed for the SEA). You'll, of course, have to use your own hostname and IP address:

$ mktcpip -hostname op34 -inetaddr 9.137.62.34 -interface en3 -netmask 255.255.255.0 -gateway 9.137.62.1
    -hostname  is your own hostname
    -inetaddr  is your own IP address
    -interface is en3, the normal TCP/IP interface created by the mkvdev command above
You should now be able to ping your gateway: ping 9.137.62.1.
Once the VIO Server is running and before the virtual clients are started, you need to create the disk space and connect it to the virtual SCSI resource that the VIO client will try to attach to.
Check your disks:
$ lspv
hdisk0   00c033eaf709961e   rootvg   active
hdisk1   none               None
hdisk2   none               None
$
Here you see the VIO Server is using the first disk and the others are currently unused. Next, take a look at the free space on that first disk and the primary volume group called rootvg:
$ lsvg
rootvg
$ lsvg rootvg
VOLUME GROUP:   rootvg                   VG IDENTIFIER:  00c033ea00004c000000010104ffa3fc
VG STATE:       active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:  read/write               TOTAL PPs:      271 (34688 megabytes)
MAX LVs:        256                      FREE PPs:       151 (19328 megabytes)
LVs:            14                       USED PPs:       120 (15636 megabytes)
OPEN LVs:       12                       QUORUM:         2
TOTAL PVs:      1                        VG DESCRIPTORS: 2
STALE PVs:      0                        STALE PPs:      0
ACTIVE PVs:     1                        AUTO ON:        yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)      AUTO SYNC:      no
HOT SPARE:      no                       BB POLICY:      relocatable
$ lsvg -pv rootvg
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            271         55          22..09..00..00..24
$
Look at the disk view, too:
$ lspv hdisk0
PHYSICAL VOLUME:    hdisk0                   VOLUME GROUP:     rootvg
PV IDENTIFIER:      00c033eaf709961e         VG IDENTIFIER     00c033ea00004c000000010104ffa3fc
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            128 megabyte(s)          LOGICAL VOLUMES:  14
TOTAL PPs:          271 (34688 megabytes)    VG DESCRIPTORS:   2
FREE PPs:           151 (19328 megabytes)    HOT SPARE:        no
USED PPs:           120 (15636 megabytes)    MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  22..09..00..00..24
USED DISTRIBUTION:  33..45..54..54..30
$
Important things to note here are:
- The TOTAL PPs entry shows the disk is a 36 GB drive -- actually 34688 MB, but remember that drive sizes are quoted by manufacturers in decimal millions and billions of bytes and not the binary megabytes used here.
- The FREE PPs entry shows there is approximately 18 GB free space on the disk.
- The PP SIZE entry shows that the VIO Server is allocating disk space in a minimum of this amount.
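The megabyte figures in the lsvg and lspv output follow directly from the PP counts, so the free-space claim is easy to verify with a little arithmetic:

TOTAL PPs x PP SIZE = 271 x 128 MB = 34,688 MB  (about a 36 GB drive)
FREE PPs  x PP SIZE = 151 x 128 MB = 19,328 MB  (roughly 18 GB to 19 GB free)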
To create a logical volume in the rootvg volume group, try:
$ mklv -lv lv00 rootvg 4G
lv00
The lv00 output confirms the name of the logical volume just created.
To connect this to the VIO client resource, the resource is named clientx here to make it very clear which partition is using it, but you could use any suitable name:

$ mkvdev -vdev lv00 -vadapter vhost0 -dev clientx
clientx Available
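Before starting the client, it's worth confirming the mapping from the VIO Server side. The lsmap command lists which backing device is attached to which virtual SCSI server adapter and which client partition slot it serves (output is not shown here, as the exact columns vary by VIO Server level):

$ lsmap -vadapter vhost0
$ lsmap -all

The first form shows just the vhost0 adapter, which should now list the clientx device backed by lv00; the second lists every virtual SCSI server adapter.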
You can now start the VIO Client X and find its virtual SCSI resources. In this example, you've defined just one logical volume for this VIO client, but many logical volumes could be used. I recommend you keep them to a minimum to make the configuration simpler.
In the above section, you found that hdisk1 was unused. This disk will be used to support VIO Client Y. The disk must not be in a volume group. In the lspv output above, there is no volume group name next to this disk, so it's not in a volume group. To connect this disk to the virtual SCSI disk for VIO Client Y, try:

$ mkvdev -vdev hdisk1 -vadapter vhost1 -dev clienty
clienty Available
You can now see the configuration:
$ lsdev -virtual
name     status     description
ent2     Available  Virtual I/O Ethernet Adapter (l-lan)
vhost0   Available  Virtual SCSI Server Adapter
vhost1   Available  Virtual SCSI Server Adapter
...
vsa0     Available  LPAR Virtual Serial Adapter
clientx  Available  Virtual Target Device - Logical Volume
clienty  Available  Virtual Target Device - Logical Volume
ent3     Available  Shared Ethernet Adapter
$
$ lsdev -dev clientx -attr
attribute        value               description           user_settable
LogicalUnitAddr  0x8100000000000000  Logical Unit Address  False
aix_tdev         lv00                Target Device Name    False
$ lsdev -dev ent3 -attr
attribute      value  description
pvid           3      PVID to use for the SEA device
pvid_adapter   ent2   Default virtual adapter to use for non-VLAN-tagged packets
real_adapter   ent0   Physical adapter associated with the SEA
thread         0      Thread mode enabled (1) or disabled (0)
virt_adapters  ent2   List of virtual adapters associated with the SEA
Now you can start up your VIO clients and install them. This can be AIX (but only if the hardware is pSeries p5 and not OpenPower), SUSE SLES 9, Red Hat EL 3 update 3 onwards, or Debian. They should find both the:
- Virtual Ethernet, which will be named a Virtual Ethernet and behave like a real physical adapter.
- Virtual SCSI disk, which is presented just like an SCSI disk, but it will only be the size of the underlying disk partition or disk.
These should install just like a regular real Ethernet and SCSI disk.
For AIX, this installation should be just like a normal AIX partition.
For Linux, here are some additional notes. Once running, the Virtual Ethernet looks and behaves like a very fast one Gbit real adapter:
clienta:~ # ifconfig
eth0      Link encap:Ethernet  HWaddr AE:38:00:00:D0:02
          inet addr:9.137.62.178  Bcast:9.137.62.255  Mask:255.255.255.0
          inet6 addr: fe80::ac38:ff:fe00:d002/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1075 errors:0 dropped:0 overruns:0 frame:0
          TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113566 (110.9 Kb)  TX bytes:40940 (39.9 Kb)
          Interrupt:184
Once running, you can see the virtual SCSI disk is being treated just like a regular disk:
clienta:~ # fdisk -l

Disk /dev/sda: 4194 MB, 4194304000 bytes
130 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 8060 * 512 = 4126720 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           1        3999   41  PPC PReP Boot
/dev/sda2               6         132      511810   82  Linux swap
/dev/sda3             133        1016     3562520   83  Linux
And you'll find the IBM virtual SCSI client kernel module installed (see ibmveth and ibmvscsic below):
clienta:~ # lsmod
Module                  Size  Used by
evdev                  31416  0
joydev                 31520  0
st                     72688  0
ipv6                  478560  93
sg                     74176  0
usbcore               183644  1
ibmveth                44536  0
subfs                  30168  2
dm_mod                108224  0
ibmvscsic              43072  2
sr_mod                 44380  0
sd_mod                 43792  3
scsi_mod              192024  5 st,sg,ibmvscsic,sr_mod,sd_mod
Network installation might be trickier, as you might have to activate the device drivers for the installation tools to find the virtual adapters. Network installations are not covered in this article.
Depending on which release you use, the installer might, through a series of menus, give you options to load the IBM virtual SCSI client and Virtual Ethernet drivers before you start the installation proper. Later releases fully understand and install the device drivers for these virtual resources without manual intervention.
Once you've created your client LPAR and set it up the way you like, you should consider backing up the operating system images. Backups are a large subject for which many books have been written. There are many backup solutions from both commercial applications and freely available tools in the AIX and Linux world. For AIX, IBM has the Tivoli® Storage Manager product for remote backups. For Linux, one of the popular freely available tools is Amanda (Advanced Maryland Automatic Network Disk Archiver), which provides remote backup with disk caching and tape library management. There is also a Linux "Backup and Recovery How To" on the Internet for more information.
This article just covers the special considerations for VIO Servers and VIO clients. Backups are important for at least these three reasons, and these apply to VIO systems, too:
- Recovery of files that are accidentally removed.
- Disk failure, assuming your disks are not already protected with a mirror or RAID 5, or you are very unlucky and lose more than one disk.
- Recreating the entire system for a disaster (total machine loss) from backups held off the site.
The HMC data includes definitions of the LPAR physical resources such as CPU, memory, PCI slots, and definitions of the LPAR virtual resources (such as the connections between VIO Server and VIO clients).
If the HMC fails, the data is still held in the Service Processor (FSP) and can be read by a replacement or recovered HMC. It's vital that the configuration details are available in case of a disaster. HMC backups are documented in manuals, InfoCenter help files, and IBM Redbooks. It's also recommended that details of the LPARs are documented on paper. For example, something similar to the planning table used to create the LPARs in this article.
The VIO Server itself needs to be backed up. There is the backupios command for saving the rootvg volume group to either a tape, filesystem, CD, or DVD. The structure of the other volume groups (not the contents) can be saved and restored with the savevgstruct and restorevgstruct commands. To save the contents of other volume groups (not rootvg), you'll have to make other arrangements, such as using the oem_setup_env command, the dd command, the savevg command, the tar command, or other backup solutions.
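As a rough sketch of those commands (the tape device name, target directory, and datavg volume group name are assumptions for this example; check the options available on your VIO Server level with help backupios):

$ backupios -tape /dev/rmt0
$ backupios -file /mnt/viosbackup
$ savevgstruct datavg

The first writes the VIO Server rootvg to tape, the second writes it to a directory (typically NFS-mounted), and the third saves the structure of another volume group.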
If the VIO Server is purely being used for virtual I/O, then you need to back up:
- The details of the client logical volumes or the details of hdisks. Details include number of logical volumes, their size, the disk layout, which clients, and their use.
- The contents of the client logical volumes or hdisks.
To recover the VIO Server, you can simply reinstall the original install image, which is a mksysb image, in much the same way as you installed the VIO Server in the first place.
There are different approaches to backing up the VIO client images from the VIO Server. First, note that you have the option of doing hot or cold backups:
- Hot backup
- A hot backup is taken while the VIO client is running. This is dangerous and is not recommended.
- Cold backup
- A cold backup is the only sensible way to back up the client from the VIO Server. This is simply a matter of shutting down the VIO client first. For Linux, as root on the client, try shutdown -fh now; for AIX, shutdown -Fh.

  For the logical volume method, you then use the dd command to copy the logical volume image to a file, tape, NFS, and so on. For the whole hdisk method, the large size probably means a tape is the best option. The cp command is not a good idea, since it copies a file using small blocks and is very inefficient and slow. A better command is dd with a large block size -- for example, 64 KB blocks using the bs=64k option. To copy a logical volume, try (see also the restore sketch below):

  dd bs=64k if=/dev/lv01 of=/backup/B

  Alternatively, you can back up straight to a tape drive using a command like tar or backup. Some machines support a writeable DVD device that can also be used as a backup medium. Because AIX and the VIO Server can perform DLPAR changes of PCI slots, a single tape drive and its associated SCSI adapter can be moved to the VIO Server for the backup period and then removed so it can be used in other LPARs. Recovery of a VIO client involves getting the disk image back in the right place and starting the VIO client LPAR again.
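For completeness, here is the matching restore sketch for the dd method, reusing the /backup/B file and lv01 logical volume names from the example above; the client LPAR must still be shut down:

dd bs=64k if=/backup/B of=/dev/lv01

Once the copy completes, start the VIO client LPAR from the HMC as usual.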
The VIO client can be used to back up its own data just like a regular LPAR running AIX or Linux. With an AIX VIO client, the best backup method is the mksysb command.
It's unlikely that VIO client LPARs will have their own tape drive, since the purpose of a VIO client is to share physical resources and reduce hardware requirements. As with the VIO Server, DLPAR changes of PCI slots can be used to temporarily introduce a tape drive to the client so that it can back up its own data (see the sketch below). Automating this process across lots of client LPARs can be hard to coordinate, but it can be done using scripts on a central machine. Some machines support a writeable DVD device that can also be used as a backup medium.
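As a minimal sketch of that approach on an AIX VIO client, once the tape drive and its SCSI adapter have been moved into the client LPAR (the /dev/rmt0 device name is an assumption):

mksysb -i /dev/rmt0

The -i option regenerates the /image.data file before the bootable system backup is written to the tape.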
A second option is for the client to use another LPAR (possibly the VIO Server) or another machine to save the data using either a:
- Remote tape drive
- You can find lots of information on how to do this and make it secure on the Internet. You might need to check the speed of this mechanism and use a Linux tool called "buffer" to ensure the tape drive streams data onto the tape drive efficiently.
- NFS server
- To temporarily store the backup data before it's backed up to tape.
- Remote Backup application
- As discussed at the top of this section, this uses a local client application to transfer data to the server machine, which provides the backup service.
In all three cases, the high speed of the Virtual Ethernet can boost backup performance. Recovery using these methods can be harder work. With AIX, you can simply recover the mksysb image. With Linux, you might have to reinstall from the original CD-ROMs and then overwrite the running Linux with the backup.
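As a hedged sketch of the NFS option for a Linux VIO client (the nfsserver name, export path, mount point, and exclusion list are assumptions for illustration):

# Mount the remote backup area and archive the root filesystem to it
mount nfsserver:/backups /mnt/backup
tar czf /mnt/backup/clientx-root.tar.gz --exclude=/proc --exclude=/sys --exclude=/mnt/backup /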
Another option is to create a copy of the VIO client SCSI disk and use it as the virtual SCSI disk for a new VIO client LPAR.
For the logical volume, create another logical volume of identical size and use the dd command to create a copy of the original logical volume. Be careful with the header structure of the logical volume; AIX can keep some information in the first block.
For the whole disk, you would need a disk of identical size and then copy from the original to the new disk using the dd command.
You need to check that the cloned LPAR is not using the same Ethernet IP addresses as the original. Alternatively, you can clone the original LPAR before putting it on the network.
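A minimal sketch of the logical volume cloning case on the VIO Server, reusing the 4 GB lv00 example from earlier (lv01, vhost2, and clonex are assumed names for the copy, the new client's virtual SCSI server adapter, and the new virtual target device):

$ mklv -lv lv01 rootvg 4G
dd bs=64k if=/dev/lv00 of=/dev/lv01
$ mkvdev -vdev lv01 -vadapter vhost2 -dev clonex

Shut down the original client before taking the dd copy; if dd is not available in the restricted shell, run it from the root shell reached with oem_setup_env.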
This is not really part of the VIO Server, but it's important for what IBM calls reliability, availability, and serviceability (RAS).
For AIX 5.3, VIO clients have the DLPAR and RAS features already installed.
For the Linux VIO clients, these need to be added as described below.
After installing Linux, it's strongly recommended that you install the IBM packages for both DLPAR (this works for physical and virtual resources) and the daemons and tools that increase RAS. This ensures that you get the expected reliability from your pSeries p5 or OpenPower machine. These RPMs put the LPAR in touch with the HMC for dynamic changes and problem reporting, and provide tools to use on the LPAR, too.
The tools can be downloaded from Service and productivity tools for Linux on POWER. At this Web site, select the right version of Linux tab and then the HMC-Managed option to find the list. Below is the list of RPMs at the time of writing this article, but the version numbers might have been updated since.
1. src-1.2.2.1-0.ppc.rpm
2. rsct.core.utils-2.3.4.2-0.ppc.rpm
3. rsct.core-2.3.4.2-0.ppc.rpm
4. csm.core-1.4.0.3-79.ppc.rpm
5. csm.client-1.4.0.3-79.ppc.rpm
6. devices.chrp.base.ServiceRM-2.2.0.0-1.ppc.rpm
7. DynamicRM-1.1-2.ppc.rpm
8. rpa-dlpar-1.0-11.ppc.rpm
9. rpa-pci-hotplug-1.0-8.ppc.rpm
10. librtas-1.1-12.ppc64.rpm
A prerequisite is the rdist command. For SUSE Linux, this is on the SUSE SLES 9 CD3 with the file name rdist-6.1.5-792.1.ppc.rpm.
Download these RPMs and install or update the existing packages (a sketch of the install sequence follows). You should find you can now do DLPAR changes. In addition, Linux can report problems to the HMC, which is used to forward problems to IBM (if set up) and by hardware maintenance staff for diagnosis and correction.
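A sketch of the install sequence mentioned above, with the rdist prerequisite first and then the downloaded packages in the listed order (version numbers will differ from those shown):

# Prerequisite from the SUSE SLES 9 CD3
rpm -Uvh rdist-6.1.5-792.1.ppc.rpm

# Then the downloaded service and productivity packages
rpm -Uvh src-*.ppc.rpm rsct.core.utils-*.ppc.rpm rsct.core-*.ppc.rpm \
    csm.core-*.ppc.rpm csm.client-*.ppc.rpm \
    devices.chrp.base.ServiceRM-*.ppc.rpm DynamicRM-*.ppc.rpm \
    rpa-dlpar-*.ppc.rpm rpa-pci-hotplug-*.ppc.rpm librtas-*.ppc64.rpm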
A similar procedure is available for Red Hat and other Linux versions. See the above Web site for details about the packages you need to install.
This article described the virtualization capabilities of IBM POWER5 servers and, through examples that apply equally to both pSeries p5 and eServer OpenPower systems, showed how to set up and use the VIO Server.
Virtualization is a hot topic in the computing industry. The POWER5-based machines provide opportunities for a significant reduction in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features (Hypervisor) and hardware features to create efficient and flexible virtualization capabilities.
The author wishes to thank Dave Williams and Stephen Atkins, both of IBM UK, for reviewing and improving this article.
- Get more information on:
- Get a full list of Linux on Power applications and more on the way.
- Find several Linux on Power developerWorks articles.
- Download IBM Redbooks:
- Read the following whitepapers:
- Go to the InfoCenter for:
- Visit the Virtual Innovation Center for Hardware for AIX development support. This is the primary source for all pSeries AIX development.