Tanti Technology

Bangalore, Karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments. I have 4.5+ years of experience in AIX system administration. This site is intended to help system administrators in their day-to-day activities, and your comments on posts are welcome. This blog is all about the IBM AIX UNIX flavour. It is aimed at system admins who use AIX in their work life, and it can also be used by newbies who want to get certified in AIX administration. The blog will be updated frequently to help system admins and other new learners. DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any type of data loss or damage caused by trying any of the commands/methods mentioned in this blog. You use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.

Thursday 7 November 2013

AIX network administration

The examples here assume that the default TCP/IP configuration (rc.net) method is used. If the alternate method of using rc.bsdnet is used, then some of these examples may not apply.
Determine if rc.bsdnet is used over rc.net
lsattr -El inet0 -a bootup_option
TCP/IP related daemon startup script
/etc/rc.tcpip
To view the route table
netstat -r
To view the route table from the ODM DB
lsattr -EHl inet0 -a route
Temporarily add a default route
route add default 192.168.1.1
Temporarily add an address to an interface
ifconfig en0 192.168.1.2 netmask 255.255.255.0
Temporarily add an alias to an interface
ifconfig en0 192.168.1.3 netmask 255.255.255.0 alias
To permanently add an IP address to the en1 interface
chdev -l en1 -a netaddr=192.168.1.1 -a netmask=0xffffff00
Permanently add an alias to an interface
chdev -l en0 -a alias4=192.168.1.3,255.255.255.0
Remove a permanently added alias from an interface
chdev -l en0 -a delalias4=192.168.1.3,255.255.255.0
List ODM (next boot) IP configuration for interface
lsattr -El en0
Permanently set the hostname
chdev -l inet0 -a hostname=www.tablesace.net
Turn on routing by putting this in rc.net
no -o ipforwarding=1
List networking devices
lsdev -Cc tcpip
List Network Interfaces
lsdev -Cc if
List attributes of inet0
lsattr -EHl inet0
List (physical layer) attributes of ent0
lsattr -El ent0
List (networking layer) attributes of en0
lsattr -El en0
Speed is found through the entX device
lsattr -El ent0 -a media_speed
Set the ent0 link to Gig full duplex
(Auto Negotiation is another option)
chdev -l ent0 -a media_speed=1000_Full_Duplex -P
Turn off Interface Specific Network Options
no -p -o use_isno=0
Get (long) statistics for the ent0 device (no -d is shorter)
entstat -d ent0
List all open, and in use TCP and UDP ports
netstat -anf inet
List all LISTENing TCP ports
netstat -na | grep LISTEN
Remove all TCP/IP configuration from a host
rmtcpip
IP packets can be captured using the iptrace command and formatted with ipreport

Migrating Users from One AIX System to Another AIX System


This document discusses migrating users from one AIX system to another. This does not include transferring the user's personal data or home directories.
The information in this document applies to AIX 5.2 and above.

Since the files involved in the following procedure are flat ASCII files and their format has not changed from V4 to V5, the users can be migrated between systems running the same or different versions of AIX (for example, from V4 to V5).

Files that can be copied over: 
/etc/group 
/etc/passwd
/etc/security/group 
/etc/security/limits 
/etc/security/passwd
/etc/security/.ids 
/etc/security/environ 
/etc/security/.profile 
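If both systems are reachable over the network, these files can be bundled up and copied in one step. A minimal sketch using tar and scp (assuming ssh/scp is available; the hostname newhost and the staging directory are placeholders):

# cd /
# tar -cvf /tmp/usermig.tar ./etc/group ./etc/passwd ./etc/security/group ./etc/security/limits ./etc/security/passwd ./etc/security/.ids ./etc/security/environ ./etc/security/.profile
# scp /tmp/usermig.tar newhost:/tmp

On the new system, extract into a staging directory first, then review and merge the files into place (keeping the root entry note below in mind):

# mkdir /tmp/usermig ; cd /tmp/usermig ; tar -xvf /tmp/usermig.tar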
 
NOTE: Edit the passwd file so the root entry is as follows

 root:!:0:0::/:/usr/bin/ksh

When you copy the /etc/passwd and /etc/group files, make sure they contain at least a minimum set of essential user and group definitions.

Listed specifically as users are the following:
root 
daemon 
bin 
sys 
adm 
uucp 
guest 
nobody 
lpd
Listed specifically as groups are the following:

system 
staff 
bin 
sys 
adm 
uucp 
mail 
security 
cron 
printq 
audit 
ecs 
nobody 
usr 

If the bos.compat.links fileset is installed, you can copy the /etc/security/mkuser.default file over. If it is not installed, the file belongs in the /usr/lib/security directory.
If you copy over mkuser.default, changes must be made to the stanzas. Replace group with pgrp, and program with shell. A proper stanza should look like the following:

    user: 
            pgrp = staff 
            groups = staff 
            shell = /usr/bin/ksh 
            home = /home/$USER 
 
The following files may also be copied over, as long as the AIX version in the new machine is the same:

   /etc/security/login.cfg 
   /etc/security/user 
 
NOTE: If you decide to copy these two files, open the /etc/security/user file and make sure that variables such as tty, registry, auth1 and so forth are set properly for the new machine. Otherwise, do not copy these two files, and just add all the user stanzas to the newly created files on the new machine.

Once the files are moved over, execute the following:

    usrck -t ALL 
    pwdck -t ALL 
    grpck -t ALL 
 
This will clear up any discrepancies (such as uucp not having an entry in  /etc/security/passwd). Ideally this should be run on the source system before copying over the files as well as after porting these files to the new system.
NOTE: It is possible to find user ID conflicts when migrating users from older versions of AIX to newer versions. AIX has added new user IDs in different release cycles. These are reserved IDs and should not be deleted. If your old user IDs conflict with the newer AIX system user IDs, it is advised that you assign new user IDs to these older IDs. 

work with sendmail in AIX


Sendmail has been included with the AIX operating system for many years now.
Despite its reputation for being difficult to administer, it is very powerful and can perform some interesting tricks. It's helped me overcome some challenges over the years.
This article shares two interesting tricks that I discovered with Sendmail on AIX.
 

To start the Sendmail daemon, use the startsrc command. For example:

# startsrc -s sendmail -a "-bd -q30m"

The -s flag specifies the subsystem to start, and the -a flag instructs startsrc to execute the subsystem with the specified arguments.
The -bd flag starts Sendmail as a daemon (running in the background) as a Simple Mail Transfer Protocol (SMTP) mail router. The -q flag specifies the interval at which the Sendmail daemon processes saved messages in the mail queue. In this example, Sendmail will process the mail queue every 30 minutes.
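To see what is currently sitting in the mail queue without waiting for the next queue run, the queue can be printed; for example:

# sendmail -bp

(mailq is an equivalent way to print the queue on most systems)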

To start the Sendmail daemon automatically on a reboot, uncomment the following line in the /etc/rc.tcpip file:
# vi /etc/rc.tcpip
start /usr/lib/sendmail "$src_running" "-bd -q${qpi}"

Execute the following command to display the status of the Sendmail daemon:
# lssrc -s sendmail

To stop Sendmail, use stopsrc:
# stopsrc -s sendmail

The Sendmail configuration file is located in the /etc/mail/sendmail.cf file, and the Sendmail mail alias file is located in /etc/mail/aliases.
If you add an alias to the /etc/mail/aliases file, remember to rebuild the aliases database by running the sendmail command with the -bi flag or the /usr/sbin/newaliases command. This forces the Sendmail daemon to re-read the aliases file.
# sendmail -bi
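For reference, an alias entry in /etc/mail/aliases is simply a name followed by one or more recipients. A hypothetical example (the names are placeholders):

sysadmins: john, mary@example.com

After adding a line like this, rebuild the aliases database with sendmail -bi as shown above.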

To add a mail relay server (smart host) to the Sendmail configuration file, edit the /etc/mail/sendmail.cf file, modify the DS line, and refresh the daemon:
# vi /etc/mail/sendmail.cf
DSsmtpgateway.xyz.com.au
# refresh -s sendmail

To log Sendmail activity, place the following entry in the /etc/syslog.conf file, create the log file, and refresh the syslog daemon:
# grep mail /etc/syslog.conf
mail.debug  /var/log/maillog rotate time 7d files 4 compress
# touch /var/log/maillog
# refresh -s syslogd

Introduction to POWERVM


Some of the growing challenges for companies in managing IT infrastructure include cutting down or sharing server resources (such as CPU, memory, and I/O), reducing power and cooling costs, and reducing server rack-unit space. IBM PowerVM technology, which was introduced with POWER6 systems, helps consolidate servers by virtualizing CPU, memory, and I/O adapter resources. It helps in managing servers efficiently by improving the performance and availability of the servers.

CPU virtualization is achieved through a technology called micro-partitioning, which was introduced in POWER5 systems. Micro-partitioning is the process by which a physical CPU can be segmented and shared across multiple logical partitions (LPARs). Memory sharing is achieved through Active Memory Sharing (AMS) set up between the VIO server and the LPARs. For AMS, the PowerVM Enterprise edition is needed. I/O adapters can be virtualized using Virtual I/O Servers (VIOS) by creating and configuring the following:

1. Virtual SCSI adapter for virtualizing local or SAN drives.
2. Shared Ethernet Adapter for virtualizing ethernet adapters.
3. NPIV (N-Port ID Virtualization) for virtualizing HBA (Host Bus Adapters).

I'll talk about the steps involved in creating and configuring these virtual adapters in the upcoming posts.

TSM Configuration Steps


See below the TSM configuration steps.


1.) Defining library, Tape drives and path:

define library autolibrary libtype=scsi
define path TSMSERVERA autolibrary srctype=server desttype=library device=/dev/smc2 online=yes
define drive autolibrary LIBTAPE0
define drive autolibrary LIBTAPE1
define drive autolibrary LIBTAPE2
define drive autolibrary LIBTAPE3
define drive autolibrary LIBTAPE4
define drive autolibrary LIBTAPE5
define path TSMSERVERA LIBTAPE0 srctype=server desttype=drive library=autolibrary device=/dev/rmt0 online=yes

define path TSMSERVERA LIBTAPE1 srctype=server desttype=drive library=autolibrary device=/dev/rmt1 online=yes

define path TSMSERVERA LIBTAPE2 srctype=server desttype=drive library=autolibrary device=/dev/rmt2 online=yes

define path TSMSERVERA LIBTAPE3 srctype=server desttype=drive library=autolibrary device=/dev/rmt3 online=yes

define path TSMSERVERA LIBTAPE4 srctype=server desttype=drive library=autolibrary device=/dev/rmt4 online=yes

define path TSMSERVERA LIBTAPE5 srctype=server desttype=drive library=autolibrary device=/dev/rmt5 online=yes

/******Second library******/

define library AUTOLIB2 libtype=scsi
define path TSMSERVERA AUTOLIB2 srctype=server desttype=library device=/dev/smc4 online=yes
define drive AUTOLIB2 LIBTAPE6
define path TSMSERVERA LIBTAPE6 srctype=server desttype=drive library=AUTOLIB2 device=/dev/rmt6 online=yes
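Before moving on, the library, drive, and path definitions can be checked with the standard TSM query commands; a quick sketch:

query library
query drive
query path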

2) Defining device class.

define devclass 3592LIB library=autolibrary devtype=3592 format=drive MOUNTRetention=5

define devclass 3592LIB2 library=AUTOLIB2 devtype=3592 format=drive MOUNTRetention=5

define devclass FILE directory=/adsmstore devtype=FILE


3) Defining Primary Storage Pool:

define stgpool DB2DBA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2DB_OFFLINEA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2FSA3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2LOG1A3592LIB 3592LIB pooltype=primary MAXSCRatch=100
define stgpool DB2LOG2A3592LIB 3592LIB2 pooltype=primary MAXSCRatch=100


4) Defining Copy Storage Pool:


define stgpool CPDB2DBA3592LIB 3592LIB pooltype=copy MAXSCRatch=100
define stgpool CPDB2LOG1A3592LIB 3592LIB pooltype=copy MAXSCRatch=100



5) Defining Policy Domain and policy set:

define domain DB2DOM description="DB2 Policy Domain"
define policyset DB2DOM DB2POL description="DB2 Policy Set"

6) Defining Management Class:

define mgmtclass DB2DOM DB2POL DB2MGDB description="Production DB Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGDBOFFLINE description="Production DB Offline Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGFS description="Production FS Mgmt Class "
define mgmtclass DB2DOM DB2POL DB2MGLOG1 description="Production DB Log 1 Mgmt Class"
define mgmtclass DB2DOM DB2POL DB2MGLOG2 description="Production DB Log 2 Mgmt Class"

/****assigning default management class************/
assign defmgmtclass DB2DOM DB2POL DB2MGFS


Copy Group under management class:

define copygroup DB2DOM DB2POL DB2MGDB standard type=archive destination=DB2DBA3592LIB retver=60 retmin=90

define copygroup DB2DOM DB2POL DB2MGDBOFFLINE standard type=archive destination=DB2DB_OFFLINEA3592LIB retver=9999

define copygroup DB2DOM DB2POL DB2MGFS standard type=backup destination=DB2FSA3592LIB VERExists=180 RETEXTRA=10 VERDELETED=NOLIMIT SERialization=SHRDYnamic

define copygroup DB2DOM DB2POL DB2MGLOG1 standard type=archive destination=DB2LOG1A3592LIB retver=60

define copygroup DB2DOM DB2POL DB2MGLOG2 standard type=archive destination=DB2LOG2A3592LIB retver=60

define copygroup DB2DOM DB2POL DB2MGFS standard type=archive destination=DB2FSA3592LIB retver=60
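One step worth calling out: copy groups do not take effect until the policy set is activated, so at this point the policy set would normally be validated and activated. For example:

validate policyset DB2DOM DB2POL
activate policyset DB2DOM DB2POL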

7) Labeling the Tapes:

Insert the scratch tapes into the library and label them.

label libv autolibrary search=yes checkin=scr labels=barcode
label libv AUTOLIB2 search=yes checkin=scr labels=barcode
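To confirm that the volumes were labeled and checked in as scratch, the library inventory can be queried; for example:

query libvolume autolibrary
query libvolume AUTOLIB2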

VIO Shared Ethernet Setup


This article discusses the steps involved in setting up VIO Shared Ethernet. First of all, why do we need it? Why can't we just assign a physical Ethernet adapter to each Logical Partition (LPAR) and configure it?
Imagine a physical server (managed Power system) with just four Ethernet adapters. If we assign one physical adapter per LPAR, we will run out of Ethernet adapters as soon as we build four LPARs. If there is a requirement to build 10 LPARs, how do we meet the Ethernet adapter requirement? This is where the VIO server comes in handy, sharing the physical adapters across all the LPARs.
The VIO Shared Ethernet Adapter helps share a physical adapter across all the LPARs. If we are going to use one physical adapter for all 10 LPARs, can it sustain the load, i.e. all the network traffic coming from all 10 LPARs? In the VIO server, we can create a link aggregation using multiple physical adapters to address the network traffic needs of the LPARs. Now we will talk about how to set up link aggregation and Shared Ethernet in the VIO servers.

Let’s say we have the following physical Ethernet adapters for public network:

Physical Ethernet Adapters
==========================

ent0 - Public network

ent1 - Public network

==========================

Create two Virtual Ethernet Adapters in the VIO LPAR profile. One will be used for communication between the VIO server and the LPARs, and the other as the control channel. The control channel is used in a dual-VIO setup for the heartbeat mechanism that detects failures.
=========================
ent2 - Virtual for Public - VLAN ID 1

ent3 - Virtual Control channel for public - VLAN ID 99
==========================

Command to Configure link aggregation

==========================

mkvdev -lnagg ent0,ent1 -attr mode=8023ad
==========================

The above command will create ent4, which is an aggregated link of the two physical adapters ent0 and ent1. Mode 8023ad specifies use of the IEEE 802.3ad standard and Link Aggregation Control Protocol (LACP) on the switch side. Have the network team configure the EtherChannel on the switch ports.

Now it’s time to create the shared Ethernet adapter.
==========================
mkvdev -sea ent4 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
==========================


The above command will create ent5, on which you can assign the VIO server's IP address for connectivity. Now, in the client LPAR profiles, create a virtual Ethernet adapter with VLAN ID 1 to make use of the shared Ethernet adapter.
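For reference, the VIO server's IP address can be placed on the new interface with mktcpip from the padmin shell. A sketch with placeholder hostname and addresses (adjust the interface name to whatever the SEA created, en5 here):

mktcpip -hostname vios1 -inetaddr 192.168.1.10 -interface en5 -netmask 255.255.255.0 -gateway 192.168.1.1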


Important Note: In a dual-VIO setup, make sure the control channel is configured properly, with the proper VLAN ID, on both VIO servers. Any misconfiguration will flood the network with BPDU packets.

Configuring NPIV in VIO


Using an NPIV setup in the VIO server, physical fibre channel adapters can be shared across multiple LPARs. Traditionally we have been assigning physical adapters to AIX/Linux LPARs, and we would soon run out of adapters if the requirement to build more LPARs arises.

Below are the steps to configure NPIV:

Minimum Requirements to setup NPIV:

- POWER6
- HMC 7.3.4 or later
- VIOS 2.1.2
- AIX 5.3 TL9, or later / AIX 6.1 TL2 or later
- NPIV enabled Switch
- System Firmware level of EL340_039
- 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735)

Steps:

1.) Modify LPAR Profile:
a. Assign the 8 Gigabit PCI Express Dual Port Fibre Channel Adapter to the VIO servers.
b. Create virtual "server" fibre channel adapters in the VIO server and assign an adapter ID. Specify the client partition and its adapter ID. I would suggest giving the client the same adapter ID as the VIO server's adapter ID.
c. Create virtual "client" fibre channel adapters in the AIX LPARs and specify the VIO partition and adapter ID as the connecting partition.


2.) Activate the VIO partition and execute "lsnports" to confirm that the physical adapter is assigned and supports NPIV. You should also see the virtual adapters "vfchost#" which you created in step 1-b.

For this example, let's assume that fcs0 and fcs1 are the dual physical fibre adapters and vfchost0 and vfchost1 are the virtual adapters.

3.) Execute the following command to create the mapping in the VIO server.
vfcmap -vadapter vfchost0 -fcp fcs0
vfcmap -vadapter vfchost1 -fcp fcs1

4.) To verify the mappings, run "lsmap -all -npiv" and check the output. This command is also useful when there is a problem with the NPIV setup.

5.) Now activate the client LPAR and you will see two fibre channel adapters (fcs0 and fcs1). Use the WWPNs (like WWNs, but logical IDs generated for the virtual adapters) of the client LPAR to zone and assign LUNs.
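To find the WWPNs on the client LPAR for zoning and LUN assignment, the virtual adapter's vital product data can be checked; for example:

# lscfg -vpl fcs0 | grep "Network Address"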

6.) Inform Storage admins to enable NPIV in the SAN switch ports where the VIO server is connected.

You can use the same physical adapters to create multiple virtual client fibre channel adapters, and thus they are shared across LPARs.

IBM VIO Virtual SCSI Disk configuration


Using the VIO server, a disk, a logical volume, or a volume group can be shared with a client LPAR as a "virtual SCSI disk".

Steps to configure Virtual SCSI:
1. Choose a disk (physical volume) in the VIO server. For example hdisk1.

2. Modify LPAR profile of the VIO server to add a virtual server SCSI adapter. Specify a unique adapter slot number and also specify connecting client partition and slot.

3. After adding the virtual scsi adapter and activating the VIO server, you should see a virtual scsi server adapter vhost#. For example vhost0.

4. Modify the client LPAR profile to create new virtual scsi adapter and specify the connecting server partition as the VIO server and server SCSI adapter slot.

5. Then to create the mapping in the VIO execute the following command.

 mkvdev -vdev hdisk1 -vadapter vhost0 -dev client1_rootvg
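On the client LPAR, the new virtual disk should appear after the configuration manager runs; a quick check:

# cfgmgr
# lsdev -Cc disk
# lspv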

In Dual VIO setup with a powerpath device coming from SAN:

Let's assume hdiskpower0 is shared between vioservera and vioserverb.

Follow the above steps 1-3 for both the VIO servers.

Execute the following on both the VIO servers:
a. Switch to the root user with the command "oem_setup_env"
b. chdev -l hdiskpower0 -a reserve_lock=no
c. mkvdev -vdev hdiskpower0 -vadapter vhost0 -dev client1_rootvg
d. Perform step 4 twice on the client, once for each VIO server.

The above steps will create two virtual SCSI paths for the same physical volume. You may log in to the client and execute "lspath" to view the vscsi paths to the virtual SCSI device.

To verify the mappings in the VIO server, use the command lsmap -all.

DLPAR: Removing I/O Adapters in AIX using HMC


Unlike removing CPU or memory through a Dynamic LPAR operation, removing physical I/O adapters requires a few additional steps at the OS level. Each physical I/O adapter has a parent PCI device and belongs to a slot. To remove an I/O adapter, the PCI device and its child devices must be removed first, followed by the holding PCI I/O slot.

Steps are detailed below.

In the following example, device fcs0 is removed through Dynamic LPAR operation.

1. Use lsdev -Cl fcs0 -F parent to find the parent PCI device. Let's assume it returns "pci2".

2. Use lsslot -c slot or lsslot -c pci to find the respective slot. e.g. U001.781.DERTRGD.P1-C3

3. Remove the PCI device and its child devices. The devices shouldn't be busy, i.e. for fcs devices the volume groups must be varied off, and for ent devices the network interfaces must be down.

Use the following command to remove them: rmdev -Rdl pci2

4. After removing the devices, respective slot has to be removed using
"drslot -r -s U001.781.DERTRGD.P1-C3 -c pci "

5. Now go to the HMC and perform the DLPAR physical I/O adapter remove operation. The slot should be listed in the DLPAR window.
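After the DLPAR remove completes, a quick check on the LPAR confirms that the adapter and slot are gone:

# lsdev -Cc adapter | grep fcs0
# lsslot -c pci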

IBM VIO Server Basics

VIO Server:
 
What is VIO? Why do we need it?
The VIO Server (Virtual I/O Server) is an LPAR used to virtualize physical adapters: Ethernet using the Shared Ethernet Adapter, fibre channel adapters using NPIV, and physical volumes using virtual SCSI.
It is needed to share physical adapters between LPARs and to overcome the limit on the number of physical adapters.
 
What are the different versions?
VIO 1.5, 2.1, 2.2.1
 
How do you log in to the VIO server?
Use ssh padmin@vioservername; once logged in you can execute all the VIO commands. Once you are logged in as padmin, execute the "oem_setup_env" command to become root. The padmin shell is a restricted shell, so not all UNIX commands will work as padmin. The underlying operating system of the VIO server is AIX, and most AIX commands will work once you log in as root using "oem_setup_env", but IT IS NOT RECOMMENDED by IBM.
 
What hardware is supported?
POWER5 onwards: POWER5, POWER6 and POWER7. The different editions are explained below. To use advanced features such as Active Memory Sharing and Live Partition Mobility you need the Enterprise Edition.

Basic commands which can be run as padmin:
license -accept : Use this command to accept the license after an installation, migration, or upgrade of the VIO server
ioslevel : To find out the current VIO version or level
mkvdev -lnagg : Command to create Ethernet link aggregation; Mainly to bundle two or more ethernet connections
mkvdev -sea ent0 -vadapter ent1 -defaultid 1 -default ent1 : Command to create Shared Ethernet Adapter
mkvdev -vdev hdisk2 -vadapter vhost2 : Command to create a Virtual SCSI mapping to share a disk (physical volume)
lsmap -all : To list all Virtual SCSI mapping i.e. vhost - hdisk
vfcmap : To create Virtual Fibre Channel adapter mapping to share fibre channel adapter (HBA)
lsmap -all -npiv : To list all Virtual Fibre Channel adapter mapping (NPIV Configuration)
lsnports : to verify if a fibre channel adapter supports NPIV
backupios -file /home/padmin/mnt/mksysb-backup -mksysb : To backup VIO to a file /home/padmin/mnt/mksysb-backup
updateios : To upgrade the IOS level of the VIO server. Steps are explained below.
shutdown -force : to shutdown
shutdown -force -restart : to reboot
Migration and Upgrade methods:
Migration can be performed from version 1.5 to 2.1 using the VIO MIGRATION DVD MEDIA; the steps are explained in the URL below.
For an upgrade from 2.1 to 2.2, or within the 2.2 level, the steps are explained in the URL below.
Example Steps:
Login to VIO server as padmin
Login as root "oem_setup_env"
mount nfsserver:/viopatches /mnt
exit to padmin "exit"
updateios -commit
updateios -install -accept -dev /mnt/viofixpackFP24
shutdown -force -restart
license -accept
ioslevel
updateios -commit
 
POWERVM:
PowerVM, formerly known as Advanced Power Virtualization (APV), is a chargeable feature of IBM POWER5, POWER6 and POWER7 servers and is required for support of micro-partitions and other advanced features. Support is provided for IBM i, AIX and Linux.
 
Description

IBM PowerVM has the following components:
A "VET" code, which activates firmware required to support resource sharing and other features.
Installation media for the Virtual I/O Server (VIOS), which is a service partition providing sharing services for disk and network adapters.
 
IBM PowerVM comes in three editions.

1.) IBM PowerVM Express
 Only supported on "Express" servers (e.g. Power 710/730, 720/740, 750 and Power Blades).
 Limited to three partitions, one of which must be a VIOS partition.
 No support for Multiple Shared Processor Pools.

This is primarily intended for "sandbox" environments

2.) IBM PowerVM Standard
 Supported on all POWER5, POWER6 and POWER7 systems.
 Unrestricted use of partitioning - 10x LPARs per core (20x LPARs for Power7+ servers) (up to a maximum of 1,000 per system).
 Multiple Shared Processor Pools (on POWER6 and POWER7 systems only).

This is the most common edition in use on production systems.

3.) IBM PowerVM Enterprise
 Supported on POWER6 and POWER7 systems only.
 As PowerVM Standard with the addition of Live Partition Mobility (which allows running virtual machines to migrate to another system) and Active Memory Sharing (which intelligently reallocates physical memory between multiple running virtual machines).

Devices in AIX



  1. Physical Devices: Actual hardware that is connected in some way to the system
  2. Ports: The physical connectors/adapters in the system where physical devices are attached. Most ports are programmable by system software to allow the attachment of many different types of devices.
  3. Device Drivers: Software in the kernel that controls the activity on a port and the format of the data that is sent to the device.
  4. Logical Devices: Software interfaces (special files) that present a means of accessing a physical device to users and application programs.
       Data appended to a logical device will be sent to the appropriate device driver.
       Data read from a logical device will be read from the appropriate device driver.
  5. /dev: The directory which contains all the logical devices that can be directly accessed by the user.

Listing of /dev directory
# ls -l /dev

brw-rw-rw   root system tdo
crw-rw-rw   root system td1

Block Device: A block device is a random-access device; buffering is used to provide access a block at a time. Usually disks (file systems) only.

Character Device: A character device is a sequential, stream-oriented device which provides no buffering.
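On AIX the same disk usually has both forms, which makes the distinction easy to see; for example:

# ls -l /dev/hdisk0 /dev/rhdisk0

The first character of the permissions column is b for the block device (hdisk0) and c for the character/raw device (rhdisk0).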

Types of devices:
i)                    Predefined devices (All system supported devices)
ii)                  Defined devices (All configured devices)

Listing all devices
# lsdev  
Listing all supported devices (predefined)
# lsdev -P -H ( -P pulls data from the predefined database, -H prints a header line)
class   type   subclass     description
tape    4mm     scsi          4.0 GB mm tape
lsdev -Pc tape (c stands for class)
Using with smit listing all supported devices (predefined)
smit devices > list devices > list all supported devices

Listing all configured devices (customized)
lsdev -C -H ( -C pulls data from the customized database, -H prints a header line)
Using with smit listing all configured devices (customized)
smit devices > list devices > list all defined devices
Device configuration:
mkdev -l or cfgmgr 
Remove the device:
rmdev -l or rmdev -dl 
To list the attributes of a device (for example the tape drive rmt0)
lsattr -El rmt0
To list all the devices currently connected to the system
lscfg  
To see the disk size
            # bootinfo -s hdisk0
            

Device states: a device can be Defined (known to the system and present in the customized ODM database, but not configured) or Available (configured and ready for use).

Paging Space in AIX


Whenever real memory fills up, or more than 80% of it is in use, paging space comes into play.

Paging space, also called swap space, is nothing but a logical volume.

To view the current paging spaces:
lsps -a

Which logical volume is used for paging space?

# lsvg -l rootvg
hd6 (this is the default paging space logical volume)

If real memory is less than 256 MB, then you can create paging space of 2 times the real memory.

If real memory is 256 MB or more, then you can create paging space of 75% of the real memory.
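As a worked example of the second rule: a system with 4 GB of real memory would start with roughly 3 GB (75% of 4 GB) of paging space, and it can be grown later with chps if monitoring shows it is needed.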

To create the paging space:
# mkps -s 2 rootvg

Here "2" is the number of logical partitions to allocate for the new paging space.

# mkps -s 2 rootvg hdisk1

The optional last argument is a physical volume within the volume group; the paging space name itself (for example paging00) is assigned automatically by the system.

To automatically activate a paging space at system restart:
chps -a y paging00

To increase paging space:
# chps -s 5 paging00

To decrease paging space:
# chps -d 5 paging00

Paging space attributes are stored in the /etc/swapspaces file.
To remove a paging space:
# rmps paging00

Before removing a paging space, you have to deactivate it:

To deactivate the paging space: # swapoff /dev/paging00
(when the paging space is removed with rmps, its entry is also removed from the /etc/swapspaces file)

To activate the paging space: # swapon /dev/paging00 (swapon -a activates all paging spaces listed in /etc/swapspaces)

You can also create or change paging spaces using smit:
# smit mkps
# smit chps



Rules for creating paging space for best performance:
1)      Create only one paging space per disk.
2)      Make sure paging spaces created on two different disks are the same size.
3)      If there are two paging spaces, they should both be in rootvg.
4)     Do not put paging space on currently heavily utilized disks.

Job Scheduling (crontab, at) in AIX


Job Scheduling:

crontab is the most commonly used way to schedule jobs in AIX.

To view the list of jobs currently scheduled:
# crontab -l

To create a new job schedule:
# crontab -e (this opens the vi editor; after making changes, save and quit)

For example, September 21 at 12:30 AM is represented as: 30 0 21 9 *

# crontab -e
30 0 21 9 * touch /root/abc.txt

Minutes (0-59)  Hours (0-23)  Day of month (1-31)  Month (1-12)  Weekday (0-6, Sunday = 0)
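A couple more example crontab entries (the scripts and paths are placeholders):

0 2 * * 0 /usr/local/bin/weekly_backup.sh
(runs a hypothetical backup script at 2:00 AM every Sunday)

0,30 * * * * /usr/local/bin/check_fs.sh
(runs a hypothetical filesystem check every 30 minutes)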

User crontab files are stored under /var/spool/cron/crontabs, and the cron log is written to /var/adm/cron/log.

As root you can also manage other users' crontabs:
# su - root
To edit the crontab of a particular user: # crontab -e UserName
To view the crontab of a particular user: # crontab -l UserName

To remove all of your own scheduled jobs: # crontab -r

To remove all the scheduled jobs of a particular user: # crontab -r UserName

To see which users have crontabs: # ls /var/spool/cron/crontabs

The at command is also used to schedule a job, but it executes only once.

To view all jobs scheduled with the at command:
# at -l

To create a job schedule with the at command (end the input with Ctrl-D):
# at 10:30 today
touch /root/abc.txt
You can also deny or allow the cron and at commands by editing the following files:

# cd /var/adm/cron
This directory contains the default files cron.deny and cron.allow.

# vi cron.deny

# vi cron.allow

/var/adm/cron/cron.allow
/var/adm/cron/cron.deny
/var/adm/cron/at.allow
/var/adm/cron/at.deny

P5 Systems information

Please check the below table of P5 systems hardware configurations:

P5-510: 1 or 2 way processor; 1.5 or 1.65 GHz; 512MB-32GB memory; four (4) internal disks up to 1.2 TB; no additional I/O drawers; 3 system PCI slots; supports up to 20 LPARs.

P5-520: 1 or 2 way processor; 1.5 or 1.65 GHz; 1-32GB memory; eight (8) internal disks up to 2.4 TB; supports 4 I/O drawers; 6 system PCI slots; supports up to 20 LPARs.

P5-550: 1, 2 or 4 way processor; 1.5 or 1.65 GHz; 512MB-64GB memory; eight (8) internal disks up to 2.4 TB; supports 8 I/O drawers; 5 system PCI slots; supports up to 40 LPARs.

P5-570: 2, 4, 8, 12 or 16 way processor; 1.65 or 1.9 GHz; up to 512GB memory; six (6) internal disks up to 7.2 TB; supports 20 I/O drawers; 6 system PCI slots; supports up to 160 LPARs.


P5-575: 8 way single-core processor; 1.9 GHz; 2GB-256GB memory; two (2) internal disks up to 600 GB; one optional I/O drawer (20 PCI slots, 16 disks / 1.1 TB); 4 system PCI slots; supports up to 80 LPARs.

P5-590: 8, 16, 24 or 32 way processor; 1.65 GHz; 8GB-15GB memory; up to 128 disks / 9.3 TB; supports up to 8 I/O drawers; 160 system PCI slots; supports up to 254 LPARs.

P5-595: 16-64 way processor; 1.65-1.9 GHz; up to 2TB memory; supports up to 12 I/O drawers; supports up to 254 LPARs.

NIM Master setup 02

Hi,


Please follow the screen shots of NIM master or server setup.

First of all you have to update your system before configuring NIM Master.

# smit install_latest

Insert the CD into the CD-ROM drive >>> SOFTWARE to install >> Network Install Manager - Master Tools and Network Install Manager - SPOT

After the system is updated, you can configure the NIM Master.

# smit nim_config_env OR smit nimconfig

Intialize the NIM Master:
*Primary network interface for NIM Master :                       [en0]
*Input device for installation image:                                       [cd0]
*LPP_SOURCE Name:                                                      [lpp_source6100]
*LPP_SOURCE Directory                                                  [/export/lpp_source]
     Create new filesystem for LPP_SOURCE?                     [yes]
     Filesystem SIZE (MB)                                                    [650]
     VOLUME GROUP for new filesystem                             [rootvg]
*SPOT Name                                                                     [spot6100]
*SPOT Directory                                                                [/export/spot]
      Create new filesystem for SPOT?                                  [yes]
      Filesystem SIZE (MB)                                                   [350]
     VOLUME GROUP for new filesystem                             [rootvg]
....
....
Define NIM System Bundles?                                             [yes]
Define NIM bosinst_data?                                                  [yes]
Prepend level to resource name                                           [no]
*Remove newly added NIM definitions and filesystems
if any part of this operation fails?                                         [yes]

NIM - RTE Installation

RTE Installation


A Run Time Environment (RTE) install is an install of a NIM client system using the manual initiation method. It can be used to install AIX on a system where AIX is not currently installed. The default is to install the contents of the BOS.autoi bundle; the user can specify additional bundles or filesets to be installed.

Required Steps:

1)      Define a client on NIM Master.
2)      Prepare NIM Master to supply RTE install resources to a client system
3)      Initiate a manual install from NIM client

Requires a minimum set of resources:
lpp_source
spot

1)      Define a client on NIM Master.


 Make the entry of the client IP and hostname on the /etc/hosts file of the NIM server
            # cat /etc/hosts

10.135.0.173        PACS_CDR_KOLKATA_HA
(IP Address)   (Machine Name)

      Define the machine.
# smit nim

PACS_CDR_KOLKATA_HA:
class          = machines
type           = standalone
connect        = shell
platform       = chrp
netboot_kernel = mp
if1            = PACS_NIM PACS_CDR_KOLKATA_HA 0
cable_type1    = tp
Cstate         = BOS installation has been enabled
prev_state     = ready for a NIM operation
Mstate         = not running
boot           = boot
mksysb         = PACS_CDR_KOLKATA_HA_mksysb
nim_script     = nim_script
spot           = PACS_CDR_KOLKATA_HA_SPOT
control        = master

Client Machine is ready for installation on the NIM Server






2. Prepare NIM Master to supply RTE install resources to a client system



#smit nim_bosinst
Select a TARGET for the operation -->

PACS_CDR_KOLKATA_HA machines       standalone

Select the installation TYPE -->
rte - Install from installation images
mksysb - Install from a mksysb
spot - Install a copy of a SPOT resource

Select rte - Install from installation images

The lpp_source to use for the installation

PACS_CDR_KOLKATA_HA_LPPSOURCE     resources       lpp_source

The SPOT to use for the installation

PACS_CDR_KOLKATA_HA_SPOT     resources       spot

Select the MKSYSB to use for the installation

PACS_CDR_KOLKATA_HA_mksysb     resources       mksysb

# lsnim -l PACS_CDR_KOLKATA_HA
PACS_CDR_KOLKATA_HA:
class          = machines
type           = standalone
connect        = shell
platform       = chrp
netboot_kernel = mp
if1            = PACS_NIM PACS_CDR_KOLKATA_HA 0
cable_type1    = tp
Cstate         = BOS installation has been enabled
prev_state     = ready for a NIM operation
Mstate         = not running
boot           = boot
mksysb         = PACS_CDR_KOLKATA_HA_mksysb
nim_script     = nim_script
spot           = PACS_CDR_KOLKATA_HA_SPOT
control        = master

# cat /etc/bootptab | grep -i tftpboot
           PACS_CDR_KOLKATA_HA:bf=/tftpboot/PACS_CDR_KOLKATA_HA:ip=10.135.0.173:ht=ethernet:sa=10.135.0.178:sm=255.255.255.0:
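If the network boot fails later, it is worth confirming that the client's boot image was created under /tftpboot on the NIM master; a quick check:

# ls -l /tftpboot | grep PACS_CDR_KOLKATA_HA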

Client Machine is ready for installation on the NIM Server


3. Initiate a manual install from NIM client 

Starting the Installation on client:

            Take the console of the machine through HMC

            Start the client LPAR on SMS mode
            Select --- 2.   Setup Remote IPL (Initial Program Load)

  

Select --- NIC Adapters  ( e.g  2.  Port 1 - IBM 4 PORT PCIe 10/10  U78C0.001.DBJP321-P2-C1-T1  e41f13fafcf1 )

Select Internet Protocol Version.        1.   IPv4 - Address Format 123.231.111.222 

Select Network Service.   1.   BOOTP


Set Network parameters IP Parameters


Ping Success - means the NIM server is accessible from the client, so the cabling and LAN settings are correct.

If the ping succeeds, start the machine from the network boot:
           
 Select Boot Options



Multiboot -- 1.   Select Install/Boot Device


Select Device Type -->  6.   Network


Select Network Service. -->   1.   BOOTP


Select Device --> DBJP321-P2-C1-T1

Select Task -->  2.   Normal Mode Boot

BOOTING----------->>>>>>>>>>>>


You can change the disk selection from option 2 above --    2 Change/Show Installation Settings and Install


    1 Disk(s) where you want to install ...... hdisk1        - Note: one disk if the mksysb image was taken from an unmirrored rootvg; otherwise select 2 disks


Select the disk
            >>>  1  hdisk0   08-00-00        286102   none            Yes    No
            >>>  2  hdisk1   09-00-00        286102   none            Yes    No

Select -->  >>> 0 Install with the settings listed above.




Installation Running, check the same on Server
bash-3.2# lsnim -l PACS_CDR_KOLKATA_HA
PACS_CDR_KOLKATA_HA:
   class          = machines
   type           = standalone
   connect        = shell
   platform       = chrp
   netboot_kernel = mp
   if1            = PACS_NIM PACS_CDR_KOLKATA_HA 0
   cable_type1    = tp
   Cstate         = Base Operating System installation is being performed
   prev_state     = BOS installation has been enabled
   Mstate         = in the process of booting
   info           = BOS install 16% complete : 12% of mksysb data restored.
   boot           = boot
   image_data     = PACS_CDR_KOLKATA_HA_image
   mksysb         = PACS_CDR_KOLKATA_HA_mksysb
   nim_script     = nim_script
   spot           = PACS_CDR_KOLKATA_HA_SPOT
   cpuid          = 00F6F6B04C00
   control        = master
   Cstate_result  = success
bash-3.2#