Tanti Technology

Bangalore, Karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments. I have 4.5+ years of experience in AIX system administration. This site aims to help system administrators in their day-to-day activities; your comments on posts are welcome. This blog is all about the IBM AIX UNIX flavour and is intended for system admins who use AIX in their work life, as well as for newbies who want to get certified in AIX administration. It will be updated frequently to help system admins and other new learners. DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any type of data loss or damage caused by trying any of the commands/methods mentioned in this blog. You use the commands/methods/scripts at your own responsibility. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.

Thursday, 26 December 2019

Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots



Red Hat released its much-awaited OS, RHEL 8, on 7th May 2019. RHEL 8 is based on the Fedora 28 distribution and Linux kernel version 4.18. One of the key features in RHEL 8 is the introduction of “Application Streams”, which allows developer tools, frameworks and languages to be updated frequently without impacting the core resources of the base OS. In other words, Application Streams segregate user-space packages from the OS kernel space.
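
For example, with Application Streams you can list and pick a specific version of a runtime such as PHP through yum modules (a quick illustration; the streams actually available depend on your subscription and enabled repositories):

# yum module list php            <-- list the available php streams
# yum module install php:7.2     <-- install the php 7.2 stream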

Apart from this, there are many other new features in RHEL 8, such as:
XFS file system supports copy-on-write of file extents
Introduction of Stratis (local storage management), Buildah, Podman, and Skopeo
Yum utility is now based on DNF
Chrony replaces ntpd for time synchronization
Cockpit is the default web console tool for server management
OpenSSL 1.1.1 & TLS 1.3 support
PHP 7.2
iptables replaced by nftables (see the quick look below)
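
As a quick look at the nftables side once the system is installed (output depends on your firewall configuration):

# nft list ruleset        <-- show the complete nftables ruleset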

Minimum System Requirements for RHEL 8:
4 GB RAM
20 GB unallocated disk space
64-bit x86 or ARM System

Note: RHEL 8 supports the following architectures:

AMD or Intel x86 64-bit
64-bit ARM
IBM Power Systems, Little Endian & IBM Z

In this article we will demonstrate how to install RHEL 8 step by step with screenshots.

RHEL 8 Installation Steps with Screenshots
Step:1) Download RHEL 8.0 ISO file
Download the RHEL 8 ISO file from its official website:
https://access.redhat.com/downloads/
I am assuming you have an active subscription; if not, register yourself for an evaluation subscription and then download the ISO file.
Step:2) Create Installation bootable media (USB or DVD)
Once you have downloaded the RHEL 8 ISO file, make it bootable by burning it to either a USB drive or a DVD. Reboot the target system where you want to install RHEL 8, go to its BIOS settings, and set the boot medium to USB or DVD.
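
On an existing Linux machine, one common way to write the ISO to a USB stick is dd. This is a sketch: the device name /dev/sdb is an assumption, so double-check it with lsblk first, because the target disk will be overwritten:

# dd if=rhel-8.0-x86_64-dvd.iso of=/dev/sdb bs=4M status=progress conv=fsync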

Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option
When the system boots up with the installation media (USB or DVD), we will get the following screen; choose “Install Red Hat Enterprise Linux 8.0” and hit Enter.

Step:4) Choose your preferred language for RHEL 8 installation
In this step, you need to choose the language that you want to use for the RHEL 8 installation, so make a selection that suits your setup.

Click on Continue
Step:5) Preparing RHEL 8 Installation
In this step we will decide the installation destination for RHEL 8; apart from this, we can configure the following:
Time Zone
Kdump (enabled/disabled)
Software Selection (Packages)
Networking and Hostname
Security Policies & System purpose

By default, the installer will automatically pick a time zone and will enable kdump. If you wish to change the time zone, click on the “Time & Date” option, set your preferred time zone, and then click on Done.

To configure the IP address and hostname, click on the “Network & Hostname” option on the installation summary screen.
If your system is connected to a switch or modem, it will try to get an IP from the DHCP server; otherwise we can configure the IP manually.
Mention the hostname that you want to set and then click on “Apply”. Once you are done with the IP address and hostname configuration, click on “Done”.

To define the installation disk and partitioning scheme for RHEL 8, click on the “Installation Destination” option.

Click on Done
As we can see, I have around 60 GB of free disk space on the sda drive. I will be creating the following customized LVM-based partitions on this disk:
/boot   =  2 GB  (xfs file system)
/       =  20 GB (xfs file system)
/var    =  10 GB (xfs file system)
/home   =  15 GB (xfs file system)
/tmp    =  5 GB  (xfs file system)
swap    =  2 GB  (swap space)
Note: If you don’t want to create partitions manually, then select the “Automatic” option from the Storage Configuration tab.
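
Once the installation is finished, this layout can be verified from a shell; a minimal check (device and volume group names will reflect your own system):

# lsblk                 <-- disks, partitions and LVM volumes
# df -hT                <-- mounted file systems, their types and sizes
# swapon --show         <-- confirm the swap volume is active
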
Let’s create our first partition, /boot, of size 2 GB. Select “LVM” as the partitioning scheme and then click on the + (plus) symbol.

Click on “Add mount point”


To create the next partition, / of size 20 GB, click on the + symbol and specify the details as shown below.

Click on “Add mount point”

As we can see, the installer has created the volume group “rhel_rhel8”. If you want to change this name, click on the Modify option, specify the desired name, and then click on Save.

From now onward, all partitions will be part of this volume group (renamed here to “VolGrp”).

Similarly, create the next three partitions /home, /var and /tmp of sizes 15 GB, 10 GB and 5 GB respectively.
/home partition:



/var partition:

/tmp partition:



Now, finally, create the last partition, swap, of size 2 GB.

Click on “Add mount point”

Once you are done creating partitions, click on Done on the next screen; an example is shown below.

In the next window, choose “Accept Changes”

Step:6) Select Software Packages and Choose Security Policy and System purpose

After accepting the changes in the above step, we will be redirected to the installation summary window.
By default, the installer will select “Server with GUI” as the software selection; if you want to change it, click on the “Software Selection” option and choose your preferred “Base Environment”.

Click on Done
If you want to set a security policy during the installation, choose the required profile from the Security Policy option; otherwise you can leave it as it is.
From the “System Purpose” option, specify the Role, Red Hat Service Level Agreement, and Usage; you can also leave this option as it is.

Click on Done to proceed further.

Step:7) Choose “Begin Installation” option to start installation
From the installation summary window, click on the “Begin Installation” option to start the installation.

As we can see below, the RHEL 8 installation has started and is in progress.

Set the root password,

Specify the local user details such as the full name, user name, and password.

Once the installation is completed, the installer will prompt us to reboot the system.

Click on “Reboot” to restart your system, and don’t forget to change the boot medium in the BIOS settings so that the system boots from the hard disk.
Step:8) Initial Setup after installation
When the system reboots for the first time after a successful installation, we will get the below window, where we need to accept the license agreement (EULA).

Click on Done,
In the next screen, click on “Finish Configuration”.

Step:9) Login Screen of RHEL 8 Server after Installation
As we have installed RHEL 8 Server with GUI, we will get the below login screen; use the same user name and password that we created during the installation.

After logging in we will get a couple of welcome screens; follow the on-screen instructions and finally we will get the following screen.

Click on “Start Using Red Hat Enterprise Linux”
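
From a terminal you can also confirm the installed release and kernel version:

# cat /etc/redhat-release
# uname -r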

This confirms that we have successfully installed RHEL 8; that’s all from this article. We will be writing more articles on RHEL 8 in the coming future; till then, please do share your feedback and comments on this article.

Saturday, 19 October 2019

RMC (Resource Monitoring and Control):



RMC is a distributed framework and architecture that allows the HMC to communicate with a managed logical partition. RMC daemons must be running on the AIX partition in order to be able to do DLPAR operations from the HMC.

For example "Dynamic LPAR Resource Manager" is an RMC daemon that runs inside the AIX (and VIO server). The HMC uses this capability to remotely execute partition specific commands.

The daemons in the LPARs and the daemons on the HMC must be able to communicate through an external network, not through the service processor - that is, an external network that both the partition and the HMC have access to.

For example, if the HMC has a connection to a 9.x.x.x network and I put my AIX partition on that same 9.x.x.x network, then as long as there is network connectivity (the HMC is allowed to communicate with that partition over that network) and the RMC daemon is running on the partition, DLPAR operations are available.

In order for RMC to work, port 657 udp/tcp must be open in both directions between the HMC public interface and the LPAR.
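
A quick way to check this from the LPAR side (a sketch; substitute your HMC's IP address for <hmc_ip>):

# netstat -an | grep 657       <-- look for sessions on port 657
# telnet <hmc_ip> 657          <-- tests that the TCP port is reachable (Ctrl-] then "quit" to exit)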

The RMC daemons are part of the Reliable, Scalable Cluster Technology (RSCT) and are controlled by the System Resource Controller (SRC). These daemons run in all LPARs and communicate with equivalent RMC daemons running on the HMC. The daemons start automatically when the operating system starts and synchronize with the HMC RMC daemons.
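
Since the daemons are controlled by the SRC, their status can be checked like any other SRC subsystem:

# lssrc -s ctrmc               <-- status of the RMC subsystem
# lssrc -a | grep rsct         <-- status of all RSCT subsystems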

Note: Apart from rebooting, there is no way to stop and start the RMC daemons on the HMC!

----------------------------------------

HMC and LPAR authentication (RSCT authentication)
(RSCT authentication is used to ensure the HMC is communicating with the correct LPAR.)

Authentication is the process of ensuring that another party is who it claims to be.
Authorization is the process by which a cluster software component grants or denies resources based on certain criteria.
The RSCT component that implements authorization is RMC. It uses access control list (ACL) files to control user access to resources.


The RSCT authorization process in detail:
1. On the HMC: DMSRM pushes down a secret key and the HMC IP address to the NVRAM of the managed system where the AIX LPAR exists.

2. On the AIX LPAR: CSMAgentRM reads the key and the HMC IP address from NVRAM, and then authenticates the HMC. This process is repeated every five minutes on an LPAR to detect new HMCs.

3. On the AIX LPAR: After authenticating the HMC, CSMAgentRM contacts the DMSRM on the HMC to create a ManagedNode resource; it then creates a ManagementServer resource on AIX.

4. On the AIX LPAR: After the creation of these resources on the HMC and AIX, CSMAgentRM grants the HMC permission to access the necessary resources on the LPAR and changes its ManagedNode status to 1 on the HMC.

5. On the HMC: After the ManagedNode status is changed to 1, a session is established with the LPAR to query operating system information and DLPAR capabilities; the HMC then waits for DLPAR commands from users.

----------------------------------------

RMC Domain Status

When partitions have active RMC connections, they become managed nodes in a Management Domain. The HMC is then the Management Control Point (MCP) of that Management Domain. You can then use the rmcdomainstatus command to check the status of those managed nodes (i.e. your partitions).

As root on the HMC or on the AIX LPAR you can execute the rmcdomainstatus command as follows:

# /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc

From HMC: You should get a list of all the partitions that the HMC server can reach on the public network on port 657.

Management Domain Status: Managed Nodes
  O a  0xc8bc2c9647c1cef3  0003  9.2.5.241
  I a  0x96586cb4b5fc641c  0002  9.2.5.33


From LPAR: You should get a list of all the Management Control Points

Management Domain Status: Management Control Points
   I A  0xef889c809d9617c7 0001  9.57.24.139


Column 1:
-I: Indicates that the partition is "Up" as determined by the RMC heartbeat mechanism (i.e. an active RMC connection exists).
-O: Indicates that the RMC connection is "Down", as determined by the RMC heartbeat mechanism.

Column 2:
-A: Indicates that there are no messages queued to the specified node
-a: Same as A, but the specified node is executing a version of the RMC daemon that is at a lower code level than the local RMC daemon.

more info: https://www-304.ibm.com/support/docview.wss?uid=isg3T1011508

----------------------------------------

If rmcdomainstatus shows "i" in the first column:

Indicates that the partition is Pending Up: communication has been established, but the initial handshake between the two RMC daemons has not been completed (message authentication is most likely failing).
Authentication problems occur when the partition and HMC identities do not match each other's trusted host lists:

# /usr/sbin/rsct/bin/ctsvhbal        <-- run this command on both the HMC and the logical partition; it lists the current identities
# /usr/sbin/rsct/bin/ctsthl -l       <-- lists the trusted host list on the partition

On the HMC, there is an entry for the partition. On the partition, there is an entry for the HMC. The HOST_IDENTITY value must match one of the identities listed in the respective ctsvhbal command output.

----------------------------------------

Things to check at the HMC:

- checking the status of the managed nodes: /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc  (you must be root on the HMC)

- checking connection between HMC and LPAR:
hscroot@hmc10:~> lspartition -dlpar
<#0> Partition:<2, 10.10.50.18, aix10.domain.com>
       Active:<1>, OS:<...>, DCaps:<0x4f9f>, CmdCaps:<0x1b, 0x1b>, PinnedMem:<1452>
<#1> Partition:<4, 10.10.50.71, aix20.domain.com>
       Active:<0>, OS:<...>, DCaps:<0x0>, CmdCaps:<0x1b, 0x1b>, PinnedMem:<656>

For correct DLPAR function:
- the partition must return with the correct IP of the LPAR,
- the active value (Active:...) must be higher than zero,
- the DCaps value (DCaps:...) must be higher than 0x0

(The first entry shows a DLPAR-capable LPAR; the second entry is a non-working LPAR.)

- another way to check RMC connection: lssyscfg -r lpar -F lpar_id,name,state,rmc_state,rmc_ipaddr -m p750
(It should list "active" for the LPARs with active RMC connection.)



----------------------------------------

Things to check at the LPAR:

- checking the status of the managed nodes: /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc

- Checking RMC status:
# lssrc -a | grep rsct
 ctrmc            rsct             8847376      active          <-- it is the RMC subsystem
 IBM.DRM          rsct_rm          6684802      active          <-- it is for executing DLPAR commands on the partition
 IBM.DMSRM        rsct_rm          7929940      active          <-- it is for tracking statuses of partitions
 IBM.ServiceRM    rsct_rm          10223780     active
 IBM.CSMAgentRM   rsct_rm          4915254      active          <-- it is for handshaking between the HMC and partition
 ctcas            rsct                          inoperative     <-- it is for security verification
 IBM.ERRM         rsct_rm                       inoperative
 IBM.AuditRM      rsct_rm                       inoperative
 IBM.LPRM         rsct_rm                       inoperative
 IBM.HostRM       rsct_rm                       inoperative     <-- it is for obtaining OS information

You will see some subsystems active and some inoperative (the key one for DLPAR is IBM.DRM).

- Stopping and starting RMC without erasing configuration:

# /usr/sbin/rsct/bin/rmcctrl -z    <-- it stops the daemons
# /usr/sbin/rsct/bin/rmcctrl -A    <-- it adds an entry to /etc/inittab and starts the daemons
# /usr/sbin/rsct/bin/rmcctrl -p    <-- it enables the daemons for remote client connections

(This is the correct method to stop and start RMC without erasing the configuration.)
Do not use stopsrc and startsrc for these daemons; use the rmcctrl commands instead!

- recfgct: deletes the RMC database, does a discovery, and recreates the RMC configuration
# /usr/sbin/rsct/install/bin/recfgct
(Wait several minutes)
# lssrc -a | grep rsct

(If you see IBM.DRM active, then you have probably resolved the issue)

- lsrsrc "IBM.ManagementServer"    <-- it shows the HMCs connected via RMC

Sample LVM Procedures


                        Filesystem Procedures

Procedure to create a filesystem using JFS:
·      See the "Logical Volume Procedures" section below for creating a logical volume and a filesystem using JFS.


Procedure to extend the size of a filesystem using JFS:
1.    "df" to see the filesystem, its current size, % utilization and the name of its logical volume
2.    "lslv <lvname>" to show information about the logical volume, including its volume group name
3.    "lsvg <vgname>" to show information about the volume group, including the number of free PPs and the PP size
4.    If there are not enough free PPs, then see below for the procedure to add a disk to a volume group
5.    "chfs -a size=+4194304 <filesystem>" to grow the filesystem by 2 GB (4194304 = 2*1024*1024*1024/512; the size is given in 512-byte blocks)
·      NOTE:  Growing the file system will automatically grow the logical volume
6.    "df" shows the file system's current size is 2 GB more than before
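
As a worked example, with the hypothetical names /data, datalv and datavg (substitute your own):
1.    df -m /data                     --> current size of /data and the name of its logical volume (datalv)
2.    lslv datalv                     --> shows that datalv belongs to volume group datavg
3.    lsvg datavg                     --> check FREE PPs and PP SIZE
4.    chfs -a size=+4194304 /data     --> grow /data by 2 GB
5.    df -m /data                     --> verify the new size
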
Troubleshooting extending the size of a filesystem using JFS:
·      Error Message:  0516-787 extendlv: Maximum allocation for logical volume is 512.
·      The maximum number of LPs for the logical volume has been exceeded - the allocation must be increased
·      Calculate the number of LPs needed = LV size in MB / LP size in MB
·      "chlv -x <new_max_LPs> <lvname>" to raise the maximum
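·      Worked example (hypothetical datalv): to let its filesystem grow to 40 GB in a volume group with a 64 MB LP size, 40960 MB / 64 MB = 640 LPs, so raise the maximum to at least 640 with "chlv -x 640 datalv"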
Procedure to remove a file system
1.    Unmount the filesystem
2.    Remove the logical volume: "rmlv <lvname>"
3.    Remove the filesystem information from /etc/filesystems
Procedure to reduce the size of a file system - /usr/shareold is 8 MB and needs to be reduced to 4 MB
1.    Create the new file system
1.    crfs -v jfs -m /usr/sharenew -g rootvg -a size=8192
2.    this makes a jfs logical volume of 4 MB in the root volume group (the size is given in 512-byte blocks)
2.    Mount the volume
1.    mount /usr/sharenew
3.    Move the files from the old file system (/usr/shareold)
1.    cd /usr/shareold
2.    tar cf - . | (cd /usr/sharenew; tar xvf -)
3.    cd
4.    Unmount the file systems
1.    umount /usr/sharenew
2.    umount /usr/shareold
5.    Remove the old file system and its logical volume
1.    rmfs /usr/shareold
6.    Rename the new file system to the old mount point
1.    chfs -m /usr/shareold /usr/sharenew
7.    Mount the new filesystem
1.    mount /usr/shareold
8.    Delete the temporary mount point
1.    rmdir /usr/sharenew

                        Logical Volume Procedures

Procedure to create a logical volume and filesystem in a volume group using JFS:
1.    "lsvg <vgname>" to determine the size of the PP
2.    "lslv <lvname>" on similar logical volumes to determine if mirroring is in effect
3.    Calculate the number of PPs needed for the logical volume
1.    bc
2.    scale=2
3.    <LV size in MB> / <PP size in MB>
4.    quit
4.    mklv -y <lvname> <vgname> <# of LPs>  --> creates the logical volume
5.    crfs -v jfs -d <lvname> -m /<mountpoint> -A yes  --> makes the filesystem, creates the mount point and puts it in /etc/filesystems
6.    mount /<mountpoint>  --> mounts the new filesystem
7.    df /<mountpoint>  --> verifies the mount and the size of the new filesystem
8.    Check the ownership and permissions of the new mount point
1.    ls -ld /<mountpoint>
2.    chown <owner>:<group> /<mountpoint>
3.    chmod XXX /<mountpoint>
9.    If mirroring is in effect, then mirror this logical volume to another disk (original and 1 mirror):
1.    mklvcopy -s y <lvname> 2
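
Worked example with hypothetical names (volume group datavg with a 64 MB PP size; a 2 GB filesystem /data on logical volume datalv, so 2048 / 64 = 32 PPs):
1.    mklv -y datalv datavg 32
2.    crfs -v jfs -d datalv -m /data -A yes
3.    mount /data
4.    df /data
5.    chown dbadmin:dba /data  (hypothetical owner and group)
6.    chmod 755 /data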

Check to see if all of the logical volumes in a volume group are mirrored
·      lsvg -l <vgname>

Mirror a logical volume after the fact
·      mklvcopy -s y <lvname> 2

                        Volume Group Procedures

Procedure to create a volume group:
1.    lsdev -C -c disk  --> lists available disks (and the hdisk#) on the server
2.    mkvg -y <vgname> hdisk#  --> creates the volume group on the named hard disk
3.    varyonvg <vgname>  --> activates the volume group
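For example, with a hypothetical free disk hdisk2:
1.    lsdev -C -c disk
2.    mkvg -y datavg hdisk2
3.    varyonvg datavg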
Procedure to add a disk to a volume group (extend the volume group)
·      extendvg <vgname> <hdisk#>
·      Verify the disk has been successfully added to the vg
·      lsvg -p <vgname>

Procedure to mirror the rootvg:
1.    lspv  --> determine the hdisk#
2.    extendvg rootvg hdisk1  --> add the second disk (here hdisk1) to the volume group
3.    lspv  -->  verify that the hdisk has been successfully added to the volume group
4.    chvg -Q 'n' rootvg  -->  change the quorum so that the vg will stay active if one of the mirrors fails
5.    mirrorvg -S -c 2 rootvg  --> mirror all of the logical volumes in the volume group
6.    lsvg -l rootvg  --> verify successful mirroring (pps will appear "stale" until synchronization is complete).
7.    bosboot -a  -->  update the boot image information
8.    bootlist -m normal -o hdisk0 hdisk1  --> create a new bootlist
9.    bootlist -m normal -o  --> verify the bootlist is correct
Procedure to increase the number of LPs available
Assume we receive an error that the maximum number of LPs has been exceeded, and the maximum number of LPs defined was 1100:
1.    "lsvg <vgname>" to show the total PPs available in the volume group = 1250
2.    "lsvg -l <vgname>" to show the total PPs used by all logical volumes in that volume group (showed sys1log, the jfs log, using 2 PPs)
3.    "chlv -x 1248 <lvname>" to change the maximum number of LPs from 1100 to 1248 (1250 PPs in the volume group - 2 PPs used by the jfs log = 1248 available)

                        Physical Disk Procedures

Procedure to find disks/vpaths that are unallocated
·      lsvpcfg
·      This will show disks/vpaths and the volume group they are allocated to
·      lspv | grep None
·      This will show PVs and whether they are associated with a volume group
·      Note:  For vpaths, the hdisks will show as None, but they may be allocated to a vpath - you must grep each hdisk against the lsvpcfg output
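·      A quick sketch to cross-check each unallocated hdisk against the vpath configuration (assumes SDD's lsvpcfg is available; adjust to your environment):
for h in $(lspv | awk '$3 == "None" {print $1}'); do echo "== $h =="; lsvpcfg | grep -w $h; done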

Procedure to make a new LUN available to AIX
·      Allocate the new LUN on the SAN
·      Run "cfgmgr"
·      Verify the new vpath/hdisk by running "lsvpcfg"
·      There should be a new vpath and it should be available with no volume group - if not, rerun cfgmgr

Procedure to list the PVs in a volume group:
·      lsvg -p <vgname>