Q: - How to detect CPU architecture/bitmode (32-bit or 64-bit) for Linux?
# cat /proc/cpuinfo | grep flags
Among the flags you will find one of "rm (real mode)", "tm (transparent mode)" or "lm (long mode)":
1. rm means it is a 16-bit processor
2. tm means it is a 32-bit processor
3. lm means it is a 64-bit processor
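In practice, a few standard commands give the same answer more directly, by checking for the lm flag and the machine/userspace word size:
# grep -o -w lm /proc/cpuinfo | sort -u     # prints "lm" if the CPU supports 64-bit long mode
# uname -m                                  # x86_64 means a 64-bit kernel, i686/i386 means 32-bit
# getconf LONG_BIT                          # prints 32 or 64 for the running system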
Q: - What is the difference between SSH and Telnet?
The primary difference between SSH and Telnet is security: with SSH, data transferred between the systems is encrypted, so it is difficult for anyone sniffing the network to understand what is going on. With Telnet, data is transferred between the systems in plain text.
SSH supports public-key authentication over an encrypted channel, while Telnet sends the username and password in plain text.
Due to the security measures needed for SSH to be used on public networks, each packet contains less payload data to make room for the data of the security mechanisms, so transmitting the same amount of data takes up more bandwidth. This is called overhead: SSH adds a bit more overhead to the bandwidth compared to Telnet.
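As a simple illustration (the host name and user below are placeholders), the two are invoked similarly but only one protects the session:
# telnet server.example.com          # port 23; the login and everything typed cross the wire in clear text
# ssh admin@server.example.com       # port 22; the whole session, including authentication, is encrypted
# tcpdump -A -i eth0 port 23         # capturing a Telnet session shows usernames and passwords in readable text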
Q: - What is the difference between AT and CRON?
The cron command is used to schedule a task that runs repeatedly, for example daily at the same time, while the at command is used to schedule a task that runs only once, at a specified time.
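For example (the script path is only a placeholder), a recurring job versus a one-shot job:
# crontab -e                                             # then add the line below to run the backup every day at 02:30
30 2 * * * /usr/local/bin/backup.sh
# echo "/usr/local/bin/backup.sh" | at 02:30 tomorrow    # at: run the same script once, tomorrow at 02:30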
Q: - What is network bonding in Linux and what are the steps to configure it?
Network interface card (NIC) bonding (also referred to as NIC teaming) is the bonding together of two or more physical NICs so that they appear as one logical device. This allows for improvement in network performance by increasing the link speed beyond the limits of one single NIC and increasing the redundancy for higher availability. For example, you can use two 1-gigabit NICs bonded together to establish a 2-gigabit connection to a central file server.
When bonded together, two or more physical NICs can be assigned one IP address, and they present the same MAC address. If one of the NICs fails, the IP address remains accessible because it is bound to the bonded logical interface rather than to a single physical NIC.
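Before configuring anything, it is worth confirming that the kernel provides the bonding driver (both commands are standard utilities):
# modinfo bonding | head -3        # prints module details if the driver is available
# lsmod | grep bonding             # shows whether the module is already loaded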
Steps to configure:
Step #1: Create a bond0 configuration file
Red Hat Linux stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create the bond0 config file:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines to it:
DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Replace the above IP address with your actual IP address. Save the file and exit to the shell prompt.
Step #2: Modify the eth0 and eth1 config files
Open both configuration files using the vi text editor and make sure the file reads as follows for the eth0 interface:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append directives as follows:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Open the eth1 configuration file using the vi text editor:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Save the file and exit to the shell prompt.
Step #3: Load the bonding driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
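Note that mode=balance-alb (adaptive load balancing) is only one of the available bonding modes, and miimon=100 tells the driver to check the link state every 100 ms. If you only need fault tolerance rather than load balancing, a common alternative is:
options bond0 mode=active-backup miimon=100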
Step #4: Test the configuration
First, load the bonding module:
# modprobe bonding
Restart networking service in order to bring up bond0 interface:
# service network restart
Verify everything is working:
# less /proc/net/bonding/bond0
Output:
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:6
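Finally, you can confirm that bond0 itself carries the configured IP address and that the slave NICs are attached to it (both forms are standard; the iproute2 commands are the newer equivalents):
# ifconfig bond0              # should show the 192.168.1.20 address from ifcfg-bond0
# ip addr show bond0          # iproute2 equivalent
# ip link | grep bond0        # the slave NICs are listed with "master bond0"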