Tanti Technology

Bangalore, karnataka, India
Multi-platform UNIX systems consultant and administrator in mutualized and virtualized environments, with 4.5+ years of experience in AIX system administration. This site aims to help system administrators in their day-to-day activities. Your comments on posts are welcome. This blog is all about the IBM AIX UNIX flavour. It is written for system admins who use AIX in their work life, and it can also be used by newbies who want to get certified in AIX administration. The blog will be updated frequently to help system admins and other new learners.

DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any data loss or damage caused by trying any of the commands or methods mentioned in this blog. You use the commands, methods, and scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution or method worked for you.

Friday 5 March 2021

Kubernetes interview

 1. How is Kubernetes related to Docker? ↑

Docker provides the lifecycle management of containers, and a Docker image builds the runtime containers. But since these individual containers have to communicate, Kubernetes is used. So Docker builds the containers, and these containers communicate with each other via Kubernetes. In other words, containers running on multiple hosts can be linked and orchestrated automatically using Kubernetes.


2. What is Container Orchestration? ↑

Consider a scenario where you have 5-6 microservices for an application. These microservices are put in individual containers, but they won’t be able to communicate without container orchestration. Just as orchestration in music means all the instruments playing together in harmony, container orchestration means all the services in individual containers working together to fulfill the needs of a single application.


3. What are the features of Kubernetes? ↑

Automated Scheduling - Self-Healing Capabilities

Automated Rollouts and Rollbacks - Horizontal Scaling and Load Balancing

4. How does Kubernetes simplify containerized Deployment? ↑

A typical application has a cluster of containers running across multiple hosts, and all these containers need to talk to each other. To do this you need something that can load balance, scale, and monitor the containers. Since Kubernetes is cloud-agnostic and can run on any public or private provider, it is a natural choice for simplifying containerized deployment.


5. What is Google Container Engine? ↑

Google Kubernetes Engine (GKE, formerly Google Container Engine) is a managed platform for Docker containers and container clusters. This Kubernetes-based engine supports only those clusters which run within Google’s public cloud services.


6. What is Heapster? ↑

Heapster is a cluster-wide aggregator of monitoring and event data provided by the Kubelet running on each node. This container management tool is supported natively on a Kubernetes cluster and runs as a pod, just like any other pod in the cluster. It discovers all nodes in the cluster and queries usage information from them via the on-machine Kubernetes agent (the Kubelet). Note that Heapster has since been deprecated in favour of metrics-server.


7. What is Minikube? ↑

Minikube is a tool that makes it easy to run Kubernetes locally. This runs a single-node Kubernetes cluster inside a virtual machine.
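
For example, a local cluster can be started and inspected with a couple of commands (a sketch assuming Minikube and kubectl are already installed):

```shell
# Start a single-node Kubernetes cluster inside a local VM
minikube start

# Verify the node is up and Ready
kubectl get nodes

# Tear the cluster down when finished
minikube delete
```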


8. What is Kubectl? ↑

Kubectl is the command-line tool used to pass commands to the cluster. It provides the CLI to run commands against the Kubernetes API server, with various ways to create and manage Kubernetes components.


9. What is the syntax for the Kube-proxy command? ↑

The syntax to configure the proxy is: kube-proxy [flags]


10. What is the syntax for the Kubectl command? ↑

The syntax for kubectl is: kubectl [command] [TYPE] [NAME] [flags]


11. What is Kubelet? ↑

This is an agent service which runs on each node and enables the worker node to communicate with the master. Kubelet works from the description of containers provided to it in a PodSpec and makes sure that the containers described there are healthy and running.
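
A PodSpec is simply the spec section of a Pod manifest. A minimal sketch of what the kubelet acts on (the pod and image names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.19    # the kubelet ensures this container is running and healthy
      ports:
        - containerPort: 80
```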


12. What are the different components of Kubernetes Architecture? ↑

The Kubernetes Architecture has mainly 2 components – the master node and the worker node. The master and the worker nodes have many inbuilt components within them. The master node has the kube-controller-manager, kube-apiserver, kube-scheduler, etcd. Whereas the worker node has kubelet and kube-proxy running on each node.


13. What do you understand by Kube-proxy? ↑

Kube-proxy runs on each node and does simple TCP/UDP packet forwarding to backend network services. It is a network proxy which reflects the services configured in the Kubernetes API on each node. The Docker-link-compatible environment variables provide the cluster IPs and ports which are opened by the proxy.


14. What is the Kubernetes controller manager? ↑

Multiple controller processes run on the master node but are compiled together to run as a single process: the Kubernetes Controller Manager. The Controller Manager is a daemon that embeds the core control loops, handles namespace creation and garbage collection, and communicates with the API server to manage the endpoints.


15. What are the different types of controller manager running on the master node? ↑

Node Controller - Replication Controller

Service Account and Token Controller - Endpoints Controller

16. What is ETCD? ↑

Etcd, written in the Go programming language, is a distributed key-value store used for coordinating distributed work. Etcd stores the configuration data of the Kubernetes cluster, representing the state of the cluster at any given point in time.


17. What do you understand by load balancer in Kubernetes? ↑

A load balancer is one of the most common and standard ways of exposing a service. Two types are used depending on the working environment: the internal load balancer and the external load balancer. The internal load balancer automatically balances load and allocates the pods with the required configuration, whereas the external load balancer directs traffic from external sources to the backend pods.


18. Write a command to create and fetch the deployment. ↑

To create: kubectl create -f Deployment.yaml --record


To fetch: kubectl get deployments
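
A minimal sketch of what the Deployment.yaml referenced above might contain (the names, labels, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical name
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:                  # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
```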


19. What is Ingress network? ↑

An Ingress is a collection of rules that act as an entry point to the Kubernetes cluster. It allows inbound connections, which can be configured to expose services externally through reachable URLs, load-balance traffic, or offer name-based virtual hosting. Ingress is an API object that manages external access to the services in a cluster, usually over HTTP, and is one of the most powerful ways of exposing a service.
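
A minimal sketch of an Ingress rule routing a hostname to a backend service (the hostname and service name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.mycompany.com   # hypothetical externally reachable URL
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # inbound traffic is forwarded here
                port:
                  number: 80
```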


20. What do you understand by Cloud controller manager? ↑

The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes specific code, and managing the communication with the underlying cloud services. It might be split out into several different containers depending on which cloud platform you are running on and then it enables the cloud vendors and Kubernetes code to be developed without any inter-dependency. So, the cloud vendor develops their code and connects with the Kubernetes cloud-controller-manager while running the Kubernetes.


21. What are the different types of cloud controller manager? ↑

Node Controller - Route Controller

Volume Controller - Service Controller

22. What is a Headless Service? ↑

A headless service is similar to a ‘normal’ service but does not have a cluster IP. It enables you to reach the pods directly, without going through a proxy.
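
Headlessness is declared by setting clusterIP to None. A minimal sketch (names and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless      # hypothetical name
spec:
  clusterIP: None        # "None" is what makes the service headless
  selector:
    app: db
  ports:
    - port: 5432
```

DNS lookups for a headless service return the individual pod IPs rather than a single virtual IP.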


23. What are federated clusters? ↑

Multiple Kubernetes clusters can be managed as a single cluster with the help of federated clusters. So, you can create multiple Kubernetes clusters within a data center/cloud and use federation to control/manage them all at one place.


24. What is a pod? ↑

A pod is the most basic Kubernetes object. A pod consists of a group of containers running in your cluster. Most commonly, a pod runs a single primary container.


25. What is the difference between a daemonset, a deployment, and a replication controller? ↑

A daemonset ensures that all nodes you select are running exactly one copy of a pod. A deployment is a resource object in Kubernetes that provides declarative updates to applications. It manages the scheduling and lifecycle of pods. It provides several key features for managing pods, including pod health checks, rolling updates of pods, the ability to roll back, and the ability to easily scale pods horizontally.


The replication controller specifies how many exact copies of a pod should be running in a cluster. It differs from a deployment in that it does not offer pod health checks, and the rolling update process is not as robust.


26. What is a sidecar container, and what would you use it for? ↑

A sidecar container is a utility container that is used to extend support for a main container in a Pod. Sidecar containers can be paired with one or more main containers, and they enhance the functionality of those main containers. An example would be using a sidecar container specifically to process system logs or for monitoring.
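
A common pattern is a log-shipping sidecar that shares a volume with the main container. A minimal sketch (all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger          # hypothetical name
spec:
  containers:
    - name: web                  # main container
      image: nginx:1.19
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper          # sidecar: processes the main container's logs
      image: busybox:1.32
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs                 # shared emptyDir volume between the two containers
      emptyDir: {}
```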


27. How can you separate resources? ↑

You can separate resources by using namespaces. These can be created either using kubectl or applying a YAML file. After you have created the namespace you can then place resources, or create new resources, within that namespace. Some people think of namespaces in Kubernetes like a virtual cluster in your actual Kubernetes cluster.


28. What are K8s? ↑

K8s is a numeronym for Kubernetes: the 8 replaces the eight letters between the ‘K’ and the ‘s’.


29. What is a node in Kubernetes? ↑

A node is the smallest fundamental unit of computing hardware in Kubernetes. It represents a single machine in a cluster, which could be a physical machine in a data center or a virtual machine from a cloud provider. Any machine can substitute for any other machine in a Kubernetes cluster. The master controls the nodes on which the containers run.


30. What does the node status contain? ↑

The main components of a node status are Address, Condition, Capacity, and Info.


31. What process runs on Kubernetes Master Node? ↑

The kube-apiserver process runs on the master node; it exposes the Kubernetes API and is designed to scale horizontally by deploying more instances.


32. What is the job of the kube-scheduler? ↑

The kube-scheduler assigns nodes to newly created pods.


33. What is a cluster of containers in Kubernetes? ↑

A cluster of containers is a set of machine elements, the nodes. Clusters set up specific routes so that the containers running on the nodes can communicate with each other. In Kubernetes, the API server on the master provides the entry point to the cluster, while the container engine on each node hosts the containers themselves.


34. What is a Namespace in Kubernetes? ↑

Namespaces are used for dividing cluster resources between multiple users. They are meant for environments where there are many users spread across projects or teams and provide a scope of resources.


35. Name the initial namespaces from which Kubernetes starts? ↑

default

kube-system

kube-public

36. What are the different services within Kubernetes? ↑

ClusterIP service

NodePort service

ExternalName service

LoadBalancer service

37. What is ClusterIP? ↑

The ClusterIP is the default Kubernetes service that provides a service inside a cluster (with no external access) that other apps inside your cluster can access.


38. What is NodePort? ↑

The NodePort service is the most fundamental way to get external traffic directly to your service. It opens a specific port on all Nodes and forwards any traffic sent to this port to the service.
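
A minimal sketch of a NodePort service (names and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport     # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80           # service port inside the cluster
      targetPort: 80     # container port the traffic is forwarded to
      nodePort: 30080    # opened on every node (30000-32767 by default)
```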


39. What is Kube-proxy? ↑

Kube-proxy is an implementation of a load balancer and network proxy used to support service abstraction along with other networking operations. Kube-proxy is responsible for directing traffic to the right container based on the IP and port number of the incoming request.


40. How can you get a static IP for a Kubernetes load balancer? ↑

A static IP for a Kubernetes load balancer can be achieved by reserving a static IP address with your cloud provider and assigning it to the load balancer service (for example via the service’s loadBalancerIP field or a provider-specific annotation); otherwise a new IP may be assigned whenever the load balancer is re-created.


41. What is the difference between config map and secret? ↑

ConfigMaps store application configuration in plain text, whereas Secrets store sensitive data such as passwords in a base64-encoded (not encrypted by default) format. Both ConfigMaps and Secrets can be used as volumes and mounted inside a pod through a pod definition file.
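
A minimal sketch of the two objects side by side (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config               # hypothetical name
data:
  LOG_LEVEL: "info"              # stored as plain text
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret               # hypothetical name
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=      # base64 of "password", not encryption
```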


42. If a node is tainted, is there a way to still schedule the pods to that node? ↑

When a node is tainted, pods are not scheduled on it by default; however, if we still have to schedule a pod to a tainted node, we can apply a matching toleration to the pod spec.
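
For example, if a node were tainted with `kubectl taint nodes node1 key=value:NoSchedule`, a pod with a matching toleration could still land on it. A sketch (all names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"   # matches the taint, so scheduling is allowed
  containers:
    - name: app
      image: nginx:1.19
```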


43. Can we use many claims out of a persistent volume? ↑

The mapping between a PersistentVolume and a PersistentVolumeClaim is always one to one. Furthermore, when you delete the claim, the PersistentVolume remains if persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim.
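
A minimal sketch of a PV with the Retain policy and its one-to-one claim (names, sizes, and the hostPath are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data                           # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # volume is kept after the claim is deleted
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-data                          # binds one-to-one to a matching PV
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```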


44. What is Kube-proxy? ↑

Kube-proxy is an implementation of both a network proxy and a load balancer. It is used to support service abstraction along with other networking operations. It is responsible for directing traffic to the right container depending on the IP and the port number.


45. What are the tools that are used for container monitoring? ↑

Heapster

cAdvisor

Prometheus

InfluxDB

Grafana

46. What are the important components of node status? ↑

Condition

Capacity

Info

Address

47. What is minikube? ↑

Minikube is software that helps the user run Kubernetes locally. It runs a single-node cluster inside a VM on your computer. This tool is also used by programmers who are developing applications with Kubernetes.


48. Explain Prometheus in Kubernetes. ↑

Prometheus is an application used for monitoring and alerting. It reaches out to your systems, grabs real-time metrics, compresses them, and stores them properly in a time-series database.


49. List tools for container orchestration. ↑

Docker Swarm

Apache Mesos

Kubernetes

50. Define Stateful sets in Kubernetes. ↑

A StatefulSet is a workload API object used to manage stateful applications. It can also be used to manage deployments and to scale sets of pods. The state information and other data of stateful pods are stored on disk storage that is connected with the StatefulSet.


51. Explain Replica set. ↑

A ReplicaSet is used to keep a stable set of replica pods running. It enables us to specify the number of identical pods that should be available. It can be considered a replacement for the replication controller.


52. Why uses Kube-apiserver? ↑

Kube-apiserver is the API server of Kubernetes, used to configure and validate API objects, which include services, controllers, and so on. It provides the frontend to the cluster’s shared state through which all the other components interact.


53. Explain the types of Kubernetes pods. ↑

There are two types of pods in Kubernetes:


Single-container pod: it can be created with the kubectl run command.

Multi-container pod: it can be created using the kubectl create command with a manifest file.

54. What are the labels in Kubernetes? ↑

Labels are key/value pairs attached to objects such as pods, replication controllers, and services. Labels are generally added to an object at creation time, but they can also be modified by users at run time.


55. What is Sematext Docker Agent? ↑

Sematext Docker agent is a log collection agent with events and metrics. It runs as a small container in each Docker host. These agents gather metrics, events, and logs for all cluster nodes and containers.


56. Define OpenShift. ↑

OpenShift is a cloud application development and hosting platform developed by Red Hat, built on top of Kubernetes. It offers automation for management so that developers can focus on writing code.


57. What is ContainerCreating pod? ↑

ContainerCreating is a pod status indicating that the pod has been scheduled on a node but one or more of its containers has not started yet; a pod stuck in this state can’t start up properly.


58. What do you mean by Persistent Volume Claim? ↑

A Persistent Volume Claim is a request for storage that Kubernetes then provides to pods. The user is not expected to have knowledge of the underlying provisioning; the claim has to be created in the same namespace as the pod that uses it.


59. What will happen while adding new API to Kubernetes? ↑

If you add a fresh API to Kubernetes, it provides extra features, so adding a new API improves the functioning ability of Kubernetes. However, it also increases the cost and maintenance of the entire system, so there is a need to control the cost and complexity of the system. This can be achieved by defining some sets of rules for the new API.


60. How do you make changes in the API? ↑

Changes to the API server have to be made by the members of the Kubernetes team. They are responsible for adding a new API without affecting the functioning of the existing system.


61. What is kubectl drain? ↑

The kubectl drain command is used to drain a specific node during maintenance. Once this command is given, the node is cordoned and made unavailable for scheduling, and its pods are evicted; this avoids assigning new containers to the node. The node is made schedulable again once maintenance completes.
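
A typical drain-and-restore sequence might look like this (the node name is illustrative):

```shell
# Cordon the node and evict its pods (DaemonSet-managed pods are skipped)
kubectl drain node1 --ignore-daemonsets

# ... perform maintenance on node1 ...

# Make the node schedulable again
kubectl uncordon node1
```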


62. Define Autoscaling in Kubernetes. ↑

Autoscaling is one of the important features of Kubernetes. It can be defined as scaling the nodes according to the demand for service response. Through this feature, the cluster increases the number of nodes as the service response demand grows and decreases the nodes when the requirement falls. This feature is currently supported in Google Kubernetes Engine and Google Compute Engine, and AWS is expected to provide it as well.


63. What is the “Master”? ↑

The master refers to a central point of control that gives a unified view of the cluster. A single master node controls the various minions (worker nodes). The master components work together to accept user requests, determine the best way of scheduling workload containers, authenticate clients and nodes, adjust cluster-wide networking, and manage the scaling and health-checking responsibilities.


64. What is a Swarm in Docker? ↑

Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.


65. What is Kubernetes Log? ↑

Kubernetes container logs are very similar to Docker container logs. However, Kubernetes allows users to view the logs of deployed pods, i.e. running pods.


66. What are the types of Kubernetes Volume? ↑

The types of Kubernetes Volume are:


emptyDir

gcePersistentDisk

flocker

hostPath

nfs

iscsi

rbd

persistentVolumeClaim

downwardAPI

67. Explain PVC. ↑

PVC stands for Persistent Volume Claim. It is storage requested by a pod in Kubernetes. The user does not need to know the underlying provisioning. The claim should be created in the same namespace where the pod is created.


68. What is the Kubernetes Network Policy? ↑

A Network Policy defines how pods in the same namespace can communicate with each other and with other network endpoints.
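
A minimal sketch of a policy that only allows web pods to reach the database pods (labels, names, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db      # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db                # the policy applies to the db pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web       # only pods labelled app=web may connect
      ports:
        - protocol: TCP
          port: 5432
```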

135 Frequently Asked Docker Interview Questions and Answers


1. What is Docker?
Answer: Docker is an open-source container platform designed to facilitate deploying applications inside software containers. It relies on Linux kernel features like namespaces and cgroups to ensure resource isolation, and it packages an application along with its dependencies. Docker is licensed under Apache License 2.0, distributed in binary form, and written in the Go programming language. It supports several operating systems such as Linux, Windows, and macOS, various clouds, and different platforms such as ARM and x86-64.

2. Why use Docker?
Answer: 1. A user can quickly build, ship, and run applications.
2. A single operating-system kernel runs all the containers.
3. Docker containers are more lightweight than virtual machines.
4. A user can deploy Docker containers anywhere: on any physical or virtual machine, and even in the cloud.

3. List the most used commands of Docker.
Answer: 1. ps lists the running containers.
2. dockerd launches the Docker daemon.
3. build builds an image from a Dockerfile.
4. create creates a new container from an image (without starting it); commit creates a new image from a container’s changes.
5. pull downloads a specific image or a repository.
6. run runs a container.
7. logs displays the logs of a container.
8. rm removes one or more containers.
9. rmi removes one or more images.
10. stop stops one or more running containers.
11. kill kills one or more running containers.

4. Does the data get lost, if the Docker container exits?
Answer: No. Any data the application writes to disk is preserved in its container until we explicitly delete the container; the container’s file system persists even after the container halts.

5. What is Dockerfile and its use?
Answer: A Dockerfile is a text document used to assemble a Docker image. It consists of a list of Docker instructions and operating-system commands for building an image. These commands execute automatically, in sequence, in the Docker environment and create a new Docker image.
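
A minimal sketch of a Dockerfile (the base image, file names, and dependency are illustrative):

```dockerfile
# Start from a small base image
FROM python:3.9-slim

# Copy the application into the image
WORKDIR /app
COPY app.py .

# Install the application's dependencies
RUN pip install --no-cache-dir flask

# Command executed when a container starts from this image
CMD ["python", "app.py"]
```

Such a file would be built into an image with, for example, docker build -t myapp .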

6. How Docker is advantageous over Hypervisors?
Answer: Docker is advantageous in the following ways:
1. It is lightweight.
2. It is more efficient in terms of resources.
3. It uses far fewer resources and relies on the underlying host kernel rather than on a hypervisor of its own.

7. How to create a Docker container?
Answer: A Docker container is created by running a specific Docker image.
Use the following command to create and start a container:
docker run -t -i <image_name>
To verify whether the container has been created and whether it is running, use the following command, which lists all Docker containers on the host along with their status:
docker ps -a

8. Can json be used instead of yaml for compose file?
Answer: Yes, JSON can be used instead of YAML for the Compose file.

9. How do I run multiple copies of Compose file on the same host?
Answer: Compose uses the project name to create unique identifiers for all of a project’s containers and other resources. To run multiple copies of a project, set a custom project name using the -p (--project-name) command-line option or the COMPOSE_PROJECT_NAME environment variable.

10. What are the components of Docker Architecture?

Answer: Docker Client (docker) – enables a user to interact with Docker. It can communicate with more than one Docker daemon. It uses the Docker API and sends commands (e.g. docker run) to the Docker daemon (dockerd), which carries them out.

Docker Daemon (dockerd) – provides a complete environment to execute and run applications. It manages images, containers, and volumes and is responsible for all container-related actions. It can pull and create container images as the client requests. A daemon can communicate with other daemons to manage its services.

Docker Registry – a versioning, storage, and distribution system for Docker images. It allows Docker users to pull images locally and push new images to the registry. Docker Hub is the public registry instance used by default when installing Docker Engine.

Docker Image – a lightweight, standalone, executable package stored in a Docker registry and used for creating a container. It consists of everything required to run an application: code, a runtime, system libraries, system tools, environment variables, configuration files, and settings.

Docker Container – a standardized unit of software used for deploying a particular application or environment. It is launched by running an image. It packages up code and all of its dependencies, so an app runs quickly and reliably from one computing environment to another.

11. On what platforms does Docker run?
Answer: Docker will run on various platforms as follows:
Linux

Ubuntu 12.04, 13.04 et al
Fedora 19/20+
RHEL 6.5+
CentOS 6+
Gentoo
ArchLinux
openSUSE 12.3+
CRUX 3.0+
Microsoft Windows

Windows Server 2016
Windows 10
Cloud

Amazon EC2
Google Compute Engine
Microsoft Azure
Rackspace
 

12. What is the purpose of Docker_Host?
Answer: It contains the containers, images, and Docker daemon. It offers a complete environment to execute and run applications.

13. What is Docker Engine?
Answer: Docker Engine is a client-server application installed on the host machine. It allows you to develop, assemble, ship, and run applications anywhere. It is available for Linux and Windows Server. Its major components are as follows:
1. Server – a long-running program called the daemon process (dockerd).
2. REST API – specifies the interfaces that programs use to communicate with the daemon and instruct it what to do.
3. CLI (command-line interface) – uses the Docker REST API to manage and interact with the daemon through scripting or direct commands.

14. What is the Lifecycle of Docker container?
Answer: The Lifecycle of Docker Container with CLI is as following:
1. Create a Container.
2. Run the created Container.
3. Pause the processes running inside the Container.
4. Unpause the processes running inside the Container.
5. Start the Container, if exists in a stopped state.
6. Stop the Container as well as the running processes.
7. Restart the Container as well as the running processes.
8. Kill the running Container.
9. Destroy the Container, only if it exists in a stopped state.
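
The lifecycle above maps onto the CLI roughly as follows (the container name and image are illustrative):

```shell
docker create --name demo nginx:1.19   # 1. create the container
docker start demo                      # 2. run the created container
docker pause demo                      # 3. pause its processes
docker unpause demo                    # 4. unpause its processes
docker stop demo                       # 6. stop the container and its processes
docker restart demo                    # 7. restart the container
docker kill demo                       # 8. kill the running container
docker rm demo                         # 9. destroy it (must be stopped first)
```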

15. What is Kubernetes and Docker?
Answer: Docker is a platform and tool to build, distribute, and run Docker containers.
Kubernetes is a container orchestration system for Docker (and other) containers; it is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner.

16. When should I use Docker? When To Use Docker?
Answer: Use Docker in the following cases:
1. As a version control system for your entire app’s operating system.
2. When you want to distribute or collaborate on your app’s operating system with a team.
3. To run your code on your laptop in the same environment as you have on your server.

17.What is the advantage of Docker?
Answer: The most important advantage of a Docker-based architecture is standardization. Docker provides repeatable development, build, test, and production environments. By standardizing service infrastructure across the entire pipeline, every team member can work on a production-parity environment.

18. Why is Docker needed?
Answer: Docker is needed to ease the creation, deployment, and delivery of an application using containers. A Docker container carries just a minimal set of operating-system components (not a full operating system) plus the software required for the application to run, relying on the host Linux kernel itself.

19. Is Kubernetes better than Docker?
Answer: Kubernetes and Docker aren’t alternatives to each other. Quite the contrary: Kubernetes can run without Docker, and Docker can function without Kubernetes, yet each benefits greatly from the other. Docker is standalone software that can be installed on any computer to run containerized applications.

20. What is the difference between Docker and Openshift?
Answer: The main difference between Docker and OpenShift is that Docker as a project focuses on the runtime container only, whereas OpenShift as a system includes the runtime container along with a REST API, coordination, and web interfaces for deploying and managing individual containers. (An OpenShift cartridge is similar in concept to a Docker image.)

21. What are the disadvantages of Docker?
Answer: The disadvantages of Docker include:
1. Containers do not run at bare-metal speeds.
2. Although containers consume resources more efficiently than virtual machines, they are still subject to performance overhead from overlay networking, interfacing between containers and the host system, and so on.

22. What is the most popular use of Docker?
Answer: The most common technologies running in Docker are:
1. NGINX: Docker is mostly used for deploying and running HTTP servers.
2. Redis: this popular key-value store is a regular feature atop the list of container images.
3. Postgres: the open-source relational database is steadily increasing in popularity.

23. Should I use Docker for development?
Answer: Yes: with Docker the development environment is the same as the production environment, so you can deploy and it should “just work”. If something is hard to build or compile, build it inside Docker.

24. Is Docker a VM?
Answer: No. In a virtual machine, hardware resources are emulated for the guest OS by a hypervisor, which makes it possible to run many instances of one or more operating systems in parallel on a single machine or host. Docker containers, by contrast, are executed by the Docker engine rather than by a hypervisor.

25. Why is Docker so popular?
Answer: Docker is popular because it has revolutionized development. Docker, and the containers it makes possible, has transformed the software industry, and in five short years its popularity as a tool and platform has soared. The main reason is that containers create vast economies of scale.

26. How do I download Docker?
Answer: Docker for Mac can be downloaded and installed as follows:
1. Download Docker for Mac and run the installer.
2. Double-click Docker.dmg to open the installer, then drag Moby the whale to the Applications folder.
3. Double-click Docker.app in the Applications folder to start Docker.
4. Click the whale icon in the menu bar to access Preferences and other options.
5. Select About Docker to verify that you have the latest version.

27. Do we need Docker?
Answer: Docker shines compared to virtual machines when it comes to performance, because containers share the host kernel and do not emulate a full operating system. However, Docker does impose some performance costs; if you need the best possible performance out of your server, you may want to avoid Docker.

28. What can you do with Docker?
Answer: These are just some of the use cases where Docker’s enabling technology provides a consistent environment at low overhead:
1. Simplifying configuration.
2. Code pipeline management.
3. Developer productivity.
4. App isolation.
5. Server consolidation.
6. Debugging capabilities.
7. Multi-tenancy.

29. What are Docker images?
Answer: A Docker image is a file, comprised of multiple layers, used to execute code in a Docker container. An image is essentially built from the instructions for a complete and executable version of an application, and it relies on the host OS kernel.

30. Is Docker a Microservice?
Answer: Docker is not itself a microservice, but it greatly benefits microservices. Docker, as a containerization tool, is often compared to virtual machines, which were introduced to optimize the use of computing resources: you can run several VMs on a single server and deploy each application instance on a separate virtual machine. Containers take this further by letting each microservice run in its own lightweight container.

31. How much does Docker cost?
Answer: If you want to run Docker in production, however, the company encourages users to sign up for a subscription to an enterprise version of the platform. Docker offers three enterprise editions of its software, with pricing starting at $750 per node per year.

32. What is Docker in AWS?
Answer: Docker is a software platform that allows you to build, test, and deploy applications quickly. Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale.

33. Is Kubernetes free?
Answer: Pure open-source Kubernetes is free and can be downloaded from its repository on GitHub. Administrators can build and deploy a Kubernetes release to a local system or cluster, or to a system or cluster in a public cloud, such as AWS, Google Cloud Platform (GCP), or Microsoft Azure.

34. Difference between virtualization and containerization?
Answer: Containers provide an isolated environment for running an application. The entire user space is explicitly dedicated to the application, and any changes made inside the container are never reflected on the host or on other containers running on the same host. Containers are an abstraction of the application layer, and each container runs a different application.
In virtualization, hypervisors provide an entire virtual machine to the guest, including the kernel. Virtual machines are an abstraction of the hardware layer, and each VM behaves like a separate physical machine.

35. Explain how you can clone a Git repository via Jenkins?
Answer: To clone a Git repository via Jenkins, we first have to enter the e-mail and user name for the Jenkins system. To do that, switch into the job directory and execute the “git config” command.
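A minimal sketch of those commands, using placeholder identity values and a throwaway directory in place of a real Jenkins job workspace:

```shell
# Stand-in for the Jenkins job directory (placeholder path).
mkdir -p /tmp/jenkins-job && cd /tmp/jenkins-job
git init -q .
# Set the identity Jenkins should use (placeholder values).
git config user.name  "jenkins"
git config user.email "jenkins@example.com"
# The repository URL below is a placeholder:
# git clone https://example.com/your/repo.git
```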

36. What is a Docker Container and its advantages?
Answer: Docker containers include the application and all of its dependencies. They share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any infrastructure and in any cloud. Docker containers are essentially runtime instances of Docker images.
Following are the major advantages of using Docker containers:
1. An efficient and easy initial set-up.
2. The ability to describe the application lifecycle in detail.
3. Simple configuration and interaction with Docker Compose.
4. Documentation that provides every bit of information.

37. Explain Docker Architecture?
Answer: Docker's architecture consists of a Docker Engine, which is a client-server application:
1. A server, which is a type of long-running program called a daemon process (the dockerd command).
2. A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
3. A command-line interface (CLI) client (the docker command). The CLI uses the Docker REST API to control or interact with the Docker daemon; applications can use the underlying API and CLI.

38. What is Docker Hub?
Answer: Docker Hub is a cloud-based registry that helps us link code repositories. It allows us to build, test, and store images in the Docker cloud, and we can deploy an image to a host with the help of Docker Hub.

39. What are the important features of Docker?
Answer: Following are the essential features of Docker:
1. Easy Modeling
2. Version Control
3. Placement/Affinity
4. Application Agility
5. Developer Productivity
6. Operational Efficiencies

40. What are the main drawbacks of Docker?
Answer: Following are the disadvantages of Docker that we should keep in mind:

1. It does not provide a storage option.
2. It offers only poor monitoring options.
3. No automatic rescheduling of inactive nodes.
4. Complicated automatic horizontal scaling set-up.

41. Tell us something about Docker Compose.
Answer: Docker Compose is a YAML file that contains details about the services, networks, and volumes needed to set up a Docker application. Therefore, we use Docker Compose to create separate containers, host them, and get them to communicate with each other.
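A minimal sketch of such a Compose file; the service names and images (nginx, redis) are chosen purely for illustration:

```shell
# Write a minimal docker-compose.yml with two services.
mkdir -p /tmp/compose-demo && cd /tmp/compose-demo
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF
# With Docker installed, both containers start (and can reach each
# other by service name) with:
# docker-compose up -d
```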

42. What is Docker Swarm?
Answer: Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

43. What is Docker Engine?
Answer: The Docker daemon, or Docker Engine, represents the server. The daemon and the clients can run on the same or a remote host, and they communicate through the command-line client binary and a full RESTful API.

44. Explain Registries
Answer: There are two types of registry:
1. Public Registry
2. Private Registry
Docker's public registry is called Docker Hub, which also allows you to store images privately. Docker Hub stores millions of images.

45. What command should you run to see all running container in Docker?
Answer: $ docker ps

46. Write the command to stop the Docker Container.
Answer: $ sudo docker stop <container_name>

47. What is the command to run the image as a container?
Answer: $ sudo docker run -i -t alpine /bin/bash

48. Explain Docker object labels.
Answer: Docker object labels are a method of applying metadata to Docker objects, including images, containers, volumes, networks, swarm nodes, and services.
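As a sketch, labels can be baked into an image via a Dockerfile or attached to a container at run time; the keys and values here are invented for the example:

```shell
# Labels defined in a Dockerfile become part of the image metadata.
mkdir -p /tmp/label-demo && cd /tmp/label-demo
cat > Dockerfile <<'EOF'
FROM alpine:3.18
LABEL maintainer="team@example.com"
LABEL com.example.release="1.0"
EOF
# With Docker installed, labels can also be set and queried at run time:
# docker run -d --label env=staging alpine sleep 60
# docker ps --filter "label=env=staging"
```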

49. Write a Docker file to create and copy a directory and build it using Python modules?
Answer:
FROM python:2.7-slim
WORKDIR /app
COPY . /app
Then build it:
docker build --tag <image_name> .

50. Where are the Docker volumes stored?
Answer: We need to navigate to:
/var/lib/docker/volumes

51. How do you run multiple copies of Compose file on the same host?
Answer: Compose uses the project name to create unique identifiers for all of a project's containers and other resources. To run multiple copies of a project, set a custom project name using the -p command-line option or the COMPOSE_PROJECT_NAME environment variable.

52. Did Docker come up with the ‘container’ technology?
Answer: No, Docker did not come up with the container technology. Multiple other development tools offer containers similar to Docker.

53. How is Docker better than other tools that use containers, then?
Answer: Docker utilises the cloud to run its container-related operations, which many other development tools do not. Using the cloud, Docker becomes much more flexible and adaptable to the different scenarios that might come up during the development or shipment processes. This is the main reason to use Docker over other container-based developer tools.

54. What is a Dockerfile?
Answer: A Dockerfile is a set of instructions. Developers provide Docker with these instructions so that the program can do the job correctly, with those specific parameters in mind.

55. What are the three main types of Docker components?
Answer: The Client, the Host, and the Registry.
The client is the component that issues “build” and “run” commands to the host. The host is where all of the containers and images are created. Images are then pushed to the registry, for storage and distribution.

56. Will you lose all of your work if you accidentally exit a container?
Answer: No, we won't lose any information, data, or other parameters if we accidentally exit the Docker container. The only way to lose progress would be to issue a specific command to delete the container; exiting it does the files within no harm.

57. Can you use any other files for composing instead of the default YAML?
Answer: Yes. The most popular alternative to YAML is good old JSON.

58. What is ‘NameSpaces’ used for?
Answer: Namespaces isolate Docker containers from other processes, preventing outside activities from tampering with them.

59. What is the single most important requirement for building a Docker container?
Answer: The most important requirement for building a container with Docker is the base image. The base image varies depending on the code we are using. To find and access a base image, go to Docker Hub and search for the specific domain required. After finding the image, all that's left is to follow the documentation, and we can create a Docker container.

60. How does Docker manage ‘Dockerized nodes’?
Answer: A Dockerized node can be any machine that has Docker installed and running. Docker can manage both in-house and cloud-based nodes: whether the node exists on the main computer running Docker or is present in the cloud does not matter; Docker can manage it without a problem.

61. What are the main factors that dictate the number of containers you can run?
Answer: There is no defined limit on the number of containers we can run with Docker; the limitations come from hardware.
Two factors might limit the number of containers we can run: the size of the application and available CPU power. If the application isn't enormous and there is a plentiful supply of CPU power, we could run a huge number of Docker containers simultaneously.

62. How is Docker different from Hypervisor?
Answer: A hypervisor requires extensive hardware to function properly, while Docker runs on the host operating system. This allows Docker to be exceptionally fast and perform tasks fluidly, something hypervisors tend to lack.

63. Can I use JSON instead of YAML for my compose file in Docker?
Answer: Yes, we can very comfortably use JSON instead of the default YAML for the Docker Compose file. To use a JSON file with Compose, we need to specify the filename as follows:
docker-compose -f docker-compose.json up

64. Tell us how you have used Docker in your past position?
Answer: You could explain the ease this technology has brought to the automation of development-to-production lifecycle management. You can also discuss any other integrations you might have worked with alongside Docker, such as Puppet, Chef, or the most popular of all, Jenkins.

65.How to create Docker container?
Answer: We can create a Docker container from any specific Docker image of choice using the command given below:
docker run -t -i <image_name>
The command above creates the container. To check whether the container was created and whether it is running, use the following command, which lists all Docker containers along with their statuses on the host:
docker ps -a

66. How to stop and restart the Docker container?
Answer: The following command is used to stop a Docker container with the container id
CONTAINER_ID:
docker stop CONTAINER_ID
The following command is used to restart a Docker container with the container id
CONTAINER_ID:
docker restart CONTAINER_ID

67. How far do Docker containers scale?
Answer: Web deployments such as Google and Twitter, and platform providers such as Heroku and dotCloud, run on Docker at scales ranging from hundreds of thousands to millions of containers running in parallel, provided the OS resources and memory of the hosts running all these containers do not run out.

68. What platforms does Docker run on?
Answer: Docker is currently available on the following platforms and Linux distributions:
1. Ubuntu 12.04, 13.04
2. Fedora 19/20+
3. RHEL 6.5+
4. CentOS 6+
5. Gentoo
6. ArchLinux
7. openSUSE 12.3+
8. CRUX 3.0+
Docker is also available on the following cloud environments:
1. Amazon EC2
2. Google Compute Engine
3. Microsoft Azure
4. Rackspace
Docker is extending its support to Windows and Mac OS X environments, and support on Windows has been growing rapidly.

69. What is the advantage of Docker over hypervisors?
Answer: Docker is lightweight and more efficient in its resource use because it uses the underlying host kernel rather than its own hypervisor.

70. Is Container technology new?
Answer: No, container technology is not new. Different variations of container technology have been out in the *NIX world for a long time. Examples include Solaris Containers (aka Solaris Zones), FreeBSD Jails, AIX Workload Partitions (aka WPARs), and Linux OpenVZ.

71. How is Docker different from other container technologies?
Answer: Docker is quite a fresh project, created in the era of the cloud, so a lot of things are done better than in older container technologies. The following Docker features stand out:
1. Docker can run on any infrastructure: on a laptop or in the cloud.
2. Docker has a container hub, a repository of containers that you can download and use. You can even share containers with your applications.
3. Docker is quite well documented.

72. What are the networks that are available by default?
Answer:

bridge: the default network that containers connect to if you do not specify a network.
none: connects the container to a container-specific network stack that lacks a network interface.
host: connects the container to the host's network stack; there is no isolation between the host machine and the container as far as networking is concerned.

73. Difference between Docker Image and container?
Answer: A Docker container is the runtime instance of a Docker image. A Docker image has no state, and its state never changes, as it is just a set of files, whereas a Docker container has its execution state.

74. What is the use case for Docker?
Answer: There are many use cases where we can use Docker in production.

75. How exactly are containers (Docker in our case) different from hypervisor virtualization (vSphere)? What are the benefits?
Answer: To run an application in a virtualized environment (for example, vSphere), we first need to create a VM, install an OS inside it, and only then deploy the application. To run the same application in Docker, all we need is to deploy the application in a container; there is no need for an additional OS layer. We just deploy the application with its dependent libraries, and the Docker engine (host kernel, etc.) provides the rest.
Another benefit of Docker is speed of deployment.

76. Docker is the new craze in virtualization and cloud computing. Why are people so excited about it?
Answer: Docker is fast, easy to use, and a developer-centric, DevOps-ish tool. It makes it easy to package and ship code. Developers want tools that abstract away many of the details of that process; they just want to see their code working. That historically led to all sorts of conflicts with sysadmins when code was shipped around and turned out not to work somewhere other than the developer's environment. Docker works around that by making code as portable as possible and making that portability user-friendly and simple.

77. Do you think open source development has heavily influenced cloud technology development?
Answer: I think open-source software is closely tied to cloud computing, both in terms of the software running in the cloud and the development models that have enabled the cloud. Open-source software is cheap, and it is low-friction both from an efficiency and a licensing perspective.

78. Can you give us a quick rundown of what we should expect from your Docker presentation at OSCON this year?
Answer: It is aimed at developers and sysadmins who want to get started with Docker in a very hands-on way. We'll teach the basics of how to use Docker and how to integrate it into a daily workflow.

79. How is Docker different from other container technologies?
Answer: Docker containers are easy to deploy in the cloud, and Docker can get more applications running on the same hardware than competing technologies. It makes it easy for developers to create ready-to-run containerized applications and to manage, deploy, and share them.

80. Mention some commonly used Docker Commands?
Answer: Some of the most commonly used Docker commands are as follows:

Command             Description

dockerd                 Launches the Docker daemon
info                         Displays system-wide information
version                  Displays the Docker version information
build                       Builds an image from a Dockerfile
inspect                   Returns low-level information on an image or container
history                   Shows the history of an image
commit                  Creates a new image from a container's changes
attach                     Attaches to a running container
load                        Loads an image from a tar archive or STDIN
create                     Creates a new container
diff                          Inspects changes on a container's file system
kill                          Kills a running container

81. What is a Docker Hub?
Answer: Docker Hub can be considered a cloud registry that lets us link code repositories, build images, and test them. We can also store our pushed images, or link to Docker Cloud so that the images are deployed to a host. It provides a centralized container image discovery resource that can be used for team collaboration, workflow automation and distribution, and change management through a development pipeline.
Docker vs VM (Virtual Machine)

Virtual Machines | Docker Containers
Need more resources | Fewer resources are used
Process isolation is done at the hardware level | Process isolation is done at the operating-system level
Separate operating system for each VM | Operating-system resources can be shared within Docker
VMs can be customized | Custom container set-up is easy
Takes time to create a virtual machine | Creation of a Docker container is very quick
Booting takes minutes | Booting is done within seconds
82. Do I lose my data when the Docker container exits?
Answer: There is no loss of data when a Docker container exits, because any data the application writes to disk is preserved until the container is explicitly deleted. The file system for a Docker container persists even after the container is halted.

83. What, in your opinion, is the most exciting potential use for Docker?
Answer: The most exciting potential use of Docker is its build pipeline. Most Docker professionals use hyper-scaling with containers and indeed get a lot of containers onto the hosts they run on. These are known to be blazingly fast. Most of the development-test-build pipeline can be completely automated using the Docker framework.

84. Why is Docker the new craze in virtualization and cloud computing?
Answer: Docker is the newest and latest craze in the world of virtualization and cloud computing because it is an ultra-lightweight containerization app brimming with potential to prove its mettle.

85. Why do my services take 10 seconds to recreate or stop?
Answer: docker-compose stop attempts to stop a container by sending a SIGTERM signal. Once this signal is delivered, Compose waits for the default timeout period of 10 seconds; once the timeout is crossed, it sends a SIGKILL signal to the container to kill it forcefully. If you are actually hitting the timeout period, it means the containers are not shutting down when they receive the SIGTERM signal.
To solve this issue, do the following:
1. Make sure you are using the JSON (exec) form of CMD and ENTRYPOINT in your Dockerfile.
2. Use ["program", "argument1", "argument2"] instead of a plain string like "program argument1 argument2".
3. Remember that the string form makes Docker run the process via a shell, which does not handle signals properly; Compose always uses the JSON form.
4. If possible, modify the application you intend to run by adding an explicit signal handler for SIGTERM.
5. Alternatively, set stop_signal to a signal the application understands and knows how to handle.
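The difference between the string (shell) form and the JSON (exec) form of CMD can be sketched in a Dockerfile like the one below; the base image and command are illustrative:

```shell
mkdir -p /tmp/signal-demo && cd /tmp/signal-demo
cat > Dockerfile <<'EOF'
FROM alpine:3.18
# String (shell) form: runs as "/bin/sh -c ...", so the shell is PID 1
# and SIGTERM is not forwarded to ping:
# CMD ping localhost
# JSON (exec) form: ping itself is PID 1 and receives SIGTERM directly:
CMD ["ping", "localhost"]
EOF
# With Docker installed:
# docker build -t signal-demo . && docker run -d signal-demo
# "docker stop" on this container then returns quickly instead of
# waiting out the 10-second timeout.
```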

86. How do I run multiple copies of a Compose file on the same host?
Answer: Docker Compose makes use of the project name to create unique identifiers for all of a project's containers and resources. To run multiple copies of the same project, set a custom project name using the -p command-line option or the COMPOSE_PROJECT_NAME environment variable.

87. What’s the difference between up, run, and start?
Answer: In almost every scenario, you want docker-compose up. Using up, you start or restart all the services defined in a docker-compose.yml file. In "attached" mode, which is the default, you see the logs from all the containers. In "detached" mode, Compose exits after starting the containers, which continue to run in the background.
The docker-compose run command is for running one-off or ad-hoc tasks as business needs require. It takes the name of the service you want to run and only starts the containers for the services that the running service depends on. Use run to run tests or perform administrative tasks such as removing or adding data to a data volume container. It is similar to docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command only restarts containers that were previously created and then stopped. It never creates new containers.

88. What’s the benefit of “Dockerizing?”
Answer: Dockerizing enterprise environments helps teams leverage Docker containers to form a service platform such as a CaaS (Container as a Service). It gives teams the necessary agility and portability while letting them retain control within their own network/environment.
Most developers opt for Docker because of the flexibility and speed it provides for building and shipping applications. Docker containers are portable and run in any environment without additional changes, so application developers can move between development, staging, and production environments without recoding for each one. This not only reduces the time between lifecycle states but also ensures the whole process is performed with utmost efficiency. Developers can debug an issue, fix it, update the application, and propagate the fix to higher environments with ease.
Operations teams handle the security of the environments while still allowing developers to build and ship applications independently. The CaaS platform provided by the Docker framework deploys on-premise and comes loaded with enterprise-level security features such as role-based access control, integration with LDAP or Active Directory, image signing, and so on. Operations teams can rely heavily on the scalability provided by Docker and leverage Dockerized applications across any environment.
Docker containers are so portable that teams can migrate workloads running in Amazon AWS to Microsoft Azure without changing code and with no downtime at all, or move workloads from cloud environments to physical datacenters and vice versa. The lightweight nature of Docker containers compared to traditional virtualization, combined with the ability of containers to run inside VMs, allows teams to optimize their infrastructure by as much as 20x and save money in the process.

89. How many containers can run per host?
Answer: Depending on the environment hosting the containers, there can be as many containers as the environment supports. The application size and available resources (such as CPU and memory) decide the number of containers that can run in an environment. Containers do not create new CPU capacity on their own, but they definitely provide efficient ways of utilizing resources. The containers themselves are super lightweight and last only as long as the process they are running.

90. Is there a possibility to include specific code with COPY/ADD or a volume?
Answer: This is easily achieved by adding either the COPY or the ADD directive to the Dockerfile. This is useful if we want to move code along with our Docker images, for example, sending code up the ladder from the development environment to staging, or from staging to production. Having said that, we might come across situations where we need both approaches: have the image include the code using COPY, and use a volume in the Compose file to include the code from the host during development. The volume overrides the directory contents of the image.
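A sketch combining both approaches (all paths and names are placeholders): the image bakes the code in with COPY, while the Compose file mounts the live source over it during development.

```shell
mkdir -p /tmp/override-demo/src && cd /tmp/override-demo
echo 'echo app' > src/run.sh
# The image includes the code at build time...
cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY ./src /app
EOF
# ...and a development volume overrides /app with the host directory.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    build: .
    volumes:
      - ./src:/app
EOF
# With Docker installed:
# docker-compose up   # /app now reflects local edits immediately
```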

91. Will cloud automation overtake containerization any sooner?
Answer: Docker containers gain popularity with each passing day and can definitely be a quintessential part of a professional Continuous Integration / Continuous Delivery pipeline. There is equal responsibility on all the key stakeholders at each organization to take up the challenge of weighing the risks and gains of adopting technologies that are budding up on a daily basis. Docker is extremely effective in organizations that appreciate the consequences of containerization.

92. Is there a way to identify the status of a Docker container?
Answer: We can identify the status of a Docker container by running the command 'docker ps -a', which lists all available Docker containers with their corresponding statuses on the host. From there we can easily identify the container of interest and check its status.

93. What are the differences between the ‘docker run’ and the ‘docker create’?
Answer: The most important difference is that 'docker create' creates a Docker container in the Stopped state, whereas 'docker run' creates and starts it. 'docker create' also prints the new container's ID, which can be stored for later use.

The container ID can likewise be captured with 'docker run' using the --cidfile FILE_NAME option, like this:
'docker run --cidfile FILE_NAME'

94. Can you remove a paused container from Docker?
Answer: It is not possible to remove a container from Docker while it is merely paused. A container must be in the stopped state before it can be removed.

95. Is there a possibility that a container can restart all by itself in Docker?
Answer: No, not by default. The --restart flag defaults to 'no', so a container never restarts on its own unless a restart policy is set explicitly.

96. What is the preferred way of removing containers – ‘docker rm -f’ or ‘docker stop’ then followed by a ‘docker rm’?
Answer: The best and preferred way to remove containers from Docker is to use 'docker stop' first, as it sends a SIGTERM signal to the container, giving it the time required to perform all finalization and cleanup tasks. Once this completes, we can comfortably remove the container using the 'docker rm' command, thereby updating Docker's list of containers as well.

97. Difference between Docker Image and container?
Answer: A Docker container is the runtime instance of a Docker image.
A Docker image has no state and its state never changes, because it is just a set of files, whereas a Docker container has its execution state.

98. What are the main drawbacks of Docker?
Answer: Following are the drawbacks of Docker:
1. It does not provide a storage option.
2. It provides only poor monitoring options.
3. No automatic rescheduling of inactive nodes.
4. Complicated automatic horizontal scaling set-up.

99. What is Docker Engine?
Answer: The Docker daemon, or Docker Engine, represents the server. The daemon and the clients can run on the same or a remote host, and they communicate through the command-line client binary and a full RESTful API.

100. Explain Registries.
Answer: There are two types of registry:
1. Public Registry
2. Private Registry
Docker's public registry is called Docker Hub, which also allows you to store images privately. Docker Hub stores millions of images.

101. What command should you run to see all running container in Docker?
Answer: $ docker ps

102. Write the command to stop the docker container.
Answer: $ sudo docker stop <container_name>

103. What is the command to run the image as a container?
Answer: $ sudo docker run -i -t alpine /bin/bash

104. What are the common instruction in Dockerfile?
Answer: The common instructions in a Dockerfile are FROM, LABEL, RUN, and CMD.
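A minimal Dockerfile sketch using all four instructions (the base image and values are illustrative):

```shell
mkdir -p /tmp/instr-demo && cd /tmp/instr-demo
cat > Dockerfile <<'EOF'
# FROM: the base image every Dockerfile starts from
FROM alpine:3.18
# LABEL: metadata attached to the image
LABEL maintainer="team@example.com"
# RUN: executed at build time, producing a new layer
RUN apk add --no-cache curl
# CMD: the default command executed when a container starts
CMD ["sh", "-c", "echo ready"]
EOF
# With Docker installed:
# docker build -t instr-demo .
```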

105. What is memory-swap flag?
Answer: memory-swap is a modifier flag that only has meaning if --memory is also set. Swap allows the container to write excess memory requirements to disk once the container has exhausted all the RAM available to it.

106. Explain Docker Swarm?
Answer: Docker Swarm is native clustering for Docker, which groups a pool of Docker hosts into a single, virtual Docker host. It offers the standard Docker application program interface.

107. How can you monitor the docker in production environments?
Answer: Docker stats and Docker events can be used to monitor Docker in a production environment.

108. What are the states of a Docker container?
Answer: The important states of a Docker container are:
1. Running
2. Paused
3. Restarting
4. Exited

109. What is Virtualization?
Answer: Virtualization is a method of logically dividing mainframes to allow multiple applications to run simultaneously.
The scenario changed when companies and open-source communities were able to offer a method of handling privileged instructions, allowing multiple OSes to run simultaneously on a single x86-based system.

110. What is Hypervisor?
Answer: A hypervisor allows you to create a virtual environment in which guest virtual machines operate. It controls the guest systems and checks that resources are allocated to the guests as required.

111. Explain Docker object labels.
Answer: Docker object labels are a method of applying metadata to Docker objects, including images, containers, volumes, networks, swarm nodes, and services.

112. Write a Docker file to create and copy a directory and build it using Python modules?
Answer:
FROM python:2.7-slim

WORKDIR /app

COPY . /app

Then build it:
docker build --tag <image_name> .

113. Where are the Docker volumes stored?
Answer: /var/lib/docker/volumes

114. List out some important advanced docker commands
Answer:

Command Description
docker info Displays system-wide information
docker pull Downloads an image
docker stats Displays live container resource usage
docker images Lists downloaded images

115. How does communication happen between Docker client and Docker Daemon?
Answer: The Docker client and Docker daemon communicate through a REST API, over UNIX sockets or a network interface (TCP).

116. Explain the implementation method of Continuous Integration (CI) and Continuous Deployment (CD) in Docker?
Answer: We need to do the following things:
1. Run Jenkins on Docker
2. Run integration tests in Jenkins using docker-compose

117. What are the commands to control Docker with systemd?
Answer: systemctl start/stop docker
service docker start/stop.

118. How to use JSON instead of YAML compose file?
Answer: docker-compose -f docker-compose.json up

119. What is the command you need to give to push the new image to Docker registry?
Answer: docker push myorg/img

120. How to include code with copy/add or volumes?
Answer: In a Dockerfile, we need to use the COPY or ADD directive. This is useful to relocate code into the image. However, we should use a volume if we want to make changes to the code while the container is running.

121. Explain the process of scaling your Docker containers.
Answer: Docker containers can be scaled to any level, from a few hundred to thousands or even millions of containers. The only condition is that the containers have the memory and the OS available at all times, and that these do not become a constraint as Docker is scaled.

122. What are the steps for the Docker container life cycle?
Answer: Following are the steps for Docker life cycle:
1. Build
2. Pull
3. Run

123. How can you run multiple containers using a single service?
Answer: By using docker-compose, we can run multiple containers as a single service. docker-compose files are written in YAML.
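As a sketch, a minimal docker-compose.yml that runs two containers together (the service and image names here are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine      # hypothetical web tier
    ports:
      - "8080:80"
  cache:
    image: redis:alpine      # hypothetical cache tier
```

Running docker-compose up -d starts both containers; docker-compose up -d --scale web=3 would run three replicas of the web container.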

124. What is CNM?
Answer: CNM stands for Container Networking Model. It forms the basis of container networking in a Docker environment. It is Docker's approach to providing container networking, with support for multiple network drivers.

125. Does Docker offer support for IPV6?
Answer: Yes, Docker provides support for IPv6. IPv6 networking is supported only on Docker daemons running on Linux hosts. To enable IPv6 support in the Docker daemon, we need to modify /etc/docker/daemon.json and set the ipv6 key to true.
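As a sketch, the corresponding /etc/docker/daemon.json could look like this (the subnet shown is an example from the IPv6 documentation range; in practice you assign a prefix routed to your host, and restart the daemon afterwards):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```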

126. What are the different kinds of volume mount types available in Docker?
Answer: There are three mount types available in Docker:
1. Bind mounts : can be stored anywhere on the host system
2. Volumes : stored in a part of the host filesystem managed by Docker (/var/lib/docker/volumes)
3. tmpfs mounts : stored only in the host system's memory

127. How to configure the default logging driver under Docker?
Answer: To configure the Docker daemon to default to a specific logging driver, we need to set the value of log-driver to the name of the logging driver in the daemon.json file.

128. Explain Docker Trusted Registry?
Answer: Docker Trusted Registry is the enterprise-grade image storage tool for Docker. We should install it behind the firewall so that we can securely manage the Docker images we use in our applications.

129. What are Docker Namespaces?
Answer: A namespace in Docker is a technique that offers isolated workspaces called containers. Namespaces also offer a layer of isolation for Docker containers.

130. What are the three components of Docker Architecture?
Answer: Following are three components of Docker Architecture:
1. Client
2. Docker-Host
3. Registry

131. What is the Docker client?
Answer: Docker provides Command Line Interface tools for the client to interact with the Docker daemon.

132. What is the method for creating a Docker container?
Answer: We can use any of the specific Docker images to create a Docker container using the below command.
docker run -t -i <image-name> <command>
This command will not only create the container but also start it.

133. What is the best place to find decent examples of ‘compose files’?
Answer: Most companies that need Docker expertise manage their code with a specific tool: GitHub.
Besides its main functions, GitHub is also a great place to find the before-mentioned compose files for Docker containers.

134. How do you think Docker will change virtualization and cloud environments?
Answer: There are a lot of workloads for which Docker is ideal, both in the hyper-scale world of many containers and in the dev-test-build use case. I fully expect a lot of companies and vendors to embrace Docker as an alternative form of virtualization, both on bare metal and in the cloud.

135. What are the various states that a Docker container can be in at any given point in time?
Answer: There are four states that a Docker container can be in at any given point in time. These states are as follows:
• Running
• Paused
• Restarting
• Exited

GIT Interview Questions and Answers



1. How can we see n most recent commits in GIT?

We can use git log command to see the latest commits. To see the three most recent commits we use following command:


git log -3
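As a sketch, the command can be tried in a throwaway repository (the repository location and commit messages below are made up):

```shell
# Build a scratch repo with five commits, then show only the newest three.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
for n in 1 2 3 4 5; do
  echo "$n" > notes.txt
  git add notes.txt
  git commit -q -m "commit $n"
done
# -3 limits output to the three most recent commits; --oneline compacts it
git log -3 --oneline
```

The --oneline flag is optional; it just shows one line per commit on top of the -3 limit.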

 


2. How can we know if a branch is already merged into master in GIT?

We can use following commands for this purpose:


git branch --merged master : This prints the branches merged into master

git branch --merged : This prints the branches merged into HEAD (i.e. tip of current branch)

git branch --no-merged : This prints the branches that have not been merged

By default this applies only to local branches.


We can use -a flag to show both local and remote branches.

Or we can use -r flag to show only the remote branches.
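A minimal sketch of these commands in a throwaway repository (branch names are illustrative):

```shell
# One branch is merged into main, one is not; the listings show the difference.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo base > f.txt; git add f.txt; git commit -q -m "base"
git branch -M main
git checkout -q -b done-feature
echo done >> f.txt; git add f.txt; git commit -q -m "finished work"
git checkout -q main
git merge -q done-feature              # fast-forward merge into main
git checkout -q -b pending-feature
echo wip > wip.txt; git add wip.txt; git commit -q -m "unfinished work"
git checkout -q main
git branch --merged main               # lists done-feature (and main itself)
git branch --no-merged main            # lists pending-feature
```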

 


3. What is the purpose of git stash drop?

In case we do not need a specific stash, we use git stash drop command to remove it from the list of stashes.

By default, this command removes the latest added stash.

To remove a specific stash, we specify its name (e.g. stash@{1}) as an argument to the git stash drop command.
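A short sketch of the drop behaviour (file names and stash messages are made up):

```shell
# Create two stashes, drop one by name, then drop the latest by default.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo v1 > app.txt; git add app.txt; git commit -q -m "initial"
echo v2 > app.txt; git stash push -m "first try"     # ends up as stash@{1}
echo v3 > app.txt; git stash push -m "second try"    # ends up as stash@{0}
git stash list                       # two entries
git stash drop 'stash@{1}'           # drop a specific stash by name
git stash drop                       # no argument: drops the latest stash
git stash list                       # now empty
```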

 


4. What is the HEAD in GIT?

A HEAD is a reference to the currently checked out commit.

It is a symbolic reference to the branch that we have checked out.

At any given time, one head is selected as the ‘current head’. This head is also known as HEAD (always in uppercase).

 


5. What is the most popular branching strategy in GIT?

There are many ways to do branching in GIT. One of the popular ways is to maintain two branches:


master: This branch is used for production. In this branch HEAD is always in production ready state.

develop: This branch is used for development. In this branch we store the latest code developed in project. This is work in progress code.

Once the code is ready for deployment to production, it is merged into master branch from develop branch.


6. What is SubGit?

SubGit is software tool used for migrating SVN to Git. It is very easy to use. By using this we can create a writable Git mirror of a Subversion repository.

It creates a bi-directional mirror that can be used for pushing to Git as well as committing to Subversion.

SubGit also takes care of synchronization between Git and Subversion.

 


7. What is the use of git instaweb?

Git-instaweb is a script by which we can browse a git repository in a web browser.

It sets up the gitweb and a web-server that makes the working repository available online.

 


8. What are git hooks?

Git hooks are scripts that can run automatically on the occurrence of an event in a Git repository. These are used for automation of workflow in GIT.

Git hooks also help in customizing the internal behavior of GIT.

These are generally used for enforcing a GIT commit policy.

 


9. What is GIT?

GIT is a mature Distributed Version Control System (DVCS). It is used for Source Code Management (SCM).

It is open source software. It was developed by Linus Torvalds, the creator of Linux operating system.

GIT works well with a large number of IDEs (Integrated Development Environments) like Eclipse, IntelliJ etc.

GIT can be used to handle small and large projects.

 


10. What is a repository in GIT?

A repository in GIT is the place in which we store our software work.

It contains a sub-directory called .git. There is only one .git directory in the root of the project.

In .git, GIT stores all the metadata for the repository. The contents of .git directory are of internal use to GIT.

 


11. What are the main benefits of GIT?

There are following main benefits of GIT:


Distributed System: GIT is a Distributed Version Control System (DVCS). So you can keep your private work in version control but completely hidden from others. You can work offline as well.

Flexible Workflow: GIT allows you to create your own workflow. You can use the process that is suitable for your project. You can go for a centralized, master-slave, or any other workflow.

Fast: GIT is very fast when compared to other version control systems.

Data Integrity: Since GIT uses SHA1, it is very hard to corrupt data without GIT detecting it.

Free: It is free for personal use. So many amateurs use it for their initial projects. It also works very well with large size project.

Collaboration: GIT is very easy to use for projects in which collaboration is required. Many popular open source software across the globe use GIT.

 


12. What are the disadvantages of GIT?

GIT has very few disadvantages. These are the scenarios when GIT is difficult to use. Some of these are:


Binary Files: If we have a lot binary files (non-text) in our project, then GIT becomes very slow. E.g. Projects with a lot of images or Word documents.

Steep Learning Curve: It takes some time for a newcomer to learn GIT. Some of the GIT commands are non-intuitive to a fresher.

Slow remote speed: Sometimes the use of remote repositories is slow due to network latency. Still, GIT is better than other VCS in speed.

 


13. What are the main differences between GIT and SVN?

The main differences between GIT and SVN are:


Decentralized: GIT is decentralized. You have a local copy that is a repository in which you can commit. In SVN you have to always connect to a central repository for check-in.

Complex to learn: GIT is a bit difficult to learn for some developers. It has more concepts and commands to learn. SVN is much easier to learn.

Unable to handle Binary files: GIT becomes slow when it deals with large binary files that change frequently. SVN can handle large binary files easily.

Internal directory: GIT creates only .git directory. SVN creates .svn directory in each folder.

User Interface: GIT does not have good UI. But SVN has good user interfaces.

 


14. How will you start GIT for your project?

We use git init command in an existing project directory to start version control for our project.

After this we can use git add and git commit commands to add files to our GIT repository.

 


15. What is git clone in GIT?

In GIT, we use the git clone command to create a local copy of an existing GIT repository.

This is the most popular way to create a copy of the repository among developers.

It is similar to svn checkout. But in this case the working copy is a full-fledged repository.

 


16. How will you create a repository in GIT?

To create a new repository in GIT, first we create a directory for the project. Then we run ‘git init’ command.

Now, GIT creates .git directory in our project directory. This is how our new GIT repository is created.

 


17. What are the different ways to start work in GIT?

We can start work in GIT in following ways:


New Project: To create a new repository we use git init command.

Existing Project: To work on an existing repository we use git clone command.

 


18. GIT is written in which language?

Most of the GIT distributions are written in C language with Bourne shell. Some of the commands are written in Perl language.


 


19. What does ‘git pull’ command in GIT do internally?

In GIT, git pull internally does a git fetch first and then does a git merge.

So pull is a combination of two commands: fetch and merge.

We use git pull command to bring our local branch up to date with its remote version.
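The fetch-then-merge equivalence can be sketched with a throwaway bare repository and two clones (all names and paths below are illustrative):

```shell
# Two clones of one bare "central" repo; the second clone catches up
# using explicit fetch + merge, which is what git pull does internally.
set -e
work=$(mktemp -d); cd "$work"
git init -q --bare central.git
git -C central.git symbolic-ref HEAD refs/heads/main
git clone -q central.git alice && cd alice
git config user.email "alice@example.com"; git config user.name "Alice"
echo one > f.txt; git add f.txt; git commit -q -m "first"
git branch -M main
git push -q -u origin main
cd "$work" && git clone -q central.git bob && cd bob
git config user.email "bob@example.com"; git config user.name "Bob"
# Alice pushes a new commit...
cd "$work/alice"; echo two >> f.txt; git commit -q -am "second"; git push -q
# ...and Bob brings his branch up to date: pull = fetch + merge
cd "$work/bob"
git fetch -q origin
git merge -q origin/main     # same end state as: git pull origin main
```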

 


20. What does ‘git push’ command in GIT do internally?

In GIT, the git push command uploads the content of our local repository to a remote repository. Internally it does the following two things:


copy: First GIT copies all the commits from our local branch that are missing on the server into the remote repository.

update: Then it moves the remote branch pointer (e.g. master on the server) to the new tip of the commit chain. Our local origin/master tracking pointer is also updated to match it.

 


21. What is git stash?

In GIT, sometimes we do not want to commit our code, but we also do not want to lose the unfinished work. In this case we use the git stash command to record the current state of the working directory and index in a stash. This stores the unfinished work in a stash and cleans the current branch of uncommitted changes.

Now we can work on a clean working directory.

Later we can use the stash and apply those changes back to our working directory.

At times we are in the middle of some work and do not want to lose the unfinished work, we use git stash command.

 


22. What is the meaning of ‘stage’ in GIT?

In GIT, stage is a step before commit. To stage means that the files are ready for commit.

Let say you are working on two features in GIT. One of the features is finished and the other is not yet ready. You want to commit your work and leave for home in the evening. But you cannot commit everything, since the second feature is not fully ready. In this case you can just stage the feature that is ready and commit that part. The second feature will remain as work in progress.

 


23. What is the purpose of git config command?

We can set the configuration options for GIT installation by using git config command.


 


24. How can we see the configuration settings of GIT installation?

We can use ‘git config --list’ command to print all the GIT configuration settings in GIT installation.


 


25. How will you write a message with commit command in GIT?

We call following command for commit with a message:


$/> git commit -m "commit message"

 


26. What is stored inside a commit object in GIT?

GIT commit object contains following information:


SHA1 name: A 40 character string to identify a commit

Files: List of files that represent the state of a project at a specific point of time

Reference: Any reference to parent commit objects

 


27. How many heads can you create in a GIT repository?

There can be any number of heads in a repository.

By default there is one head known as HEAD in each repository in GIT.


 


28. Why do we create branches in GIT?

If we are simultaneously working on multiple tasks, projects, defects or features, we need multiple branches. In GIT we can create a separate branch for each separate purpose.

Let say we are working on a feature, we create a feature branch for that. In between we get a defect to work on then we create another branch for defect and work on it. Once the defect work is done, we merge that branch and come back to work on feature branch again.

So working on multiple tasks is the main reason for using multiple branches.

 


29. What are the different kinds of branches that can be created in GIT?

We can create different kinds of branches for following purposes in GIT:


Feature branches: These are used for developing a feature.

Release branches: These are used for releasing code to production.

Hotfix branches: These are used for releasing a hotfix to production for a defect or emergency fix.

 


30. How will you create a new branch in GIT?

We use following command to create a new branch in GIT:


$/> git checkout -b <branch-name>

 


31. How will you add a new feature to the main branch?

We do the development work on a feature branch that is created from master branch. Once the development work is ready we use git merge command to merge it into master branch.


 


32. What is a pull request in GIT?

A pull request in GIT is the list of changes that have been pushed to GIT repository. Generally these changes are pushed in a feature branch or hotfix branch. After pushing these changes we create a pull request that contains the changes between master and our feature branch. This pull request is sent to reviewers for reviewing the code and then merging it into develop or release branch.


 


33. What is merge conflict in GIT?

A merge conflict in GIT is the result of merging two commits.

Sometimes the commit to be merged and current commit have changes in same location. In this scenario, GIT is not able to decide which change is more important. Due to this GIT reports a merge conflict. It means merge is not successful. We may have to manually check and resolve the merge conflict.

 


34. How can we resolve a merge conflict in GIT?

When GIT reports merge conflict in a file, it marks the lines as follows:


Example:


the business days in this week are

<<<<<<< HEAD
five
=======
six
>>>>>>> branch-feature

To resolve the merge conflict in a file, we edit the file and fix the conflicting change. In above example we can either keep five or six.

After editing the file we run git add command followed by git commit command. Since GIT is aware that it was merge conflict, it links this change to the correct commit.
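The whole cycle (conflicting merge, markers, manual fix, add, commit) can be sketched in a throwaway repository; the file content follows the example above:

```shell
# Produce a deliberate conflict, inspect the markers, resolve, and commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo "the business days in this week are five" > days.txt
git add days.txt; git commit -q -m "base"
git branch -M main
git checkout -q -b branch-feature
echo "the business days in this week are six" > days.txt
git commit -q -am "feature says six"
git checkout -q main
echo "the business days in this week are four" > days.txt
git commit -q -am "main says four"
git merge branch-feature || true      # merge stops and reports a conflict
grep '<<<<<<<' days.txt               # conflict markers are now in the file
# Resolve by editing the file, then stage and commit to finish the merge
echo "the business days in this week are five" > days.txt
git add days.txt
git commit -q -m "merge branch-feature, keep five"
```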


 


35. What command will you use to delete a branch?

After the successful merge of feature branch in main branch, we do not need the feature branch.


To delete an unwanted branch we use following command:


git branch -d <branch-name>

 


36. What command will you use to delete a branch that has unmerged changes?

To forcibly delete an unwanted branch with unmerged changes, we use following command:


git branch -D <branch-name>

 


37. What is the alternative command to merging in GIT?

Another alternative of merging in GIT is rebasing. It is done by git rebase command.


 


38. What is Rebasing in GIT?

Rebasing is the process of moving a branch to a new base commit.

It is like rewriting the history of a branch.

In Rebasing, we move a branch from one commit to another. By this we can maintain linear project history.

Once the commits are pushed to a public repository, it is not a good practice to use Rebasing.
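A minimal sketch of rebasing a feature branch onto a moved-on main branch (branch names are illustrative):

```shell
# Rebase replays the feature commit on top of main's tip, giving linear history.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo a > f.txt; git add f.txt; git commit -q -m "base"
git branch -M main
git checkout -q -b feature
echo b > g.txt; git add g.txt; git commit -q -m "feature work"
git checkout -q main
echo c > h.txt; git add h.txt; git commit -q -m "main moved on"
git checkout -q feature
git rebase -q main          # move feature to a new base commit (main's tip)
git log --oneline           # history is now linear
```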

 


39. What is the ‘Golden Rule of Rebasing’ in GIT?

The golden rule of Rebasing is that we should never use git rebase on public branches. If other people are using the same branch then they may get confused by looking at the changes in Master branch after GIT rebasing.

Therefore, it is not recommended to do rebasing on a public branch that is also used by other collaborators.

 


40. Why do we use Interactive Rebasing in place of Auto Rebasing?

By using Interactive rebasing we can alter the commits before moving them to a new branch.

This is more powerful than an automated rebase. It gives us complete control over the branch’s commit history.

Generally, we use Interactive Rebasing to clean up the messy history of commits just before merging a feature branch into master.

 


41. What is the command for Rebasing in Git?

Git command for rebasing is:


git rebase <base-branch>

 


42. What is the main difference between git clone and git remote?

The main difference between git clone and git remote is that git clone is used to create a new local repository whereas git remote is used in an existing repository.

git remote adds a new reference to existing remote repository for tracking further changes.

git clone creates a new local repository by copying another repository from a URL.

 


43. What is GIT version control?

GIT version control helps us in managing the changes to source code over time by a software team. It keeps track of all the changes in a special kind of database. If we make a mistake, we can go back in time and see previous changes to fix the mistake.

GIT version control helps the team in collaborating on developing a software and work efficiently. Every one can merge the changes with confidence that everything is tracked and remains intact in GIT version control. Any bug introduced by a change can be discovered and reverted back by going back to a working version.

 


44. What GUI do you use for working on GIT?

There are many GUI for GIT that we can use. Some of these are:


GitHub Desktop

GITX-dev

Gitbox

Git-cola

SourceTree

Git Extensions

SmartGit

GitUp

 


45. What is the use of git diff command in GIT?

In GIT, git diff command is used to display the differences between 2 versions, or between working directory and an index, or between index and most recent commit.

It can also display changes between two blob objects, or between two files on disk in GIT.

It helps in finding the changes that can be used for code review for a feature or bug fix.

 


46. What is git rerere?

In GIT, rerere is a hidden feature. The full form of rerere is “reuse recorded resolution”.

By using rerere, GIT remembers how we’ve resolved a hunk conflict. The next time GIT sees the same conflict, it can automatically resolve it for us.

 


47. What are the three most popular version of git diff command?

Three most popular git diff commands are as follows:


git diff: It displays the differences between working directory and the index.

git diff --cached: It displays the differences between the index and the most recent commit.

git diff HEAD: It displays the differences between the working directory and the most recent commit.

 


48. What is the use of git status command?

In GIT, git status command mainly shows the status of working tree.


It shows following items:


The paths that have differences between the index file and the current HEAD commit.

The paths that have differences between the working tree and the index file

The paths in the working tree that are not tracked by GIT.

Among the above three items, first item is the one that we commit by using git commit command. Item two and three can be committed only after running git add command.


 


49. What is the main difference between git diff and git status?

In GIT, git diff shows the differences between different commits or between the working directory and index.

Whereas, git status command just shows the current status of working tree.

 


50. What is the use of git rm command in GIT?

In GIT, git rm command is used for removing a file from the working tree and the index.

We use git rm -r to recursively remove all files from a directory.

 


51. What is the command to apply a stash?

Sometimes we want to save our unfinished work. For this purpose we use git stash command. Once we want to come back and continue working from the last place where we left, we use git stash apply command to bring back the unfinished work.


So the command to apply a stash is:


git stash apply

Or, to apply a specific stash:


git stash apply stash@{n}

 


52. Why do we use git log command?

We use git log command to search for specific commits in project history.

We can search git history by author, date or content. It can even list the commits that were done x days before or after a specific date.

 


53. Why do we need git add command in GIT?

GIT gives us a very good feature of staging our changes before commit. To stage the changes we use git add command. This adds our changes from working directory to the index.

When we are working on multiple tasks and we want to just commit the finished tasks, we first add finished changes to staging area and then commit it. At this time git add command is very helpful.

 


54. Why do we use git reset command?

We use git reset command to reset current HEAD to a specific state.

By default it reverses the action of git add command.

So we use git reset command to undo the changes of git add command.

 


55. What does a commit object contain?

Whenever we do a commit in GIT by using the git commit command, GIT creates a new commit object. This commit object is saved to the GIT repository.


The commit object contains following information:


HASH: The SHA1 hash of the Git tree that refers to the state of index at commit time.

Commit Author: The name of person/process doing the commit and date/time.

Comment: A text message that contains the reason for the commit.

 


56. How can we convert git log messages to a different format?

We can use pretty option in git log command for this.


git log --pretty

This option converts the output format from default to other formats.


There are pre-built formats available for our use.


git log --pretty=oneline

For Example:


git log --pretty=format:"%h - %an, %ar : %s"

ba72a6c - Dave Adams, 3 years ago : changed the version number

 


57. What are the programming languages in which git hooks can be written?

Git hooks are generally written in shell and Perl scripts. But they can be written in any other language, as long as the script has execute permission.

Git hooks can also be written in Python script.

 


58. What is a commit message in GIT?

A commit message is a comment that we add to a commit. We can provide meaningful information about the reason for commit by using a commit message.

In most of the organizations, it is mandatory to put a commit message along with each commit.

Often, commit messages contain JIRA ticket, bug id, defect id etc. for a project.

 


59. How GIT protects the code in a repository?

GIT is made very secure since it contains the source code of an organization. All the objects in a GIT repository are checksummed with a hashing algorithm called SHA1 (the contents are hashed, not encrypted).

This algorithm is quite strong and fast. It protects source code and other contents of repository against the possible malicious attacks.

This algorithm also maintains the integrity of GIT repository by protecting the change history against accidental changes.

 


60. How GIT provides flexibility in version control?

GIT is very flexible version control system. It supports non-linear development workflows. It supports flows that are compatible with external protocols and existing systems.

GIT also supports both branching and tagging that promotes multiple kinds of workflows in version control.

 


61. How can we change a commit message in GIT?

If a commit has not been pushed to GitHub, we can use the git commit --amend command to change the commit message.

When we push the commit, a new message appears on GitHub.

 


62. Why is it advisable to create an additional commit instead of amending an existing commit?

Git amend internally creates a new commit and replaces the old commit. If commits have already been pushed to central repository, it should not be used to modify the previous commits.

It should be generally used for only amending the git comment.

 


63. What is a bare repository in GIT?

A repository created with the git init --bare command is a bare repository in GIT.

The bare repository does not contain any working or checked out copy of source files. A bare repository stores git revision history in the root folder of repository instead of in a .git subfolder.

It is mainly used for sharing and collaborating with other developers.

We can create a bare repository in which all developers can push their code.

There is no working tree in bare repository, since no one directly edits files in a bare repository.

 


64. How do we put a local repository on GitHub server?

To put a local repository on GitHub, we first add all the files of working directory into local repository and commit the changes.

After that we call git remote add command to add the local repository on GitHub server.

Once it is added, we use git push command to push the contents of local repository to remote GitHub server.

 


65. How will you delete a branch in GIT?

We use the git branch -d command to delete a branch in GIT.

In case a local branch is not fully merged, but we want to delete it by force, then we use the git branch -D command.

 


66. How can we set up a Git repository to run code sanity checks and UAT tests just before a commit?

We can use git hooks for this kind of purpose. We can write the code sanity checks and UAT tests in a script. This script can be called by the pre-commit hook of the repository.

If this hook succeeds, only then will the commit go through.
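A sketch of such a pre-commit hook; the TODO-marker check below is a made-up stand-in for a real sanity-check or test script:

```shell
# Install a pre-commit hook that rejects commits containing "TODO".
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo start > f.txt; git add f.txt; git commit -q -m "initial"
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Made-up sanity check: fail the commit if the staged diff contains TODO
if git diff --cached | grep -q 'TODO'; then
  echo "commit rejected: unresolved TODO found" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
echo "TODO fix later" > f.txt; git add f.txt
git commit -q -m "try" || echo "hook blocked the commit"
echo done > f.txt; git add f.txt
git commit -q -m "clean commit"       # hook passes, commit succeeds
```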

 


67. How can we revert a commit that was pushed earlier and is public now?

We can use git revert command for this purpose.

Internally, git revert command creates a new commit with patches that reverse the changes done in previous commits.

The other option is to checkout a previous commit version and then commit it as a new commit.

 


68. In GIT, how will you compress last n commits into a single commit?

To compress the last n commits into a single commit, we use the git rebase -i (interactive rebase) command with the squash option. This compresses multiple commits into a new single commit. It overwrites the history of commits.

It should be done carefully, since it can lead to unexpected results.
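As a sketch, one scriptable way to reach the same end state (the last 3 commits squashed into one) without opening the interactive rebase editor is git reset --soft followed by a fresh commit; note this is an alternative technique, not the rebase command itself:

```shell
# Squash the last 3 of 4 commits into a single new commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
for n in 1 2 3 4; do
  echo "$n" >> f.txt; git add f.txt; git commit -q -m "commit $n"
done
# Move HEAD back 3 commits but keep all their changes staged...
git reset -q --soft HEAD~3
# ...then record them as one commit (same end state as an interactive
# rebase that squashes the three commits together).
git commit -q -m "commits 2-4 squashed"
git log --oneline        # two commits remain
```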


 


69. How will you switch from one branch to a new branch in GIT?

In GIT, we can use git checkout command to switch to a new branch.


 


70. How can we clean unwanted files from our working directory in GIT?

GIT provides git clean command to recursively clean the working tree. It removes the files that are not under version control in GIT.

If we use git clean -x, then ignored files are also removed.

 


71. What is the purpose of git tag command?

We use git tag command to add, delete, list or verify a tag object in GIT.

Tag objects created with the options -a, -s, or -u are also known as annotated tags.

Annotated tags are generally used for release.

 


72. What is cherry-pick in GIT?

A git cherry-pick is a very useful feature in GIT. By using this command we can selectively apply the changes done by existing commits.

In case we want to selectively release a feature, we can remove the unwanted files and apply only selected commits.

 


73. What is shortlog in GIT?

A shortlog in GIT is a command that summarizes the git log output.

The output of git shortlog is in a format suitable for release announcements.

 


74. How can you find the names of files that were changed in a specific commit?

Every commit in GIT has a hash code. This hash code uniquely represents the GIT commit object.

We can use git diff-tree command to list the name of files that were changed in a commit.


The command will be as follows:


git diff-tree -r <commit-hash>

By using -r flag, we just get the list of individual files.
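A sketch in a throwaway repository (file names are made up); the --no-commit-id and --name-only flags trim the output down to file names only:

```shell
# List exactly which files a commit touched.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo a > a.txt; git add a.txt; git commit -q -m "add a"
echo b > b.txt; echo a2 > a.txt
git add a.txt b.txt; git commit -q -m "touch two files"
# Names of the files changed in the latest commit:
git diff-tree --no-commit-id --name-only -r HEAD
```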


 


75. How can we attach an automated script to run on the event of a new commit by push command?

In GIT we can use a hook to run an automated script on a specific event. We can choose between pre-receive, update or post-receive hook and attach our script on any of these hooks.

GIT will automatically run the script on the event of any of these hooks.

 


76. What is the difference between pre-receive, update and post-receive hooks in GIT?

Pre-receive hook is invoked when a commit is pushed to a destination repository. Any script attached to this hook is executed before updating any reference. This is mainly used to enforce development best practices and policies.

Update hook is similar to pre-receive hook. It is triggered just before any updates are done. This hook is invoked once for every commit that is pushed to a destination repository.

Post-receive hook is invoked after the updates have been done and accepted by a destination repository. This is mainly used to configure deployment scripts. It can also invoke Continuous Integration (CI) systems and send notification emails to relevant parties of a repository.

 


77. Do we have to store Scripts for GIT hooks within same repository?

A Hook is local to a GIT repository. But the script attached to a hook can be created either inside the hooks directory or it can be stored in a separate repository. But we have to link the script to a hook in our local repository.

In this way we can maintain versions of a script in a separate repository, but use them in our repository where hooks are stored.

Also when we store scripts in a separate common repository, we can reuse same scripts for different purposes in multiple repositories.

 


78. How can we determine the commit that is the source of a bug in GIT?

In GIT we can use git bisect command to find the commit that has introduced a bug in the system.

GIT bisect command internally uses binary search algorithm to find the commit that introduced a bug.

We first tell a bad commit that contains the bug and a good commit that was present before the bug was introduced.

Then git bisect picks a commit between those two endpoints and asks us whether the selected commit is good or bad.

It continues to narrow down the range until it discovers the exact commit responsible for introducing the bug.
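The whole session can even be automated with git bisect run, which answers good or bad for us based on a command's exit code. A sketch on a throwaway repository (file names, commit messages, and the simulated bug are illustrative assumptions):

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
# Five commits; pretend commit 4 introduced the bug.
for i in 1 2 3 4 5; do
  echo "version $i" > app.txt
  git add app.txt
  git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4     # bad = HEAD, good = the first commit
# git bisect run answers automatically: exit 0 means good, non-zero means bad.
git bisect run sh -c '! grep -q "version [45]" app.txt'
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "$first_bad"                # prints: commit 4
git bisect reset
```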

 


79. How can we see differences between two commits in GIT?

We can use git diff command to see the differences between two commits. The syntax for a simple git diff command to compare two commits is:


git diff <commit1> <commit2>

 


80. What are the different ways to identify a commit in GIT?

Each commit object in GIT has a unique hash. This hash is a 40-character checksum based on the SHA-1 hashing algorithm.

We can use a hash to uniquely identify a GIT commit.

GIT also provides support for creating aliases for commits. These aliases are known as refs. Every tag in GIT is a ref, and these refs can also be used to identify a commit. Some of the special refs in GIT are HEAD, FETCH_HEAD and MERGE_HEAD.

 


81. When we run git branch, how does GIT know the SHA-1 of the last commit?

GIT uses the reference named HEAD for this purpose. The HEAD file in GIT is a symbolic reference to the current branch we are working on.

A symbolic reference is not a normal reference that contains a SHA-1 value. A symbolic reference contains a pointer to another reference.


When we open the HEAD file we see:


$ cat .git/HEAD

ref: refs/heads/master

If we run git checkout branchA, Git updates the file to look like this:


$ cat .git/HEAD

ref: refs/heads/branchA

 


82. What are the different types of Tags you can create in GIT?

In GIT, we can create two types of Tags.


Lightweight Tag: A lightweight tag is a reference that never moves. We can make a lightweight tag by running a command similar to following:


$ git update-ref refs/tags/v1.0 dad0dab538c970e37ea1e769cbbde608743bc96d

Annotated Tag: An annotated tag is a more complex object in GIT. When we create an annotated tag, GIT creates a tag object and writes a reference that points to it rather than directly to the commit.

We can create an annotated tag as follows:


$ git tag -a v1.1 1d410eabc13591cb07496601ebc7c059dd55bfe9 -m 'test tag'

 


83. How can we rename a remote repository?

We can use the git remote rename command to change the name of a remote repository. This changes the short name associated with the remote repository in our local configuration. The command looks as follows:


git remote rename repoOldName repoNewName

 


84. Some people use git checkout and some use git co for checkout. How is that possible?

We can create aliases in GIT for commands by modifying the git configuration.


To use git co in place of git checkout, we can run the following command:


git config --global alias.co checkout

So the people using git co have made an alias for git checkout in their own environment.


 


85. How can we see the last commit on each of our branch in GIT?

When we run the git branch command, it lists all the branches in our local repository. To see the latest commit associated with each branch, we use the -v option.


Exact command for this is as follows:


git branch -v

 


It lists branches as:


issue75 83b576c fix issue

* master 7b96605 Merge branch 'issue75'

testing 972ac34 add dave to the developer list

 


86. Is origin a special branch in GIT?

No, origin is not a special branch in GIT.

The name origin, just like master, has no special meaning in GIT; it is simply a widely used convention.

Master is the default name for a starting branch when we run git init command.

Origin is the default name for a remote when we run git clone command.

If we run git clone -o myOrigin instead, then we will have myOrigin/master as our default remote branch.

 


87. How can we configure GIT to not ask for password every time?

When we use HTTPS URL to push, the GIT server asks for username and password for authentication. It prompts us on the terminal for this information.

If we don't want to type the username/password with every single push, we can set up a "credential cache".

The credentials are kept in memory for a few minutes. We can set it up by running:

git config --global credential.helper cache

 


88. What are the four major protocols used by GIT for data transfer?

GIT uses the following major protocols for data transfer:


Local

HTTP

Secure Shell (SSH)

Git

 


89. What is GIT protocol?

Git protocol is a mechanism for transferring data in GIT. It is served by a special daemon that comes pre-packaged with GIT and listens on a dedicated port (9418). It provides services similar to the SSH protocol.

On the plus side, it is a very fast network transfer protocol. On the downside, it does not support any authentication, so it is generally suitable only for read-only access to public repositories.

 


90. How can we work on a project where we do not have push access?

In case of projects where we do not have push access, we can just fork the repository. Forking is not a built-in GIT command but a feature of hosting services such as GitHub; it creates a personal copy of the repository in our own namespace. Once our work is done, we can create a pull request to merge our changes into the real project.
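A sketch of the resulting remote layout, using local paths in place of hosting-service URLs (the remote name upstream is a convention, not a requirement):

```shell
# Stand-in for the original project we cannot push to.
orig=$(mktemp -d)/project.git
git init -q --bare "$orig"
work=$(mktemp -d)
git clone -q "$orig" "$work/fork"    # our personal copy ("fork") becomes origin
cd "$work/fork"
git remote add upstream "$orig"      # also track the original project
git remote                           # lists: origin, upstream
```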


 


91. What is git grep?

GIT is shipped along with a grep command that allows us to search for a string or regular expression in any committed tree or the working directory.

By default, it works on the files in your current working directory.
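A minimal sketch (the file name and search string are illustrative assumptions):

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
printf 'TODO: fix parser\nall good here\n' > notes.txt
git add notes.txt
git commit -qm 'add notes'
git grep -n 'TODO'          # searches tracked files in the working directory
git grep -n 'TODO' HEAD     # searches the committed tree instead
```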

 


92. How can you reorder commits in GIT?

We can use the git rebase command to reorder commits in GIT. In interactive mode (git rebase -i), GIT opens a todo list of commits in which we can change their order.


 


93. How will you split a commit into multiple commits?

To split a commit, we use the git rebase command in interactive mode and mark the commit with the edit action. When the rebase stops at that commit, we reset it while keeping its changes in the working directory, and then stage and commit those changes as multiple separate commits before continuing the rebase.
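A sketch of splitting the most recent commit; since it is the tip, a plain git reset is enough, while an older commit would first need git rebase -i with the edit action (file names and messages are illustrative):

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo base > base.txt; git add base.txt; git commit -qm base
echo one > one.txt
echo two > two.txt
git add one.txt two.txt
git commit -qm 'both changes at once'
git reset -q HEAD~1               # undo the commit, keep the changes unstaged
git add one.txt; git commit -qm 'first part'
git add two.txt; git commit -qm 'second part'
git log --format=%s -2            # prints: second part, first part
```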


 


94. What is filter-branch in GIT?

In GIT, filter-branch is another option for rewriting history. It can scrub the entire history, which is useful when we have a large number of commits to rewrite.

It provides many options, such as removing all changes related to a specific file from history.

We can even rewrite the name and email recorded in the commit history by using filter-branch.
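A sketch that removes one file (an assumed name, secrets.txt) from every commit; FILTER_BRANCH_SQUELCH_WARNING merely silences the deprecation notice on newer GIT versions:

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo password > secrets.txt; git add secrets.txt; git commit -qm 'oops'
echo code > app.txt; git add app.txt; git commit -qm 'real work'
# Rewrite each commit's index, dropping secrets.txt wherever it appears.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --index-filter 'git rm --cached -q --ignore-unmatch secrets.txt' HEAD
git log --oneline -- secrets.txt   # no output: no commit touches the file now
```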

 


95. What are the three main trees maintained by GIT?

GIT maintains the following three trees:


HEAD: This is the last commit snapshot.

Index: This is the proposed next commit snapshot.

Working Directory: This is the sandbox for doing changes.

 


96. What are the three main steps of a GIT workflow?

GIT has the following three main steps in a simple workflow:


Checkout the project from HEAD to Working Directory.

Stage the files from Working Directory to Index.

Commit the changes from Index to HEAD.
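The three steps can be sketched on a throwaway repository (the file name is an illustrative assumption):

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo hello > greeting.txt
git add greeting.txt             # Working Directory -> Index (staging)
git commit -qm 'add greeting'    # Index -> HEAD
git status --short               # no output: all three trees now match
```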

 


97. What are ours and theirs merge options in GIT?

In GIT, we get two simple options for resolving merge conflicts: ours and theirs.

These options tell the GIT which side to favor in merge conflicts.

For ours, we run a command like git merge -Xours branchA. As the name suggests, the changes in our branch are favored over the other branch during a merge conflict.

Similarly, with git merge -Xtheirs branchA, the changes from the other branch are favored whenever a conflict occurs.

 


98. How can we ignore merge conflicts due to Whitespace?

GIT provides an option ignore-space-change in git merge command to ignore the conflicts related to whitespaces.


The command to do so is as follows:


git merge -Xignore-space-change whitespace

 


99. What is git blame?

In GIT, git blame is a very good option for finding the person who changed a specific line. When we call git blame on a file, it displays, for each line, the commit and the name of the person responsible for the change.


Following is a sample:


$ git blame -L 12,19 HelloWorld.java

^1822fe2 (Dave Adams 2016-03-15 10:31:28 -0700 12) public class HelloWorld {

^1822fe2 (Dave Adams 2016-03-15 10:31:28 -0700 13)

^1822fe2 (Dave Adams 2016-03-15 10:31:28 -0700 14) public static void main(String[] args) {

af6560e4 (Dave Adams 2016-03-17 21:52:20 -0700 16) // Prints "Hello, World" to the terminal window.

a9eaf55d (Dave Adams 2016-04-06 10:15:08 -0700 17) System.out.println("Hello, World");

af6560e4 (Dave Adams 2016-03-17 21:52:20 -0700 18) }

af6560e4 (Dave Adams 2016-03-17 21:52:20 -0700 19) }

 


100. What is a submodule in GIT?

In GIT, we can create submodules inside a repository by using the git submodule command.

By using the submodule command, we can keep one Git repository as a subdirectory of another Git repository.

It allows us to keep the commits to the submodule separate from the commits to the main Git repository.
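A sketch using one local repository as the submodule of another (the paths are illustrative; newer GIT versions require protocol.file.allow for file-based submodule URLs):

```shell
# A stand-in "library" repository to be embedded as a submodule.
lib=$(mktemp -d)/lib
git init -q "$lib"
git -C "$lib" config user.email dev@example.com
git -C "$lib" config user.name Dev
git -C "$lib" commit -qm 'library init' --allow-empty
# The main repository that embeds it.
app=$(mktemp -d)/app
git init -q "$app"; cd "$app"
git config user.email dev@example.com
git config user.name Dev
git -c protocol.file.allow=always submodule add "$lib" vendor/lib
git commit -qm 'add submodule'   # records the submodule's commit, not its files
cat .gitmodules                  # the submodule's path and URL live here
```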