How To Install Docker on CentOS 7 (Headless)



Docker is an application that makes it simple and easy to run application processes in a container. Containers are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system.

Docker Image

An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

A container is a runtime instance of an image. By default it runs completely isolated from the host environment, yet its processes run natively on the host machine’s kernel. Because the image bundles all of its dependencies, nothing extra needs to be installed on the host system and there is no configuration entanglement, so you can run a containerized app anywhere.

I was not impressed with the install instructions on the Docker website for a Linux command-line installation.

Here is the simple set of instructions that I followed to install Docker on my CentOS 7 machine and get it up and running in no time!



System Requirements: 64-bit CentOS 7



Run this command to add the official Docker repository, download the latest version of Docker, and install it:

curl -fsSL | sh

A snippet of the output:

# Executing docker install script, commit: fc04d2c
+ sudo -E sh -c 'yum install -y -q yum-utils'
Package yum-utils-1.1.31-42.el7.noarch already installed and latest version
+ sudo -E sh -c 'yum-config-manager --add-repo'
Loaded plugins: fastestmirror
adding repo from:
grabbing file to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
+ '[' edge '!=' stable ']'
+ sudo -E sh -c 'yum-config-manager --enable docker-ce-edge'
Loaded plugins: fastestmirror

After installation has completed, start the Docker daemon:

sudo systemctl start docker

Verify that it’s running:

sudo systemctl status docker


● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2018-03-06 15:36:00 PST; 6s ago
Main PID: 2513 (dockerd)
Memory: 19.6M
CGroup: /system.slice/docker.service
├─2513 /usr/bin/dockerd
└─2517 docker-containerd --config /var/run/docker/containerd/containerd.toml

Mar 06 15:35:59 madhuj dockerd[2513]: time="2018-03-06T15:35:59-08:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
Mar 06 15:35:59 madhuj dockerd[2513]: time="2018-03-06T15:35:59-08:00" level=info msg="containerd successfully booted in 0.015250s" module=containerd
Mar 06 15:36:00 madhuj dockerd[2513]: time="2018-03-06T15:36:00.034436419-08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Mar 06 15:36:00 madhuj dockerd[2513]: time="2018-03-06T15:36:00.036682427-08:00" level=info msg="Loading containers: start."
Mar 06 15:36:00 madhuj dockerd[2513]: time="2018-03-06T15:36:00.373054005-08:00" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon...d IP address"
Mar 06 15:36:00 madhuj dockerd[2513]: time="2018-03-06T15:36:00.624304025-08:00" level=info msg="Loading containers: done."

Lastly, make sure it starts at every server reboot:

sudo systemctl enable docker

Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/docker.service.

You can log in with your Docker Hub credentials:

$ sudo docker login

Try running the hello-world Docker image:

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:083de497cff944f969d8499ab94f07134c50bcf5e6b9559b27182d3fa80ce3f7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.


Adding SSH keys to Git

SSH keys serve as a means of identifying yourself to an SSH server using public-key cryptography and challenge-response authentication. One of the advantages of this method over traditional password authentication is that you can be authenticated by the server without ever having to send your password over the network.

I run my tests on multiple platforms several times in a day and having SSH keys set up saves me lot of time while cloning a repo or pushing my changes. You need to set up SSH keys for every machine.

Linux SSH keys set up

Generate SSH keys using terminal command prompt:

[qa@QA-Centos6-32bit QAtests]$ ssh-keygen -t rsa -b 4096 -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/qa/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/qa/.ssh/id_rsa.
Your public key has been saved in /home/qa/.ssh/
The key fingerprint is:
The key's randomart image is:
+--[ RSA 4096]----+
| ... . |
| .E..o o |
| = = + |
| . B O |
| + S o |
| + * |
| + . |
| . . |
| |

Copy the key:

[qa@QA-Centos6-32bit QAtests]$ cat /home/qa/.ssh/

Add the key to your GitHub account

Go to Settings -> SSH and GPG keys



Click on New SSH key and paste in the public key you copied


Voila! No more username and password prompts:

[qa@QA-Centos6-32bit QAtests]$ git clone
Initialized empty Git repository in /home/qa/QAtests/QA/.git/
The authenticity of host ' (' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ',' (RSA) to the list of known hosts.
remote: Counting objects: 1684, done.
remote: Compressing objects: 100% (126/126), done.
remote: Total 1684 (delta 170), reused 190 (delta 120), pack-reused 1438
Receiving objects: 100% (1684/1684), 7.82 MiB | 1.66 MiB/s, done.
Resolving deltas: 100% (1054/1054), done.

Collect hardware info about Linux OS

While using AWS instances I created a few days ago, I wanted to collect information about the hardware so that I could execute the right tests for each platform.

To view information about your CPU, use the lscpu command. It shows details of your CPU architecture, such as the number of CPUs, cores, the CPU family and model, caches, and threads, gathered from sysfs and /proc/cpuinfo.

[ec2-user@ip-172-30-0-103 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 1
On-line CPU(s) list: 0
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 2400.061
BogoMIPS: 4800.13
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0

The Architecture field shows it is a 64-bit platform.
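Since the whole point of collecting this info is to pick the right tests for the platform, a small Python sketch (my own, illustrative only) can turn lscpu-style "Key: value" output into a dictionary a test harness can branch on. The sample text below is a trimmed copy of the output above; on a real box you could feed it the result of running lscpu instead.

```python
def parse_lscpu(text):
    """Turn lscpu's 'Key: value' lines into a dictionary."""
    info = {}
    for line in text.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            info[key.strip()] = value.strip()
    return info

# Trimmed sample of the lscpu output shown above.
sample = """\
Architecture: x86_64
CPU(s): 1
Model name: Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
"""

info = parse_lscpu(sample)
print(info['Architecture'])   # -> x86_64
is_64bit = info['Architecture'] == 'x86_64'
```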

Python: Add files to path

Here is an example of how to access methods from a file located in another directory.

Let's say you are working in dir A with sub-dirs B and C. If you are executing tests from sub-dir B and would like to access methods from sub-dir C, the following commands helped me get the path in order:

import sys, os

# Get the current path of dir B
current_dir_path = os.path.dirname(os.path.realpath(__file__))

# Get the path of dir A
levelup_dir_path = os.path.dirname(current_dir_path)

# Create the path where your helper file is located
helper_path = os.path.join(levelup_dir_path, 'C', 'test_helper_folder')

# Append the helper path to sys.path
sys.path.append(helper_path)

# Access the methods from the test helper file
from test_helper_file import methodA
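To see the technique end to end, here is a self-contained sketch you can actually run. It builds a throwaway A/C layout in a temporary directory (all directory, module, and method names here are made up for illustration) and then does the same sys.path.append-and-import dance:

```python
import os
import sys
import tempfile

# Build a throwaway layout: a temp dir plays the role of dir A,
# with C/test_helper_folder underneath it (names are illustrative).
root = tempfile.mkdtemp()
helper_dir = os.path.join(root, 'C', 'test_helper_folder')
os.makedirs(helper_dir)

# A tiny helper module standing in for the real test helper file.
with open(os.path.join(helper_dir, 'test_helper_file.py'), 'w') as f:
    f.write("def methodA():\n    return 'hello from C'\n")

# Knowing only the path of dir A, join down to the helper folder
# and append it to sys.path, exactly as in the snippet above.
helper_path = os.path.join(root, 'C', 'test_helper_folder')
sys.path.append(helper_path)

from test_helper_file import methodA
print(methodA())   # -> hello from C
```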

How to create Pull Requests on GitHub?

As an Automation Engineer, you might be using a source/version control system (VCS) for your automated tests. Every company has its own set of tools, and the VCS is one of the most important ones a team picks. There are several popular options, such as Perforce, CVS, SVN, and Git. Every VCS has its own pros and cons – it really depends on what works for your team. Over my last couple of jobs, Git has been gaining popularity, as it is based on a distributed, decentralized model that works for a lot of companies:

  1. Git is faster than centralized VCSs such as Perforce: you can have the entire project history at hand in seconds.
  2. It is easy to keep track of merges: with Git, the result of a merge is actually a new commit, which knows what its ancestors are.
  3. It is easy to manage disk space with Git if you are working with a huge source tree. You can create 100 branches, work with one, and get rid of the others. With Perforce, every branch is a full copy on your disk.
  4. It is easy to cherry-pick changes from other team members into your own branch.

What is a Pull Request?

From Github: “Pull requests let you tell others about changes you’ve pushed to a GitHub repository. Once a pull request is sent, interested parties can review the set of changes, discuss potential modifications, and even push follow-up commits if necessary.”

PRs are commonly used by teams sharing a single repository and using branches to develop features. Team members use PRs to manage changes from contributors as they are useful in providing a way to notify repo contributors about changes one has made and in initiating code review about a set of changes before being merged into the main branch.

Creating a Pull Request

One of the workflows is creating a PR from a branch within a repo.

Get the latest code:

git pull origin master

Create a branch, commit your changes to it, and push it to GitHub:

git checkout -b mybranch
git add .
git commit -m "Describe your change"
git push origin mybranch

Go to GitHub and create a pull request


Select your branch



Select the reviewer and add comments


Mount a drive on Linux

To mount a drive on Linux (CentOS), use the following commands:

Edit the fstab file. A naive attempt will fail:

$ sudo cat >> /etc/fstab
-bash: /etc/fstab: Permission denied

The redirection is performed by your own (non-root) shell before sudo ever runs, so it is denied. Run the whole command inside a root shell instead:

sudo bash -c 'cat >> /etc/fstab'

Then add entries like the following:

# /etc/fstab
# Created by anaconda on Tue Jun 13 10:27:25 2017
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/mapper/cl-home /home xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
## other NAS entries ## /mnt/tmp nfs defaults 0 0

Press Ctrl-D to end the input and save the file
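Each fstab entry is a whitespace-separated, six-field line: device, mount point, filesystem type, options, dump flag, and fsck pass number. As a quick sanity check before running mount -a, a small Python sketch (my own, illustrative only) can split the entries into those fields:

```python
def parse_fstab(text):
    """Split non-comment fstab lines into their six whitespace-separated
    fields: device, mount point, fs type, options, dump, fsck pass."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        fields = line.split()
        if len(fields) == 6:
            entries.append(dict(zip(
                ['device', 'mountpoint', 'fstype', 'options', 'dump', 'passno'],
                fields)))
    return entries

# Sample based on the entries shown above.
sample = """\
# /etc/fstab
/dev/mapper/cl-home /home xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
"""

for e in parse_fstab(sample):
    print(e['device'], '->', e['mountpoint'], '(', e['fstype'], ')')
```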

Make sure you have created the directories that the drives will be mapped to (e.g. /mnt/tmp).

From the root folder, run the following command:

$ sudo mount -a

If you get the following error:

mount: wrong fs type, bad option, bad superblock on,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount. helper program)

You are probably missing the nfs-utils package. To get it, try

$ sudo yum install nfs-utils

Then run the sudo mount -a command again.

Components of a good bug report

A good bug report can improve the overall communication of an Agile process. For example, a good bug report can help:

    • Managers triage bugs faster
    • Developers reproduce bugs faster
    • QA verify fixed bugs faster
    • Business analysts provide bug/feature clarifications faster
    • Technical support update customers about open bugs faster

Submitting a good report is quite tedious. It takes discipline, experience, and a solid checklist to write a good bug report every time a bug is found. Here is a quick checklist of what a good bug report may contain:

    • A summary/header
    • A clear description
    • Step by step recreation instructions
    • What error the steps produced
    • What were the actual results
    • Screen captures
    • Error and debugging logs
    • Setting a priority
    • Setting a severity
    • Assigning the issue to the proper person or group
    • The area of the application/system where the issue was found.
    • The type of issue found
    • Linking to other relevant issues

And the list can go on depending on unique requirements of the company or application under test.
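One way to keep reports consistent across a team is to encode the checklist as a small structure with a completeness check. Here is a sketch in Python; the field names are mine for illustration, not any bug tracker's API:

```python
# Illustrative only: field names are mine, not any bug tracker's API.
REQUIRED_FIELDS = ['summary', 'description', 'steps_to_reproduce',
                   'actual_result', 'expected_result', 'priority', 'assignee']

def missing_fields(report):
    """Return the checklist items a bug report dict has left empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    'summary': 'Login button unresponsive on Firefox',
    'description': 'Clicking Login does nothing; no network call is made.',
    'steps_to_reproduce': '1. Open /login 2. Enter credentials 3. Click Login',
    'actual_result': 'Nothing happens',
    'expected_result': 'User is redirected to the dashboard',
}

print(missing_fields(report))   # -> ['priority', 'assignee']
```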

Author – Madhu Jain