How updates are rolled out in production with zero downtime

In a production or service-based company, how quickly you can roll out new services, products, or updates matters. Fast, reliable rollouts signal that your services are efficient and up to date with today's tech.

Rolling out updates was not always as easy as it is today. Earlier, companies had to restart the whole web server before changes became visible, but now we have more efficient DevOps ways to do things. Let us see how.

So, here is what we are going to set up:

  1. Create a container image that has Linux and the other basic configuration required to run a Jenkins slave (in this example we need kubectl to be configured).
  2. When we launch a job, it should automatically start on a slave based on the label provided, using the dynamic-slave approach.
  3. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.
  4. Job1 : Pull the GitHub repo automatically when a developer pushes to GitHub, and perform the following operations:
    1. Dynamically create a new image for the application and copy the application code into that Docker image.
    2. Push that image to Docker Hub (public repository).
      ( The GitHub repo contains the application code and the Dockerfile to create the new image. )
  5. Job2 ( should run on the dynamic Jenkins slave configured with the Kubernetes kubectl command ): Launch the application on top of the Kubernetes cluster, performing the following operations:
    1. If launching for the first time, create a deployment of the pod using the image created in the previous job. If the deployment already exists, roll out the updates.
    2. Expose the service if it is not already exposed; otherwise roll out the updates.

We will be using the ramped deployment strategy, sometimes also known as a slow or rolling deployment.

My Environment

I have 3 VMs:

  • A RHEL 8 clone with Jenkins and Docker installed, used as the Jenkins server and Docker client.
  • A RHEL 8 VM with Docker installed, used as the Docker server (host).
  • A Minikube VM running the Kubernetes cluster.

To do this task we first have to set up our Docker host and client.

Docker Host and Client setup

Start the Docker service in one of the VMs. In my case RHEL 8 is the Docker host and the RHEL 8 clone is the client.

Start the Docker service in your VM and edit the service file as shown below.

Above, we are adding a TCP socket so the Docker daemon accepts traffic on the host IP at port 4243.
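The edit in the screenshot usually amounts to adding a TCP listener to the ExecStart line of the docker.service unit and restarting the daemon; a sketch (the exact ExecStart options vary by Docker version and install):

# /usr/lib/systemd/system/docker.service -- edit the ExecStart line to also listen on TCP, e.g.
# ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243

systemctl daemon-reload
systemctl restart docker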

Now, on the Docker client, set the Docker host IP and export it. To make this permanent, add this line to /root/.bashrc or /etc/bashrc:

export DOCKER_HOST=tcp://<docker_host_ip>:4243

Also make sure the Docker service is stopped on the client if Docker is installed there as well.

Now, if we run any docker command from the RHEL 8 clone (client), it will execute on RHEL 8 (host).

Dynamic jenkins cluster setup

We are setting up a dynamic Jenkins cluster using Docker by creating a custom image in which kubectl is configured, so that it can be used as a slave.

As soon as a job tagged with this slave label runs, a new Docker slave container is created on the Docker host, and it is deleted automatically after the job completes.

FROM centos

ENV pass jenkins
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

RUN chmod +x ./kubectl
RUN cp ./kubectl /usr/local/bin/kubectl
RUN cp ./kubectl /usr/bin/
RUN mkdir -p /var/run/sshd

# kubectl needs the cluster certificates and the kubeconfig to talk to Kubernetes
COPY ca.crt /root/
COPY client.key /root/
COPY client.crt /root/
RUN mkdir /root/.kube/
COPY config /root/.kube/

RUN yum install java-1.8.0-openjdk.x86_64 -y
RUN yum install openssh-server -y
RUN ssh-keygen -A
RUN yum install sudo -y
RUN yum install net-tools -y
RUN yum install git -y

RUN echo root:redhat | chpasswd
EXPOSE 22
# run sshd in the foreground so the container keeps running
CMD ["/usr/sbin/sshd", "-D"]

Now build this Dockerfile:

docker build -t <imagename>:<tag> .

Start the jenkins service …

systemctl start jenkins

Open the Jenkins dashboard, navigate to Manage Jenkins, and then follow the images below.

With the configuration shown in the images above, the dynamic cluster setup is complete.

To use kubectl on another system you have to transfer the kubeconfig and certificates.
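For reference, the kubeconfig copied into /root/.kube/config above typically looks something like this; the server address and the cluster/user/context names below are placeholders rather than my actual values, and only the certificate paths match the Dockerfile:

cat > /root/.kube/config <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: mycluster
  cluster:
    server: https://<minikube_ip>:8443
    certificate-authority: /root/ca.crt
users:
- name: myuser
  user:
    client-certificate: /root/client.crt
    client-key: /root/client.key
contexts:
- name: mycontext
  context:
    cluster: mycluster
    user: myuser
current-context: mycontext
EOF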

To learn about and set up Minikube, check: https://kubernetes.io/docs/tasks/tools/install-minikube/

To set up kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Jenkins Jobs

Now it's time to create the job chain of Job 1 and Job 2.

JOB_1

This job pulls the GitHub repo, which contains our website code as well as the Dockerfile for the custom web server, builds the image from that Dockerfile, and pushes the image to Docker Hub.

Here, we can use Poll SCM in Jenkins to monitor updates on GitHub, or we can create GitHub webhooks. For using a webhook with a VM you can use ngrok.
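The build step of Job1 shown in the screenshots amounts to something like the shell commands below; the tag and the Docker Hub credentials are placeholders, while the image name is the one referenced later in webdeploy.yml:

# run inside the Jenkins workspace where the repo was cloned
docker build -t mykgod/httpd-php-server:latest .
docker login -u <dockerhub_user> -p <dockerhub_password>
docker push mykgod/httpd-php-server:latest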

JOB_2

This job pulls the Kubernetes deployment code from the GitHub repo, deploys it on the Kubernetes cluster, and rolls out updates if any are available.

code under build:

cp -rf webdeploy.yml /root/
if sudo kubectl get deploy | grep myweb-deploy
then
echo "deployment exists ... rolling out updates"
sudo kubectl rollout restart deployment/myweb-deploy
sudo kubectl rollout status deployment/myweb-deploy
sudo kubectl get service myweb-deploy
else
sudo kubectl create -f /root/webdeploy.yml
sudo kubectl expose deployment myweb-deploy --port=80 --type=NodePort
sudo kubectl get service myweb-deploy
fi

Note: our webdeploy.yml contains the following Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata: 
  name: myweb-deploy
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      env: production
  strategy: 
    type: RollingUpdate
  template: 
    metadata: 
      name: myweb-prod
      labels: 
        env: production
    spec: 
      containers: 
      - image: mykgod/httpd-php-server
        name: myweb-con
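A few kubectl commands that are handy for watching or reversing the ramped rollout of this deployment:

kubectl rollout status deployment/myweb-deploy    # watch old pods drain while new ones come up
kubectl rollout history deployment/myweb-deploy   # list previous revisions of the deployment
kubectl rollout undo deployment/myweb-deploy      # roll back if the new image misbehaves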

And our build pipeline looks like:

Before rollout:

After rollout:

Conclusion

Here, we successfully rolled out new updates for a website running on a Kubernetes cluster with a custom-made web server image, and ran our jobs on a dynamic Jenkins cluster.

Thanks! Comment down any suggestion or queries.

EBS and Security Groups in AWS

Cloud Trail

CloudTrail is a management and governance service that keeps the logs/history of whatever we do in AWS; these logs are known as events. It creates a trail for recording these events, and with these events we can do lots of things.

We can find more details under view event.

To look up events using the AWS CLI, open a command prompt and run:

aws cloudtrail lookup-events
# to pick only 1st event we can use --query
aws cloudtrail lookup-events --query "Events[0]"

EBS

EBS stands for elastic block storage.

We need storage devices such as pen drives, hard disks, etc. to store data. These devices are also known as block storage devices, and their most common use case is creating partitions or simply installing an OS on them. In other words, storage devices in which we can create partitions are referred to as block storage devices.

Apart from this, the places where we normally store data, where we already get formatted storage, are known as object storage, like Google Drive, Dropbox, etc.

There are 3 kinds of storage system :

  1. Object storage/system
  2. Block storage/system
  3. File system

In AWS we have object storage service known as S3 (simple storage service) and in OpenStack we have service for object storage as Swift.

Ephemeral v/s Persistent Storage

Hard disks and pen drives are similar devices: we use both to save data and can create partitions on both. But what if I say that your pen drive is a persistent storage device and your hard disk is, in some way, a temporary one? Let's see what I mean by that…

If I have an operating system on my hard disk along with my data, say some movies and shows, and somehow my OS gets flushed or terminated, then I won't be able to see that data after installing the same OS again; this signifies that my hard disk is temporary storage. In contrast, if I had my data on a pen drive, it would still be with me after the OS was terminated. In this case my hard disk is temporary storage, which in the technical world is called ephemeral storage, and the pen drive can be called persistent storage.

EBS in ec2

So, when creating an EC2 instance, by default a hard disk from EBS is attached to the instance.

This default storage is called the root storage/device. It behaves like the instance's hard disk, which means it is ephemeral in nature (it goes away when the instance is terminated), but we can add a persistent volume.

Creating an EBS

Before we start creating an EBS volume, let's talk about security groups.

Security Groups

When creating an EC2 instance, the EC2 service adds an additional security program/service to the instance, which works like the firewall on our PC, known as a Security Group (SG).

Suppose I have a website running on an EC2 instance and a client wants to connect to it. First, that client has to go through the firewall/SG to reach the website. All traffic that comes from outside the server environment, like the client, is known as inbound/ingress traffic, and all traffic going out from the environment is known as outbound/egress traffic.

Security groups contain rules which control the access of traffic; that means we can create our own rules in the SG to control which traffic to allow.

By default, when launching an EC2 instance, the SG has an SSH inbound rule configured so we can connect to the instance through any shell.

If I SSH from my command prompt to this IP it will connect, but if I ping this IP it will not respond because we haven't specified a rule for that. Now let's add a rule for ICMP, as ping uses the ICMP protocol.
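The screenshots show the rule being added from the console; the same thing can be done from the AWS CLI (a sketch, with a placeholder security-group ID):

aws ec2 authorize-security-group-ingress --group-id <sg-id> \
    --ip-permissions 'IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'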

Now, if I try to ping the instance IP from my system…

Now lets ssh into our instance and switch user to root.

Run :

fdisk -l 

to check hard disk attached to instance and partitions available.

Now that we know this volume is ephemeral in nature, let's add a persistent one to store our important data.

Remember to create the volume in the same availability zone (within the same region) where your instance is running.

Now attach your volume.

Now if you see again with fdisk -l a new device will appear.

fdisk -l

Now let’s create a partition…

fdisk /dev/xvdf
# to create a partition
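fdisk is interactive; inside it, a typical key sequence to create a single primary partition is:

# n        -> new partition
# p        -> primary
# 1        -> partition number
# <Enter>  -> accept the default first sector
# <Enter>  -> accept the default last sector (use the whole volume)
# w        -> write the partition table and exit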

Now format that partition so we can use it:

 mkfs.ext4 /dev/xvdf1

Let's install the Apache web server in our RHEL instance with:

yum install httpd -y

and create an index.html in it:

cd /var/www/html
touch index.html
echo "hi this is index page" >> index.html

Now let's mount our persistent volume to /var/www/html. This directory is the default document root of the httpd web server, where we keep our site code.

mount /dev/xvdf1 /var/www/html

You can check mounted directory by …

df -hT
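Note that a mount done like this does not survive a reboot; if you want it to, you can add an entry to /etc/fstab (a sketch using the device and mount point from above):

echo '/dev/xvdf1  /var/www/html  ext4  defaults  0 0' >> /etc/fstab
mount -a    # re-reads /etc/fstab and reports errors in the new entry, if any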

We are almost set! We know how to add rules to a security group; now we need to add an HTTP rule so that we can access our website.
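Again, the rule is added from the console in the screenshots; the CLI equivalent would be something like (placeholder group ID):

aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0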

Now, try to access your web page through the instance IP. If you face any issue, run the following to put SELinux in permissive mode:

setenforce 0

Conclusion

We learnt what security groups are, the basics of CloudTrail, how to add rules to security groups, how EBS works, and what temporary and persistent storage are, and did a small hands-on launching a web server on a RHEL instance. We added persistent storage to /var/www/html, so even after the instance is terminated the data will not be lost.

Thanks! Drop a review.

AWS CLI : intro

The AWS command line interface is a tool to manage AWS services through the command line.

https://aws.amazon.com/cli/ : aws cli download

We commonly use AWS with an IAM user. After creating the IAM user, download its credentials, which come in .csv format.

After downloading and installing the AWS CLI, open a command prompt in Windows or a shell in Linux.

To check whether the AWS CLI is installed or not, run:

aws --version

To configure the user in the AWS CLI, run:

aws configure

Enter the downloaded user credentials at the prompts of the above command to set up that user in the AWS CLI for managing services.
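The prompts look roughly like this (the key values and the region shown are placeholders):

# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: ap-south-1
# Default output format [None]: json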

To check all metadata of ec2 instances we have:

aws ec2 describe-instances

To see volumes info for ec2 instances we have:

aws ec2 describe-volumes

For key related operations we have :

aws ec2 describe-key-pairs
aws ec2 describe-key-pairs --query KeyPairs
aws ec2 describe-key-pairs --query KeyPairs[0] #for 1st key pair
aws ec2 describe-key-pairs --query KeyPairs[0]."KeyName"
#to extract key name
aws ec2 create-key-pair --help

Instance Related Queries to pull out some specific info :

aws ec2 describe-instances
aws ec2 describe-instances --query Reservations[0].Instances[0]."PublicIpAddress"
aws ec2 describe-instances --query Reservations[0].Instances[1].["PublicIpAddress","KeyName"]
aws ec2 describe-instances --query Reservations[1].Instances[0].["PublicIpAddress","KeyName"]
aws ec2 describe-instances --query Reservations[*].Instances[0].["PublicIpAddress","KeyName","InstanceId","Tags"]
aws ec2 describe-instances --query Reservations[*].Instances[0].["PublicIpAddress","KeyName","InstanceId",Tags[*].Value]

ML Integrated Operations : Project

Machine Learning/Deep Learning is the most used technology of today's era, but many machine learning and deep learning models are never upgraded to their fullest and remain second class. This happens because of manual human work, like changing parameters and adding layers: we can't predict how much of what to do, but machines and programs can. To overcome this, we have to automate the manual work by making our hyper-parameters and other small pieces dynamic using some operational tools.

This practice of putting an ML/DL model through continuous integration and continuous delivery is, in layman's terms, known as MLOps.

Today I have a task to see the same in action integrated with metrics monitoring.

Task

  • Create a container image that has Python3 and Keras or NumPy installed, using a Dockerfile.
  • When we launch this image, it should automatically start training the model in the container.
  • Create a job chain of job1, job2, job3, job4 and job5 using the Build Pipeline plugin in Jenkins.
  • Pull the GitHub repo automatically when a developer pushes to GitHub.
  • By looking at the code or program file, Jenkins should automatically start the container image with the respective machine learning software/interpreter installed, deploy the code and start training (e.g. if the code uses a CNN, then Jenkins should start the container that already has all the software required for CNN processing).
  • Train your model and predict the accuracy or metrics.
  • If the accuracy metric is less than 80%, then tweak the machine learning model architecture.
  • Retrain the model, or notify that the best model has been created.
  • If the container where the app is running fails for any reason, this job should automatically start the container again from where the last trained model left off.

Dockerfile

# docker build -t image_name:tag .

We can build our Dockerfile with the above command.
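The Dockerfile itself is shown in the image above. Purely as a rough sketch assembled from details mentioned later in this post (the python base image and the bash entrypoint are described further below, the keras:3 tag is the one used by the job code, and the exact package list is my assumption):

cat > Dockerfile <<'EOF'
FROM python
RUN pip3 install numpy pandas keras tensorflow
# the model code is mounted into /root/model/ at run time by the Jenkins jobs
# python:latest normally drops into the python3 interpreter, so switch the
# entrypoint to bash (see the note about making training permanent below)
ENTRYPOINT ["/bin/bash"]
EOF
docker build -t keras:3 .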

JOB 1

This job will pull the GitHub repo and copy it to a directory which will be mounted as a volume into our container.

I have used Poll SCM, which makes Jenkins check the GitHub repo for changes; as soon as new changes take place, it pulls them.

Instead of Poll SCM we can use GitHub webhooks, with tunnelling via ngrok, as shown in the previous blog.

JOB 2

This job launches the desired environment by looking at the model code.

Code under Build

if sudo grep -iE 'vgg|imagenet' /root/MLops/model/model.py
then
    echo "opening keras environment"
    if sudo docker ps -a | grep testkeras
    then
        sudo docker rm -f testkeras
        sudo docker run -dit --name testkeras -v /root/MLops/model/:/root/model/ keras:3
        sudo docker exec testkeras python3 /root/model/model.py
    else
        sudo docker run -dit --name testkeras -v /root/MLops/model/:/root/model/ keras:3
        sudo docker exec testkeras python3 /root/model/model.py
    fi
else
    echo "opening ML environment"
    if sudo docker ps -a | grep testml
    then
        sudo docker rm -f testml
        sudo docker run -dit --name testml -v /root/MLops/model/:/root/model/ ml
        sudo docker exec testml python3 /root/model/model.py
    else
        sudo docker run -dit --name testml -v /root/MLops/model/:/root/model/ ml
        sudo docker exec testml python3 /root/model/model.py
    fi
fi

JOB 3

This job is the ace. It reads the accuracy and validation accuracy; if either is less than 80%, it tweaks the model using a Python script and some bash commands, as shown below.

Through this we add an additional dense layer to the model and change hyper-parameters like the number of epochs, batch size, neurons, and learning rate. After tweaking the model, it trains it and predicts the accuracy again.

This keeps happening until it achieves 80% accuracy or higher and the best model is created.

Code Under Build

#!/bin/bash
a=$(sudo docker exec testkeras cat /root/model/accuracy.txt)
va=$(sudo docker exec testkeras cat /root/model/val_accuracy.txt)
echo $a
echo $va
epochs=3
batch=40
count=0
n1=300
n2=200
# bash cannot compare floats, so awk does the 0.80 check; the loop is also capped at 20 iterations
while awk -v a="$a" -v v="$va" 'BEGIN{exit !(a < 0.80 || v < 0.80)}' && [ "$count" -le 20 ]
do
    ((epochs=epochs+2))
    ((count=count+1))
    ((batch=batch-3))
    ((n1=n1+5))
    ((n2=n2+10))
    sudo docker exec testkeras python3 /root/model/hypertuner.py
    sudo sed -i '/epochs_x=/c\epochs_x='$epochs /root/MLops/model/model.py
    sudo sed -i '/batch_size_x=/c\batch_size_x='$batch /root/MLops/model/model.py
    sudo sed -i '/n1=/c\n1='$n1 /root/MLops/model/model.py
    sudo sed -i '/n2=/c\n2='$n2 /root/MLops/model/model.py
    sudo docker exec testkeras python3 /root/model/model.py
    a=$(sudo docker exec testkeras cat /root/model/accuracy.txt)
    va=$(sudo docker exec testkeras cat /root/model/val_accuracy.txt)
done

The file hypertuner.py : https://github.com/mykg/ML-Ops-project/blob/master/hypertuner.py

JOB 4

This is a monitoring job that keeps watching the environment; if it fails for any reason, the job launches a new environment within seconds with the same config as before. And if for any reason a job fails, it sends an error mail to the engineer.
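The build step of such a monitoring job can be as small as checking whether the container is still up and exiting non-zero when it is not, with the failure trigger wired to re-run Job 2 (a sketch reusing the testkeras container name from Job 2):

if sudo docker ps | grep testkeras
then
    exit 0    # environment is healthy
else
    exit 1    # a failed build triggers Job 2 to relaunch the environment
fi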

If you are having trouble in sending email then visit : https://www.youtube.com/watch?v=DULs4Wq4xMg

Build Pipeline View

build pipeline

Making Model Service Permanent

You can make your model execution permanent, so that every time the container starts, model training begins. We can do that in the Dockerfile using CMD, but that only works when we already have the model at image-build time. So we can make it permanent by appending the execution command to the container's .bashrc:

docker exec testkeras bash -c "echo 'python3 /root/model/model.py' >> /root/.bashrc"

I used the python:latest image for building the Dockerfile; by default it starts the Python 3 interpreter, so the above method may not work. But if you are using some other image whose entrypoint is a bash shell, then you can go with the approach above.

We can change the entrypoint of the python:latest image from the Python 3 interpreter to bash, as I have done in the first image of the Dockerfile.

Metrics Monitoring

This is an addition.

Here, I am monitoring metrics using Prometheus, Grafana and MLflow.

Metrics monitoring is very important while analyzing stats as it gives very beautiful representation of data.

Prometheus

It is a famous metrics monitoring system and time series database.

To know more : https://prometheus.io/

Here, I am monitoring my localhost(rhel), docker daemon and prometheus.
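As a sketch, the scrape configuration for those three targets could look like this; it assumes node_exporter is running on port 9100 and the Docker daemon exposes metrics on port 9323 via its metrics-addr option (adjust the file path to wherever your prometheus.yml lives):

cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'rhel-node'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'docker-daemon'
    static_configs:
      - targets: ['localhost:9323']
EOF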

Grafana

Grafana is widely used with Prometheus because of its visuals and graph monitoring.

It is monitoring my localhost ram/cpu usage, docker daemon usage and prometheus.

To know more : https://grafana.com/

MLflow

MLflow offers wide range of APIs to monitor metrics and other experiments of ML/DL model.

To know more about mlflow go to: https://github.com/mlflow/mlflow/ and https://mlflow.org/docs/latest/quickstart.html
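Getting the tracking UI running is just a couple of commands (the port is the MLflow default):

pip3 install mlflow                     # install the tracking library and CLI
mlflow ui --host 0.0.0.0 --port 5000    # serve the tracking UI shown in the screenshots below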

accuracy metrics

Above image shows the accuracy metrics graph monitored by mlflow ui.

metrics

Above image has all the metrics in one place showing stats of accuracy, validation accuracy, loss, validation loss.

Conclusion

We have achieved a complete end-to-end automated CI/CD pipeline for a CNN/ML model, in which the model is trained through transfer learning on the pre-trained weights of VGG16.

With this setup, all one has to do is push changes to the model; the rest of the work, which includes training the model, predicting the accuracy, and tuning the hyper-parameters if the accuracy is found to be less than 80%, is handled automatically. The system makes the required changes to the model so that its accuracy improves. There is absolutely no human involvement needed!

Thank you for coming this far and comment down your views

Project : Jenkins + Docker + Dockerfile + GitHub

Task 2

Here, I have a task to solve :

  1. Create a container image that has Jenkins installed, using a Dockerfile.
  2. When we launch this image, it should automatically start the Jenkins service in the container.
  3. Create a job chain of job1, job2, job3 and job4 using the Build Pipeline plugin in Jenkins.
  4. Job1 : Pull the GitHub repo automatically when a developer pushes to GitHub.
  5. Job2 : By looking at the code or program file, Jenkins should automatically start a container from the image with the respective language interpreter installed and deploy the code ( e.g. if the code is PHP, then Jenkins should start the container that already has PHP installed ).
  6. Job3 : Test whether your app is working or not.
  7. Job4 : If the app is not working, send an email to the developer with the error messages.
  8. Create one extra job, job5, for monitoring: if the container where the app is running fails for any reason, this job should automatically start the container again.

Dockerfile

Dockerfile

Dockerfile is a way to create your own docker images.

In the image above I have used the centos:7 image and installed Jenkins in it.

  • FROM : from which image to build your custom image.
  • RUN : which commands to run while building your image
  • CMD : when you will launch your container through your custom image, this CMD block will run (this signifies what to execute after boot).

For more information about dockerfile visit : https://docs.docker.com/engine/reference/builder/
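The exact Dockerfile is in the image above; as a rough sketch of what a centos:7 + Jenkins image could look like (the repo/key URLs and the war path are the usual ones for the Jenkins RPM, but may need updating for newer releases):

cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install wget java-1.8.0-openjdk -y
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
RUN rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
RUN yum install jenkins -y
EXPOSE 8080
# running the war directly keeps JENKINS_HOME at /root/.jenkins, the path mounted from the host below
CMD ["java", "-jar", "/usr/lib/jenkins/jenkins.war"]
EOF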

Now lets build the image :

docker build -t jenkins:v5 .

Create a folder on the host to mount into the container. This folder will contain the files stored by Jenkins.

mkdir -p /root/jenk_cont/jenkins

Running the container

docker container run -it --name jenkins_server -p 8090:8080 -v /root/jenk_cont/jenkins/:/root/.jenkins/ jenkins:v5

We published port 8090:8080 (8090 on the host, 8080 in the container) so we can access the container from the outside world; Jenkins by default runs on 8080.

Copy the default admin password; we can also find that password in our mounted directory like this:

cat /root/jenk_cont/jenkins/secrets/initialAdminPassword

Accessing the jenkins container

Go to the browser of your Windows machine or VM (host) and paste your host IP with the port number.

Paste the password you copied.

Now your Jenkins container is ready to use. You can also change the password as shown below, and then log in again with your changed password.

The main problem is that we are inside a container, so when we build any job it will run inside the container. To get around this we will use SSH to run commands on the host system. Therefore we have to install the SSH plugin in Jenkins, as shown in the images below…

Setting UP SSH in jenkins

Go to Manage Jenkins -> Configure System -> under SSH remote hosts, add your host as shown below…

JOB : 1

Job 1 will pull the code as soon as a developer pushes it to GitHub.

To do this, first we will install the GitHub plugin in Jenkins:

github plugin

Now lets create our 1st job.

JOB : 2

This job builds the desired container by looking at the code. For example, if we have HTML code it will run an httpd container; if we have PHP code it will run a PHP container. We can configure Job 2 as shown in the images below…

The code under build is :

cd /root/jenk_cont/web-code/
x=$(ls | grep php)
if [[ $x == *.php ]]
then
   if docker ps | grep webapp
   then
     echo "already running"
   else
     docker run -dit -p 8082:80 --name webapp -v /root/jenk_cont/web-code/:/var/www/html/ vimal13/apache-webserver-php
   fi
else
   if docker ps | grep webapp
   then
     echo "already running"
   else
     docker run -dit -p 8082:80 --name webapp -v /root/jenk_cont/web-code/:/usr/local/apache2/htdocs httpd:latest
   fi
fi

JOB : 3 , 4

This job is made to test the code. If the code inside the container is working, there is no problem at all; but if any error occurs, an email should be sent to the developer with the error message.

I have combined Job 3 and Job 4 into a single job.

So, to do that, first we have to set up the email configuration as shown below: go to Manage Jenkins -> Configure System and scroll down…

Now, to configure Job 3…

In the image above we are checking the website code with:

status=$(curl -s -o /dev/null -w "%{http_code}" localhost:8082)
if [ "$status" -eq 200 ]; then exit 0; else exit 1; fi

If you are having trouble in sending email then visit : https://www.youtube.com/watch?v=DULs4Wq4xMg

I have also tested the email notification by intentionally making a mistake, and received the email shown below:

JOB : 5

This job will keep an eye on the container that holds our website/app. If the container crashes due to some failure, it will launch the same container again as soon as possible.

For this I am downloading a special plugin as shown below:

I will use this plugin so that if this job fails, it triggers Job 2.

Under build the code is :

if docker ps | grep webapp
then 
exit 0
else
exit 1
fi

Pipeline View

For this we have to install the Delivery Pipeline plugin or the Build Pipeline plugin.

Now, creating the view for that ….

And at last the pipeline will look like this:

Ask your doubts in comment section

Thank You !

Development Environment using GIT + JENKINS + DOCKER

Here I have created a dummy real-world development environment integrated with Git + GitHub + Jenkins + Docker.

I have created 3 jobs in Jenkins which serve 3 different tasks:

Job 1 : testing environment

This job serves the task of pulling the code from the GitHub test branch and running it in the testing environment.

The code under build is:

sudo cp * /root/test_env
if sudo docker ps | grep web_test
then
echo "already running"
else
sudo docker container run -dit -p 8082:80 -v /root/test_env/:/usr/local/apache2/htdocs/ --name web_test httpd 
fi

Job 2 : production environment

The code under build is:

sudo cp * /root/production_env/
if sudo docker ps | grep web_prod
then
echo "already running"
else
sudo docker container run -dit -p 8081:80 -v /root/production_env/:/usr/local/apache2/htdocs/ --name web_prod httpd 
fi

Explanation

Jobs 1 and 2 are similar; the difference is that Job 1 deals with the test environment and Job 2 deals with the production/main environment.

Under Source Code Management we entered our repo URL and selected the branch (test/master) we needed.

Then under Build Triggers we selected Poll SCM and gave the job a schedule to check the repo every minute, so that whenever a change happens in the repo the job runs automatically.

Moving on to Build, we executed a shell script/command which copies the pulled code from the GitHub test/master branch and deploys it into a Docker container (in our case the Docker container is our environment).

You may have noticed in Build that the ports are different. That is because we created a testing environment just like the production one, and to keep them separate we gave them different ports and container names.

Job 3 : branch merge and final push

Job 3 is a manually triggered task performed by the QAT (Quality Assurance Team) after verifying that everything is working properly.

When the QAT is confident, they run Job 3, whose task is to merge the test branch into the master branch and deploy it to the production environment.
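The build step for this merge job is usually just a few git commands run in the job's workspace (a sketch; the branch names follow the description above, and the push assumes credentials are configured for the job):

git checkout master
git merge test              # bring the tested changes into master
git push origin master      # the push lets Job 2's Poll SCM trigger pick up the change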

Demonstration

Here is a quick demonstration of the above task:

Demonstration

Here is the repo, though it is not of much use: https://github.com/mykg/test-prod-deploy.git

Drupal: `docker stack deploy` on a 3 node swarm cluster

Drupal is an open-source platform for web content management, for building websites, and for much more in web-related digital experiences.

Here, I am gonna show you how to deploy drupal service. Let’s dig in.

Stack!

I am not gonna go in depth here, because a quick overview will be fine. A stack is how you deploy a docker-compose file on a Docker swarm cluster as a service.

A stack manages all your containers and their replicas deployed as a service on the swarm cluster, and it also manages the networks and volumes defined in the compose/stack file. It supports the same YAML file format as docker-compose.yml, which makes using a stack feel familiar.

Here is my repository on GitHub: https://github.com/mykg/drupal-stack-deploy

Now let’s have a look at stack.yml file at github: https://github.com/mykg/drupal-stack-deploy/blob/master/stack.yml

Quick Explanation of stack.yml

  • version: '3' specifies the compose file version we want to use
  • services: under this block we define the services to deploy
  • dbos: is the name of the database service
  • image: which image you want to use
  • volumes (under services): which volume to attach to which directory of that container
  • environment: specifies the environment variables
  • deploy: holds the settings applied when the service is deployed on the swarm
  • replicas: the number of replicas of that container to create
  • restart: always keeps the container running
  • restart_policy: I have defined that if any container goes down, another is created on failure

Under drupal_os: everything is pretty much the same, except:

  • depends_on: here I have specified the service it depends on (the same thing the --link option does manually)
  • ports: here I exposed the ports so Drupal can be accessed from a web page
  • at the end we have specified the volumes to be created under the top-level volumes: key
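Putting those fields together, a minimal sketch of what such a stack.yml could look like is below; the service names, database name, credentials and port come from this post, while the image tags and replica counts are my assumptions:

cat > stack.yml <<'EOF'
version: '3'
services:
  dbos:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: redhat
      MYSQL_DATABASE: drupal_db
      MYSQL_USER: mayank
      MYSQL_PASSWORD: redhat
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
  drupal_os:
    image: drupal
    depends_on:
      - dbos
    ports:
      - "8082:80"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
volumes:
  db_data:
EOF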

If you want to look more into compose file formatting: https://docs.docker.com/compose/

For environments variables: https://docs.docker.com/compose/environment-variables/

To checkout more environment variables, volumes and database connections available for drupal: https://hub.docker.com/_/drupal

My Environment

  • On bare metal I have Windows 10 pro
  • And 3 RHEL 7.5 vms running on vmware
  • Docker daemon running on all 3 vms
  • 1 is the manager node and the other 2 are worker nodes

Swarm init

systemctl start docker
# this will start the docker daemon
systemctl enable docker
# this will enable the docker service; run these commands in all 3 vms
docker swarm init
# this will initialize the swarm and make that vm a manager node; a join-token command will appear,
# copy it, paste it into the other 2 vms and run it to make those 2 vms worker nodes
docker node ls
# to check the nodes available and their status

To check on more about swarm cluster visit: https://mynk.home.blog/2020/01/05/how-to-create-a-3-node-swarm-cluster-using-docker-machine/

Deployment

Clone that repository or copy the code from stack.yml: https://github.com/mykg/drupal-stack-deploy

docker stack deploy -c stack.yml DRUPAL
# this will deploy the stack on the swarm
docker service ls
# lists the services available; it will show 2 services, one for drupal and the other for mysql
docker service ps <service_name>
# this will show the service, its replicas, and on which node each one is running

To access Drupal visit http://localhost:8082 or http://ip-of-any-node:8082; it will look like this:

Under the database setup, select these values: we are running MySQL, the database we created is named drupal_db, and the host is the name of the database container/service, which is dbos.

Verifying things

We have almost done everything, so let's test something… we know that if any container fails, a replacement should be deployed again soon… let's try that…

docker container ls
# this will show the containers running on that vm
docker container rm -f <container-id>
# this will remove that container forcefully
docker service ls
# run this on the manager node and you will see that one service's replica count has dropped to 1

Now we removed a container from the service ….

docker container ls
# run this until you see that container again
docker service ls
# you can also check on the manager node that the replica count has been restored

This is the beauty of docker swarm cluster or any other container clustering tool.

If you want to check whether MySQL is storing the tables or not:

mysql -h <ip-of-mysql-container> -u mayank -predhat
# in my case the user name is mayank and the password is redhat; after connecting, choose your database and you can see your tables

I have also created the same kind of stack for WordPress; you can check that out here: https://github.com/mykg/wordpress-stack-deploy

Please give me review or you can leave a comment below.

Thanks.

Docker containers in load balancing

Load Balancing

Load balancing is a technique to distribute load/traffic across the servers holding the material requested by clients, to improve network efficiency and reduce the load on any single server. For more information you can go to https://www.nginx.com/resources/glossary/load-balancing/.

Building custom network

To create a load balancer for Docker containers we have to launch our containers into a custom network.

docker network create webnet
# it creates a network named webnet with default driver as bridge 
docker network ls
# to check available networks 

Pulling a custom web server

Now, you need a custom image of a web server. I used centos:7. You can simply pull my image as given below:

docker pull mykgod/centos7lb:test
# this command will pull the image from docker hub
docker images
# to check images 

The above web server contains an index.php file which shows the IP of the container that served the request. So repeated requests to the web server should show different IPs.

Launching containers

Now we will launch 2 web servers in our custom network webnet.

docker container run -dit --name web1 --network webnet --network-alias webpage mykgod/centos7lb:test
docker container run -dit --name web2 --network webnet --network-alias webpage mykgod/centos7lb:test
# we are giving different container name but same network alias
docker container run -dit --name client --network webnet mykgod/centos7lb:test
# this is client container which will send request

Note: here --network-alias is doing the load-balancing work (Docker's embedded DNS resolves the alias webpage across both containers). We also have to launch a client container because we cannot connect our host to webnet due to the public and private IP concept.

Now let's check whether our setup is working or not:

docker container exec client curl webpage
# run above command multiple times 

You can now see that every time you run this command you get a different IP, one per container. To check the IP of a web server container you can run:

docker container exec web1 ifconfig 
docker container exec web2 ifconfig 
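To see the alternation more clearly you can loop the request a few times (a small convenience sketch):

for i in {1..6}
do
    docker container exec client curl -s webpage
done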

Now we can say that containers are in load balancing.

I have also automated this work by ansible-playbook you may take a peek at this Github Repo : https://github.com/mykg/docker-containers-in-loadbalancing

How to create a 3 node swarm cluster using docker-machine

Swarm mode is a feature of Docker that was released in 2016. It is the solution to various Docker problems, like how do we automate the container life cycle? How do we scale containers easily? And a lot more. Swarm mode in Docker is an orchestrator for Docker containers.

Orchestration is the automated configuration, coordination, and management of computer systems and software. So basically, with swarm you can configure, manage, deploy and play with services made of Docker containers, and do the same with individual containers too. It gives you a feel for how containers actually work in a production environment. You can get an overview at https://docs.docker.com/engine/swarm/ .

Before going further you should know how a swarm node works: what are manager nodes? What are worker nodes? What's the difference between them, and how does it all work together? You should peek into https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/ for more info.

So here I am building a 3 node swarm cluster in which one node will be the host (my laptop, basically) and the other 2 will be docker-machines. So let's get started by creating 2 docker machines.

Creating docker machines

docker-machine --help
#just to check the command is working and to see its other options; it's a good practice
docker-machine create --driver=hyperv myvm1
#in the above cmd I have used hyper-v as the driver; you can also use virtualbox, aws, etc.
#to use virtualbox as the driver you should install the latest virtualbox
#in some cases docker requires hyper-v to run, so it has to be enabled even when using virtualbox as the driver

Note: by executing the above command you may encounter several problems, like getting stuck in a (myvm1) Starting VM… (myvm1) Waiting for host to start… loop, or something different. So here are some solutions, or rather tips, to avoid these problems.

  • Using hyper-v you may lack in virtual switches so create one (external) to be used by docker-machine.
  • Problem may persist after creating virtual switch so try to change the adapters using that virtual switch. You can do that by going to hyper-v manager.
  • If problems still arise after doing this, just google the error and try a few other fixes.

Create another machine doing the same. So you have 2 machines now and a docker desktop running on your host.

docker-machine ls
#to check machines

Initializing Swarm

So I am gonna make ‘myvm1’ as the manager and ‘host’ and ‘myvm2’ as worker nodes.

#this is how you connect to your machine 
docker-machine env myvm1
#above cmd will result like this 
"export DOCKER_TLS_VERIFY="1"
 export DOCKER_HOST="tcp://192.168.43.228:2376"
 export DOCKER_CERT_PATH="C:\Users\Pilani\.docker\machine\machines\myvm1"
 export DOCKER_MACHINE_NAME="myvm1"
 export COMPOSE_CONVERT_WINDOWS_PATHS="true"
 # Run this command to configure your shell:
 # eval $("C:\Users\Pilani\bin\docker-machine.exe" env myvm1)"

eval $("C:\Users\Pilani\bin\docker-machine.exe" env myvm1)
#above cmd may be different in your case so run what you see there
#do the same for 'myvm2'

We are not actually going inside the machines; yes, we can SSH into them, but I will just operate them from the outside. The above command does not log us into the machine either; it just points our local shell at that machine's Docker daemon so we can use it.

docker-machine ssh myvm1 "docker info"
#this cmd will run 'docker info' into myvm1
docker-machine ssh myvm1 "docker swarm init"
#this will initialize the swarm in myvm1, you will see a join token command copy it. And...
docker-machine ssh myvm2 "docker swarm join --token SWMTKN-1-3fq53rnc6fuwyobfx8wulfutkm68aqpbbu40ow7eqjwak6z474-7ezn5u7344jemshm853pt1mzy 192.168.43.228:2377"
#now myvm2 will join myvm1 as worker 
docker swarm join --token SWMTKN-1-3fq53rnc6fuwyobfx8wulfutkm68aqpbbu40ow7eqjwak6z474-7ezn5u7344jemshm853pt1mzy 192.168.43.228:2377
#by this host docker will join myvm1 as worker
docker-machine ssh myvm1 "docker node ls"
#above cmd will let you know the nodes connected and their status
#you can promote or demote a node also as may you want 2 managers or so...
docker-machine ssh myvm1 "docker node --help"

You have successfully created a 3 node swarm cluster. Congrats!
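Before tearing things down, you can optionally verify the cluster by scheduling a throwaway service from the manager (a sketch; the nginx image and the service name web are arbitrary choices):

docker-machine ssh myvm1 "docker service create --name web --replicas 3 -p 80:80 nginx"
docker-machine ssh myvm1 "docker service ps web"
# the 3 tasks should be spread across the nodes
docker-machine ssh myvm1 "docker service rm web"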

#after you have done you can leave swarm cluster by...
docker swarm leave -f
#run this cmd on each node

NOTE: You may find it difficult to make your host Docker the manager and the machines workers, or to make all of them managers, as I did. It works when all the nodes are docker machines, but in my case the host Docker would not become a manager. It may work in your case, so keep trying.

Thanks! for coming up this far and if you find any problem going through all this. Let me know below in comments. I will be more than happy to help.

How to create EC2 instance and install docker in AWS

Docker and AWS are just BOOOOOMMMM!! for me. Everything is damn fast with docker and now I don’t have to waste my laptop’s computing power when I have AWS.

Launching an EC2 Instance

I’ll tell you how to launch an instance in AWS and install docker in it. You just need a free AWS account (which I’ll be using to demonstrate this exercise).

To run an instance you require a VPC (virtual private cloud), but I will create it while creating the instance.

  • Login to your AWS management console
  • Click on services and under compute choose EC2.
  • Under Instance list click on Instance.
  • And then click on Launch Instance.

Don’t worry about that docker_node1 .

  • Here you will be asked to select an AMI. Just select the Amazon Linux 2 AMI (HVM), SSD Volume Type.
  • Now choose the free-tier-eligible instance type.
  • Now click next.
  • Then click on create new vpc. Just don’t worry about other things and move forward.
  • Then click on create vpc.
  • Then fill the details as follows, you can also choose different ipv4 range as suitable.
  • Then hit create and then close.
  • Now you will see a VPC has been created with the name vpc1.
  • Now go back to where you left your instance.
  • Select your vpc and now we also require a subnet.
  • Click on create a subnet and fill in details as follows.
  • Hit create and then close. Now you will see a subnet name subnet1 is created.
  • Go back where you left.
  • Select your vpc and subnet.
  • Now enable the auto-assign public IP option.
  • Now just click Next until the 6th step. In the 6th step it will create a security group, or you can use an existing security group.
  • Now review and launch.
  • Now it will ask you to create a key pair. Select create a new key pair give it a name.
  • And finally Launch.

You are done with creating your instance. You can easily connect to your EC2 instance by navigating to Connect, and you can also find instructions at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html . I am using the browser-based SSH connection.

Installing Docker

Next step is to install docker on your EC2 instance. Navigate to your terminal and enter below given commands:

[ec2-user@ip-172-31-1-247 ~]$ sudo yum update -y
#this cmd will update your system
[ec2-user@ip-172-31-1-247 ~]$ sudo yum install docker -y
#this will install docker into your system
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl status docker
#this command is used to check the status of any service; you will see something like 'Active: inactive'
#this means your docker is not started yet
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl start docker
#this will start your docker daemon
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl status docker
#check the service again; now 'Active: active' means the service started correctly

We now add ec2-user to the docker group with the usermod command. This way we don't have to use 'sudo' again and again.

[ec2-user@ip-172-31-1-247 ~]$ sudo usermod -aG docker ec2-user
[ec2-user@ip-172-31-1-247 ~]$ exit

Now again login to your instance and type command without ‘sudo’.

[ec2-user@ip-172-31-1-247 ~]$ systemctl status docker
[ec2-user@ip-172-31-1-247 ~]$ docker info

Now you have successfully started Docker. It's time to run a container. I am running an nginx server on port 80 of the EC2 instance, mapped to port 80 inside the container.

[ec2-user@ip-172-31-1-247 ~]$ docker container run -itd -p 80:80 nginx
#this command pulls down the respective image if it can't be found locally and then runs it
#docker has changed some commands: before it was 'docker run' and now it's 'docker container run';
#both commands work fine. You can always use 'docker container run --help' to understand its internals
[ec2-user@ip-172-31-1-247 ~]$ docker container ls
#this will list the containers running

Now to check whether your nginx container is running or not.

[ec2-user@ip-172-31-1-247 ~]$ curl http://<your-ip>:80
#to see your ip just type 'ifconfig'; under eth0, the inet value will be your ip

If you see something like this, then congrats you successfully ran a nginx docker container in your EC2 instance.

Conclusion

Important: you should probably shut down all the resources and EC2 instances you created during this tutorial and review all the work done on AWS, so you don't get charged for it in any way.

There is not much more to say; it's my first blog and I just love using containers and AWS, as both look fantastic to me. The Docker ecosystem is growing at an astonishing rate and will soon replace heavy servers with lightweight containers. It may look uncanny at first, but once you get the hang of it you will find everything smooth sailing.

If you have come up this far please leave a review or comment. It means a lot to me.
