How to create an EC2 instance and install Docker on AWS

Docker and AWS are just BOOOOOMMMM!! for me. Everything is blazing fast with Docker, and now I don’t have to burn my laptop’s computing power when I have AWS.

Launching an EC2 Instance

I’ll show you how to launch an instance in AWS and install Docker on it. You just need a free-tier AWS account (which is what I’ll be using to demonstrate this exercise).

To run an instance you need a VPC (virtual private cloud). I’ll create one as part of launching the instance.

  • Log in to your AWS Management Console.
  • Click on Services and, under Compute, choose EC2.
  • In the left menu, click on Instances.
  • Then click on Launch Instance.

Don’t worry about the docker_node1 instance you may see in my screenshots.

  • Here you will be asked to select an AMI. Just select the Amazon Linux 2 AMI (HVM), SSD Volume Type.
  • Now choose the free tier eligible instance type.
  • Now click next.
  • Then click on Create new VPC. Don’t worry about the other options for now and move forward.
  • Then click on create vpc.
  • Then fill in the details as follows; you can also choose a different IPv4 range if that suits you better.
  • Then hit create and then close.
  • Now you will see a VPC named vpc1 has been created.
  • Now go back to the instance launch wizard where you left off.
  • Select your VPC; we also need a subnet.
  • Click on Create subnet and fill in the details as follows.
  • Hit Create and then Close. You will see a subnet named subnet1 has been created.
  • Go back to where you left off.
  • Select your VPC and subnet.
  • Now enable the Auto-assign public IP option.
  • Keep clicking Next until the 6th step, where it will create a security group for you (or you can use an existing security group).
  • Now review and launch.
  • It will now ask you for a key pair. Select Create a new key pair and give it a name.
  • And finally Launch.

You are done creating your instance. You can connect to your EC2 instance by selecting it and clicking Connect; the different connection options are documented at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html. I am using the browser-based SSH connection.
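
If you prefer a standard SSH client over the browser-based connection, a minimal example looks like this (the key file name and public IP are placeholders from your own launch):

chmod 400 docker_key.pem
ssh -i docker_key.pem ec2-user@<instance-public-ip>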

Installing Docker

The next step is to install Docker on your EC2 instance. Open the terminal and enter the commands given below:

[ec2-user@ip-172-31-1-247 ~]$ sudo yum update -y
#this command will update your system packages
[ec2-user@ip-172-31-1-247 ~]$ sudo yum install docker -y
#this will install docker on your system
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl status docker
#this command is used to check the status of a service; here you will see something like 'Active: inactive'
#this means your docker daemon is not started yet
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl start docker
#this will start your docker daemon
[ec2-user@ip-172-31-1-247 ~]$ sudo systemctl status docker
#check the service again; it now shows 'Active: active', which means the service started correctly

Now we add ec2-user to the docker group with the usermod command, so that we don’t have to prefix every docker command with ‘sudo’.

[ec2-user@ip-172-31-1-247 ~]$ sudo usermod -aG docker ec2-user
[ec2-user@ip-172-31-1-247 ~]$ exit

Now log in to your instance again and run the commands without ‘sudo’.

[ec2-user@ip-172-31-1-247 ~]$ systemctl status docker
[ec2-user@ip-172-31-1-247 ~]$ docker info

You have successfully started Docker, so it’s time to run a container. I am running an nginx server, mapping port 80 of the EC2 instance to port 80 inside the container.

[ec2-user@ip-172-31-1-247 ~]$ docker container run -itd -p 80:80 nginx
#this command pulls the nginx image if it is not found locally and then runs the container
#docker has reorganised some commands: the older 'docker run' is now 'docker container run', but
#both work fine. You can always run 'docker container run --help' to see its options
[ec2-user@ip-172-31-1-247 ~]$ docker container ls
#this will list the running containers

Now check whether your nginx container is serving requests:

[ec2-user@ip-172-31-1-247 ~]$ curl http://<your-ip>:80
#to see your ip just type 'ifconfig'; under eth0, 'inet' will be your ip

If you see the nginx welcome page in the response, then congrats, you successfully ran an nginx Docker container on your EC2 instance.

Conclusion

Important: You should terminate the EC2 instance and clean up everything else you created during this tutorial, and review all work done on AWS, so you don’t get charged for it in any way.

There is not much more to say; this is my first blog and I just love using containers and AWS, as both look fantastic to me. The Docker ecosystem is growing at an astonishing rate, and heavy servers are rapidly being replaced with lightweight containers. It may feel strange at first, but once you get the hang of it everything flows smoothly.

If you have come this far, please leave a review or comment. It means a lot to me.

WordPress on K8S and AWS RDS using Terraform

In this post I am going to show you how to deploy WordPress on Kubernetes, using the AWS RDS service as the database backend.

Steps we are going to do :

  1. Writing Terraform code which automatically deploys services on the Kubernetes cluster and AWS.
  2. Using the RDS service on AWS as the database for the WordPress application.
  3. Deploying WordPress on minikube, EKS or Fargate. We will do it on minikube.
  4. Exposing the WordPress application to the outside world/internet.

Prerequisite:

  • Terraform should be configured with your AWS IAM credentials.
  • You should have a working minikube on VirtualBox.
  • kubectl should be configured for minikube.

So let’s go straight to the Terraform code, written in parts.

AWS and K8S provider

#Kubernetes Provider
provider "kubernetes" {}

# AWS Provider
provider "aws" {
  profile = "terraform-user"
  region  = "ap-south-1"
}
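
The empty kubernetes provider block here relies on the default/local kubeconfig, i.e. whatever cluster kubectl is currently pointing at (minikube in our case). A quick sanity check before applying:

kubectl config current-context
# should print "minikube"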

AWS RDS setup

Amazon Relational Database Service is a distributed relational database service by Amazon Web Services. It is a web service running “in the cloud” designed to simplify the setup, operation, and scaling of a relational database for use in applications.

Data source for vpc and subnet
# VPC data source
data "aws_vpc" "def_vpc" {
  default = true
}

# Subnet data source
data "aws_subnet_ids" "vpc_sub" {
  vpc_id = data.aws_vpc.def_vpc.id
}
Security Group for mysql
resource "aws_security_group" "allow_data_in_db" {
  name        = "allow_db"
  description = "mysql access"
  vpc_id      = data.aws_vpc.def_vpc.id

  ingress {
    description = "MySQL"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_wp_in_db"
  }
}
Subnet for DB
# subnet group for DB
resource "aws_db_subnet_group" "sub_ids" {
  name       = "main"
  subnet_ids = data.aws_subnet_ids.vpc_sub.ids

  tags = {
    Name = "DB subnet group"
  }
}
Creating RDS DB instance
# DB Instances
resource "aws_db_instance" "wp_db" {
  engine                 = "mysql"
  engine_version         = "5.7"
  identifier             = "wordpress-db"
  username               = "admin"
  password               = "dbpass@12X"
  instance_class         = "db.t2.micro"
  storage_type           = "gp2"
  allocated_storage      = 20
  db_subnet_group_name   = aws_db_subnet_group.sub_ids.id
  vpc_security_group_ids = [aws_security_group.allow_data_in_db.id]
  publicly_accessible    = true
  name                   = "wpdb"
  parameter_group_name   = "default.mysql5.7"
  skip_final_snapshot    = true
}
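
Once applied, you can also verify the database from the AWS CLI; for example, the following (using the same terraform-user profile) prints the endpoint of the wordpress-db instance declared above:

aws rds describe-db-instances --db-instance-identifier wordpress-db \
    --profile terraform-user --region ap-south-1 \
    --query "DBInstances[0].Endpoint.Address" --output text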

Now, it is time to launch minikube and deploy wordpress pod on top of it.

#deployment
resource "kubernetes_deployment" "wordpress_deploy" {
    depends_on = [
    aws_db_instance.wp_db
    ]
  metadata {
    name = "wordpress"
    labels = {
      app = "wordpress"
    }
  }
  spec {
    selector {
      match_labels = {
        app = "wordpress"
      }
    }
    template {
      metadata {
        labels = {
          app = "wordpress"
        }
      }
      spec {
        container {
          image = "wordpress"
          name  = "wordpress-pod"
          env {
            name = "WORDPRESS_DB_HOST"
            value = aws_db_instance.wp_db.endpoint
            }
          env {
            name = "WORDPRESS_DB_DATABASE"
            value = aws_db_instance.wp_db.name 
            }
          env {
            name = "WORDPRESS_DB_USER"
            value = aws_db_instance.wp_db.username
            }
          env {
            name = "WORDPRESS_DB_PASSWORD"
            value = aws_db_instance.wp_db.password
          }
          port {
        container_port = 80
          }
        }
      }
    }
  }
}
Creating service for wordpress
#service 
resource "kubernetes_service" "wordpress_service" {
    depends_on = [
    kubernetes_deployment.wordpress_deploy,
  ]
  metadata {
    name = "wp-service"
  }
  spec {
    selector = {
      app = "wordpress"
    }
    port {
      port = 80
      target_port = 80
      node_port = 30888
    }

    type = "NodePort"
  }
}

And we are done with the code part.

Let’s start our minikube….

minikube start

Run the Terraform code:

terraform init
terraform apply --auto-approve

Our service is running on minikube, so get the minikube IP:

minikube ip

Also, check in the AWS console whether the RDS instance has launched.

Open your minikube IP on port 30888 in your browser.
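
You can also check from the terminal that WordPress answers on the NodePort (30888) we defined in the service:

curl -I http://$(minikube ip):30888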

And we are done.

NAT gateway working

In this post we will see how to add a NAT gateway on top of the previous post about setting up WordPress and MySQL: https://mynk.home.blog/2020/10/04/deploying-wordpress-and-mysql-on-ec2-in-custom-vpc-using-terraform/

Most of the process (creating the VPC, internet gateway, etc.) remains the same; we just add a NAT gateway to our VPC.

Workflow

  • Creating a VPC with two subnets
    • Public Subnet – accessible from the public world
    • Private Subnet – not accessible from the public world
  • Public facing internet gateway for VPC
  • Routing table for subnet
  • NAT gateway for VPC
  • Launching preconfigured wordpress instance in public subnet
  • Launching preconfigured mysql instance in private subnet

Provider and Key

# configure the provider
provider "aws" {
  region = "ap-south-1"
  profile = "terraform-user"
}

#Creating private key
resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "aws_key_pair" "generated_key" {
    key_name = "task3_key"
    public_key = tls_private_key.key.public_key_openssh

    depends_on = [
        tls_private_key.key
    ]
}

#Downloading private key
resource "local_file" "file" {
    content  = tls_private_key.key.private_key_pem
    filename = "E:/Terraform/tasks cloud trainig/task3/task3_key.pem"
    file_permission = "0400"

    depends_on = [ aws_key_pair.generated_key ]
}

VPC and Subnet

A VPC (virtual private cloud) is like a lab in which we can create sub-labs known as subnets.

# creating a vpc
resource "aws_vpc" "vpc" {
  cidr_block       = "192.169.0.0/16"
  instance_tenancy = "default"
  enable_dns_hostnames = true

  tags = {
    Name = "vpc"
  }
}

# creating subnet in 1a
resource "aws_subnet" "sub_1a" {
  vpc_id     = "${aws_vpc.vpc.id}"
  cidr_block = "192.169.1.0/24"
  availability_zone = "ap-south-1a" 
  map_public_ip_on_launch  =  true

  tags = {
    Name = "sub_1a"
  }
  depends_on = [ aws_vpc.vpc ]
}

# creating a subnet in 1b
resource "aws_subnet" "sub_1b" {
  vpc_id     = "${aws_vpc.vpc.id}"
  cidr_block = "192.169.2.0/24"
  availability_zone = "ap-south-1b"

  tags = {
    Name = "sub_1b"
  }
  depends_on = [ aws_vpc.vpc ]
}

Internet-Gateway

It allows the VPC to connect to the public world (the internet).

# creating igw
resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.vpc.id}"

  tags = {
    Name = "igw"
  }
  depends_on = [ aws_vpc.vpc ]
}

Route Table and association

A route table controls the routing. Each subnet should be associated with a route table. A default route table is attached automatically, but we can add more routes if we want, as I did below to give subnet 1a public access.

# creating route table
resource "aws_route_table" "r" {
  vpc_id = "${aws_vpc.vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }

  tags = {
    Name = "r"
  }
  depends_on = [ aws_internet_gateway.igw, aws_vpc.vpc ]
}

# associating route table
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.sub_1a.id
  route_table_id = aws_route_table.r.id

  depends_on = [ aws_route_table.r ]
}

NAT gateway

NAT Gateway is a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an Amazon Virtual Private Cloud.

We will need an Elastic IP for it:

resource "aws_eip" "public_ip" {
  vpc      = true
}
resource "aws_nat_gateway" "NAT-gw" {
  allocation_id = aws_eip.public_ip.id
  subnet_id     = aws_subnet.sub_1a.id # the NAT gateway lives in the public subnet

  tags = {
    Name = "NAT-GW"
  }
}

Route Table and association for NAT

resource "aws_route_table" "r2" {
  vpc_id =  aws_vpc.vpc.id

route {
    cidr_block = "0.0.0.0/0"
     gateway_id = aws_nat_gateway.NAT-gw.id
}
   
 tags = {
    Name = "NAT_table"
  }
}

resource "aws_route_table_association" "a2" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.r2.id
}

Security Group for wordpress

Security groups act as a firewall for an instance, controlling traffic on its ports.

# sg for wordpress
resource "aws_security_group" "wp_sg" {
  name        = "wp sg"
  description = "Allow http ssh all"
  vpc_id      = "${aws_vpc.vpc.id}"

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "wp sg"
  }
}

Security Group for mysql

# sg for mysql
resource "aws_security_group" "mysql_sg" {
  name        = "mysql sg"
  description = "ssh and mysql port"
  vpc_id      = "${aws_vpc.vpc.id}"

  ingress {
    description = "mysql port"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }
  
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "mysql sg"
  }
  depends_on = [ aws_vpc.vpc ]
}

EC2 instance

# wordpress ami
resource "aws_instance" "wordpress" {
    depends_on = [   aws_subnet.sub_1a, aws_security_group.wp_sg, ]
    
    ami           = "ami-02b9afddbf1c3b2e5"
    instance_type = "t2.micro"
    key_name = "task3_key"
    vpc_security_group_ids = ["${aws_security_group.wp_sg.id}"]
    subnet_id = aws_subnet.sub_1a.id
    tags = {
        Name = "WordPress"
    }
}

# mysql ami
resource "aws_instance" "mysql" {
    depends_on = [    aws_subnet.sub_1b, aws_security_group.mysql_sg, ]
    ami           = "ami-0d8b282f6227e8ffb"
    instance_type = "t2.micro"
    key_name = "task3_key"
    vpc_security_group_ids = ["${aws_security_group.mysql_sg.id}"]
    subnet_id = aws_subnet.sub_1b.id
    tags = {
        Name = "Mysql"
    }
}

Final Step

terraform init
terraform plan
terraform apply

After launching, connect to the WordPress instance, open /var/www/html/wp-config.php, set your MySQL instance’s private IP as the database host, and restart the httpd service.

systemctl restart httpd

Now open the WordPress instance’s public DNS name or public IP in your browser.

The AMIs used are described in the previous post: https://mynk.home.blog/2020/10/04/deploying-wordpress-and-mysql-on-ec2-in-custom-vpc-using-terraform/

Thank You!

Configuring Webserver in AWS EC2 using Ansible

Base Plan

What we are going to do:

  • Provision EC2 instance through ansible
  • Retrieving facts like IP of EC2 instance
  • Building inventory using dynamic inventory
  • Configuring webserver in EC2 instance using ansible
  • Deploying a webpage in the webserver

To make Ansible talk to AWS we need the Python libraries boto/boto3, so make sure your Linux control node (where Ansible runs) has them installed.

pip3 install boto boto3

Next, we need a dynamic inventory, so grab these two files: ec2.py and ec2.ini.

https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py

https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini

We need to provide our access key ID and secret access key in ec2.ini, and we also need to export them as environment variables.

$ export AWS_ACCESS_KEY_ID='YOUR_AWS_API_KEY'
$ export AWS_SECRET_ACCESS_KEY='YOUR_AWS_API_SECRET_KEY'
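
With the keys exported, you can quickly check that the dynamic inventory script works before wiring it into ansible.cfg:

chmod +x ec2.py
./ec2.py --list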

For a more detailed explanation of the dynamic inventory process, check: Getting Started with Ansible and Dynamic Amazon EC2 Inventory Management | AWS Partner Network (APN) Blog

The dynamic inventory should be empty first. My ansible.cfg looks like this:

[defaults]
#inventory = /etc/ansible/inventory
inventory = /root/ansible_tasks/task2/inventory2
host_key_checking = false
remote_user = ec2-user
become = true
ask_pass = false

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

AWS user for ansible

We need to create an IAM user with appropriate permissions for Ansible to launch and create resources on AWS.

I am going to use Ansible Vault to encrypt my AWS access and secret keys.

ansible-vault create --vault-id aws@prompt key.yml

# contents of key.yml
access_key: your access key
secret_key: your secret key
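
You can view or change the vaulted file later with the same vault id, for example:

ansible-vault view --vault-id aws@prompt key.yml
ansible-vault edit --vault-id aws@prompt key.yml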

We now need to create two roles: one to launch the EC2 instance and one to deploy the webserver on it.

EC2 role

tasks/main.yml

# tasks file for webserver_aws

  - name: include access/secret key aws
    include_vars: /root/ansible_tasks/task2/key.yml

  - name: creating key pair
    amazon.aws.ec2_key:
      name: "{{ key_pair_name }}"
      aws_access_key: "{{ access_key }}"
      aws_secret_key: "{{ secret_key }}"
      region: "{{ region }}"
    register: keypair

  - debug:
      var: keypair

  - copy:
      content: '{{ keypair["key"]["private_key"] }}'
      dest: /root/ansible_tasks/task2/{{ key_pair_name }}.pem
    ignore_errors: yes

  - name: create security group for our instance
    amazon.aws.ec2_group:
      name: "{{ sg_name }}"
      description: "sg having 22 and 80 ingress"
      vpc_id: "{{ vpc_id }}"
      region: "{{ region }}"
      aws_secret_key: "{{ access_key }}"
      aws_access_key: "{{ secret_key }}"
      state: present
      rules:
        - proto: tcp
          from_port: 80
          to_port: 80
          cidr_ip: 0.0.0.0/0
        - proto: tcp
          from_port: 22
          to_port: 22
          cidr_ip: 0.0.0.0/0


  - name: launching an ec2 instance
    ec2_instance:
      region: "{{ region }}"
      aws_access_key: "{{ access_key }}"
      aws_secret_key: "{{ secret_key }}"
      instance_type: "{{ instance_type }}"
      key_name: "{{ key_pair_name }}"
      state: present
      image_id: "{{ image_id }}"
      vpc_subnet_id: "{{ subnet_id }}"
      group: "{{ sg_name }}"
      name: "{{ instance_name }}"
      assign_public_ip: yes

vars/main.yml

# vars file for webserver_aws
region: ap-south-1
instance_type: t2.micro
image_id: ami-0a9d27a9f4f5c0efc
key_pair_name: ansiblekeypair
instance_name: webserver
vpc_id: vpc-a2809dca
sg_name: sg_ansible
subnet_id: subnet-9edee4f6

Webserver role

tasks/main.yml

# tasks file for ec2_webserver

- name: include access/secret key aws
  include_vars: /root/ansible_tasks/task2/key.yml

- name: retrieve facts
  ec2_instance_info:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ region }}"
  register: instfacts

- debug:
    var: instfacts

- name: install httpd
  package:
    name: "httpd"
    state: present
  when: ansible_os_family == "RedHat"
  register: httpd_status

- name: start httpd service
  service:
    name: "httpd"
    state: started

- name: copy webpage to root
  copy:
    src: "{{ src }}"
    dest: "{{ dest }}"
    mode: 0666
  when: httpd_status.rc == 0
  notify: restart httpd

handlers/main.yml

# handlers file for ec2_webserver
- name: restart httpd
  service:
    name: httpd
    state: restarted

vars/main.yml

# vars file for ec2_webserver
region: ap-south-1
src: /root/ansible_tasks/task2/ec2_webserver/files/index.html
dest: /var/www/html/index.html

Final Run

Run both the roles:

main-ec2.yml

- hosts: localhost
  roles:
  - role: "ec2_conf"

main-webserver.yml

- hosts: all
  roles:
  - role: "ec2_webserver"

ansible-playbook --vault-id aws@prompt main-ec2.yml
ansible-playbook --vault-id aws@prompt main-webserver.yml
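
If the second playbook cannot find the new instance, verify that the dynamic inventory has picked up the freshly launched host (it can take a minute to appear):

ansible all --list-hosts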

Results

We successfully launched an instance and configured a webserver on it using Ansible.

Thank you for coming this far!

EFS in AWS

EFS stands for Elastic File System, a file storage service provided by AWS that can easily be used with EC2. We can use EFS across AZs and even across regions. To learn more about EFS: https://aws.amazon.com/efs/

In my earlier post ( https://mynk.home.blog/2020/06/15/571/ ) I hosted a web page on EC2 using S3 and EBS storage. Today we are going to do the same, but in place of EBS we are going to use EFS, since EBS has some limitations (a volume is tied to a single Availability Zone).

Security Group

Security groups are the firewall of an instance, controlling ingress (incoming traffic) and egress (outgoing traffic).

Added NFS protocol

resource "aws_security_group" "allow_ssh_http_nfs" {
  name        = "allow_ssh_http_nfs"
  description = "Allow ssh and http and nfs inbound traffic"
  
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh_http_nfs"
  }
}

Launch EC2

Amazon requires amazon-efs-utils to be present on the instance for mounting EFS.

resource "aws_instance" "myin" {
  ami  = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = aws_key_pair.generated_key.key_name
  security_groups = [ "allow_ssh_http_nfs" ]
  

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host     = aws_instance.myin.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd  php git amazon-efs-utils nfs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "os1"
  }
}

EFS

EFS creation requires 4 tasks:

  • Creation EFS file system
  • Defining access point
  • Attaching policy
  • Defining mount target

# efs 
resource "aws_efs_file_system" "efs" {
  depends_on = [
    aws_instance.myin,
    aws_security_group.allow_ssh_http_nfs, ]
  creation_token = "my-product"
  
  tags = {
    Name = "MyProduct"
  }
}

# efs access point 
resource "aws_efs_access_point" "ap" {
  file_system_id = aws_efs_file_system.efs.id
  depends_on = [ aws_efs_file_system.efs, ]
}

# efs policy
resource "aws_efs_file_system_policy" "policy" {
  
  depends_on = [ aws_efs_file_system.efs, ]
  file_system_id = aws_efs_file_system.efs.id
   
  
  policy = <<POLICY
	{
	    "Version": "2012-10-17",
	    "Id": "efs-policy-wizard-37ea40d1-826a-4398-99d6-a4561182f9f6",
	    "Statement": [
	        {
	            "Sid": "efs-statement-65263caf-dba3-4299-b808-4da9635bba63",
	            "Effect": "Allow",
	            "Principal": {
	                "AWS": "*"
	            },
	            "Resource": "${aws_efs_file_system.efs.arn}",
	            "Action": [
	                "elasticfilesystem:ClientMount",
	                "elasticfilesystem:ClientWrite",
	                "elasticfilesystem:ClientRootAccess"
	            ],
	            "Condition": {
	                "Bool": {
	                    "aws:SecureTransport": "true"
	                }
	            }
	        }
	    ]
	}
	POLICY
}

# efs mount target
resource "aws_efs_mount_target" "alpha" {
  file_system_id = aws_efs_file_system.efs.id
  subnet_id = aws_instance.myin.subnet_id
  security_groups = [ aws_security_group.allow_ssh_http_nfs.id ]
  depends_on = [ aws_efs_file_system.efs, 
                 aws_efs_access_point.ap, 
                 aws_efs_file_system_policy.policy,]
}

resource "null_resource" "nullremote1"  {
  depends_on = [
    aws_efs_mount_target.alpha,
    aws_efs_file_system_policy.policy,
  ]

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host     = aws_instance.myin.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod ugo+rw /etc/fstab",
      "sudo mount ${aws_efs_file_system.efs.id}:/ /var/www/html/",
      "sudo echo '${aws_efs_file_system.efs.id}:/ /var/www/html/ efs tls,_netdev' >> /etc/fstab",
      "sudo mount -a -t efs,nfs4 defaults",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/mykg/sampleCloud.git /var/www/html/",
    ]
  }
}

The rest of the setup (CloudFront and the S3 bucket) is the same as in the previous post.

Earlier post: https://mynk.home.blog/2020/06/15/571/

Terraform

Initialize the resources and modules

# terraform init 

Check the plan to see what is going to be added or changed

# terraform plan

Apply

# terraform apply --auto-approve

Conclusion

We successfully used EFS in place of EBS and hosted the web page on an EC2 instance using an S3 bucket and CloudFront.

Thank You!

Deploying WordPress and Mysql on EC2 in custom VPC using Terraform

Workflow

  • Creating a VPC with two subnets
    • Public Subnet – accessible from the public world
    • Private Subnet – not accessible from the public world
  • Public facing internet gateway for VPC
  • Routing table for subnet
  • Launching preconfigured wordpress instance in public subnet
  • Launching preconfigured mysql instance in private subnet

Provider and Key

# configure the provider
provider "aws" {
  region = "ap-south-1"
  profile = "terraform-user"
}

#Creating private key
resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits = 4096
}

resource "aws_key_pair" "generated_key" {
    key_name = "task3_key"
    public_key = tls_private_key.key.public_key_openssh

    depends_on = [
        tls_private_key.key
    ]
}

#Downloading private key
resource "local_file" "file" {
    content  = tls_private_key.key.private_key_pem
    filename = "E:/Terraform/tasks cloud trainig/task3/task3_key.pem"
    file_permission = "0400"

    depends_on = [ aws_key_pair.generated_key ]
}

VPC and Subnet

A VPC (virtual private cloud) is like a lab in which we can create sub-labs known as subnets.

# creating a vpc
resource "aws_vpc" "vpc" {
  cidr_block       = "192.169.0.0/16"
  instance_tenancy = "default"
  enable_dns_hostnames = true

  tags = {
    Name = "vpc"
  }
}

# creating subnet in 1a
resource "aws_subnet" "sub_1a" {
  vpc_id     = "${aws_vpc.vpc.id}"
  cidr_block = "192.169.1.0/24"
  availability_zone = "ap-south-1a" 
  map_public_ip_on_launch  =  true

  tags = {
    Name = "sub_1a"
  }
  depends_on = [ aws_vpc.vpc ]
}

# creating a subnet in 1b
resource "aws_subnet" "sub_1b" {
  vpc_id     = "${aws_vpc.vpc.id}"
  cidr_block = "192.169.2.0/24"
  availability_zone = "ap-south-1b"

  tags = {
    Name = "sub_1b"
  }
  depends_on = [ aws_vpc.vpc ]
}

Internet-Gateway

It allows the VPC to connect to the public world (the internet).

# creating igw
resource "aws_internet_gateway" "igw" {
  vpc_id = "${aws_vpc.vpc.id}"

  tags = {
    Name = "igw"
  }
  depends_on = [ aws_vpc.vpc ]
}

Route Table and association

A route table controls the routing. Each subnet should be associated with a route table. A default route table is attached automatically, but we can add more routes if we want, as I did below to give subnet 1a public access.

# creating route table
resource "aws_route_table" "r" {
  vpc_id = "${aws_vpc.vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.igw.id}"
  }

  tags = {
    Name = "r"
  }
  depends_on = [ aws_internet_gateway.igw, aws_vpc.vpc ]
}

# associating route table
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.sub_1a.id
  route_table_id = aws_route_table.r.id

  depends_on = [ aws_route_table.r ]
}

Security Group for wordpress

Security groups act as a firewall for an instance, controlling traffic on its ports.

# sg for wordpress
resource "aws_security_group" "wp_sg" {
  name        = "wp sg"
  description = "Allow http ssh all"
  vpc_id      = "${aws_vpc.vpc.id}"

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "wp sg"
  }
}

Security Group for mysql

# sg for mysql
resource "aws_security_group" "mysql_sg" {
  name        = "mysql sg"
  description = "ssh and mysql port"
  vpc_id      = "${aws_vpc.vpc.id}"

  ingress {
    description = "mysql port"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }
  
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.vpc.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "mysql sg"
  }
  depends_on = [ aws_vpc.vpc ]
}

EC2 instance

# wordpress ami
resource "aws_instance" "wordpress" {
    depends_on = [   aws_subnet.sub_1a, aws_security_group.wp_sg, ]
    
    ami           = "ami-02b9afddbf1c3b2e5"
    instance_type = "t2.micro"
    key_name = "task3_key"
    vpc_security_group_ids = ["${aws_security_group.wp_sg.id}"]
    subnet_id = aws_subnet.sub_1a.id
    tags = {
        Name = "WordPress"
    }
}

# mysql ami
resource "aws_instance" "mysql" {
    depends_on = [    aws_subnet.sub_1b, aws_security_group.mysql_sg, ]
    ami           = "ami-0d8b282f6227e8ffb"
    instance_type = "t2.micro"
    key_name = "task3_key"
    vpc_security_group_ids = ["${aws_security_group.mysql_sg.id}"]
    subnet_id = aws_subnet.sub_1b.id
    tags = {
        Name = "Mysql"
    }
}

Final Step

terraform init
terraform plan
terraform apply

After launching, connect to the WordPress instance.

Open /var/www/html/wp-config.php, set your MySQL instance’s private IP as the database host, and restart the httpd service.

systemctl restart httpd

Now open the WordPress instance’s public DNS name or public IP in your browser.

We launched our WordPress instance in the public subnet and MySQL in the private subnet of the same VPC.

NOTE

I created my own AMIs for WordPress and MySQL, so you won’t be able to use the same AMI IDs. To see how you can create your own AMI, here is a rough repo from my friend.

mysql: https://github.com/kunal1601/VPC-AWS/blob/master/MySql-ami.txt

wordpress: https://github.com/kunal1601/VPC-AWS/blob/master/Wordpress-ami.txt

Thank You!

Job DSL in jenkins

DSL stands for Domain Specific Language, a language created for a single domain or purpose. Normally in Jenkins we have a dashboard available and we create our jobs and pipelines through it. But creating every job by hand for each task quickly becomes hectic.

To overcome this problem we need a way to create all the other jobs from a single program, and for this Jenkins supports a Domain Specific Language through the Job DSL plugin. The DSL is based on Groovy.

There are a lot of perks to using Job DSL: if the developers know Groovy, the operations team can let them write the job definitions themselves in DSL, which saves the operations team a lot of time and speeds up the process.

We write our DSL code in one job which then creates all the other jobs (or the pipeline); that initial job is known as the seed job.

Let’s apply our DSL to my previous blog/task : https://mynk.home.blog/2020/06/29/jenkins-pipeline-for-deployment-on-top-of-kubernetes/

Seed Job

job("job_1") {
  description("This job will pull code from github")

  scm {
    github('mykg/test-autoweb', 'master')
    }
  triggers {
    scm("* * * * *")
  }
  steps {
        remoteShell('root@192.168.99.101:22') {
            command('cp -rvf /root/jenk_cont/jenkins/workspace/job1/* /root/jenk_cont/web-code/')
        }
    }
}

job("job_2") {
  description("This job will check the respective code and deploy respective environment on K8S")
  triggers {
        upstream('job_1')
    }
  steps {
        remoteShell('root@192.168.99.101:22') {
            command(
              'cd /root/jenk_cont/web-code/',

	          'if sudo kubectl get pvc | grep myweb-pvc',
              'then',
                'echo "pvc already exist"',
              'else',
				'sudo kubectl create -f /root/jenk_cont/pvc.yml',
			  'fi',

              'x=$(ls | grep html)',
              'if echo $x == *.html',
              'then',
                'if sudo kubectl get deploy | grep myweb-deploy',
                'then',
		          'echo "already running"',
                'else',
                  'sudo kubectl apply -f /root/jenk_cont/http_kubedeploy.yml',
                  'sudo sleep 10',
                  'P=$(kubectl get pods -l env=production -o jsonpath="{.items[0].metadata.name}")',
                  'echo $P',
                  'sudo kubectl cp /root/jenk_cont/web-code/*.html $P:/var/www/html/',
                  'kubectl exec $P -- chmod 0777 /var/www/html/',
                  'kubectl exec $P -- chmod 0777 /var/www/html/index.html',
                'fi',
              'else',
                'if sudo kubectl get deploy | grep myweb-deploy',
                'then',
                   'echo "already running"',
                'else',
                   'sudo kubectl apply -f /root/jenk_cont/php_kubedeploy.yml',
                   'POD=$(kubectl get pods -l env=production -o jsonpath="{.items[0].metadata.name}")',
                   'sudo kubectl cp /root/jenk_cont/web-code/*.html $POD:/var/www/html/',
                   'kubectl exec $POD -- chmod 0777 /var/www/html/',
                   'kubectl exec $POD -- chmod 0777 /var/www/html/index.php',
                'fi',
              'fi',

              'sudo kubectl create -f /root/jenk_cont/service.yml',
              'sudo kubectl get svc'            
            )
        }
    }
}
  
job("job_3") {
    description("This job will check the web code is accessible or not. And send mail if not working")
    triggers {
        upstream('job_2')
    }
  steps {
    remoteShell('root@192.168.99.101:22') {
            command(
            'status=$(curl -sL -w "%{http_code}" -o /dev/null 192.168.99.100:32000)',
            'if [[ $status == 200 ]]',
            'then',
              'exit 0',
            'else', 
              'exit 1',
            'fi'
            )
        }
    }
  publishers {
        extendedEmail {
            recipientList('contacttomayankgaur@gmail.com')
            defaultSubject('Oops')
            defaultContent('Something broken')
            contentType('text/html')
            triggers {
              failure{
                subject('Subject')
                content('Body')
                recipientList('contacttomayankgaur@gmail.com')
                sendTo {
                        recipientList()
                    }
              }  
            }
        }
    }
}

buildPipelineView('View DSL') {
    filterBuildQueue()
    filterExecutors()
    title('job DSL build view')
    displayedBuilds(5)
    selectedJob('job_1')
    alwaysAllowManualTrigger()
    showPipelineParameters()
    refreshFrequency(60)
}

Conclusion

We created our seed job using the Domain Specific Language (DSL) and did our earlier task with it. If you haven’t quite understood what I did, then visit the task mentioned above.

Thanks! Comment down your views

Deploying services on EKS using EFS (Drupal)

Elastic Kubernetes Service (EKS) is a managed Kubernetes service that allows us to run Kubernetes workloads on the AWS cloud. Amazon EKS provides many facilities to users, such as node scalability.

Here we are going to deploy Drupal on top of EKS and use EFS (Elastic File System) for PVC creation, so that the storage is accessible from any Availability Zone; the problem with EBS is that a volume is tied to a single AZ.

Drupal is an amazing content management framework for building and managing websites.

Workflow

We are going to create the Kubernetes cluster from the CLI. By default the AWS CLI does not provide many functions and properties for EKS, so there is a dedicated client called eksctl. We will configure that, then use the kubectl client to deploy our services on the cluster, and for the PVCs we will create and provision EFS.

Configuring Clients

eksctl: https://github.com/weaveworks/eksctl

awscli: https://aws.amazon.com/cli/

kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Let’s first create an IAM user and attach the following policies.

Download the user credentials and configure the AWS CLI with them.

aws configure

now, try running

aws eks list-clusters

But, we want to use eksctl. We already configured aws cli so eksctl will retrieve credentials from there.

eksctl get cluster

Creating Cluster

We have to write a YAML file to create the cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
        name: my-cluster
        region: ap-south-1

nodeGroups:
        - name: ng1
          desiredCapacity: 2
          instanceType: t2.micro
          ssh:
                  publicKeyName: aws_key_iam
        
        - name: ng2
          desiredCapacity: 2
          instanceType: t2.micro
          ssh:
                  publicKeyName: aws_key_iam

Node groups automate the provisioning and lifecycle of nodes and also provide node scalability.

We have attached an SSH key so we can log in to our nodes if needed.

eksctl create cluster -f cluster.yml

Now we want our kubectl client to connect to the cluster created above, so we need to update the kubeconfig:

aws eks update-kubeconfig --name my-cluster
kubectl config view
kubectl get pods

EFS

We want our PVCs to be created in EFS, so we need to create an EFS file system. But before going further we need to do one small thing: by default the Amazon nodes do not have the utility to connect to EFS, so we need to log in to each node and run:

sudo yum install amazon-efs-utils -y

Now, let’s create EFS
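
The file system can be created from the console or from the CLI; a minimal CLI sketch (the creation token is just an arbitrary name of your choice) would be:

aws efs create-file-system --creation-token eks-drupal --region ap-south-1
# then add mount targets in the subnets of the worker nodes (via the console or 'aws efs create-mount-target')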

Let’s create a namespace in our cluster to launch the services in.

kubectl create ns myns

Now we have to write the YAML for the EFS provisioner, which makes it possible to create PVCs backed by EFS.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-7369e3a2
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: my-efs/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-7369e3a2.efs.ap-south-1.amazonaws.com
            path: /

Before running the above, check the values for FILE_SYSTEM_ID and AWS_REGION in the AWS console.
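
If you prefer the CLI over the console, you can list your file system IDs like this:

aws efs describe-file-systems --region ap-south-1 --query "FileSystems[].FileSystemId" --output text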

kubectl create -f create-efs-provisioner.yml -n myns

Now we grant permissions using a ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: myns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

kubectl create -f create-rbac.yml -n myns

Deploying

The last step is to deploy the services. For this I have created a kustomization.yml which holds the secret for the database password and the order in which the files will be applied.

secretGenerator:
- name: db-pass
  literals:
  - password=1234
  
namespace: myns

resources:
  - storage-pvc.yml
  - postgresql.yml 
  - drupal-deploy.yml

storage-pvc.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: my-efs/aws-efs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

postgresql.yml
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    env: production
spec:
  ports:
    - port: 5432
  selector:
    env: production
    tier: postgreSQL
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    env: production
spec:
  replicas: 2
  selector:
    matchLabels:
      env: production
      tier: postgreSQL
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: production
        tier: postgreSQL
    spec:
      containers:
        - image: postgres:latest
          name: postgresql
          env:
            - name: POSTGRES_USER
              value: drupal
            - name: POSTGRES_DB
              value: drupal_production
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                 name: db-pass
                 key: password
          ports:
            - containerPort: 5432
              name: postgresql
          volumeMounts:
            - name: postgresql
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgresql
          persistentVolumeClaim:
            claimName: postgres-claim

drupal-deploy.yml
apiVersion: v1
kind: Service
metadata:
  name: drupal-svc
  labels:
    env: production
spec:
  ports:
    - port: 80
  selector:
    env: production
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pv-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pv-claim2
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: drupal
  labels:
    env: production
spec:
  replicas: 2
  selector:
    matchLabels:
      env: production
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: production
        tier: frontend
    spec:
      containers:
      - image: drupal:8-apache
        name: drupal-cont
        ports:
        - containerPort: 80
          name: drupal-port
        volumeMounts:    
            - name: drupal-persistent-storage1
              mountPath: /var/www/html/profiles
            - name: drupal-persistent-storage2
              mountPath: /var/www/html/themes
      volumes:
      - name: drupal-persistent-storage1
        persistentVolumeClaim:
          claimName: drupal-pv-claim1
      - name: drupal-persistent-storage2
        persistentVolumeClaim:
          claimName: drupal-pv-claim2

Now all that remains is to apply the kustomization:

kubectl apply -k .

Check the pods status

kubectl get pods -n myns

Check the services and copy the external IP of drupal service.

kubectl get svc -n myns
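
Since drupal-svc is of type LoadBalancer, EKS provisions an ELB for it; its DNS name can be pulled directly, for example:

kubectl get svc drupal-svc -n myns -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"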

Additional

We can also use the Fargate service, which provides a serverless option for EKS.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
        name: far-cluster
        region: ap-southeast-1

fargateProfiles:
        - name: fargate-default
          selectors:
                  - namespace: kube-system
                  - namespace: default

eksctl create cluster -f fargatecluster.yml

To check fargate profile

eksctl get fargateprofile --cluster far-cluster

Conclusion

We have successfully deployed Drupal on Amazon EKS, used EFS for PVC creation, and saw how we can use the serverless Fargate service with EKS.

Thanks! For coming this far.

Jenkins Pipeline for deployment on top of Kubernetes

Executing YAML files again and again by hand can be exhausting and error-prone, so we can easily integrate the process with Jenkins.

Here is what we are going to do:

  1. Create a container image that has Jenkins installed using a Dockerfile, or use a Jenkins server on RHEL 8/7.
  2. When we launch this image, it should automatically start the Jenkins service in the container.
  3. Create a job chain of job1, job2, job3 and job4 using build pipeline plugin in Jenkins
  4. Job1 : Pull the GitHub repo automatically when a developer pushes to GitHub.
  5. Job2 :
    1. By looking at the code or program file, Jenkins should automatically start the respective language interpreter installed image container to deploy code on top of Kubernetes ( eg. If code is of PHP, then Jenkins should start the container that has PHP already installed )
    2. Expose your pod so that testing team could perform the testing on the pod
    3. Make the data to remain persistent ( If server collects some data like logs, other user information )
  6. Job3 : Test whether the app is working or not.
  7. Job4 : If the app is not working, send an email to the developer with the error messages and redeploy the application after the code has been fixed by the developer.

Dockerfile

FROM centos:7
RUN yum install java-11-openjdk.x86_64 -y 
RUN yum install wget -y
RUN yum install git -y
RUN wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
RUN rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
RUN yum install jenkins -y
CMD java -jar /usr/lib/jenkins/jenkins.war

Build above dockerfile….

docker build -t jenkins:v5 .

Now, let’s run jenkins container.

mkdir /root/jenk_cont/jenkins/
docker run -dit --name jenkins -p 8082:8080 -v /root/jenk_cont/jenkins/:/root/.jenkins/ jenkins:v5

To find the initial admin password inside the Jenkins container:

docker exec jenkins cat /root/.jenkins/secrets/initialAdminPassword

Our jenkins server is ready and running on docker.

But by default any job we run will execute inside the container, and we definitely don’t want that. So we are going to install a plugin called SSH to execute our jobs remotely on another system.

SSH config

Now we are going to set up SSH credentials for the plugin we installed. Go to Manage Jenkins > Configure System.

Pipeline

We want our GitHub repo to be pulled by Jenkins. For that to happen we need the GitHub plugin.

We will use GitHub webhooks, so if you are running a VM like I am, install ngrok and then run:

./ngrok http 8082

Go to the repo where you have your website code, open Settings, and navigate to Webhooks.
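
In the webhook settings, the payload URL points at the GitHub plugin endpoint of Jenkins through the ngrok tunnel (the subdomain below is a placeholder printed by ngrok):

# Payload URL for the GitHub webhook
https://<your-ngrok-subdomain>.ngrok.io/github-webhook/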

JOB 1

JOB 2

This job will deploy our code on top of Kubernetes and expose the services.

Code under build:

cd /root/jenk_cont/web-code/

if sudo kubectl get pvc | grep myweb-pvc
then
  echo "pvc already exist"
else
  sudo kubectl create -f /root/jenk_cont/pvc.yml
fi

x=$(ls | grep html)
if echo $x == *.html
then
  if sudo kubectl get deploy | grep myweb-deploy
  then
    echo "already running"
  else
    sudo kubectl apply -f /root/jenk_cont/http_kubedeploy.yml
    sudo sleep 10
    P=$(kubectl get pods -l env=production -o jsonpath="{.items[0].metadata.name}")
    echo $P
    sudo kubectl cp /root/jenk_cont/web-code/*.html $P:/var/www/html/
    kubectl exec $P -- chmod 0777 /var/www/html/
    kubectl exec $P -- chmod 0777 /var/www/html/index.html
  fi
else
  if sudo kubectl get deploy | grep myweb-deploy
  then
    echo "already running"
  else
    sudo kubectl apply -f /root/jenk_cont/php_kubedeploy.yml
    POD=$(kubectl get pods -l env=production -o jsonpath="{.items[0].metadata.name}")
    sudo kubectl cp /root/jenk_cont/web-code/*.html $POD:/var/www/html/
    kubectl exec $POD -- chmod 0777 /var/www/html/
    kubectl exec $POD -- chmod 0777 /var/www/html/index.php
  fi
fi  

sudo kubectl create -f /root/jenk_cont/service.yml
sudo kubectl get svc

JOB 3

This is a testing job which checks whether our code is working fine; if any error occurs, a mail is sent to the Jenkins server admin or the developer.

Also, install a plugin named Downstream-Ext for later use.

Configuring SMTP email in jenkins

We can also download an extra email plugin for more configuration options.

The job looks like this:

JOB 4

This job will send an email to the server admin or developer if JOB 3 fails, which means the site is not accessible.

If we intentionally introduce a mistake, the mail received will look like this:

JOB 4 will only get triggered when JOB 3 fails.

If you are having trouble in sending email then visit : https://www.youtube.com/watch?v=DULs4Wq4xMg

Conclusion

Build Pipeline

Download build pipeline plugin…

Our website is accessible; we successfully deployed our web app on Kubernetes through a Jenkins pipeline and also configured automatic website testing and email notifications.

Thank You! Comment down your views.

Monitoring Services on K8S while keeping their data persistent

Monitoring is the practice of keeping an eye on a service: checking logs, watching metrics, and so on. For this we have tools like Prometheus and Grafana.

Prometheus and Grafana

Prometheus is a tool that monitors the metrics of a system and also provides some basic visuals to show stats. For this to work the metrics must be exposed, which is done by exporters available for many services. Prometheus stores the collected data as a time series database.

Grafana provides a great variety of visuals such as charts and graphs. It is mainly used for its interactive visualizations, which is why the combination of Grafana and Prometheus is preferred over most others. Grafana relies on the raw data it gets from Prometheus.

We are going to integrate Prometheus and Grafana and perform the following task this way:

  1. Deploy them as pods on top of Kubernetes by creating the appropriate resources: Deployment, ReplicaSet, Pod or Service.
  2. Make their data remain persistent.
  3. Expose both of them to the outside world.

Dockerfile

We are going to build our custom image for prometheus and grafana.

Prometheus :-

FROM centos:8
RUN yum install wget -y
RUN wget https://github.com/prometheus/prometheus/releases/download/v2.19.0/prometheus-2.19.0.linux-amd64.tar.gz
RUN tar -xvf prometheus-2.19.0.linux-amd64.tar.gz
WORKDIR prometheus-2.19.0.linux-amd64/
EXPOSE 9090
CMD ./prometheus

Grafana :-

FROM centos:8
RUN yum install wget -y
RUN wget https://dl.grafana.com/oss/release/grafana-7.0.3.linux-amd64.tar.gz
RUN tar -zxvf grafana-7.0.3.linux-amd64.tar.gz
WORKDIR grafana-7.0.3/bin/
EXPOSE 3000
CMD ./grafana-server

To build the image :-

docker build -t <image-name>:<tag> .

We need to deploy these services on kubernetes and make their data permanent. So, first we need to create a PVC (PersistentVolumeClaim).

PVC

PVC stands for PersistentVolumeClaim. As Kubernetes users we request storage through a PVC, which behind the scenes uses PV (PersistentVolume) resources to claim the storage for us. We can get both statically and dynamically provisioned storage through a PVC.

For more of PVC : https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Here we mount the directory which contains the main data that should stay persistent, so that even if our pods go down later due to some fault, the data will remain.

PVC for grafana :-

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
      name: graf-pvc
      labels:
              env: production
spec:
      accessModes:
              - ReadWriteOnce
      resources:
              requests:
                      storage: 1Gi

PVC for prometheus :-

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
      name: prom-pvc
      labels:
              env: production
spec:
      accessModes:
              - ReadWriteOnce
      resources:
              requests:
                      storage: 1Gi

Deploying on k8s

Prometheus :-

apiVersion: apps/v1
kind: Deployment
metadata:
      name: prom-deploy
      labels:
              env: production
spec:
      replicas: 1
      selector:
           matchLabels:
                  env: production
      template:
           metadata:
                  name: prom-pod
                  labels:
                       env: production
           spec:
                  containers:
                       - name: prom-con
                         image: mykgod/prometheus
                         volumeMounts:
                         - name: prom-persistent-storage
                           mountPath: "/prometheus-2.19.0.linux-amd64/data/"
                         ports:
                         - containerPort: 9090
                           name: prom-pod
                  volumes:
                         - name: prom-persistent-storage
                           persistentVolumeClaim:
                                   claimName: prom-pvc

Grafana :-

apiVersion: apps/v1
kind: Deployment
metadata:
        name: graf-deploy
        labels:
                env: production
spec:
        replicas: 1
        selector:
                matchLabels:
                        env: production
        template:
                metadata:
                        name: graf-pod
                        labels:
                                env: production
                spec:
                        containers:
                                - name: graf-con
                                  image: mykgod/grafana
                                  volumeMounts:
                                  - name: graf-persistent-storage
                                    mountPath: /grafana-7.0.3/data 
                                  ports:
                                  - containerPort: 3000
                                    name: graf-pod
                        volumes:
                                - name: graf-persistent-storage
                                  persistentVolumeClaim:
                                          claimName: graf-pvc

Service

We have exposed our deploy through following service.

apiVersion: v1
kind: Service
metadata:
        name: service-monitor-1
spec:
        selector:
                env: production
        type: NodePort 
        ports:
                - port: 9090
                  protocol: TCP
                  name: port-prom
                - port: 3000
                  protocol: TCP
                  name: port-graf

Testing

Now, it is the time to test our services…

kubectl get all

Check the NodePorts exposed for the services by the above command and open a web browser to see the web UIs of Grafana and Prometheus.

Now let’s make some changes to the prometheus.yml file inside our pod.

kubectl get pods
kubectl exec -it <prometheus_pod_name> -- bash

Here I added another system for metrics monitoring.
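
For example (a sketch: it assumes a node_exporter running on the second system at 192.168.99.102:9100, so adjust the target to your own setup), you could append another scrape job inside the pod like this:

cat >> prometheus.yml <<'EOF'

  - job_name: 'node1'
    static_configs:
      - targets: ['192.168.99.102:9100']
EOF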

kill -HUP 1
# this sends SIGHUP to process 1 (prometheus), which makes it reload its configuration

Now, we can see our metrics through prometheus and grafana.

We now have precious monitoring data in our Prometheus and Grafana servers, so let’s delete a pod: the Prometheus one, the Grafana one, or both.

kubectl get pods
kubectl delete pods <prom_pod_name>
kubectl delete pods <graf_pod_name>

Refresh the browser pages and we can see that the changes are still there and everything is working fine. We can also check that the pods were recreated by the Deployments.

Conclusion

We successfully deployed Grafana and Prometheus on top of a Kubernetes cluster, made their data persistent using PVCs, and exposed the services to the outside world.

Web page/app hosting on EC2 using S3, cloudfront with Terraform

Terraform is one of the most widely used tools for Infrastructure as Code. With Terraform, handling infrastructure is a piece of cake. Let's see what we are going to work on today.

What we are going to do:

  • Create a key pair and a security group that allows port 80.
  • Launch an EC2 instance using that key and security group.
  • Launch an EBS volume and mount it on /var/www/html of the EC2 instance.
  • The web app code is in a GitHub repo and is downloaded into /var/www/html/ (the web server's document root).
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.
  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the web app code.

For the above, we are using Terraform: https://www.terraform.io/downloads.html

I am using it on RHEL 8.
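
The whole configuration lives in ordinary .tf files, so the usual Terraform workflow applies; a minimal sketch, assuming the files below sit in one directory and the new_tf AWS CLI profile is already configured:

terraform init       # download the aws, tls, local and null providers
terraform validate   # optional: sanity-check the configuration
terraform apply      # review the plan, then type 'yes' to build everything
terraform destroy    # tear it all down when finished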

Approach

  • First of all, create the key pair to be used for EC2.
  • Second, create the security group to be assigned to the EC2 instance.
  • Then create the EC2 instance, the EBS volume, and a snapshot of that volume.
  • Next, create the S3 bucket.
  • Finally, CloudFront.

Configure the provider

# configure the provider
provider "aws" {
  region = "ap-south-1"
  profile = "new_tf"
}

Key Pair

We are creating the key using the tls_private_key resource and passing it to a local_file resource so it is saved on our host for later use.

# creating a key pair
resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
resource "aws_key_pair" "generated_key" {
  key_name   = "deploy-key"
  public_key = tls_private_key.key.public_key_openssh
}
# saving key to local file
resource "local_file" "deploy-key" {
    content  = tls_private_key.key.private_key_pem
    filename = "/root/terra/task1/deploy-key.pem"
}

Security Group

Security groups act as the firewall of an EC2 instance; they control traffic through rules that we define. Below we allow ingress on ports 22 and 80 and egress to anywhere in the world.

# creating a SG
resource "aws_security_group" "allow_ssh_http" {
  name        = "allow_ssh_http"
  description = "Allow ssh and http inbound traffic"
  
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "allow_ssh_http"
  }
}

EC2

We are creating the EC2 instance here, providing the AMI, key pair, security group, and instance type to use, and connecting to it over SSH to install and enable the web server.

# launching an ec2 instance
resource "aws_instance" "myin" {
  ami  = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = aws_key_pair.generated_key.key_name
  security_groups = [ "allow_ssh_http" ]
  
  depends_on = [
    null_resource.nulllocal2,
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd  php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
    Name = "os1"
  }
}

EBS volume, snapshot and attachment

Creating and attaching a separate volume is always good practice; this is how we make our web data persistent.

# create an ebs volume
resource "aws_ebs_volume" "ebstest" {
  availability_zone = aws_instance.myin.availability_zone
  size              = 1
  tags = {
    Name = "ebs1"
  }
}
# create an ebs snapshot
resource "aws_ebs_snapshot" "ebstest_snapshot" {
  volume_id = aws_ebs_volume.ebstest.id
  tags = {
    Name = "ebs1_snap"
  }
}
# attaching the volume
resource "aws_volume_attachment" "ebs1_att" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.ebstest.id
  instance_id = aws_instance.myin.id
  force_detach = true
}

Config and Code Deployment

This part connects to the EC2 instance launched previously, then formats and mounts our volume and pulls the GitHub repo containing the website.

resource "null_resource" "nullremote1"  {
  depends_on = [
    aws_volume_attachment.ebs1_att,
    aws_s3_bucket_object.object
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/mykg/sampleCloud.git /var/www/html"
    ]
  }
}

Additional

# setting read_permission on pem
resource "null_resource" "nulllocal2"  {
  depends_on = [
    local_file.deploy-key,
  ]
   provisioner "local-exec" {
            command = "chmod 400 /root/terra/task1/deploy-key.pem"
        }
}

This is optional, as we can also set the permission directly in the local_file resource used under Key Pair; a sketch is below.
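
A minimal sketch of that alternative, assuming a reasonably recent version of the local provider (which supports the file_permission argument):

# saving the key to a local file with restrictive permissions set up front
resource "local_file" "deploy-key" {
  content         = tls_private_key.key.private_key_pem
  filename        = "/root/terra/task1/deploy-key.pem"
  file_permission = "0400"
}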

S3

Here we create our S3 bucket, set its ACL, and then upload our image to it.

resource "aws_s3_bucket" "b" {
  bucket = "mynkbucket19"
  acl    = "public-read"
  tags = {
    Name        = "mynkbucket"
  }
}
resource "aws_s3_bucket_object" "object" {
  depends_on = [ aws_s3_bucket.b, ]
  bucket = "mynkbucket19"
  key    = "x.jpg"
  source = "/root/terra/task1/cloudfront/x.jpg"
  acl = "public-read"
}
locals {
  s3_origin_id = "S3-mynkbucket19"
}

Here, I uploaded my x.jpg directly through Terraform. There are other ways too; for example, I also did it with Jenkins below.

CloudFront and origin access identity

First, we generate an origin access identity using aws_cloudfront_origin_access_identity, to be used while creating the CloudFront distribution.

Then we set up the domain_name and origin_id, since we want CloudFront to serve content from our S3 bucket.

Next, we want to update the website code on the EC2 instance with the CloudFront domain, so we use a remote-exec provisioner to append the image tag to index.html. Then we set the default cache behavior; geo restrictions for specific countries could also be set here.

# origin access id
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "this is OAI to be used in cloudfront"
}
# creating cloudfront 
resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [ aws_cloudfront_origin_access_identity.oai, 
                 null_resource.nullremote1,  
  ]
  origin {
    domain_name = aws_s3_bucket.b.bucket_domain_name
    origin_id   = local.s3_origin_id
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }
    connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.object.key}'>\" >> /var/www/html/index.html",
      "EOF"
    ]
  }
  enabled             = true
  is_ipv6_enabled     = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    viewer_protocol_policy = "redirect-to-https"
  }
  
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Public IP on shell

# IP
output "IP_of_inst" {
  value = aws_instance.myin.public_ip
}

Automation using Jenkins

I have integrated Jenkins to automate the task a little bit.

Using S3 publisher plugin

The S3 publisher plugin is used to publish objects from the host into an S3 bucket. I used this plugin to pull the repo from GitHub and push it to the S3 bucket; we could do the same with Terraform itself using a small piece of code (a sketch follows below).
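
A rough sketch of that Terraform-only alternative, assuming the repo's images have already been cloned to a hypothetical /root/terra/task1/images directory (the bucket reference reuses aws_s3_bucket.b from above):

# upload every file found in the local images directory to the bucket
resource "aws_s3_bucket_object" "site_images" {
  for_each = fileset("/root/terra/task1/images", "*")

  bucket = aws_s3_bucket.b.bucket
  key    = each.value
  source = "/root/terra/task1/images/${each.value}"
  acl    = "public-read"
  etag   = filemd5("/root/terra/task1/images/${each.value}")
}

With for_each, adding or changing an image in that directory only touches the corresponding S3 object on the next apply.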

Go to Manage Jenkins and install the S3 publisher plugin.

Now, our previous job will look like this:

Finally, we have our hosted web page:

Conclusion

We successfully hosted the website on EC2 using an S3 bucket and CloudFront, and also integrated Jenkins to automate part of the task.

Thanks! Drop a suggestion below and let’s connect via LinkedIn.
