Web page/app hosting on EC2 using S3 and CloudFront with Terraform

Terraform is one of the most widely used tools for Infrastructure as Code. With Terraform, managing infrastructure is a piece of cake. Let us see what we are going to work on today.

What we are going to do:

  • Create a key pair and a security group that allows inbound traffic on port 80 (plus 22 for SSH).
  • Launch an EC2 instance using that key and security group.
  • Launch one EBS volume and mount it on /var/www/html of the EC2 instance.
  • The web app code is in a GitHub repo and should be cloned into /var/www/html/ (the web server's document root).
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
  • Create a CloudFront distribution in front of the S3 bucket (which contains the images) and use the CloudFront URL in the web app code.

For all of the above we are using Terraform: https://www.terraform.io/downloads.html

I am using it on RHEL 8.
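If you have not used Terraform before, the whole workflow boils down to a few commands. A minimal sketch, run from the directory containing the .tf files:

# download the AWS provider and initialize the working directory
terraform init
# preview what will be created, then create it
terraform plan
terraform apply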

Approach

  • First of all, create the key to be used with EC2.
  • Second, create the security group to be assigned to the EC2 instance.
  • Then create the EC2 instance, the EBS volume, and a snapshot of the volume, respectively.
  • Next, create the S3 bucket.
  • At last, CloudFront.

Configure the provider

# configure the provider
provider "aws" {
  region = "ap-south-1"
  profile = "new_tf"
}
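The profile above refers to a named AWS CLI profile that holds the credentials Terraform will use. Assuming the AWS CLI is installed, it can be created like this:

# store the access key, secret key and default region under the profile "new_tf"
aws configure --profile new_tf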

Key Pair

We are creating the key using the tls_private_key resource and passing it to a local_file resource so it is saved on our host for later use.

# creating a key pair
resource "tls_private_key" "key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
resource "aws_key_pair" "generated_key" {
  key_name   = "deploy-key"
  public_key = tls_private_key.key.public_key_openssh
}
# saving key to local file
resource "local_file" "deploy-key" {
    content  = tls_private_key.key.private_key_pem
    filename = "/root/terra/task1/deploy-key.pem"
}

Security Group

Security groups are the firewall of an EC2 instance; they control traffic through rules. Below we allow ingress on ports 22 and 80 and egress to anywhere in the world.

# creating a SG
resource "aws_security_group" "allow_ssh_http" {
  name        = "allow_ssh_http"
  description = "Allow ssh and http inbound traffic"
  
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "allow_ssh_http"
  }
}
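As a side note, since the two ingress rules differ only in the port, they could also be generated from a list with a dynamic block. A sketch of what would replace the two ingress blocks above:

# sketch: generate both ingress rules from a list of ports
dynamic "ingress" {
  for_each = [22, 80]
  content {
    description = "port ${ingress.value}"
    from_port   = ingress.value
    to_port     = ingress.value
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}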

EC2

Here we create the EC2 instance, providing the AMI, key, security group, and instance type to use, and then connect to it via SSH to install the web server.

# launching an ec2 instance
resource "aws_instance" "myin" {
  ami  = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = aws_key_pair.generated_key.key_name
  security_groups = [ aws_security_group.allow_ssh_http.name ]
  
  depends_on = [
    null_resource.nulllocal2,
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd  php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
    Name = "os1"
  }
}
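The AMI id above is hardcoded for the ap-south-1 region. If you would rather not hardcode it, a data source can look it up; a sketch, assuming you want the latest Amazon Linux 2 image:

# sketch: look up the latest Amazon Linux 2 AMI instead of hardcoding the id
data "aws_ami" "amazon_linux2" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
# then: ami = data.aws_ami.amazon_linux2.id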

EBS, Snapshot and Attachment

Creating and attaching a separate volume is always a good practice: it keeps our data persistent even if the instance goes away.

# create an ebs volume
resource "aws_ebs_volume" "ebstest" {
  availability_zone = aws_instance.myin.availability_zone
  size              = 1
  tags = {
    Name = "ebs1"
  }
}
# create an ebs snapshot
resource "aws_ebs_snapshot" "ebstest_snapshot" {
  volume_id = aws_ebs_volume.ebstest.id
  tags = {
    Name = "ebs1_snap"
  }
}
# attaching the volume
resource "aws_volume_attachment" "ebs1_att" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.ebstest.id
  instance_id = aws_instance.myin.id
  force_detach = true
}

Config and Code Deployment

This part connects to the EC2 instance launched previously, formats and mounts our volume, and pulls the GitHub repo containing the website. Note that the device we attached as /dev/sdh shows up inside the instance as /dev/xvdh.

resource "null_resource" "nullremote1"  {
  depends_on = [
    aws_volume_attachment.ebs1_att,
    aws_s3_bucket_object.object
  ]
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdh",
      "sudo mount  /dev/xvdh  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/mykg/sampleCloud.git /var/www/html"
    ]
  }
}
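One caveat: mkfs.ext4 runs on every re-provision and would wipe the volume. A hedged variant that formats only when the device has no filesystem yet (assuming blkid is available on the AMI, as it is on Amazon Linux):

provisioner "remote-exec" {
  inline = [
    # format only if blkid finds no existing filesystem on the device
    "sudo blkid /dev/xvdh || sudo mkfs.ext4 /dev/xvdh",
    "sudo mount /dev/xvdh /var/www/html",
  ]
}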
Additional

# setting read permission on the pem file
resource "null_resource" "nulllocal2" {
  depends_on = [
    local_file.deploy-key,
  ]
  provisioner "local-exec" {
    command = "chmod 400 /root/terra/task1/deploy-key.pem"
  }
}

This is optional, as we can also set the permission in the local_file resource used under Key Pair, as sketched below.
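For reference, a sketch of that alternative (it needs a version of the local provider that supports the file_permission argument):

# sketch: set the mode directly when the file is written
resource "local_file" "deploy-key" {
  content         = tls_private_key.key.private_key_pem
  filename        = "/root/terra/task1/deploy-key.pem"
  file_permission = "0400"
}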

S3

Here we create our S3 bucket, set its ACL, and then upload our image to it.

resource "aws_s3_bucket" "b" {
  bucket = "mynkbucket19"
  acl    = "public-read"
  tags = {
    Name        = "mynkbucket"
  }
}
resource "aws_s3_bucket_object" "object" {
  depends_on = [ aws_s3_bucket.b, ]
  bucket = "mynkbucket19"
  key    = "x.jpg"
  source = "/root/terra/task1/cloudfront/x.jpg"
  acl = "public-read"
}
locals {
  s3_origin_id = "S3-mynkbucket19"
}

Here, I uploaded my x.jpg directly through Terraform. There are other ways too; for instance, I also did it with Jenkins below.

CloudFront and origin access id

First, we generate an origin access identity (OAI) using the aws_cloudfront_origin_access_identity resource, to be used while creating the CloudFront distribution.

Then we set up the domain_name and origin_id so that CloudFront can access our S3 bucket.

Next we want to update the website code sitting on EC2 with the CloudFront domain, so we update it using a remote-exec provisioner. Then we set our default cache behavior; restrictions for certain countries could also be set here.

# origin access id
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "this is OAI to be used in cloudfront"
}
# creating cloudfront 
resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [ aws_cloudfront_origin_access_identity.oai, 
                 null_resource.nullremote1,  
  ]
  origin {
    domain_name = aws_s3_bucket.b.bucket_domain_name
    origin_id   = local.s3_origin_id
    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = file("/root/terra/task1/deploy-key.pem")
    host     = aws_instance.myin.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.object.key}'>\" >> /var/www/html/index.html",
      "EOF"
    ]
  }
  enabled             = true
  is_ipv6_enabled     = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    viewer_protocol_policy = "redirect-to-https"
  }
  
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
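Since the objects were uploaded with a public-read ACL, the OAI is not strictly needed here. If you wanted to keep the bucket private instead, a bucket policy could grant the OAI read access; a sketch:

# sketch: let only the CloudFront OAI read objects from the bucket
resource "aws_s3_bucket_policy" "oai_read" {
  bucket = aws_s3_bucket.b.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = aws_cloudfront_origin_access_identity.oai.iam_arn }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.b.arn}/*"
    }]
  })
}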
Public IP on shell

# IP
output "IP_of_inst" {
  value = aws_instance.myin.public_ip
}
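It can also help to print the CloudFront domain, so you do not have to dig it out of the console; a small sketch:

# sketch: expose the CloudFront domain name after apply
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}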

Automation using Jenkins

I have integrated Jenkins to automate the task a little bit.

Using S3 publisher plugin

The S3 publisher plugin is used to publish objects from the host into an S3 bucket. I used this plugin to pull the repo from GitHub and push it to the S3 bucket; we can do the same with Terraform itself using a small piece of code, as sketched below.
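For example, a sketch that uploads every file from a local folder (here the same cloudfront/ folder used earlier) with for_each:

# sketch: push all images in the folder to the bucket in one resource
resource "aws_s3_bucket_object" "site_images" {
  for_each = fileset("/root/terra/task1/cloudfront", "*")

  bucket = aws_s3_bucket.b.id
  key    = each.value
  source = "/root/terra/task1/cloudfront/${each.value}"
  acl    = "public-read"
  etag   = filemd5("/root/terra/task1/cloudfront/${each.value}")
}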

Go to Manage Jenkins and install the S3 publisher plugin.

Now, our previous job will look like this:

Finally, we have our hosted web page:

Conclusion

We hosted the website on EC2 using an S3 bucket and CloudFront, and also integrated Jenkins to automate part of the task.

Thanks! Drop a suggestion below and let’s connect via LinkedIn.
