Deploying services on EKS using EFS (Drupal)

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that lets us run Kubernetes workloads on the AWS cloud. EKS manages the control plane for us and provides facilities such as node scalability.

Here, we are going to deploy Drupal on top of EKS and use EFS (Elastic File System) for PVC creation, so that the storage is accessible from any Availability Zone in the region. The problem with EBS is that a volume is tied to a single Availability Zone, so a pod rescheduled to a node in another zone loses access to it.

Drupal is a popular open-source content management system (CMS) for building and managing websites.

Workflow

We are going to create the Kubernetes cluster from the command line. By default, the AWS CLI does not provide many functions and properties for EKS, so we will configure a dedicated client called eksctl. Then, using the kubectl client, we will deploy our services on the cluster, and for the PVCs we will create and provision EFS.

Configuring Clients

eksctl: https://github.com/weaveworks/eksctl

awscli: https://aws.amazon.com/cli/

kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Let’s first create an IAM user and attach the following policies.

Download the user’s credentials to configure the AWS CLI with them.

aws configure
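If you prefer a non-interactive setup, the same configuration can be scripted with `aws configure set`; the access keys below are placeholders for the credentials downloaded for the IAM user:

```shell
# Non-interactive AWS CLI configuration (sketch; placeholder credentials,
# substitute the access key pair downloaded for the IAM user).
configure_aws() {
  aws configure set aws_access_key_id     "REPLACE_WITH_ACCESS_KEY_ID"
  aws configure set aws_secret_access_key "REPLACE_WITH_SECRET_ACCESS_KEY"
  aws configure set region                ap-south-1
}
# configure_aws   # uncomment to run
```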

Now, try running:

aws eks list-clusters

But we want to use eksctl. Since we already configured the AWS CLI, eksctl will retrieve the credentials from there.

eksctl get cluster

Creating Cluster

We have to write a YAML configuration file (cluster.yml) to create the cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: aws_key_iam

  - name: ng2
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: aws_key_iam

Node groups automate the provisioning and lifecycle of worker nodes, and they also make the nodes scalable.
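For example, once the cluster exists, a node group can be resized from the CLI; a sketch using the cluster and group names from cluster.yml above:

```shell
# Scale node group ng1 of my-cluster to 3 nodes (sketch; names taken
# from the cluster.yml config above).
scale_nodegroup() {
  eksctl scale nodegroup --cluster my-cluster --name ng1 --nodes 3
}
# scale_nodegroup   # uncomment to run against a live cluster
```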

We have attached an SSH key so that we can log in to the nodes if needed.

eksctl create cluster -f cluster.yml

Now, we want our kubectl client to connect to the cluster created above, so we need to update the kubeconfig:

aws eks update-kubeconfig --name my-cluster
kubectl config view
kubectl get pods

EFS

We want our PVCs to be created on EFS, so we need to create a file system. But before going further, there is one small prerequisite: by default, the Amazon Linux nodes do not have the utility to connect to EFS. We need to log in to each node and install it:

sudo yum install amazon-efs-utils -y
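One way to avoid logging in to each node by hand is to loop over the node IPs with kubectl and ssh. This is only a sketch: it assumes the Amazon Linux default user ec2-user and the aws_key_iam key pair from cluster.yml.

```shell
# Install the EFS mount helper on every worker node (sketch; assumes
# ec2-user and the aws_key_iam key pair from the cluster config).
install_efs_utils() {
  local key="$1"
  for ip in $(kubectl get nodes \
      -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'); do
    ssh -i "$key" ec2-user@"$ip" "sudo yum install -y amazon-efs-utils"
  done
}
# install_efs_utils ~/.ssh/aws_key_iam.pem   # uncomment to run
```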

Now, let’s create EFS
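The file system can be created from the console, or scripted roughly as below. The subnet and security-group IDs are placeholders for your VPC's values, and the security group must allow NFS (port 2049) from the worker nodes.

```shell
# Create an EFS file system and one mount target per node subnet
# (sketch; subnet-*/sg-* IDs are placeholders for your VPC's values).
create_efs() {
  local fs_id
  fs_id=$(aws efs create-file-system \
      --creation-token drupal-efs \
      --region ap-south-1 \
      --query FileSystemId --output text)
  for subnet in subnet-PLACEHOLDER1 subnet-PLACEHOLDER2; do
    aws efs create-mount-target \
        --file-system-id "$fs_id" \
        --subnet-id "$subnet" \
        --security-groups sg-PLACEHOLDER
  done
  echo "$fs_id"
}
# create_efs   # uncomment to run
```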

Let’s create a namespace in the cluster to launch our services in.

kubectl create ns myns

We now have to write the YAML for an EFS provisioner, so that PVCs can be dynamically provisioned on EFS.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-7369e3a2
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: my-efs/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-7369e3a2.efs.ap-south-1.amazonaws.com
            path: /

Before running the above, check the values for FILE_SYSTEM_ID and AWS_REGION in the AWS console.
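They can also be read from the CLI instead of the console, and the NFS server name for the volume is derived from the same two values:

```shell
# Look up the file system ID from the CLI (first file system shown;
# adjust the query if you have several):
#   aws efs describe-file-systems --query 'FileSystems[0].FileSystemId' --output text
FS_ID="fs-7369e3a2"
REGION="ap-south-1"
# The DNS name used in the provisioner's nfs.server field:
EFS_DNS="${FS_ID}.efs.${REGION}.amazonaws.com"
echo "$EFS_DNS"   # prints fs-7369e3a2.efs.ap-south-1.amazonaws.com
```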

kubectl create -f create-efs-provisioner.yml -n myns

Now, we grant the provisioner permissions with a cluster role binding.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: myns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

kubectl create -f create-rbac.yml -n myns

Deploying

The last step is to deploy the services. For this, I have created a kustomization.yml that holds our secret for the database password and the order in which the resource files will be applied.

secretGenerator:
- name: db-pass
  literals:
  - password=1234
  
namespace: myns

resources:
  - storage-pvc.yml
  - postgresql.yml 
  - drupal-deploy.yml

storage-pvc.yml

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: my-efs/aws-efs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

postgresql.yml

apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    env: production
spec:
  ports:
    - port: 5432
  selector:
    env: production
    tier: postgreSQL
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    env: production
spec:
  replicas: 1  # a single replica; multiple Postgres pods must not share one data directory
  selector:
    matchLabels:
      env: production
      tier: postgreSQL
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: production
        tier: postgreSQL
    spec:
      containers:
        - image: postgres:latest
          name: postgresql
          env:
            - name: POSTGRES_USER
              value: drupal
            - name: POSTGRES_DB
              value: drupal_production
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                 name: db-pass
                 key: password
          ports:
            - containerPort: 5432
              name: postgresql
          volumeMounts:
            - name: postgresql
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgresql
          persistentVolumeClaim:
            claimName: postgres-claim

drupal-deploy.yml

apiVersion: v1
kind: Service
metadata:
  name: drupal-svc
  labels:
    env: production
spec:
  ports:
    - port: 80
  selector:
    env: production
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pv-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pv-claim2
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  labels:
    env: production
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: drupal
  labels:
    env: production
spec:
  replicas: 2
  selector:
    matchLabels:
      env: production
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        env: production
        tier: frontend
    spec:
      containers:
      - image: drupal:8-apache
        name: drupal-cont
        ports:
        - containerPort: 80
          name: drupal-port
        volumeMounts:    
            - name: drupal-persistent-storage1
              mountPath: /var/www/html/profiles
            - name: drupal-persistent-storage2
              mountPath: /var/www/html/themes
      volumes:
      - name: drupal-persistent-storage1
        persistentVolumeClaim:
          claimName: drupal-pv-claim1
      - name: drupal-persistent-storage2
        persistentVolumeClaim:
          claimName: drupal-pv-claim2

Now all that remains is to execute the kustomization:

kubectl apply -k .

Check the pod status:

kubectl get pods -n myns

Check the services and copy the external address (the ELB hostname) of the Drupal service.

kubectl get svc -n myns
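If you prefer not to copy it from the table, the ELB hostname of a LoadBalancer service can also be extracted directly; a small sketch:

```shell
# Print the Drupal service URL once the LoadBalancer is provisioned (sketch).
drupal_url() {
  local host
  host=$(kubectl get svc drupal-svc -n myns \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  echo "http://${host}/"
}
# drupal_url   # uncomment to run against the live cluster
```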

Additional

We can also use the Fargate service, which provides serverless compute for EKS.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: far-cluster
  region: ap-southeast-1

fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default

eksctl create cluster -f fargatecluster.yml

To check the Fargate profile:

eksctl get fargateprofile --cluster far-cluster

Conclusion

We have successfully deployed Drupal on Amazon EKS, used EFS for PVC creation, and seen how the serverless Fargate service can be used with EKS.

Thanks for coming this far!
