Saturday, November 16, 2019

Run your Kubernetes Application on Dedicated Hosting Servers with a Load Balancer

After creating a Kubernetes cluster on your own servers, the next question is how to configure a load balancer like the ones you get in AWS, GCP, Azure, etc.



Here we can make use of MetalLB (https://metallb.universe.tf/) as our load balancer and NFS as our storage.

This is my K8S cluster:

myk8s-tests/metallb$ kubectl get node -o wide
NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
kmaster.example.com    Ready    master   10d   v1.16.2   172.42.42.100   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker1.example.com   Ready    <none>   10d   v1.16.2   172.42.42.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker2.example.com   Ready    <none>   10d   v1.16.2   172.42.42.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4


Set up MetalLB in your K8S cluster.

1. Apply the MetalLB manifests:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

2. Create a ConfigMap that tells MetalLB which address range it may hand out:

cat metallb.yml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.110-172.42.42.120   # add your own IP range here
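
3. Apply the ConfigMap and confirm the MetalLB pods are running. This is just a quick sanity check; the metallb-system namespace is created by the manifest applied in step 1.

kubectl apply -f metallb.yml
kubectl get pods -n metallb-system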

Creating an NFS server for our K8S cluster storage:

# apt-get install nfs-kernel-server
# mkdir -p /srv/nfs/kubedata
# chmod -R 777 /srv/nfs/
# cat /etc/exports
/srv/nfs/kubedata *(rw,sync,no_subtree_check,insecure)
# exportfs -rav
exporting *:/srv/nfs/kubedata
# exportfs -v
/srv/nfs/kubedata
                <world>(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)
# showmount -e
Export list for ubuntu-01:
/srv/nfs/kubedata *

Then test the NFS mount from all your K8S nodes.
# showmount -e 172.42.42.10
Export list for 172.42.42.10:
/srv/nfs/kubedata *
# mount -t nfs 172.42.42.10:/srv/nfs/kubedata /mnt
# mount | grep kubedata
172.42.42.10:/srv/nfs/kubedata on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.42.42.101,local_lock=none,addr=172.42.42.10)
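
The nodes in this cluster run CentOS 7, so they need the NFS client utilities before they can mount the export (the NFS server above is an Ubuntu box, hence apt-get there). A quick sketch, to be run on each K8S node, and the test mount can be removed once verified:

# yum install -y nfs-utils     # NFS client packages for the CentOS nodes
# umount /mnt                  # clean up the test mount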


Now let's configure our K8S cluster to use the NFS server.

1. We need to create a PersistentVolume backed by our NFS server.

cat pv-nfs.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-manual
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.42.42.10
    path: "/srv/nfs/kubedata/nfs_manual"

2. After that, we will create a PersistentVolumeClaim that matches our PV.
cat pvc-nfs.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-manual
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany   # must match the access mode of the PV
  resources:
    requests:
      storage: 800Mi   # must not exceed the PV capacity (1Gi)
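
Create the claim; once it binds, Kubernetes matches it to the PV above through the manual storage class and the requested size and access mode:

$ kubectl create -f pvc-nfs.yml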

Verify that both the PV and the PVC were created and are Bound in your cluster.

nfs$ kubectl get pv,pvc
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/pv-nfs-manual   1Gi        RWX            Retain           Bound    default/pvc-nfs-manual   manual                  53m
NAME                                   STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs-manual   Bound    pv-nfs-manual   1Gi        RWX            manual         53m
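
If the PVC stays Pending instead of Bound, describing the objects usually shows why (for example a storage class, size, or access-mode mismatch):

$ kubectl describe pv pv-nfs-manual
$ kubectl describe pvc pvc-nfs-manual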

Now we can deploy our app.

cat nfs-nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      volumes:
      - name: www   # must match the volumeMounts name below
        persistentVolumeClaim:
          claimName: pvc-nfs-manual
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: www   # must match the volume name above
          mountPath: /usr/share/nginx/html
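
Create the deployment, and optionally drop a test page into the NFS directory so nginx has something to serve (the index.html below is just an illustrative test file):

$ kubectl create -f nfs-nginx.yml
# echo "Hello from NFS" > /srv/nfs/kubedata/nfs_manual/index.html    # on the NFS server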

Create a Service for our app with type LoadBalancer.

cat nfs-nginx-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  type: LoadBalancer

kubectl create -f nfs-nginx-svc.yml 

nfs$ kubectl get all

NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-f5bd4749b-nftg9   1/1     Running   0          51m
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          10d
service/nginx        LoadBalancer   10.103.153.86   172.42.42.110   8080:32012/TCP   11s
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   1/1     1            1           51m
NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-f5bd4749b   1         1         1       51m
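
MetalLB has assigned 172.42.42.110 from our address pool as the external IP. Hitting the service from any machine on that network should return whatever you placed in the NFS directory (the response below assumes the test index.html from earlier):

$ curl http://172.42.42.110:8080
Hello from NFS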






