Saturday, November 16, 2019

Run Your Kubernetes Application on Dedicated Hosting Servers with a Load Balancer

After creating your Kubernetes cluster on dedicated (bare-metal) servers, the obvious next question is: how do I configure a load balancer, like the load balancers available in AWS, GCP, Azure, etc.?



Here we can use MetalLB (https://metallb.universe.tf/) as our load balancer and NFS as the storage backend.

This is my K8S cluster:

myk8s-tests/metallb$ kubectl get node -o wide
NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
kmaster.example.com    Ready    master   10d   v1.16.2   172.42.42.100   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker1.example.com   Ready    <none>   10d   v1.16.2   172.42.42.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker2.example.com   Ready    <none>   10d   v1.16.2   172.42.42.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4


Set up MetalLB in your K8S cluster.

1. Apply the MetalLB manifests:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

2. Create a ConfigMap with the address pool MetalLB is allowed to hand out (the commands to apply it follow the file):

cat metallb.yml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.110-172.42.42.120   # replace with your own IP range
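
The ConfigMap above still has to be applied, and MetalLB should come up cleanly before moving on. A minimal sketch of those two steps (pod names will differ in your cluster):

kubectl apply -f metallb.yml
kubectl get pods -n metallb-system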

Creating an NFS server for our K8S cluster storage:

# apt-get install nfs-kernel-server
# mkdir -p /srv/nfs/kubedata
# chmod -R 777 /srv/nfs/
# cat /etc/exports
/srv/nfs/kubedata *(rw,sync,no_subtree_check,insecure)
# exportfs -rav
exporting *:/srv/nfs/kubedata
# exportfs -v
/srv/nfs/kubedata
		<world>(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)
# showmount -e
Export list for ubuntu-01:
/srv/nfs/kubedata *
Then test the NFS mount from each of your K8S nodes.
# showmount -e 172.42.42.10
Export list for 172.42.42.10:
/srv/nfs/kubedata *
# mount -t nfs 172.42.42.10:/srv/nfs/kubedata /mnt
# mount | grep kubedata
172.42.42.10:/srv/nfs/kubedata on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.42.42.101,local_lock=none,addr=172.42.42.10)
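
If the mount command fails on a node, the NFS client tools are probably missing (an assumption about the CentOS node images); install them first, and unmount the test mount once you are done:

# yum install -y nfs-utils
# umount /mnt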


Now configure the k8s cluster to use our NFS server.

1. We need to create a PersistentVolume (PV) that points at the NFS export.

cat pv-nfs.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-manual
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.42.42.10
    path: "/srv/nfs/kubedata/nfs_manual"

2. After that, we will create a PersistentVolumeClaim (PVC) that matches our PV:
cat pvc-nfs.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-manual
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany   # must match the access mode of the PV
  resources:
    requests:
      storage: 800Mi   # must fit within the PV's 1Gi capacity
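
Before the verification below, two small steps are needed: the nfs_manual sub-directory referenced by the PV path is not created automatically, so create it on the NFS server, and then apply both manifests to the cluster (the chmod mirrors the permissions already set on the parent export):

# mkdir -p /srv/nfs/kubedata/nfs_manual
# chmod 777 /srv/nfs/kubedata/nfs_manual

kubectl create -f pv-nfs.yml
kubectl create -f pvc-nfs.yml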

Verify that the PV and the PVC are created and bound in your cluster.

nfs$ kubectl get pv,pvc
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/pv-nfs-manual   1Gi        RWX            Retain           Bound    default/pvc-nfs-manual   manual                  53m
NAME                                   STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs-manual   Bound    pv-nfs-manual   1Gi        RWX            manual         53m

Now we can deploy our app.

cat nfs-nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      volumes:
      - name: www          # must match the volumeMounts name below
        persistentVolumeClaim:
          claimName: pvc-nfs-manual
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: www        # must match the volume name above
          mountPath: /usr/share/nginx/html
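
Apply the Deployment in the same way and wait for the pod to come up (the PVC must already be Bound):

kubectl create -f nfs-nginx.yml
kubectl get pods -l run=nginx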

Create a Service for our app with type LoadBalancer:

cat nfs-nginx-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx   # must match the pod label used in the Deployment
  type: LoadBalancer

kubectl create -f nfs-nginx-svc.yml 

nfs$ kubectl get all

NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-f5bd4749b-nftg9   1/1     Running   0          51m
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          10d
service/nginx        LoadBalancer   10.103.153.86   172.42.42.110   8080:32012/TCP   11s
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   1/1     1            1           51m
NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-f5bd4749b   1         1         1       51m
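
To confirm the whole path works, you can drop a test page into the NFS export on the NFS server and request it through the MetalLB IP shown above (the file name and content here are just an example); the curl should return the page served by nginx from the NFS-backed volume:

echo "hello from nfs" > /srv/nfs/kubedata/nfs_manual/index.html
curl http://172.42.42.110:8080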

Monday, November 11, 2019

KIND - Kubernetes IN Docker

With KIND we can easily get a K8S cluster up and running for testing. It is very lightweight compared to other local setups.

This post is a reference for those who are trying to install a Kubernetes cluster on an Ubuntu machine.

My Ubuntu 18.04 LTS server is a VirtualBox VM, and I have installed the following dependencies on that server.

1. Install Docker
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
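
Optionally, add your user to the docker group so kind can talk to the Docker daemon without sudo (log out and back in for this to take effect):

sudo usermod -aG docker $USER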

2. Install the Go language
https://golang.org/dl/
$ wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
$ export PATH=$PATH:/usr/local/go/bin
$ go version
go version go1.13.4 linux/amd64
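
The PATH export above only lives in the current shell; to keep it across logins you can append it to your shell profile (assuming bash):

echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile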
3. Install kubectl
 $ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.5M  100 44.5M    0     0  1209k      0  0:00:37  0:00:37 --:--:-- 1182k
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl



Now install KIND
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.5.1
go: finding sigs.k8s.io v0.5.1
$ rm go1.13.4.linux-amd64.tar.gz
$ export PATH=$PATH:/home/ajeesh/go/bin
$ kind version
v0.5.1
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
   $ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
  $ kind get kubeconfig-path
/home/ajeesh/.kube/kind-config-kind

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                  NAMES
b52ee9180210        kindest/node:v1.15.3   "/usr/local/bin/entr…"   14 minutes ago      Up 13 minutes       35507/tcp, 127.0.0.1:35507->6443/tcp   kind-control-plane
$ kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   17m   v1.15.3

This is a single-node cluster. If you want to create a multi-node HA cluster, you need the following configuration.

First, delete the current cluster.
$ kind delete cluster
Deleting cluster "kind" ...
$KUBECONFIG is still set to use /home/ajeesh/.kube/kind-config-kind even though that file has been deleted, remember to unset it
$ unset KUBECONFIG
Deleting the cluster also removes the kind kubeconfig file from your .kube folder.
/.kube$ ls
cache  http-cache
 $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

A cluster with 3 control-plane nodes and 3 workers

$ cat multi-node-kind.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
$ kind create cluster --config multi-node-kind.yml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼
 ✓ Preparing nodes 📦📦📦📦📦
 ✓ Configuring the external load balancer ⚖️
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining more control-plane nodes 🎮
 ✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info

I ran into resource limits on the VM, so I reduced the number of worker nodes from 3 to 1 (and dropped one control-plane node as well, as the output below shows).
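
The exact reduced config file isn't shown here, but based on the node list below it would look roughly like this (an assumption: two control-plane nodes and one worker):

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker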


$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
kind-control-plane    Ready    master   2m20s   v1.15.3
kind-control-plane2   Ready    master   106s    v1.15.3
kind-worker           Ready    <none>   46s     v1.15.3
$ docker ps
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                                  NAMES
a0e52dd3effa        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes                                               kind-worker
2a6c8833c3cb        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes        35213/tcp, 127.0.0.1:35213->6443/tcp   kind-control-plane2
4e38366ad4a7        kindest/haproxy:2.0.0-alpine   "/docker-entrypoint.…"   4 minutes ago       Up 4 minutes        37331/tcp, 127.0.0.1:37331->6443/tcp   kind-external-load-balancer
8a0ce1959722        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes        36145/tcp, 127.0.0.1:36145->6443/tcp   kind-control-plane
$ kind get nodes
kind-worker
kind-control-plane2
kind-external-load-balancer
kind-control-plane
$ kubectl -n kube-system get all
NAME                                              READY   STATUS    RESTARTS   AGE
pod/coredns-5c98db65d4-hmrxc                      1/1     Running   0          4m31s
pod/coredns-5c98db65d4-vgj9w                      1/1     Running   0          4m31s
pod/etcd-kind-control-plane                       1/1     Running   0          3m36s
pod/etcd-kind-control-plane2                      1/1     Running   0          4m12s
pod/kindnet-7754r                                 1/1     Running   1          4m13s
pod/kindnet-9c4rt                                 1/1     Running   1          4m31s
pod/kindnet-b2td4                                 1/1     Running   1          3m13s
pod/kube-apiserver-kind-control-plane             1/1     Running   0          3m36s
pod/kube-apiserver-kind-control-plane2            1/1     Running   0          4m12s
pod/kube-controller-manager-kind-control-plane    1/1     Running   1          3m58s
pod/kube-controller-manager-kind-control-plane2   1/1     Running   0          3m58s
pod/kube-proxy-c628w                              1/1     Running   0          4m13s
pod/kube-proxy-p9787                              1/1     Running   0          4m31s
pod/kube-proxy-zf6pm                              1/1     Running   0          3m13s
pod/kube-scheduler-kind-control-plane             1/1     Running   1          3m58s
pod/kube-scheduler-kind-control-plane2            1/1     Running   0          4m12s
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m46s
NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/kindnet      3         3         3       3            3           <none>                        4m43s
daemonset.apps/kube-proxy   3         3         3       3            3           beta.kubernetes.io/os=linux   4m45s
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           4m46s
NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5c98db65d4   2         2         2       4m31s