Wednesday, December 4, 2019

Helm 3: The new changes and an overview


Helm 3 has changed some of its fundamentals: the Tiller pod and the Tiller service account permissions we used in version 2 are gone.


Version 3 is easier to install and better from a security standpoint. In earlier versions, many people installed Tiller with full admin privileges. If you are still using Helm version 2, make sure you use a dedicated service account with proper RBAC for Tiller.

In the new version, Helm uses the same kubeconfig permissions you already use for managing your Kubernetes cluster.
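
A quick way to see this is to check which context your kubeconfig points at and then list releases with exactly the same credentials (commands only, no Tiller involved):

$ kubectl config current-context
$ helm ls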

Here are some useful commands and configuration for Helm version 3.

Installation:

We can download the latest stable release from here: https://github.com/helm/helm/releases



ajeesh@Aspire-A515-51G:~/Downloads/helm$ tar -zxvf helm-v3.0.0-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/README.md
linux-amd64/LICENSE
ajeesh@Aspire-A515-51G:~/Downloads/helm$ cd linux-amd64/
ajeesh@Aspire-A515-51G:~/Downloads/helm/linux-amd64$ ls
helm  LICENSE  README.md
ajeesh@Aspire-A515-51G:~/Downloads/helm/linux-amd64$ sudo mv helm /usr/local/bin/
ajeesh@Aspire-A515-51G:~/Downloads/helm/linux-amd64$ helm --help
The Kubernetes package manager
Common actions for Helm:
- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts
ajeesh@Aspire-A515-51G:~/Downloads/helm/linux-amd64$ helm version --short
v3.0.0+ge29ce2a

If you are on Helm 2, you can easily migrate your version 2 configuration and releases to Helm version 3 using the 2to3 plugin.

ajeesh@Aspire-A515-51G:~$ helm plugin install https://github.com/helm/helm-2to3
Downloading and installing helm-2to3 v0.2.0 ...
https://github.com/helm/helm-2to3/releases/download/v0.2.0/helm-2to3_0.2.0_linux_amd64.tar.gz
Installed plugin: 2to3
ajeesh@Aspire-A515-51G:~$ helm plugin list
NAME    VERSION DESCRIPTION
2to3    0.2.0   migrate and cleanup Helm v2 configuration and releases in-place to Helm v3
ajeesh@Aspire-A515-51G:~$ helm 2to3 --help
Migrate and Cleanup Helm v2 configuration and releases in-place to Helm v3

Usage:
  2to3 [command]

Available Commands:
  cleanup     cleanup Helm v2 configuration, release data and Tiller deployment
  convert     migrate Helm v2 release in-place to Helm v3
  help        Help about any command
  move        migrate Helm v2 configuration in-place to Helm v3

If you are coming from version 2, you first need to migrate the Helm v2 configuration to v3 using the command:
helm 2to3 move config

Then you can convert your installed releases one by one using the command:

helm 2to3 convert jenkins
helm 2to3 convert wordpress
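
If you want to see what a conversion would do before touching anything, the plugin also supports a dry run (see helm 2to3 convert --help); a small sketch with the same example release:

helm 2to3 convert --dry-run jenkins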

After migrating all the releases, you can clean up your old Helm version 2 data:

helm 2to3 cleanup : this removes the Helm v2 configuration, release data and the Tiller deployment from your Kubernetes cluster.

ajeesh@Aspire-A515-51G:~$ helm search repo
Error: no repositories configured

Here we need to add our helm repo:

ajeesh@Aspire-A515-51G:~$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
ajeesh@Aspire-A515-51G:~$ helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com

 ajeesh@Aspire-A515-51G:~$ helm search repo jenkins
NAME            CHART VERSION   APP VERSION     DESCRIPTION
stable/jenkins  1.9.7

If you try to install a chart using the old Helm version 2 syntax, you will get an error, since the install command is different in version 3.

ajeesh@Aspire-A515-51G:~$ helm install stable/wordpress --name myblog
Error: unknown flag: --name

Here you need to use the helm command like this, with the release name as a positional argument:

ajeesh@Aspire-A515-51G:~$ helm install myblog stable/wordpress
NAME: myblog
LAST DEPLOYED: Tue Dec  3 22:48:56 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the WordPress URL:
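
Afterwards you can verify the new release with the standard Helm 3 commands (output omitted here):

$ helm list
$ helm status myblog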





Saturday, November 16, 2019

Run your Kubernetes Application on Dedicated Hosting servers with a Load Balancer

After creating your Kubernetes cluster, you will wonder how you can configure a load balancer like the ones in AWS, GCP, Azure, etc.



Here we can make use of MetalLB (https://metallb.universe.tf/) as our load balancer and NFS as our storage.

This is my K8S cluster:

myk8s-tests/metallb$ kubectl get node -o wide
NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
kmaster.example.com    Ready    master   10d   v1.16.2   172.42.42.100   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker1.example.com   Ready    <none>   10d   v1.16.2   172.42.42.101   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4
kworker2.example.com   Ready    <none>   10d   v1.16.2   172.42.42.102   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.4


Set up MetalLB in your K8S cluster.

1. Apply the MetalLB manifest:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

2. Create the Layer 2 address-pool ConfigMap (cat metallb.yml) and apply it as shown after the listing:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.110-172.42.42.120   # add your own IP range here
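
Then apply the ConfigMap and make sure the MetalLB pods come up in the namespace created by the manifest:

kubectl apply -f metallb.yml
kubectl get pods -n metallb-system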

Creating an NFS server for our K8S cluster storage:

# apt-get install nfs-kernel-server
# mkdir -p /srv/nfs/kubedata
# chmod -R 777 /srv/nfs/
# cat /etc/exports
/srv/nfs/kubedata *(rw,sync,no_subtree_check,insecure)
# exportfs -rav
exporting *:/srv/nfs/kubedata
# exportfs -v
/srv/nfs/kubedata
    <world>(rw,wdelay,insecure,root_squash,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)
# showmount -e
Export list for ubuntu-01:
/srv/nfs/kubedata *
Then you need to test the NFS mount on all your k8s nodes.
# showmount -e 172.42.42.10
Export list for 172.42.42.10:
/srv/nfs/kubedata *
# mount -t nfs 172.42.42.10:/srv/nfs/kubedata /mnt
# mount | grep kubedata
172.42.42.10:/srv/nfs/kubedata on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.42.42.101,local_lock=none,addr=172.42.42.10)
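
The nodes in this cluster run CentOS 7, so they need the NFS client utilities before the mount test works; the test mount can be removed again afterwards (a short sketch, assuming the stock package names):

# yum install -y nfs-utils      # on each CentOS node (apt-get install nfs-common on Ubuntu nodes)
# umount /mnt                   # remove the test mount again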


Now configure our k8s cluster with our NFS server.

1. We need to create a PersistentVolume that points to our NFS server.

cat pv-nfs.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-manual
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.42.42.10
    path: "/srv/nfs/kubedata/nfs_manual"

2. After that, we will create a volume claim that matches our PV
cat pvc-nfs.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-manual
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany   # must match an access mode offered by the PV
  resources:
    requests:
      storage: 800Mi   # the request must fit within the PV capacity (1Gi)
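
The path referenced by the PV must exist on the NFS server before pods can use it; then create both objects from the files above:

# mkdir -p /srv/nfs/kubedata/nfs_manual     # on the NFS server
kubectl create -f pv-nfs.yml
kubectl create -f pvc-nfs.yml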

Verify that both manually created objects are bound and working fine in your cluster.

nfs$ kubectl get pv,pvc
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/pv-nfs-manual   1Gi        RWX            Retain           Bound    default/pvc-nfs-manual   manual                  53m
NAME                                   STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-nfs-manual   Bound    pv-nfs-manual   1Gi        RWX            manual         53m

Now we can deploy our app.

cat nfs-nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      volumes:
      - name: www                  # this name must match the volumeMounts name below
        persistentVolumeClaim:
          claimName: pvc-nfs-manual
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: www                # must be the same as the volume name above
          mountPath: /usr/share/nginx/html
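
Create the deployment and wait for the pod to come up (commands only; output omitted):

kubectl create -f nfs-nginx.yml
kubectl get pods -l run=nginx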

Create a service for our app using type LoadBalancer:

cat nfs-nginx-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx   # must match the pod labels (run: nginx) of the deployment above
  type: LoadBalancer

kubectl create -f nfs-nginx-svc.yml 

nfs$ kubectl get all

NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-f5bd4749b-nftg9   1/1     Running   0          51m
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>          443/TCP          10d
service/nginx        LoadBalancer   10.103.153.86   172.42.42.110   8080:32012/TCP   11s
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   1/1     1            1           51m
NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-f5bd4749b   1         1         1       51m
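
To test the whole chain, drop an index.html into the NFS export on the NFS server and curl the external IP MetalLB assigned to the service (172.42.42.110:8080 in the output above); without an index file nginx will only return a 403:

# echo "hello from NFS" > /srv/nfs/kubedata/nfs_manual/index.html    # on the NFS server
$ curl http://172.42.42.110:8080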







Monday, November 11, 2019

KIND - Kubernetes IN Docker

With KIND we can easily get a K8S cluster up and running for testing. It is very lightweight compared to other local setups.

This post is a reference for those who are trying to install a Kubernetes cluster on an Ubuntu machine.

My Ubuntu 18.04 LTS server is a VirtualBox VM, and I have installed the following dependencies on that server.

1. Install Docker
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

2. Install GO Language
https://golang.org/dl/
$ wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
$ export PATH=$PATH:/usr/local/go/bin
$ go version
go version go1.13.4 linux/amd64
3. kubectl
 $ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.5M  100 44.5M    0     0  1209k      0  0:00:37  0:00:37 --:--:-- 1182k
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl



Now install KIND
$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.5.1
go: finding sigs.k8s.io v0.5.1
$ rm go1.13.4.linux-amd64.tar.gz
$ export PATH=$PATH:/home/ajeesh/go/bin
$ kind version
v0.5.1
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
   $ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
  $ kind get kubeconfig-path
/home/ajeesh/.kube/kind-config-kind

$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                  NAMES
b52ee9180210        kindest/node:v1.15.3   "/usr/local/bin/entr…"   14 minutes ago      Up 13 minutes       35507/tcp, 127.0.0.1:35507->6443/tcp   kind-control-plane
$ kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   17m   v1.15.3

But this is a single-node cluster. If you want to create a multi-node HA cluster, we need the following settings.

First, delete the current cluster.
$ kind delete cluster
Deleting cluster "kind" ...
$KUBECONFIG is still set to use /home/ajeesh/.kube/kind-config-kind even though that file has been deleted, remember to unset it
$ unset KUBECONFIG
Deleting the cluster also removes the kind kubeconfig file from your .kube folder.
/.kube$ ls
cache  http-cache
 $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

A cluster with 3 control-plane nodes and 3 workers

$ cat multi-node-kind.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
~$ kind create cluster --config multi-node-kind.yml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼
 ✓ Preparing nodes 📦📦📦📦📦
 ✓ Configuring the external load balancer ⚖️
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining more control-plane nodes 🎮
 ✓ Joining worker nodes 🚜
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info

I had some resource issues, so I reduced the worker nodes from 3 to 1.
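
For reference, a reduced config matching the node list below (two control-plane nodes and one worker) would look something like this; the file name is just an example:

$ cat reduced-multi-node-kind.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker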


$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
kind-control-plane    Ready    master   2m20s   v1.15.3
kind-control-plane2   Ready    master   106s    v1.15.3
kind-worker           Ready    <none>   46s     v1.15.3
:~$ docker ps
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                                  NAMES
a0e52dd3effa        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes                                               kind-worker
2a6c8833c3cb        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes        35213/tcp, 127.0.0.1:35213->6443/tcp   kind-control-plane2
4e38366ad4a7        kindest/haproxy:2.0.0-alpine   "/docker-entrypoint.…"   4 minutes ago       Up 4 minutes        37331/tcp, 127.0.0.1:37331->6443/tcp   kind-external-load-balancer
8a0ce1959722        kindest/node:v1.15.3           "/usr/local/bin/entr…"   4 minutes ago       Up 3 minutes        36145/tcp, 127.0.0.1:36145->6443/tcp   kind-control-plane
:~$ kind get nodes
kind-worker
kind-control-plane2
kind-external-load-balancer
kind-control-plane
~$ kubectl -n kube-system get all
NAME                                              READY   STATUS    RESTARTS   AGE
pod/coredns-5c98db65d4-hmrxc                      1/1     Running   0          4m31s
pod/coredns-5c98db65d4-vgj9w                      1/1     Running   0          4m31s
pod/etcd-kind-control-plane                       1/1     Running   0          3m36s
pod/etcd-kind-control-plane2                      1/1     Running   0          4m12s
pod/kindnet-7754r                                 1/1     Running   1          4m13s
pod/kindnet-9c4rt                                 1/1     Running   1          4m31s
pod/kindnet-b2td4                                 1/1     Running   1          3m13s
pod/kube-apiserver-kind-control-plane             1/1     Running   0          3m36s
pod/kube-apiserver-kind-control-plane2            1/1     Running   0          4m12s
pod/kube-controller-manager-kind-control-plane    1/1     Running   1          3m58s
pod/kube-controller-manager-kind-control-plane2   1/1     Running   0          3m58s
pod/kube-proxy-c628w                              1/1     Running   0          4m13s
pod/kube-proxy-p9787                              1/1     Running   0          4m31s
pod/kube-proxy-zf6pm                              1/1     Running   0          3m13s
pod/kube-scheduler-kind-control-plane             1/1     Running   1          3m58s
pod/kube-scheduler-kind-control-plane2            1/1     Running   0          4m12s
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4m46s
NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.apps/kindnet      3         3         3       3            3           <none>                        4m43s
daemonset.apps/kube-proxy   3         3         3       3            3           beta.kubernetes.io/os=linux   4m45s
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           4m46s
NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5c98db65d4   2         2         2       4m31s










Tuesday, October 8, 2019

VirtualBox 6 upgrade and issues for the latest version

Upgrade your VirtualBox to the latest release 6

Here I am showing how I upgraded (and later downgraded) VirtualBox on my Ubuntu machine.
NB: You need to backup your virtual servers before you perform the steps.


Please find below the steps I followed to upgrade VirtualBox from version 5.1.38 to the latest stable version, 6.0.12. I didn't take any backup :)



ajeesh@ajeesh-Aspire-A515-51G:~$ ps aux | grep virt
ajeesh   19129  0.1  0.1 167452 12920 ?        S    21:52   0:00 /usr/lib/virtualbox/VBoxXPCOMIPCD
ajeesh   19134  0.4  0.2 761312 23784 ?        Sl   21:52   0:01 /usr/lib/virtualbox/VBoxSVC --auto-shutdown

ajeesh@ajeesh-Aspire-A515-51G:~$ kill -9 19129 19134


root@ajeesh-Aspire-A515-51G:/etc/apt/sources.list.d# apt-get install virtualbox-6.0
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  libsdl-ttf2.0-0
The following packages will be REMOVED:
  virtualbox virtualbox-ext-pack virtualbox-qt

The following NEW packages will be installed:
  libsdl-ttf2.0-0 virtualbox-6.0
0 upgraded, 2 newly installed, 3 to remove and 258 not upgraded.
Need to get 109 MB of archives.
After this operation, 147 MB of additional disk space will be used.
Do you want to continue? [Y/n] y



Setting up virtualbox-6.0 (6.0.12-133076~Ubuntu~xenial) ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Processing triggers for libc-bin (2.23-0ubuntu10) ...
root@ajeesh-Aspire-A515-51G:





But this version 6.0.12 is causing some issues for my Vagrant machines.

vagrant-container$ vagrant up

The provider 'virtualbox' that was requested to back the machine
'kmaster' is reporting that it isn't usable on this system. The
reason is shown below:

Vagrant has detected that you have a version of VirtualBox installed
that is not supported by this version of Vagrant. Please install one of
the supported versions listed below to use Vagrant:

4.0, 4.1, 4.2, 4.3, 5.0, 5.1

A Vagrant update may also be available that adds support for the version
you specified. Please check www.vagrantup.com/downloads.html to download
the latest version.
ajeesh@ajeesh-Aspire-A515-51G:

So I have downgraded the VirtualBox version to 5.1.38.

sources.list.d# apt-get install virtualbox-5.1
Reading package lists... Done
Building dependency tree      
Reading state information... Done
Do you want to continue? [Y/n] y
Get:1 http://download.virtualbox.org/virtualbox/debian xenial/contrib amd64 virtualbox-5.1 amd64 5.1.38-122592~Ubuntu~xenial [66.0 MB]
Fetched 66.0 MB in 57s (1,147 kB/s)                                                                                                  
Preconfiguring packages ...

Setting up virtualbox-5.1 (5.1.38-122592~Ubuntu~xenial) ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
root@ajeesh-Aspire-A515-51G:
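
Before bringing the Vagrant machines back up, it is worth confirming which VirtualBox version is active again (VBoxManage ships with VirtualBox):

$ VBoxManage --version
$ vagrant up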


vagrant-container$ kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
kmaster.example.com    Ready     master    2d21h     v1.16.1
kworker1.example.com   Ready     <none>    2d21h     v1.16.1
kworker2.example.com   Ready     <none>    2d21h     v1.16.1

Thursday, October 3, 2019

LND Lightning Network Vulnerability reported


Recently, Lightning Network developer Rusty Russell disclosed a serious vulnerability affecting older versions of the major Lightning implementations:

    CVE-2019-12998 c-lightning < 0.7.1
    CVE-2019-12999 lnd < 0.7
    CVE-2019-13000 eclair <= 0.3

He described the issue as follows:


A lightning node accepting a channel must check that the funding transaction
output does indeed open the channel proposed.  Otherwise an attacker can claim
to open a channel but either not pay to the peer, or not pay the full amount.
Once that transaction reaches the minimum depth, it can spend funds from the
channel. The victim will only notice when it tries to close the channel and none
of the commitment or mutual close transactions it has are valid.


Solution
--------

Once the funding transaction is seen, peers MUST check that the outpoint as
described in `funding_created`[1] is a funding transaction output[2] with
the amount described in `open_channel`[3].


Fixed versions:
c-lightning: v0.7.1 and above
lnd: v0.7.1 and above
eclair: v0.3.1 and above

So the best way to fix the issue is to upgrade to the latest release. While writing this, I can see the latest version of lnd is v0.8.0-beta. From this release onwards, lnd will only support database upgrades from the previous major release, so anyone running v0.6.0 is required to upgrade to v0.7.x first and then to v0.8.0.

VMWare Tools : Not Running

In the VMware vSphere Client you will see the following error.

VMWare Tools : Not Running  ( Not Installed)


We can easily fix this issue by installing the open-vm-tools package on the Linux guest.

root@testing:/home/ubuntu# apt-get install open-vm-tools

Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following extra packages will be installed:
  libdumbnet1 libicu52 zerofree
Suggested packages:
  open-vm-tools-desktop
The following NEW packages will be installed:
  libdumbnet1 libicu52 open-vm-tools zerofree
0 upgraded, 4 newly installed, 0 to remove and 218 not upgraded.
Need to get 7,237 kB of archives.
After this operation, 30.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

After installing this package, vSphere can show the guest's IP address and MAC address through VMware Tools.
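
You can also confirm inside the guest that the tools daemon is running (a quick check; both commands come with the open-vm-tools package):

$ systemctl status open-vm-tools
$ vmware-toolbox-cmd -v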



Saturday, September 28, 2019

upgrade Visual Studio Code version in Ubuntu

It is very easy to upgrade Visual Studio Code on your Ubuntu desktop machine.

ajeesh@ajeesh-Aspire-A515-51G:~$ code --version
1.35.1
c7d83e57cd18xxx2843bda1bcf21f
x64
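
Since VS Code comes from the Microsoft apt repository, refresh the package lists first so apt can see the newer build (assuming the repository is already configured on the machine):

$ sudo apt-get update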


root@ajeesh-Aspire-A515-51G:/home/ajeesh# apt-get install code
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
 code
1 upgraded, 0 newly installed, 0 to remove and 265 not upgraded.
Need to get 55.4 MB of archives.
After this operation, 45.8 MB of additional disk space will be used.
Get:1 https://packages.microsoft.com/repos/vscode stable/main amd64 code amd64 1.38.1-1568209190 [55.4 MB]
Fetched 55.4 MB in 23min 10s (39.9 kB/s)                                                                                              
(Reading database ... 379957 files and directories currently installed.)
Preparing to unpack .../code_1.38.1-1568209190_amd64.deb ...
Unpacking code (1.38.1-1568209190) over (1.35.1-1560350270) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up code (1.38.1-1568209190) ...
root@ajeesh-Aspire-A515-51G:/home/ajeesh# exit

ajeesh@ajeesh-Aspire-A515-51G:~$ code --version
1.38.1
b37e54c98e1xxe5a3761284e3ffb0
x64
ajeesh@ajeesh-Aspire-A515-51G:~$

Thursday, February 28, 2019

How can we reduce the Docker image size?

DIVE

This tool provides a way to discover and explore the contents of a Docker image. Additionally, it estimates the amount of wasted space and identifies the offending files in the image.

root@ajeesh-desktop:/home/ajeesh# wget https://github.com/wagoodman/dive/releases/download/v0.6.0/dive_0.6.0_linux_amd64.deb

root@ajeesh-desktop:/home/ajeesh# apt install ./dive_0.6.0_linux_amd64.deb




root@ajeesh-desktop:/home/ajeesh# dive test_frontend:v1
Fetching image...
Parsing image...
  ├─ [layer:  1] 02c8cd0778ef78e : [==============================>] 100 % (29046/29046)
  ├─ [layer:  2] 0e24a3d04ebc058 : [==============================>] 100 % (4149/4149)
  ├─ [layer:  3] 1fbf962f1faa79c : [==============================>] 100 % (268/268)
  ├─ [layer:  4] 263da0d188f39bc : [==============================>] 100 % (20812/20812)
  ├─ [layer:  5] 2c6d59c15b5a74f : [==============================>] 100 % (22112/22112)
  ├─ [layer:  6] 4a3e40ad2625f55 : [==============================>] 100 % (6/6)
  ├─ [layer:  7] 5f67701353f1632 : [==============================>] 100 % (23/23)
  ├─ [layer:  8] 6e9ac776b0f4ad8 : [==============================>] 100 % (2/2)
  ├─ [layer:  9] 71f81c9582dbf66 : [==============================>] 100 % (25/25)
  ├─ [layer: 10] 84a49a835b59732 : [==============================>] 100 % (2/2)
  ├─ [layer: 11] 9a9aa5f37fdc24c : [==============================>] 100 % (9285/9285)
  ├─ [layer: 12] bb8588e06b91e7b : [==============================>] 100 % (3/3)
  ├─ [layer: 13] cacd4f0f725d2c5 : [==============================>] 100 % (4/4)
  ├─ [layer: 14] d107e0074bfc596 : [==============================>] 100 % (7010/7010)
  ├─ [layer: 15] d7ecfda03fa2ea7 : [==============================>] 100 % (6293/6293)
  ├─ [layer: 16] f08e6b62ac1a1d3 : [==============================>] 100 % (1354/1354)
  ├─ [layer: 17] ff6cffce858c4c9 : [==============================>] 100 % (1428/1428)
  ├─ [layer: 18] ffa2599ac9ec53e : [==============================>] 100 % (885/885)
  ╧
Analyzing image...
Building cache...
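
Newer releases of dive also ship a non-interactive CI mode that fails a build when the image efficiency drops below a threshold. This is an assumption about a feature that may not be present in v0.6.0, so check dive --help for your version; a rough sketch:

CI=true dive test_frontend:v1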