Friday, December 18, 2020

Kubernetes is deprecating Docker?

No. Kubernetes is only deprecating Docker as a container runtime after v1.20.

Docker support is not going away entirely; Kubernetes is just deprecating the "dockershim".

Kubernetes created a standard interface called the Container Runtime Interface (CRI) for all runtime implementations, and Docker is simply the most widely used runtime. Docker is not the only option: containerd, CRI-O, rkt, and others can also serve as the container runtime. The kubelet does not talk to the container runtime directly; it talks to it through the CRI. In Docker's case, however, Kubernetes cannot use the CRI to communicate with the runtime, because Docker does not implement the CRI. So Kubernetes developed a wrapper application called "dockershim", which speaks the CRI protocol on one side and the Docker Engine (dockerd) protocol on the other.
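You can check which runtime your nodes are currently using: the CONTAINER-RUNTIME column of the wide node listing shows it (sample output below is illustrative):

kubectl get nodes -o wide
NAME     STATUS   ROLES    VERSION   CONTAINER-RUNTIME
master   Ready    master   v1.20.0   docker://19.3.13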

When will this be fully removed?

As of now (12-17-2020):
K8s v1.20 - kubelet starts showing a deprecation warning message
v1.21 - will also show the same warning message
v1.22 - will also show the same warning message
v1.23 - dockershim will be removed

Questions:
Q1: Do we need to install Docker?
Ans: No, you don't need to install Docker; instead, install "containerd" or CRI-O.
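As an illustration, a minimal sketch of pointing the kubelet at containerd instead of dockershim (flags as of the v1.20 era; the socket path can vary by distribution):

kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock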

Q2: Will your Docker images work?
Ans: Yes, you can still build and push images to your registry. However, "docker ps" can't see containers created through the CRI; instead, there is a separate tool, "crictl":
docker ps --> crictl ps, docker info --> crictl info, etc.
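A quick sketch of the crictl equivalents, assuming containerd's default socket path:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info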

Q3: What about performance and security?
Ans: With Docker as the runtime, K8S carries lots of components it doesn't need, such as the Docker API, CLI, and server (the server bundles the container runtime plus volume and network management), while K8S only needs the runtime itself. Removing the Docker layer gives better performance, and fewer components mean fewer security risks.

Q4: What is the name of the container runtime used by Docker?
Ans: containerd, which is already part of the CNCF and is maintained and developed as a separate project. containerd is the most common alternative to using Docker as the runtime.

Q5: Is containerd used by any service providers?
Ans: Yes, containerd is already used by major cloud platforms (AWS EKS, Google Kubernetes Engine).

Q6: Do I need to make any modifications to my managed K8S cluster running on AWS or GCP?
Ans: No; the cloud providers will take care of installing the binaries and container runtime on the K8S worker nodes.

Q7: What about an on-prem K8S cluster?
Ans: Yes, action is required. There are two options: 1) change the container runtime to "containerd" or CRI-O, or 2) if you still want to use "dockershim", install it on your cluster manually, since Mirantis has now taken over support of dockershim: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/


Thursday, August 6, 2020

EIA 2020 Draft Withdrawal request

Sample Letter body:

========================================

To,
C.K.Mishra
Secretary 
Ministry of Environment, Forest and Climate Change
Indira Pariyavaran Bhavan
Jor Bagh, New Delhi

Date : XX-XX-2020

From,
Your_name
Address

Dear Mr. Mishra,

Subject: Withdraw the draft EIA notification, 2020 [F.N.2-50/2018/IA.III] and defer the process of public comments in light of the COVID-19 pandemic.

I am Your_Name, a citizen of India, writing this mail with reference to the draft EIA notification, 2020, which was uploaded on the environment ministry’s website on 12.3.2020, seeking public comments within sixty days of the issuance of the notification. I am happy to hear that this deadline has been extended to August 11. However, I am deeply concerned that the draft notification has been put out in the midst of a national health crisis. Due to the prevailing global pandemic, offices and public movement have been restricted.

The EIA notification is an important regulation through which the impacts of land-use change, water extraction, tree felling, pollution, and waste and effluent management for industrial and infrastructure projects are studied and used in developmental decision-making. Any change in this law has a direct bearing on the living and working conditions of people and on the ecology.

Given the design and implementation of the EIA notification, it is crucial that the government provide a suitable and adequate opportunity to those impacted or likely to be affected. Opportunities to understand and discuss the implications of the proposed amendments may be severely hindered by the present health emergency, with restricted public movement, social distancing, and challenges to everyday activities. These restrictions also make it impossible to disseminate information about the notification to communities who deserve to know about and influence it.

So I genuinely request the Ministry of Environment to:

1. Withdraw the proposed amendments of the Draft EIA notification 2020 as early as possible.

2. Consider reissuing the draft only after health conditions related to COVID-19 have normalized and civic life is restored across the country.

3. Ensure that there are widespread and informed public discussions on the implications of these amendments.

4. Ensure full disclosure of the nature of comments received and the reasons for acceptance or rejection of these comments, prior to the issuance of the final amendments.

I hope that the environment ministry will uphold its obligations towards informed public participation, such as its commitment to Principle 10 of the Rio Declaration and the Principles of Natural Justice, while taking a considered view on the proposed amendments to the EIA notification, 2020.

Copy to: 1. Geeta Menon, Jt Secy, MoEF&CC (menong@cag.gov.in)

Yours faithfully 
Your_Name

========================================

To address : eia2020-moefcc@gov.in
CC : menong@cag.gov.in
Email subject : Withdraw EIA 2020 draft

Wednesday, July 29, 2020

AWS Certified Solutions Architect – Associate C02 Tips and Tricks




Recently I passed the AWS Certified Solutions Architect - Associate [SAA-C02] exam. I would like to share some of my experience regarding the latest exam and its preparation.
You can now schedule the exam at your home (only Pearson VUE supports this option at the moment).

Requirements for writing your exam from home:

  • Windows 10 OS (Linux will not work)
  • I used my cousin's laptop [4 GB DDR3, 500 GB HDD, i3]
  • Broadband internet connection (my JIO connection failed at the start of the exam, so I switched connections, possibly because of my home location; it is too risky to have to do this during the exam)
  • Passport or driving license.
  • Test your machine [network, audio, camera] from here: https://home.pearsonvue.com/aws/onvue


I did the following to prepare for the exam:

  • A Cloud Guru CEO Ryan Kroonenburg's course
  • Solved between 200 and 500 practice questions/dumps (minimum)
  • Attended AWS meetups + AWS webinars (optional) [I am an active member of AWS Users Kochi]
  • Discussed the exam with a friend who had already taken it [for me it was Muhasin-Urolime, AWS expert, thanks, dude..]
  • However, you need some LUCK anyway... and keep in mind that you have to crack the AWS exam pattern [NB: for beginners and intermediates]

Some of the exam topics that came up in my SAA-C02:
  • VPC (more than 4)
  • EFS
  • AutoScaling ( more than 5 )
  • EBS (more than 2 )
  • RDS (more than 5)
  • S3  (more than 3 )
  • Storage Gateway (more than 2)
  • Data Sync (more than 2)
  • DynamoDB
  • ElastiCache
  • RedShift
  • Kinesis
  • ServerLESS (more than 3 )
  • SQS (more than 2 )
  • CloudFront (more than 3)
  • Cognito
  • Key management
  • etc. (others I am not sure about...)

Tips and Tricks

AWS Organizations
1. Single point of maintenance + limiting access to specific services or actions across all team members' AWS accounts = Use Service Control Policies (SCPs)

EFS 
2. Shared storage between multiple EC2 instances + file-locking capabilities = EFS
3. High availability + POSIX-compliant + concurrent access from EC2 instances = EFS

VPC
4. For HA, we need two AZs, and each AZ contains 3 subnets (1 public for the ALB + 1 private for web servers + 1 private for the database).
5. To provide a private connection from a VPC to AWS services = Use a VPC endpoint
6. Outbound-only IPv6 traffic = Egress-only internet gateway

AutoScaling
7. High availability + scalability + web server + session stickiness = Auto Scaling group + ALB + multiple AZs
8. Prevent any scaling delay = Use scheduled scaling to scale out EC2 instances in advance
9. HA = Auto Scaling group (ASG) + ELB + multiple EC2 instances in each AZ
10. Scaling based on high demand at peak times = Dynamic scaling
11. Scheduled workloads = use scheduled scaling (see the sketch below).
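As a sketch of item 11, scheduled scaling can be set up with the AWS CLI (group name, action name, and times are illustrative):

aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-asg \
    --scheduled-action-name scale-out-peak \
    --recurrence "0 8 * * *" \
    --min-size 2 --max-size 10 --desired-capacity 6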

EBS
12. I/O-intensive + relational databases = EBS Provisioned IOPS SSD (io1)
13. Improve the performance of an EBS volume + handle heavy workloads = use EBS Provisioned IOPS SSD (io1)
14. SAN disk = Block storage = EBS
15. Log processing + sequential access + throughput rate of 500 MB/s = EBS Throughput Optimized HDD (st1)
16. Proprietary file system = EBS

RDS
17. For read performance = Add more read replicas to Amazon RDS
18. Transactional + high performance + data size in the 16 TB to 64 TB range = Amazon Aurora
19. An RDS Reserved Instance's region, DB engine, DB instance class, deployment type, and term length cannot be changed later.

S3
20. Back up data less frequently + rapid access + low cost = Amazon S3 Standard-IA
21. Restrict access = generate S3 pre-signed URLs (see the sketch at the end of this list)
   The pre-signed URLs are valid only for the specified duration.
22. To restrict access to content that you serve from Amazon S3 buckets:
Create a special CloudFront user called an origin access identity (OAI) and associate it with your distribution.
Configure your S3 bucket permissions so that CloudFront can use the OAI to access the files in your bucket and serve them to your users.
23. PUT/COPY/POST/DELETE requests per second per S3 prefix = 3,500
24. Encrypt S3 bucket + encrypt Redshift + move data = encryption of data at rest.
25. Secure + scalable + highly available = S3
26. Short-term/temporary access = Amazon S3 pre-signed URL
27. Enabling versioning in both source and destination buckets is a prerequisite for cross-region replication in Amazon S3
28. Object store + immutable = Amazon Glacier
29. Bypass web servers and store files directly in the S3 bucket = pre-signed URL.
30. Expedited retrieval: within 1 - 5 minutes
31. Bulk retrieval: within 5 - 12 hours
32. Vault Lock and Standard retrieval: within 3 - 5 hours
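As a sketch of items 21/26/29, a pre-signed URL can be generated with the AWS CLI (bucket, key, and expiry in seconds are illustrative):

aws s3 presign s3://mybucket/myobject --expires-in 300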

Storage Gateway
33. Storage Gateway in cached volumes mode is the best option to migrate iSCSI storage to the cloud
34. NFS is supported by File Gateway only

DynamoDB
35. 50 ms latency + load increasing exponentially = Amazon DynamoDB
36. S3, DynamoDB, and Lambda all have HA
37. Centralized database + strong consistency + scalable + cost-optimized = Amazon DynamoDB
38. Enable Amazon DynamoDB Auto Scaling = LEAST operational changes
39. Data in chunks + little latency = NoSQL DB = DynamoDB
40. Big Data + flexible schema + indexed data + scalable = Amazon DynamoDB
41. Lowest-latency data retrieval + highest scalability = DynamoDB

ElastiCache
42. Repeated complex queries = Caching = Amazon ElastiCache.
43. Real-time data + in-memory = Use Redis.
44. Memcached -> simplicity, Redis -> rich set of features.

RedShift
45. High performance + big historical data + business intelligence tools = data warehouse = Amazon Redshift
46. Run different query types on big data = Amazon Redshift workload management
47. Data warehouse + Big Data + fast = Amazon Redshift

Kinesis
48. 1,000 bids per second + process in order + no lost messages + multiple services to process each bid = Amazon Kinesis Data Streams
49. Real-time stream + large volume + AWS serverless + custom SQL = Amazon Kinesis Data Analytics
50. Real-time + BIG data streaming = Amazon Kinesis Data Streams
51. Data analytics + SQL = Amazon Kinesis Data Analytics
52. 100,000 requests per second + sequential events + clickstream analysis = Use Amazon Kinesis Data Streams
53. IoT data + streams + partition by equipment + S3 = Use Amazon Kinesis Data Streams

ServerLESS [Lambda]
54. Migration to AWS + stateless application + static content + less operational overhead = serverless solution = Amazon Cognito + Amazon S3 + Amazon API Gateway + AWS Lambda
55. Lambda has a default limit of 1,000 concurrent executions
56. AWS compute solution + no special hardware + uses 512 MB of memory to run = AWS Lambda functions
57. Lambda is the best option to handle S3 events
58. Scalable + cost-effective + serverless = Amazon API Gateway with AWS Lambda functions
59. Securely store database passwords + customer master key + Lambda function = Lambda environment variables

SQS
60. SQS prevents losing orders
61. Sell the oldest items first = Use Amazon SQS with a FIFO queue
62. MOST efficient + cost-effective = Decouple the two tiers using Amazon SQS.
63. Handle failed messages = Amazon SQS dead-letter queue

Cloudfront
64. CloudFront has geo-restriction, not geo-routing
65. Custom origin "on-premises" + enhance the performance of downloading static files = Amazon CloudFront

CloudFormation:
66. Pilot light DR scenario = DB replication + AWS CloudFormation

CloudTrail
67. For monitoring API calls

Elastic Beanstalk
68. Easy deployment + without managing infrastructure = Elastic Beanstalk
69. Simple deployment + scalable + running on IIS = AWS Elastic Beanstalk

Amazon Cognito
70. MFA + mobile = Amazon Cognito
71. Block suspicious sign-ins = Amazon Cognito user pools

AWS Shield
72. Protect the application from DDoS attacks = Use AWS Shield

KEY:
73. AWS manages both data key and master key + automatically manages both encryption and decryption = SSE-S3
74. Automated rotation of the encryption keys + tracking the usage of the encryption key = use SSE-KMS

Misc:
75. Machine learning, high-performance computing, video processing, and financial modeling = Amazon FSx for Lustre



Wednesday, May 6, 2020

5 less than K8S = K3S Lightweight Kubernetes

Installing and configuring a lightweight Kubernetes cluster.

This is a Lightweight Kubernetes distribution for production workloads.

You can complete the Kubernetes installation in less than 5 minutes.

Document for reference: https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/


Installation Steps:

Master : curl -sfL https://get.k3s.io | sh -
Worker : curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -



Here I am using a master and a worker node with 1 GB RAM each (Ubuntu 18) in my VirtualBox. The main advantage of this installation is that there are no prerequisites.

Properties:
- You can install it on Raspberry Pi hardware
- By default, data is kept in SQLite instead of etcd, but you can configure a different datastore (see the sketch below)
- It uses the Flannel network by default
- It uses containerd, not Docker
- It just needs a Linux kernel and cgroups
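For example, a minimal sketch of pointing k3s at an external MySQL datastore instead of the embedded SQLite, via the --datastore-endpoint server option (connection string is illustrative):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--datastore-endpoint=mysql://user:pass@tcp(db-host:3306)/k3s" sh -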


Master Node:
==============

root@master:/home# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Master_IP --flannel-iface=enp0s8" sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.17.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
root@master:/home#


root@master:/home# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master  Ready    master   9m27s   v1.17.4+k3s1
root@master:/home/#

kubectl is installed by the Rancher script:
root@master:/home# which kubectl
/usr/local/bin/kubectl

root@master:/var/lib# cd /var/lib/rancher/
root@master:/var/lib/rancher# ls
k3s

root@master:/var/lib/rancher# cd /etc/rancher/
root@master:/etc/rancher# ls
k3s  node

root@master:/etc/rancher# cd /var/lib/rancher/k3s/server/
root@master:/var/lib/rancher/k3s/server# ls
cred  db  kine.sock  manifests  node-token  static  tls  token

TOKEN LOCATION:
root@master:/var/lib/rancher/k3s/server# cat token
K10e08a165e58554e19bf1f0eab12dd06e8345655b7efe52bcd04029a76226b2034::server:5d82295abb1eb943c32bbcd1fec959d3

KUBE-CONFIG FILE LOCATION:
root@master:~# cd /etc/rancher/k3s/
root@master:/etc/rancher/k3s# ls
k3s.yaml
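To manage the cluster from your workstation, you can copy this kubeconfig and point it at the master instead of localhost (a minimal sketch; replace Master_IP with your master's address):

scp root@Master_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
sed -i 's/127.0.0.1/Master_IP/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes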


WORKER NODE Installation:
=========================

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Worker_IP --flannel-iface=enp0s8" K3S_URL="https://Master_IP:6443" K3S_TOKEN="xxxxxxxxxxx034::server:5d82295abxxxxxxxc959d3" sh -


vagrant@worker:~$ sudo su
root@worker:/home# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Worker_IP --flannel-iface=enp0s8" K3S_URL="https://Master_IP:6443" K3S_TOKEN="K1xxx6b2034::server:5xxxxxxx59d3" sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.17.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
root@worker:/home#


After this, you can see that your new worker node has been added to your Kubernetes cluster.
root@master:/etc/rancher/k3s# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
worker   Ready    <none>   39s   v1.17.4+k3s1
master   Ready    master   27m   v1.17.4+k3s1
root@master:/etc/rancher/k3s#


So we have completed the cluster installation.


For testing, we can deploy an Nginx application and check the cluster further.


root@master:/etc/rancher/k3s# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   29m


root@master:/etc/rancher/k3s# kubectl run urolime --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/urolime created
root@master:/etc/rancher/k3s#


root@master:/etc/rancher/k3s# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/urolime-5b47968689-f4qnj   0/1     ContainerCreating   0          26s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   32m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/urolime   0/1     1            0           26s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/urolime-5b47968689   1         1         0       26s
root@master:/etc/rancher/k3s#


Expose the deployment as a NodePort service and try accessing it.

root@master:/etc/rancher/k3s# kubectl expose deployment urolime --port 80 --type NodePort
service/urolime exposed
root@master:/etc/rancher/k3s# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        33m
urolime      NodePort    10.43.143.95   <none>        80:32623/TCP   7s
root@master:/etc/rancher/k3s#


root@master:/etc/rancher/k3s# curl Master_IP:32623

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

root@master:/etc/rancher/k3s#

Sunday, April 26, 2020

Backup and restore Kubernetes cluster

I am checking how I can back up and restore my Kubernetes cluster.



First, we need object storage like S3.
MinIO is an open-source object store that is compatible with Amazon S3.

Setting Up a Minio:
====================
We can run MinIO in a Docker environment (stable release):
docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data


root@testing:/home/ubuntu# docker pull minio/minio
Using default tag: latest
latest: Pulling from minio/minio
4167d3e14976: Already exists
275c32df8f5e: Pull complete
cf0c84ce4772: Pull complete
70885164616a: Pull complete
Digest: sha256:6f8db3d7a1060cb1fcd6855791e9befe2d7f51644be65183680c1189eb196177
Status: Downloaded newer image for minio/minio:latest
root@testing:/home/ubuntu# docker run --name minio -p 9000:9000 -v data:/data minio/minio server /data
Endpoint:  http://172.17.0.3:9000  http://127.0.0.1:9000
Browser Access:
   http://172.17.0.3:9000  http://127.0.0.1:9000
Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately using 'MINIO_ACCESS_KEY' and 'MINIO_SECRET_KEY'

VELERO SETUP:
==========================
Installing the Velero binary
ajeesh@Aspire-A515-51G:~/Downloads/valero$ wget https://github.com/vmware-tanzu/velero/releases/download/v1.3.2/velero-v1.3.2-linux-amd64.tar.gz
--2020-04-26 21:27:41--  https://github.com/vmware-tanzu/velero/releases/download/v1.3.2/velero-v1.3.2-linux-amd64.tar.gz
Resolving github.com (github.com)... 13.234.176.102
Connecting to github.com (github.com)|13.234.176.102|:443... connected.

velero-v1.3.2-linux-amd64.tar.gz      100%[=======================================================================>]  23.39M  1.30MB/s    in 25s

2020-04-26 21:28:08 (956 KB/s) - ‘velero-v1.3.2-linux-amd64.tar.gz’ saved [24528427/24528427]


ajeesh@Aspire-A515-51G:~/Downloads/valero$ tar zxf velero-v1.3.2-linux-amd64.tar.gz
ajeesh@Aspire-A515-51G:~/Downloads/valero$ sudo mv velero-v1.3.2-linux-amd64/velero /usr/local/bin/



Next, you need to provide your MinIO credentials so that Velero can be configured.

# cat <<EOF > minio.credentials
> [default]
> aws_access_key_id=minioadmin
> aws_secret_access_key=minioadmin
> EOF
root@Aspire-A515-51G:
root@Aspire-A515-51G:/velero# ls
minio.credentials


velero$ echo $KUBECONFIG
/home/ajeesh/.kube/config

velero$ /usr/local/bin/velero install   --provider aws  --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket bucketone  --secret-file ./minio.credentials    --backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://myip:9000

CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: created
VolumeSnapshotLocation/default: attempting to create resource
VolumeSnapshotLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
ajeesh@Aspire-A515-51G:~/test

ajeesh@Aspire-A515-51G:~$ kubectl get all -n velero
NAME                          READY   STATUS    RESTARTS   AGE
pod/velero-795c8d58cd-fc86d   1/1     Running   2          5m17s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/velero   1/1     1            1           5m17s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/velero-795c8d58cd   1         1         1       5m17s
ajeesh@Aspire-A515-51G:~$

ajeesh@Aspire-A515-51G:~$ kubectl -n velero get crds
NAME                                          CREATED AT
backups.velero.io                             2020-04-26T16:42:32Z
backupstoragelocations.velero.io              2020-04-26T16:42:32Z
bgpconfigurations.crd.projectcalico.org       2019-11-06T07:35:55Z
bgppeers.crd.projectcalico.org                2019-11-06T07:35:55Z
blockaffinities.crd.projectcalico.org         2019-11-06T07:35:55Z
clusterinformations.crd.projectcalico.org     2019-11-06T07:35:55Z
deletebackuprequests.velero.io                2020-04-26T16:42:32Z
downloadrequests.velero.io                    2020-04-26T16:42:32Z
felixconfigurations.crd.projectcalico.org     2019-11-06T07:35:55Z
globalnetworkpolicies.crd.projectcalico.org   2019-11-06T07:35:55Z
globalnetworksets.crd.projectcalico.org       2019-11-06T07:35:55Z
hostendpoints.crd.projectcalico.org           2019-11-06T07:35:55Z
ipamblocks.crd.projectcalico.org              2019-11-06T07:35:55Z
ipamconfigs.crd.projectcalico.org             2019-11-06T07:35:55Z
ipamhandles.crd.projectcalico.org             2019-11-06T07:35:55Z
ippools.crd.projectcalico.org                 2019-11-06T07:35:55Z
networkpolicies.crd.projectcalico.org         2019-11-06T07:35:55Z
networksets.crd.projectcalico.org             2019-11-06T07:35:55Z
podvolumebackups.velero.io                    2020-04-26T16:42:32Z
podvolumerestores.velero.io                   2020-04-26T16:42:32Z
resticrepositories.velero.io                  2020-04-26T16:42:32Z
restores.velero.io                            2020-04-26T16:42:32Z
schedules.velero.io                           2020-04-26T16:42:32Z
serverstatusrequests.velero.io                2020-04-26T16:42:32Z
volumesnapshotlocations.velero.io             2020-04-26T16:42:32Z
ajeesh@Aspire-A515-51G:~$

Here I used the following values and variables when configuring the Velero installation.
=========
Velero version: v1.3.2
Velero plugin for AWS = velero/velero-plugin-for-aws:v1.0.0
http://myip:9000 = the address of the MinIO container
$KUBECONFIG = /home/ajeesh/.kube/config
===========

ajeesh@Aspire-A515-51G:~$ kubectl get ns
NAME                   STATUS   AGE
default                Active   172d
kube-node-lease        Active   172d
kube-public            Active   172d
kube-system            Active   172d
kubernetes-dashboard   Active   146d
metallb-system         Active   161d
velero                 Active   14m
ajeesh@Aspire-A515-51G:~$

For Velero commands to autocomplete:

ajeesh@Aspire-A515-51G:~$ source <(velero completion bash)
ajeesh@Aspire-A515-51G:~$ velero backup
backup           backup-location

For testing, I am creating a test namespace for the Velero backup:

ajeesh@Aspire-A515-51G:~$ kubectl create ns nginxtest
namespace/nginxtest created

ajeesh@Aspire-A515-51G:~$ kubectl get ns
NAME                   STATUS   AGE
default                Active   172d
kube-node-lease        Active   172d
kube-public            Active   172d
kube-system            Active   172d
kubernetes-dashboard   Active   146d
metallb-system         Active   161d
nginxtest              Active   4s
velero                 Active   20m
ajeesh@Aspire-A515-51G:~$ kubectl -n nginxtest run nginx --image nginx --replicas 2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
ajeesh@Aspire-A515-51G:~$

Create a VELERO BACKUP
----------------------
ajeesh@Aspire-A515-51G:~$ velero backup create namespacenginx --include-namespaces=nginxtest
Backup request "namespacenginx" submitted successfully.
Run `velero backup describe namespacenginx` or `velero backup logs namespacenginx` for more details.
ajeesh@Aspire-A515-51G:~$

ajeesh@Aspire-A515-51G:~$ velero backup get
NAME             STATUS   CREATED   EXPIRES   STORAGE LOCATION   SELECTOR
namespacenginx   New           29d                         <none>
ajeesh@Aspire-A515-51G:~$


ajeesh@Aspire-A515-51G:~$ kubectl -n velero get backups
NAME             AGE
namespacenginx   97s
ajeesh@Aspire-A515-51G:~$
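Besides one-off backups, Velero can also take recurring backups on a cron schedule; a minimal sketch (schedule name, cron expression, and namespace are illustrative):

velero schedule create nginxtest-daily --schedule="0 2 * * *" --include-namespaces nginxtest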

While checking the logs, I can see the following:

ajeesh@Aspire-A515-51G:~$ velero backup logs namespacenginx
Logs for backup "namespacenginx" are not available until it's finished processing. Please wait until the backup has a phase of Completed or Failed and try again.
ajeesh@Aspire-A515-51G:~$ 

There seems to be some issue with my backup (it stays in the New phase); I need to check this further.


ajeesh@Aspire-A515-51G:~$ velero backup describe namespacenginx
Name:         namespacenginx
Namespace:    velero
Labels:       <none>
Annotations:  <none>
Phase:  New
Namespaces:
  Included:  nginxtest
  Excluded:  <none>
Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto
Label selector:  <none>
Storage Location:
Snapshot PVs:  auto
TTL:  720h0m0s
Hooks: 
Backup Format Version:  0
Started:   
Completed: 
Expiration: 
Persistent Volumes:
ajeesh@Aspire-A515-51G:~$

ajeesh@Aspire-A515-51G:~$ velero restore create -help
Error: unknown shorthand flag: 'e' in -elp
Usage:
  velero restore create [RESTORE_NAME] [--from-backup BACKUP_NAME | --from-schedule SCHEDULE_NAME] [flags]

Examples:
  # create a restore named "restore-1" from backup "backup-1"
  velero restore create restore-1 --from-backup backup-1

  # create a restore with a default name ("backup-1-<timestamp>") from backup "backup-1"
  velero restore create --from-backup backup-1

  # create a restore from the latest successful backup triggered by schedule "schedule-1"
  velero restore create --from-schedule schedule-1

  # create a restore from the latest successful OR partially-failed backup triggered by schedule "schedule-1"
  velero restore create --from-schedule schedule-1 --allow-partially-failed

  # create a restore for only persistentvolumeclaims and persistentvolumes within a backup
  velero restore create --from-backup backup-2 --include-resources persistentvolumeclaims,persistentvolumes


For a stand-alone cluster, we would require the following data for the backup:
1. The root certificate files /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key
2. An etcd backup

etcd backup:
I followed the steps below:

$ kubectl get pods -n kube-system | grep etcd
etcd-kmaster.example.com                      1/1     Running   21         186d
$ kubectl exec -it -n kube-system etcd-kmaster.example.com  -- /bin/sh

# etcdctl snapshot save
No help topic for 'snapshot'

In this case, I need to issue the following command first, since etcdctl defaults to the v2 API.

# export ETCDCTL_API=3

Then run the backup command:

# export ETCDCTL_API=3
# etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save etcd-snapshot-$(date +%Y-%m-%d_%H:%M:%S_%Z).db
{"level":"warn","ts":"2020-05-10T17:01:10.150Z","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
Snapshot saved at etcd-snapshot-2020-05-10_17:01:10_UTC.db
#du -shc etcd-snapshot-2020-05-10_17:01:10_UTC.db
3.9M etcd-snapshot-2020-05-10_17:01:10_UTC.db
3.9M total
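
To restore from this snapshot later, a minimal sketch (still using the v3 API; the target data directory is illustrative, and etcd must then be pointed at it):

# export ETCDCTL_API=3
# etcdctl snapshot restore etcd-snapshot-2020-05-10_17:01:10_UTC.db --data-dir=/var/lib/etcd-restore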