Thursday, August 11, 2022

Installing a specific version of Helm

While creating a duplicate deployment environment for testing, we may need to install an older version of Helm.

I did this with the following steps.

# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

# chmod +x get_helm.sh

# ./get_helm.sh -v v3.1.2

# helm version

version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
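For the record, the installer script also honors a DESIRED_VERSION environment variable, and it ultimately fetches a versioned tarball from get.helm.sh. A small sketch (helm_url is my own helper, added only to show the URL scheme):

```shell
# Equivalent to ./get_helm.sh -v v3.1.2:
#   DESIRED_VERSION=v3.1.2 ./get_helm.sh

# The script resolves a release tarball of this form (os/arch vary per host):
helm_url() { printf 'https://get.helm.sh/helm-%s-%s-%s.tar.gz\n' "$1" "$2" "$3"; }

helm_url v3.1.2 linux amd64
# https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
```

Knowing the URL scheme is handy when the install script itself is blocked and you have to download and unpack the tarball manually.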





Saturday, May 14, 2022

Vagrant issue with Ubuntu 16.04

While executing the command "vagrant up", you may see the following error message.


 Box 'generic/ubuntu2004' could not be found. Attempting to find and install...

   servername: Box Provider: virtualbox

   servername: Box Version: 3.3.0

The box 'generic/ubuntu2004' could not be found or

could not be accessed in the remote catalog. If this is a private

box on HashiCorp's Atlas, please verify you're logged in via

`vagrant login`. Also, please double-check the name. The expanded

URL and error message are shown below:


URL: ["https://atlas.hashicorp.com/generic/ubuntu2004"]

Error: The requested URL returned error: 404 Not Found

So the issue is with the old Vagrant version (1.8.7); we need the latest Vagrant 2.x here.


On Ubuntu 16.04 LTS, the Vagrant binary downloaded from the website shows the following error.



ajeesh@Aspire-A515-51G:~/Downloads/vagr$ ./vagrant --help

/tmp/.mount_vagranhKrhmD/usr/bin/ruby2.6: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /tmp/.mount_vagranhKrhmD/usr/lib/x86_64-linux-gnu/libruby-2.6.so.2.6)

/tmp/.mount_vagranhKrhmD/usr/bin/ruby2.6: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /tmp/.mount_vagranhKrhmD/usr/lib/x86_64-linux-gnu/libruby-2.6.so.2.6)


Fix:
# apt-get remove vagrant

# apt-get install vagrant

root@Aspire-A515-51G:~# vagrant --version
Vagrant 2.2.19
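The underlying problem is the host glibc: the bundled Ruby needs GLIBC_2.25, while Ubuntu 16.04 ships glibc 2.23. Before grabbing a binary build, a quick version check helps; ver_ge below is a small helper of mine based on sort -V:

```shell
# Print the host glibc version (first line), e.g. "2.23" on Ubuntu 16.04:
#   ldd --version | head -n1

# Compare two dotted versions: ver_ge A B succeeds when A >= B.
ver_ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }

ver_ge 2.23 2.25 && echo "binary build OK" || echo "glibc too old for this binary"
# glibc too old for this binary
```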

Friday, March 12, 2021

Unhealthy status for controller-manager and scheduler

 [vagrant@kmaster ~]$ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS      MESSAGE                                                                                       ERROR

scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   

controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   

etcd-0               Healthy     {"health":"true"}    


Every other command works fine.


 [ajeesh@kmaster ~]$ kubectl cluster-info

Kubernetes master is running at https://172.42.42.100:6443

KubeDNS is running at https://172.42.42.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

[ajeesh@kmaster ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS         RESTARTS   AGE
kube-system   calico-kube-controllers-56b44cd6d5-27smn      1/1     Running        0          6m43s
kube-system   calico-kube-controllers-56b44cd6d5-g6gfw      0/1     NodeAffinity   0          118d
kube-system   calico-node-rv52x                             1/1     Running        0          6m42s
kube-system   calico-node-tq84v                             1/1     Running        4          118d
kube-system   calico-node-wg8h9                             1/1     Running        0          6m42s
kube-system   coredns-f9fd979d6-69dhx                       1/1     Running        2          118d
kube-system   coredns-f9fd979d6-rff6t                       1/1     Running        2          118d
kube-system   etcd-kmaster.example.com                      1/1     Running        3          118d
kube-system   kube-apiserver-kmaster.example.com            1/1     Running        3          118d
kube-system   kube-controller-manager-kmaster.example.com   1/1     Running        2          118d
kube-system   kube-proxy-7psvh                              1/1     Running        2          118d
kube-system   kube-proxy-mf4hx                              1/1     Running        2          118d
kube-system   kube-proxy-ndnk6                              1/1     Running        2          118d
kube-system   kube-scheduler-kmaster.example.com            1/1     Running        2          118d
[vagrant@kmaster ~]$ 


This issue appeared in my test Kubernetes cluster. Checking kube-controller-manager.yaml and kube-scheduler.yaml, both set --port=0, which disables the insecure health endpoints (ports 10252 and 10251) that kubectl get cs queries. Commenting out that line fixed the issue.

vi /etc/kubernetes/manifests/kube-controller-manager.yaml

    - --leader-elect=true

    - --node-cidr-mask-size=24

#    - --port=0

    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt


vi /etc/kubernetes/manifests/kube-scheduler.yaml

     - --kubeconfig=/etc/kubernetes/scheduler.conf

    - --leader-elect=true

#    - --port=0

    image: k8s.gcr.io/kube-scheduler:v1.19.4

    imagePullPolicy: IfNotPresent

    livenessProbe:

      failureThreshold: 8


[root@kmaster vagrant]# service kubelet restart

[vagrant@kmaster ~]$ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS    MESSAGE             ERROR

controller-manager   Healthy   ok                  

scheduler            Healthy   ok                  

etcd-0               Healthy   {"health":"true"}  
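The manual manifest edits above can also be scripted. A small sketch (comment_port_zero is my helper, not from the original post; run it on the control-plane node, and note the kubelet recreates static pods automatically when a manifest changes):

```shell
# Comment out any "- --port=0" argument in a static pod manifest, in place.
comment_port_zero() {
  sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' "$1"
}

# Usage (paths as in the post):
#   comment_port_zero /etc/kubernetes/manifests/kube-controller-manager.yaml
#   comment_port_zero /etc/kubernetes/manifests/kube-scheduler.yaml
#   service kubelet restart
```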

Friday, December 18, 2020

Kubernetes is deprecating Docker?

 




No. Kubernetes is deprecating Docker as a container runtime after v1.20, but Docker support is not going away immediately: only the "dockershim" component is being deprecated.

Kubernetes created a standard interface for all runtime implementations, the Container Runtime Interface (CRI), and Docker is the most widely used runtime. But Docker is not the only container runtime; there are also containerd, CRI-O, rkt, and others. The kubelet does not talk to the container runtime directly; it talks to it through the CRI. Docker, however, does not implement the CRI, so Kubernetes cannot use the CRI to communicate with the Docker runtime. Kubernetes therefore developed a wrapper called "dockershim", which speaks the CRI protocol on one side and the dockerd protocol on the other.

When will this be fully removed?

As of now (2020-12-17):
v1.20 - the kubelet starts showing a warning message
v1.21 - will show the same warning message
v1.22 - will show the same warning message
v1.23 - dockershim will be removed

Questions:
Q1: Do we need to install Docker?
Ans: No, you don't need to install Docker; install "containerd" or CRI-O instead.

Q2: Will your Docker images still work?
Ans: Yes, and you can still push images to your registry. But "docker ps" can't see the containers created through CRI; there is a separate tool, "crictl", for that:
docker ps --> crictl ps, docker info --> crictl info, etc.

Q3: What about performance and security?
Ans: With the Docker runtime, K8S drags in lots of components it doesn't need, like the API, CLI, and server (the server bundles the container runtime, volume, and network handling), when K8S only needs the runtime. Removing the Docker layer improves performance, and fewer components mean fewer security risks.

Q4: What is the name of the container runtime underneath Docker?
Ans: containerd, which is already part of the CNCF and is maintained and developed as a separate project. containerd is the most common alternative to using Docker as the runtime.

Q5: Is containerd used by any service providers?
Ans: Yes, containerd is already used by major cloud platforms (AWS EKS, Google Kubernetes Engine).

Q6: Do I need to make any modifications to my managed K8S cluster running on AWS or GCP?
Ans: No, the cloud providers take care of installing the binaries and the container runtime on the K8S worker nodes.

Q7: What about on-prem K8S clusters?
Ans: Yes, action is required. There are two options: 1) change the container runtime to containerd or CRI-O; 2) if you still want to use dockershim, install and maintain it on your cluster manually, since Mirantis has taken over support of dockershim: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/
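If you want to check which runtime your nodes are already using, the node status reports it (kubectl get nodes -o wide prints a CONTAINER-RUNTIME column). A small sketch with a made-up sample output line; runtime_of is my helper:

```shell
# Hypothetical line from `kubectl get nodes -o wide` (values are examples):
line='worker1  Ready  <none>  10d  v1.20.0  10.0.0.11  <none>  Ubuntu 18.04  4.15.0-112  containerd://1.4.3'

# The container runtime is the last whitespace-separated field of that output.
runtime_of() { printf '%s\n' "$1" | awk '{print $NF}'; }

runtime_of "$line"
# containerd://1.4.3
```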


Thursday, August 6, 2020

EIA 2020 Draft Withdrawal request

Sample Letter body:

========================================

To,
C.K.Mishra
Secretary 
Ministry of Environment, Forest and Climate Change
Indira Pariyavaran Bhavan
Jor Bagh, New Delhi

Date : XX-XX-2020

From,
Your_name
Address

Dear Mr. Mishra,

Subject: Withdraw the draft EIA notification, 2020 [F.N.2-50/2018/IA.III] and defer the process of public comments in the light of the COVID-2019 pandemic.

I am Your_Name, writing as a citizen of India with reference to the draft EIA notification, 2020, which was uploaded on the environment ministry’s website on 12.3.2020 seeking public comments within sixty days of the issuance of the notification. I am happy to hear that the deadline has been extended to August 11, but I am deeply concerned that the draft notification has been put out in the midst of a national health crisis. Due to the prevailing global pandemic, offices and public movement have been restricted.

The EIA notification is an important regulation through which the impacts of land-use change, water extraction, tree felling, pollution, waste and effluent management for industrial and infrastructure projects are to be studied and used in developmental decisionmaking. Any change in this law has a direct bearing on the living and working conditions of people and the ecology.

As per the design and implementation of EIA notification, it is crucial that the government provides a suitable and adequate opportunity for those impacted or likely to be affected. Opportunities to understand and discuss the implications of the proposed amendments may be severely hindered due to the present health emergency with restricted public movement, social distancing, and challenges to everyday life activities. These restrictions also make it impossible to disseminate information about the notification to communities who deserve to know and influence the notification. 

So I genuinely request the Ministry of Environment to:

1. Withdraw the proposed amendments of the Draft EIA notification 2020 as early as possible.

2. Consider reissuing the draft only after health conditions related to COVID-19 and civic life are normalized across the country.

3. Ensure that there are widespread and informed public discussions on the implication of these amendments.

4. Full disclosure of the nature of comments received and the reasons for acceptance and rejection of these comments, prior to the issuance of the final amendments.

I hope that the environment ministry will uphold its obligations towards informed public participation like the commitment to Principle 10 of the Rio Declaration and also the Principles of Natural Justice, while taking a considered view on the proposed amendments to the EIA notification, 2020.

Copy to: 1. Geeta Menon, Jt Secy, MoEFF (menong@cag.gov.in)

Yours faithfully 
Your_Name

========================================

To address : eia2020-moefcc@gov.in
CC : menong@cag.gov.in
Email subject : Withdraw EIA 2020 draft

Wednesday, July 29, 2020

AWS Certified Solutions Architect – Associate C02 Tips and Tricks




Recently I passed the AWS Certified Solutions Architect - Associate [SAA-C02] exam. I would like to share some of my experience with the latest exam and its preparation.
Now you can schedule your exam at home (only Pearson VUE supports this at the moment).

Requirements for writing your Exam from your Home:

  • Windows 10 OS (Linux OS will not work)
  • I used my cousin's laptop [4 GB DDR3, 500 GB HDD, i3]
  • Broadband internet connection (my Jio connection failed at the start of the exam, so I switched connections, probably because of my home location; switching mid-exam is too risky)
  • Passport or driving licence.
  • Test your machine [network, audio, camera] here: https://home.pearsonvue.com/aws/onvue


I did the following to prepare for the exam:

  • A Cloud Guru CEO Ryan Kroonenburg's course
  • Solved between 200 and 500 practice questions/dumps (at a minimum)
  • Attended AWS meetups + AWS webinars (optional) [I am an active member of AWS Users Kochi]
  • Discussed the exam with a friend who had already taken it [for me it was Muhasin-Urolime, AWS expert, thanks, dude.. ]
  • And you still need some luck... keep in mind that you need to crack the AWS exam pattern [NB: for beginners and intermediates]

Some of the exam topics that came up in my SAA-C02:
  • VPC (more than 4)
  • EFS
  • AutoScaling ( more than 5 )
  • EBS (more than 2 )
  • RDS (more than 5)
  • S3  (more than 3 )
  • Storage Gateway (more than 2)
  • Data Sync (more than 2)
  • DynamoDB
  • ElastiCache
  • RedShift
  • Kinesis
  • ServerLESS (more than 3 )
  • SQS (more than 2 )
  • Cloudfront  (more than 3 )
  • Cognito
  • Key management
  • others I am not sure of...

Tips and Tricks

AWS Organizations
1. single point of maintenance +  limiting access to specific services or actions in all of the team members AWS accounts = Use Service control policies

EFS 
2. Shared storage between multiple EC2 instances + file locking capabilities= EFS
3. high availability +  POSIX-compliant and access concurrently from EC2 instances.= EFS

VPC
4. HA , we need two AZ and each AZ contains 3 subnets (1 public for ALB + 1 private for Web servers + 1 private for Database).
5. to provide VPC private connection to AWS services = Use VPC endpoint
6. IPv6 traffic =Egress-only internet gateway

AutoScaling
7. High availability + Scalability + Web server + Session Stickiness  = Auto Scaling group +  ALB +  multiple AZs
8. prevent any scaling delay = Use a Scheduled scaling to scale-out EC2 instances
9. HA = Auto Scaling group(ASG) + ELB + multi EC2 instances in each AZ
10. Scaling based on high demand at peak times = Dynamic Scaling
11. Scheduled workloads  = use schedule scaling.

EBS
12. I/O intensive + relational databases =  EBS Provisioned IOPS SSD (io1)
13. Improve the performance of EBS volume + handle workloads = use EBS Provisioned IOPS SSD (io1)
14. SAN disk = block storage = EBS
15. log processing + sequentially + throughput rate 500 MB/s = EBS Throughput Optimized HDD (st1)
16. Proprietary File System  = EBS

RDS
17. For performance = Add more read replica to Amazon RDS
18. Transactional + High performance + Data size range 16 TB to 64 TB = Amazon Aurora
19. RDS Reserved Instance's Region, DB Engine, DB Instance Class, Deployment Type and term length cannot be changed later.

S3
20. backup data  less frequently + rapid access + low cost = Amazon S3 Standard-IA
21. restrict access = generate S3 pre-signed URLs
   The pre-signed URLs are valid only for the specified duration.
22. To restrict access to content that you serve from Amazon S3 buckets
Create a special CloudFront user called an origin access identity (OAI) and associate it with your distribution.
Configure your S3 bucket permissions so that CloudFront can use the OAI to access the files in your bucket and serve them to your users.
23. S3 supports 3,500 PUT/COPY/POST/DELETE requests per second per prefix
24. Encrypt S3 bucket + Encrypt Redshift  + Move data = Data at rest.
25. Secure + Salable + High available = S3
26. Short-term/Temporary access  = Amazon S3 pre-signed URL
27. Enable versioning in both source and destination buckets is prerequisites for cross-region replication in Amazon S3
28. Object Store + Immutable  = Amazon Glacier
29. bypass web servers and store the files directly into S3 bucket = pre-signed URL.
30. Expedited retrieval: within 1 - 5 minutes

31. Bulk retrieval: within 5 - 12 hours

32. Standard retrieval: 3 - 5 hours (also the option to pair with Vault Lock)
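For the CloudFront OAI tip above (restricting S3 access to CloudFront), the matching S3 bucket policy looks like this; the bucket name and OAI ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```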

Storage Gateway
33. Storage Gateway in cached mode is the best option to migrate iSCSI storage to the cloud
34. NFS is supported by File Gateway only

DynamoDB
35. 50 ms latency + requests increasing exponentially = Amazon DynamoDB
36. S3, DynamoDB and Lambda all have HA
37. Centralized database + strong consistency + scalable + cost optimized = Amazon DynamoDB
38. Enable Amazon DynamoDB Auto Scaling = LESS changes
39. data in chunks + little latency = NoSQL DB = DynamoDB
40. Big Data + flexible schema + indexed data + scalable = Amazon DynamoDB
41. Lowest latency data retrieval + highest scalability = DynamoDB

ElastiCache
42. Repeated complex queries = caching = Amazon ElastiCache
43. Real-time data + in-memory = use Redis
44. Memcached -> simplicity, Redis -> rich feature set

RedShift
45. High performance + big historical data + business intelligence tools = data warehouse = Amazon Redshift
46. run different query types on big data = Amazon Redshift workload management
47. data warehouse + Big Data + fast = Amazon Redshift

Kinesis
48. 1,000 bids per second + process in order + no losing messages + multiple services to process each bid = Amazon Kinesis Data Streams
49. real-time stream + large volume + AWS serverless + custom SQL = Amazon Kinesis Data Analytics
50. Real-time + BIG data streaming = Amazon Kinesis Data Streams
51. Data analytics + SQL = Amazon Kinesis Data Analytics
52. 100,000 requests per second + sequential events + click-stream analysis = use Amazon Kinesis Data Streams
53. IoT data + streams + partition by equipment + S3 = use Amazon Kinesis Data Streams

ServerLESS [ Lambda ]
54. Migration to AWS + stateless application + static content + less operational overhead = serverless solution = Amazon Cognito + Amazon S3 + Amazon API Gateway + AWS Lambda
55. Lambda has a default limit of 1,000 concurrent executions
56. AWS compute solution + no special hardware + runs in 512 MB of memory = AWS Lambda functions
57. Lambda is the best option to handle S3 events
58. Scalable + cost-effective + serverless = Amazon API Gateway with an AWS Lambda function
59. securely store database passwords + customer master key + Lambda function = Lambda environment variables

SQS
60. SQS prevents losing orders
61. sell oldest items first = use an Amazon SQS FIFO queue
62. MOST efficient + cost-effective = decouple the two tiers using Amazon SQS
63. handle failed messages = Amazon SQS dead-letter queue

CloudFront
64. CloudFront has geo-restriction, not geo-routing
65. Custom origin "on-premises" + enhance the performance of downloading static files = Amazon CloudFront

CloudFormation:
66. Pilot-light DR scenario = DB replication + AWS CloudFormation

CloudTrail
67. for API call monitoring

Elastic Beanstalk
68. easy deployment + without managing infrastructure = Elastic Beanstalk
69. simple deployment + scalable + running on IIS = AWS Elastic Beanstalk

Amazon Cognito
70. MFA + mobile = Amazon Cognito
71. block suspicious sign-ins = Amazon Cognito user pools

AWS Shield
72. protect the application from DDoS attacks = use AWS Shield

KEY:
73. AWS manages both the data key and the master key + automatically manages both encryption and decryption = SSE-S3
74. automated rotation of the encryption keys + track encryption key usage = use SSE-KMS

Misc:
75. machine learning, high-performance computing, video processing, and financial modeling = Amazon FSx for Lustre



Wednesday, May 6, 2020

5 less than K8S = K3S Lightweight Kubernetes

Installing and configuring a lightweight Kubernetes cluster.

This is a Lightweight Kubernetes distribution for production workloads.

You can complete the Kubernetes installation in less than 5 minutes.

Document for reference: https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/


Installation Steps:

Master : curl -sfL https://get.k3s.io | sh -
Worker : curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -



Here I am using a master and a worker node with 1 GB RAM each (Ubuntu 18.04) in VirtualBox. The main advantage of this installation is that there are no prerequisites.

Properties:
- You can install it on Raspberry Pi hardware
- By default, the data is kept in SQLite, not etcd, but etcd can be configured
- It uses the Flannel network
- It uses containerd, not Docker
- It just needs a Linux kernel and cgroups


Master Node:
==============

root@master:/home# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Master_IP --flannel-iface=enp0s8" sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.17.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
root@master:/home#


root@master:/home# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master  Ready    master   9m27s   v1.17.4+k3s1
root@master:/home/#

kubectl is installed by the Rancher script:
root@master:/home# which kubectl
/usr/local/bin/kubectl

root@master:/var/lib# cd /var/lib/rancher/
root@master:/var/lib/rancher# ls
k3s

root@master:/var/lib/rancher# cd /etc/rancher/
root@master:/etc/rancher# ls
k3s  node

root@master:/etc/rancher# cd /var/lib/rancher/k3s/server/
root@master:/var/lib/rancher/k3s/server# ls
cred  db  kine.sock  manifests  node-token  static  tls  token

TOKEN LOCATION:
root@master:/var/lib/rancher/k3s/server# cat token
K10e08a165e58554e19bf1f0eab12dd06e8345655b7efe52bcd04029a76226b2034::server:5d82295abb1eb943c32bbcd1fec959d3

KUBE-CONFIG FILE LOCATION:
root@master:~# cd /etc/rancher/k3s/
root@master:/etc/rancher/k3s# ls
k3s.yaml
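To run kubectl from your workstation, copy this k3s.yaml over and point its server entry at the master instead of 127.0.0.1. A small sketch (fix_server is my helper, and MASTER_IP is a placeholder for your master's reachable address):

```shell
# k3s writes "server: https://127.0.0.1:6443" into k3s.yaml; after copying the
# file to another machine, rewrite the loopback address to the master's IP.
fix_server() {  # fix_server <kubeconfig> <master-ip>
  sed -i "s#https://127.0.0.1:6443#https://$2:6443#" "$1"
}

# Usage:
#   scp root@master:/etc/rancher/k3s/k3s.yaml ~/.kube/config
#   fix_server ~/.kube/config MASTER_IP
#   kubectl get nodes
```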


WORKER NODE Installation:
=========================

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Master_IP --flannel-iface=enp0s8" K3S_URL="https://Master_IP:6443" K3S_TOKEN="xxxxxxxxxxx034::server:5d82295abxxxxxxxc959d3" sh -


vagrant@worker:~$ sudo su
root@worker:/home# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-ip=Master_IP --flannel-iface=enp0s8" K3S_URL="https://Master_IP:6443" K3S_TOKEN="K1xxx6b2034::server:5xxxxxxx59d3" sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.17.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
root@worker:/home#


After this, you can see that the new worker node has been added to your Kubernetes cluster.
root@master:/etc/rancher/k3s# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
worker   Ready    <none>   39s   v1.17.4+k3s1
master   Ready    master   27m   v1.17.4+k3s1
root@master:/etc/rancher/k3s#


So we have completed the cluster installation.


For testing, we can deploy an Nginx application and check the cluster further.


root@master:/etc/rancher/k3s# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   29m


root@master:/etc/rancher/k3s# kubectl run urolime --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/urolime created
root@master:/etc/rancher/k3s#


root@master:/etc/rancher/k3s# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/urolime-5b47968689-f4qnj   0/1     ContainerCreating   0          26s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   32m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/urolime   0/1     1            0           26s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/urolime-5b47968689   1         1         0       26s
root@master:/etc/rancher/k3s#


Expose the service into a NodePort and try accessing it.

root@master:/etc/rancher/k3s# kubectl expose deployment urolime --port 80 --type NodePort
service/urolime exposed
root@master:/etc/rancher/k3s# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        33m
urolime      NodePort    10.43.143.95   <none>        80:32623/TCP   7s
root@master:/etc/rancher/k3s#


root@master:/etc/rancher/k3s# curl Master_IP:32623



[The default "Welcome to nginx!" page is returned: "If you see this page, the nginx web server is successfully installed and working."]

root@master:/etc/rancher/k3s#