Kubernetes Service Type — Load Balancer

Athira KK
Sep 6, 2023

Hey Techies…👋

In this session, we're going to learn how the Kubernetes LoadBalancer Service type works.

Before jumping into the hands-on lab, let's go over some basics of Kubernetes Service types.

Kubernetes has emerged as a powerful tool to manage and scale cloud-native applications.

A service type determines how the service is exposed to the network.

There are mainly three types of services that Kubernetes supports:

  • ClusterIP - The default Service type. It exposes the Service on a cluster-internal IP, so it is reachable only from within the cluster and is used for pod-to-pod communication.
  • NodePort - It exposes the Service on a static port (by default in the 30000-32767 range) on every node, and traffic arriving at <NodeIP>:<NodePort> is routed to the Service's pods (see the sketch right after this list).
  • LoadBalancer - It puts an external load balancer in front of the Service and connects our application to the outside world, either through networks such as the Internet or within your datacenter.
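
A minimal sketch of the first two types, assuming a placeholder app: demo label and placeholder Service names (these are not part of the demo we build later):

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-svc          # placeholder name
spec:
  type: ClusterIP                 # the default; reachable only inside the cluster
  selector:
    app: demo                     # matches pods labelled app=demo
  ports:
  - port: 80                      # Service port; targetPort defaults to the same value
---
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-svc           # placeholder name
spec:
  type: NodePort                  # reachable at <NodeIP>:<nodePort> from outside the cluster
  selector:
    app: demo
  ports:
  - port: 80
    nodePort: 30080               # optional; must fall in the node port range (default 30000-32767)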

Ingress - This is not an official Kubernetes Service type. However, we can configure an Ingress resource by writing rules that specify which inbound connections should be routed to which Services.
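
An Ingress is written as its own resource with routing rules; a minimal sketch, assuming a placeholder host demo.example.com and pointing at the web-service Service we create later in this demo (an ingress controller must be running in the cluster for the rule to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress              # placeholder name
  namespace: facebook
spec:
  rules:
  - host: demo.example.com        # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service     # route this host/path to the web-service Service
            port:
              number: 80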

Here, we are going to do the hands-on for the LoadBalancer Service type.

  • Load Balancer services connect our applications to the outside world, and they are used in production environments where high availability and scalability are critical. The load balancer keeps connections open to pods that are up and closes connections to those that are down (a quick way to verify this is shown right after this list).
  • Load Balancer services are ideal for applications that must handle high traffic volumes, such as web applications or APIs.
  • We can access our application using the load balancer’s single IP address.
  • When the Service type is set to LoadBalancer, Kubernetes still provides the same in-cluster behaviour as a ClusterIP Service and extends it by programming the (external to Kubernetes) load balancer with entries for the Kubernetes pods.
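
A quick way to verify which pods the load balancer is actually forwarding to is to list the Service's endpoints; only pods that are up and Ready appear there (web-service and the facebook namespace refer to the demo objects we create below):

#kubectl get endpoints web-service -n facebook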

Let’s get started………………

1 - Create a Google Kubernetes Engine (GKE) cluster

Search for Kubernetes Engine in the Google Cloud console and create a cluster.

Select Standard cluster and create it with the default settings.

From the image below, the green tick mark confirms that the cluster is up and running.
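
If you prefer the command line, the same cluster can be created with gcloud; a sketch, assuming the cluster name cluster-1 and the zone us-central1-c used later in this post (the node count of 3 is just an example):

#gcloud container clusters create cluster-1 --zone us-central1-c --num-nodes 3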

2 - Connect to the cluster from the command line

Click the three dots on the right-hand side (RHS) and select Connect, and we will get a page like the one below:

Copy the command displayed under Command-line access: “gcloud container clusters get-credentials cluster-1 --zone us-central1-c --project teak-spot-394123” and click on “RUN IN CLOUD SHELL”.

We will get a window like this; press Enter to continue.
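
The get-credentials command writes an entry for the cluster into our kubeconfig; we can confirm that kubectl now points at it with:

#kubectl config current-context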

3 - Confirm that all the nodes are in the Ready state by running the command below:

#kubectl get nodes
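
If some nodes are still starting up, we can also wait until they all report Ready instead of re-running the command; a sketch, with an arbitrary 5-minute timeout:

#kubectl wait --for=condition=Ready nodes --all --timeout=300s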

4 - Run the command below to create a namespace called facebook

#kubectl create ns facebook
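
The same namespace can also be created declaratively; a minimal manifest equivalent to the command above:

apiVersion: v1
kind: Namespace
metadata:
  name: facebook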

5 - Below is the YAML file we are going to use for the load balancer demo. Add this file to your GitHub repository and take its raw URL as well.

YAML File
=================================
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  namespace: facebook
  labels:
    role: web-service
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  namespace: facebook
  labels:
    role: web-service
spec:
  selector:
    role: web-service
  type: LoadBalancer
  ports:
  - port: 80
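
Note how the Service's selector (role: web-service) matches the pod's label; that is how traffic from the load balancer finds the pod. Since no targetPort is given, it defaults to the Service port (80), which matches the containerPort. If you saved the file locally as lb-facebook.yaml, it can be validated before applying with a client-side dry run:

#kubectl apply --dry-run=client -f lb-facebook.yaml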

6 - Now apply the above file as shown below:

#kubectl apply -f https://raw.githubusercontent.com/athlearn/kub/main/lb-facebook.yaml

We can see from the above output that the pod and the service have been created.

7 - Run the command below to see how the LoadBalancer Service type is working.

#watch -n 1 kubectl get all -n facebook -o wide

*** So we're launching a pod whose application the Service exposes to the outside world, and we can access it through the external IP shown in the screenshot below. ***
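
Once the cloud load balancer has been provisioned, the external IP can also be pulled straight from the Service's status instead of reading it off the watch output:

#kubectl get svc web-service -n facebook -o jsonpath='{.status.loadBalancer.ingress[0].ip}'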

8 - Copy the external IP and paste it into the browser; here is the output………….
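
The same check can be done from Cloud Shell; replace <EXTERNAL-IP> with the IP from the previous step, and nginx should return its default welcome page:

#curl http://<EXTERNAL-IP>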

⭐⭐⭐ Enjoy your learning….!!! ⭐⭐⭐

Written by Athira KK

AWS DevOps Engineer | Calico Big Cats Ambassador | WomenTech Global Ambassador | LinkedIn Top Cloud Computing Voice
