Load Balancing In Microservices

Mesut Yakut
May 11, 2020

Hi! In this post I will try to explain load balancing in a microservices architecture: in which cases load balancing should be used, and what the load balancing types are.

You may have chosen a microservices architecture for many reasons, especially to take advantage of the features of cloud technologies. Microservices architecture provides many advantages, like loosely coupled distributed services, business- or domain-oriented development, agility for continuous delivery, etc.
Everything looks good at this point, as long as every service in your microservices environment runs on a single instance.

What about when you need to scale a couple of services to handle more requests? Which instance receives the requests? How do clients know which instance to send requests to?

The answer to all these and similar questions is load balancing.

Load Balancing:
Load balancing is the process of distributing incoming network traffic among a group of servers called a server farm or server pool. The traffic can be shared evenly or according to certain rules, such as Round Robin or Least Connections.
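To make the Round Robin rule concrete, here is a minimal sketch of a round-robin selector; the class and method names are illustrative, not taken from any particular load balancer:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal round-robin selector: each call to next() returns the next
// server in the list, wrapping around to the first after the last.
public class RoundRobinSelector {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger(0);

    public RoundRobinSelector(List<String> servers) {
        this.servers = servers;
    }

    public String next() {
        // floorMod keeps the index non-negative even if the counter overflows
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSelector selector = new RoundRobinSelector(
                List.of("localhost:8080", "localhost:8081"));
        for (int i = 0; i < 4; i++) {
            // alternates between the two instances
            System.out.println(selector.next());
        }
    }
}
```

A Least Connections rule would instead pick the server with the fewest in-flight requests, which requires tracking active connections per server.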

In a microservices architecture there are two types of load balancing: server side load balancing and client side load balancing. Let's take a closer look at them.

Server Side Load Balancing

Server side load balancing is the classical form of load balancing. Traffic is distributed by a load balancer placed in front of the servers, which forwards requests to the servers that do the actual work, either evenly or according to certain rules. The most commonly used server side load balancers are nginx, NetScaler, etc.

Let's do an example of server side load balancing. In this example I will use nginx.

First, download the nginx Docker image and run it with the load balancing configuration directory mounted:

docker run -p 80:80 -v ~/nginx/conf.d:/etc/nginx/conf.d -d nginx

The next step is to create nginx.conf in the ~/nginx/conf.d directory; the settings should look like the example below.
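The original configuration was an embedded gist that is missing here; a minimal reconstruction, assuming the two application instances run on ports 8080 and 8081 on the Docker host, could look like this (`host.docker.internal` is how a container reaches the host on Docker Desktop; on Linux you would use the host's IP instead):

```nginx
upstream demo {
    # the two Spring Boot instances; nginx uses round robin by default
    server host.docker.internal:8080;
    server host.docker.internal:8081;
}

server {
    listen 80;

    location / {
        # forward every incoming request to the upstream pool
        proxy_pass http://demo;
    }
}
```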

and then restart the nginx Docker container.

The next step is our Spring Boot application. It will be a simple application with just one endpoint.

pom.xml:
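The pom.xml was an embedded gist and is missing here; a minimal sketch would only need the Spring Boot web starter (the coordinates and version are my assumptions, matching the Spring Boot 2.2.x line current when this post was written):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.7.RELEASE</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
</project>
```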

Application class:
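The class itself was an embedded gist; a sketch consistent with the responses shown later (the class name and field names are my assumptions) could be:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    // the port this instance is listening on, injected from the configuration
    @Value("${server.port}")
    private String port;

    // the single endpoint: reports which instance answered
    @GetMapping("/")
    public String hello() {
        return "Hi from Demo application with port: " + port;
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```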

application yaml:
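The yaml was also an embedded gist; given that the second instance is started by exporting PORT=8081, the port is presumably read from that environment variable with 8080 as the default:

```yaml
server:
  port: ${PORT:8080}
```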

Let's start the first instance of the application:

mvn spring-boot:run

For the second instance we should change the port:

export PORT=8081
mvn spring-boot:run

Now we have two instances running on ports 8080 and 8081.

When you send requests to http://localhost:80, it will return responses like this:
First call: Hi from Demo application with port: 8080
Second call: Hi from Demo application with port: 8081

Now let's look at the second type of load balancing: client side load balancing.

Client Side Load Balancing

In client side load balancing, the client handles the load balancing itself. In this case the client API must know the addresses of all server API instances, either hard coded or via a service registry.
With this method you can avoid bottlenecks and single points of failure. If you use service discovery, you don't need to know anything about the server API except its registered name; the service registry will provide all the information about the server instances.

Let's do an example of client side load balancing without a registry.

First, I will use the demo API from above and run two instances on ports 8080 and 8081.

The next step is to create a client API that uses Ribbon.

Our Ribbon client API is simple; it looks like the example below.

pom.xml:
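This pom.xml was also an embedded gist; on top of a Spring Boot parent like the one above, the relevant dependencies would be a sketch along these lines (Ribbon was part of Spring Cloud Netflix at the time, and has since been superseded by Spring Cloud LoadBalancer):

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
    </dependency>
</dependencies>
```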

application class:
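The client class was an embedded gist; a sketch consistent with the responses shown below (class and client names are my assumptions) could be:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.cloud.netflix.ribbon.RibbonClient;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@RestController
@RibbonClient(name = "demo")
public class RibbonClientApplication {

    @Autowired
    private RestTemplate restTemplate;

    // @LoadBalanced makes Ribbon intercept calls made through this RestTemplate
    @LoadBalanced
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @GetMapping("/")
    public String call() {
        // "demo" is resolved by Ribbon against the server list in the yaml,
        // alternating between the instances (round robin is the default rule)
        return "Answer is: " + restTemplate.getForObject("http://demo/", String.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(RibbonClientApplication.class, args);
    }
}
```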

application yaml:
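The yaml was an embedded gist as well; since this example runs without a registry, the server list would be hard coded with Ribbon's `listOfServers` property, and Eureka lookup disabled:

```yaml
server:
  port: 9090

ribbon:
  eureka:
    # no service registry in this example
    enabled: false

demo:
  ribbon:
    # static list of the two demo instances
    listOfServers: localhost:8080,localhost:8081
```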

After starting the Ribbon client API, go to the Ribbon client endpoint at http://localhost:9090. You will get these answers:
First call: Answer is: Hi from Demo application with port: 8080
Second call: Answer is: Hi from Demo application with port: 8081

In conclusion, we have learnt what load balancing is and which load balancing types are used in a microservices architecture.

Have fun.
