Setting up Load Balancer — System Design (part 2)

Chetan Dwarkani
4 min readJan 5, 2022

Load balancing is the practice of distributing a set of tasks across a pool of resources. Load balancers exist at both the hardware and the software level, and they act as an important proxy layer that manages the large volume of requests hitting an application server. Suppose we have developed a backend application that comfortably handles our current, small user base. As the app's user base expands, the requests per second hitting the server keep increasing, until the server has to handle a huge number of requests every day. This is the point where integrating a load balancer comes into play to manage the load distribution.

We are going to set up the following architecture in this blog

A load balancer may use various techniques to distribute the incoming load, including round robin, least connections, IP hash, generic hash, least time, etc. These are not standardized types; each framework defines its own load-balancing techniques. For example, you can find the load-balancing techniques supported by Nginx here.
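To make the simplest technique concrete, round robin just cycles through the server list in a fixed order. Here is a minimal JavaScript sketch of the idea (a hypothetical helper for illustration, not Nginx's actual implementation):

```javascript
// Round robin: hand out servers in a fixed rotating order.
// Illustrative sketch only; real load balancers also track health, weights, etc.
function makeRoundRobin(servers) {
  let i = 0;
  return () => servers[i++ % servers.length];
}

const next = makeRoundRobin(["127.0.0.1:1221", "127.0.0.1:1313", "127.0.0.1:1414"]);
const picks = [next(), next(), next(), next()];
// After three picks the rotation wraps back to the first server
```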

Some of the characteristics of a good load balancer include

  1. Session persistence: requests from a given client continue to be served consistently (typically by the same backend server) until the session is terminated
  2. Dynamic configuration of the server group: a good load balancer should be able to add and remove servers from the pool and select among them efficiently
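As an example of session persistence, Nginx can approximate it with the ip_hash directive, which keeps routing a given client IP to the same backend. A config sketch (the server addresses are placeholders):

```nginx
upstream simpleapp {
    ip_hash;                 # same client IP -> same backend server
    server 127.0.0.1:1221;
    server 127.0.0.1:1313;
}
```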

There are various types of load balancers serving different purposes, all with the goal of distributing incoming requests. Some of them include:

  1. Cloud-based load balancers: the load balancing is set up in the same cloud environment where our service is hosted and balances the incoming requests
  2. Global server load balancing: distributes requests across servers deployed in multiple locations
  3. L4 load balancers: act on the transport and network layers
  4. L7 load balancers: act on the application layer; compared to L4, they can look into packet contents to decide the route a request should take, which L4 cannot
  5. SSL-based load balancers: provide a layer of SSL in front of the backends

In this blog, I am going to explain in detail how to set up a load balancer using Nginx. Here’s the video of how to set it up

(Video: Setting up Load Balancer — System Design)

Let’s roll!

Step 1: Download Nginx

In the case of macOS, a simple command

brew install nginx

will install Nginx on your system.

For other operating systems, please refer to the Nginx website — http://nginx.org/en/download.html

After the installation completes on macOS, just type

sudo nginx

in the terminal and the Nginx service starts. You can verify that the service has started by opening localhost:80 in your browser: you should see a default page served by Nginx (the welcome page on a fresh install, or a 502 error page if a proxy is configured but no backend is running). Either way, this confirms that the Nginx setup completed.

Step 2: Run the backend service on a specific port

Let’s write a simple script that starts our service on a desired port using Node.js and Express. Here’s the code:

const express = require("express");
const app = express();
// Read the port from the environment so multiple instances can run;
// fall back to 3000 when PORT is not set
const PORT = process.env.PORT || 3000;
app.get("/", (req, res) => {
  const data = `Simple NodeJS app running on ${PORT} using expressJS`;
  return res.send(data);
});
app.listen(PORT, "0.0.0.0", () => console.log(`Server at ${PORT}`));

We need to initialize a Node.js project for the above using the npm init command, install Express with npm install express, then simply create a file named app.js with the above code and run the app.

Let’s next run the app on three different port numbers, say 1221, 1313, and 1414, by starting it with the PORT environment variable set accordingly (for example, PORT=1221 node app.js).

That way, three instances of our service will be running on three different ports, and we can proceed to wire them up with a load balancer that distributes incoming requests among them.

Step 3: Modify Nginx configuration to connect to our servers

Once your Nginx service is running, we need to set up the necessary configuration. On macOS, the Nginx config file is placed by default at

/usr/local/etc/nginx/nginx.conf

For other operating systems, check where Nginx was installed and locate this file. Open the file, enter the configuration below, and save it.

http {
    upstream simpleapp {
        server 127.0.0.1:1221;
        server 127.0.0.1:1313;
        server 127.0.0.1:1414;
    }
    server {
        listen 80;
        root /Users/chethan/Desktop/projects/smapleNodeBackend/;
        location / {
            proxy_pass http://simpleapp;
        }
    }
}
events { }

In the above configuration, we tell Nginx to forward requests arriving at port 80 to port 1221, 1313, or 1414, chosen by its default selection mechanism (round robin).

Here, in the config:

root: sets your project’s root directory, i.e., where our project is stored.

proxy_pass: forwards matching requests to the upstream group named simpleapp. The ports where our service instances are running are listed in that upstream block; in my case, 1221, 1313, and 1414.
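If round robin isn't what you want, the selection mechanism can be changed with a single directive inside the upstream block. For example, least_conn sends each request to the backend with the fewest active connections (a config sketch using the same placeholder ports):

```nginx
upstream simpleapp {
    least_conn;              # pick the backend with the fewest active connections
    server 127.0.0.1:1221;
    server 127.0.0.1:1313;
    server 127.0.0.1:1414;
}
```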

Let’s hit localhost:80 now and you’ll see all the magic :D Refresh a few times and the responses will come from different ports as the load balancer rotates through the backends.

For more in-depth detail about system design, please refer: https://m-chetandwarkani.medium.com/scaling-your-backend-service-system-design-158ba107d0d8

