Building a Zero-Downtime Deployment - Environment Setup

Server

Hi, I'm Lovefield.

While running a blog, I’ve encountered various situations. Sometimes the server was down, and other times I made configuration mistakes. Recently, as I’ve been making frequent updates, I realized the need to set up a zero-downtime update environment. Ultimately, I decided to modify the existing setup and quickly implemented a zero-downtime environment.

Since I planned to build the setup using Docker in a single-instance environment, I created the server architecture diagram as shown above. The configuration requires a total of seven containers. I plan to use an Nginx container as a load balancer to connect the Front-End and Back-End containers. The reason for not using AWS is simple: it's expensive! Considering the use of four instances, S3, RDS, and an LB, the cost is too high for running a simple blog. In the end, I had no choice but to rely on Docker.

docker-compose.yml

YAML

name: blog

services:
    database:
        # settings omitted (image, volumes, etc.)
    fileserver:
        # settings omitted (image, volumes, etc.)
   backend-blue:
       container_name: back-blue
       ports:
           - "8002:8082"
   backend-green:
       container_name: back-green
       ports:
           - "8001:8082"
   frontend-blue:
       container_name: front-blue
       ports:
           - "3002:3000"
   frontend-green:
       container_name: front-green
       ports:
           - "3001:3000"
   load-balance:
       container_name: load-balance
       image: nginx:1.27.3-perl
       volumes:
           - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
       ports:
           - "8000:8000"
           - "9000:9000"
       depends_on:
           - backend-blue
           - backend-green
           - frontend-blue
           - frontend-green

I wrote the docker-compose.yml file as shown above, keeping only the key settings for explanation purposes. The Front-End and Back-End each have a Blue and a Green container, and the load-balance container routes incoming traffic to them. The published ports serve the following purposes (a small health-check sketch follows the list):

- 8000: Connected to Nginx's port 8000, linking to the Blue and Green containers of the Front-End.
- 9000: Connected to Nginx's port 9000, linking to the Blue and Green containers of the Back-End.
- 3001: A port used for health checks on the Front-End Green container.
- 3002: A port used for health checks on the Front-End Blue container.
- 8001: A port used for health checks on the Back-End Green container.
- 8002: A port used for health checks on the Back-End Blue container.
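
For reference, this kind of check can also be declared inside Compose itself. The snippet below is only a sketch and is not part of the file above: it assumes the container image ships with curl and that the app answers on /, neither of which may hold for your images. The health checks during deployment go through the host ports listed above.

YAML

    frontend-blue:
        container_name: front-blue
        ports:
            - "3002:3000"
        # sketch only: lets Docker itself report container health
        healthcheck:
            test: ["CMD", "curl", "-f", "http://localhost:3000/"]
            interval: 10s
            timeout: 3s
            retries: 5

With something like this in place, docker compose ps shows each container as healthy or unhealthy, which a deploy script could read instead of polling the host port.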

nginx.conf

Nginx config

server {
    listen 8000;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://frontend-app;
    }
}

# Front-End pool: route to the peer with the fewest active connections
upstream frontend-app {
    least_conn;

    server frontend-blue:3000;
    server frontend-green:3000;
}

server {
    listen 9000;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend-app;
    }
}

# Back-End pool: route to the peer with the fewest active connections
upstream backend-app {
    least_conn;

    server backend-blue:8082;
    server backend-green:8082;
}

Configure the Nginx container as shown above. Traffic arriving on port 8000 is distributed between frontend-blue:3000 and frontend-green:3000, and traffic on port 9000 between backend-blue:8082 and backend-green:8082, in both cases preferring whichever peer has fewer active connections (least_conn). If one side is unavailable, requests fall back to the other.
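
One note on that failover: it relies on Nginx's passive health checks, where a peer is skipped for fail_timeout seconds after max_fails failed attempts (the defaults are 1 failure and 10 seconds). The snippet below is a sketch, not part of the config above, showing how those parameters can be tuned and how one color can be drained manually with down while it is being redeployed:

Nginx config

upstream frontend-app {
    least_conn;

    # treat blue as unavailable after 3 failures within 10 seconds
    server frontend-blue:3000 max_fails=3 fail_timeout=10s;

    # "down" removes green from rotation, e.g. while a new image is rolled out
    server frontend-green:3000 down;
}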

With this, the environment setup is complete. The deployment script for zero-downtime deployment will be covered in the next post. Thank you!

Lovefield

Web Front-End developer

I'm a quirky developer with a lot of things I want to do, and I want to build services of my own.