SALT
Salvation Army SALT College of Africa

Deployment Guide

EC2 Server Setup, Docker Containers & CI/CD Pipeline

Version: 1.0
Last Updated: 20th February 2026
Environment: Ubuntu 22.04 LTS on AWS EC2
Audience: System Administrators & DevOps
Developed By: Sumba Group Limited

Table of Contents

 1. Deployment Overview
 2. Architecture Diagram
 3. Prerequisites (Already on EC2)
 4. Docker Setup for Automated Deployment
 5. Step 1 — Create Deployment Directories
 6. Step 2 — Clone Repositories to EC2
 7. Step 3 — Configure Environment Variables
 8. Step 4 — Start Infrastructure Containers (PostgreSQL, Redis, MinIO)
    Database Schema Management (ORM + Flyway + Docker)
 9. Step 5 — Build & Start Backend Container
10. Step 6 — Build & Start Web Admin Containers
11. Step 7 — Copy Docs to Nginx Web Root
12. Step 8 — Configure Nginx Reverse Proxy
13. Step 9 — Configure GitHub Secrets for CI/CD
14. Step 10 — Verify Deployment
15. CI/CD Pipeline — Automated Testing & Deployment
16. Port Reference
17. Container Management
18. Docker Compose Structure
19. Docker Networking — All Containers on Same Network
20. Docker Swarm — Scaling Containers
21. Troubleshooting

1. Deployment Overview

The College E-Learning Platform uses a fully containerized deployment on AWS EC2. All application services and infrastructure run as Docker containers, with Nginx as the only native service handling SSL termination and reverse proxying.

Service | Method | Port | Container / Location
Backend Replica 1 (Spring Boot) | Docker container | 8091 | salt-backend-1
Backend Replica 2 (Spring Boot) | Docker container | 8092 | salt-backend-2
Web Admin Replica 1 (Next.js) | Docker container | 8081 | salt-web-admin-1
Web Admin Replica 2 (Next.js) | Docker container | 8082 | salt-web-admin-2
PostgreSQL 17 | Docker container | 5432 | salt-postgres
Redis 7 | Docker container | 6379 | salt-redis
MinIO (S3 file storage) | Docker container | 9000/9001 | salt-minio
Nginx | Native (systemd) | 80/443 | System service (already installed on EC2)
Docs (static HTML) | Nginx static files | 80/443 | /var/www/html/
Key principle: Docker images are built locally on EC2 using the installed Docker engine. No Docker Hub or external registry is used. Each GitHub repository is cloned once on EC2. On each deployment, the CI/CD pipeline SSHes to EC2, pulls the latest code, and builds the Docker image right there on the server. All containers communicate via a shared Docker network (salt-network).

Automation Scripts

The deploy/ folder in the backend repository contains scripts that automate most of these steps:

Script | Purpose
deploy/setup-ec2.sh | One-time EC2 setup (Docker, Nginx, Git, directories, Docker network)
deploy/deploy-backend.sh | Deploy backend: SSH → git pull → docker build → docker run
deploy/deploy-web-admin.sh | Deploy web admin: SSH → git pull → docker build → docker run → copy docs
deploy/deploy-all.sh | Deploy both backend and web admin in one command
deploy/health-check.sh | Check status of all services (Docker containers, Nginx, ports, HTTP)
deploy/rollback.sh | Roll back to previous Docker image version
deploy/view-logs.sh | View logs for backend, web admin, or Nginx
deploy/backup-db.sh | Create timestamped PostgreSQL backup
deploy/restore-db.sh | Restore database from a backup file
deploy/ssl-setup.sh | Install Let’s Encrypt SSL certificate with Certbot
deploy/docker-dev.sh | Local development: start/stop Docker Compose services

2. Architecture Diagram

Request Flow

Browser / Mobile App → Nginx (443/80) → /SaltELearnAppApi → Backend (8091, 8092)

Browser / Mobile App → Nginx (443/80) → /SaltElearning → Web Admin (8081, 8082)

Browser → Nginx (443/80) → / → Static docs (/var/www/html/)
                    ┌────────────────────────────────────────────────────────────────────┐
                    │                          AWS EC2 Instance                          │
                    │                                                                    │
  Internet ──443──▶ │  Nginx (native, SSL termination)                                   │
                    │    ├── /                  → /var/www/html                          │
                    │    ├── /SaltElearning/    → :8081, :8082                           │
                    │    └── /SaltELearnAppApi/ → :8091, :8092                           │
                    │                                                                    │
                    │  ┌── Docker (salt-network) ────────────────────────────────┐       │
                    │  │                                                         │       │
                    │  │ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │       │
                    │  │ │backend-1  │ │backend-2  │ │web-admin-1│ │web-admin-2│ │       │
                    │  │ │  :8091    │ │  :8092    │ │  :8081    │ │  :8082    │ │       │
                    │  │ │Spring Boot│ │Spring Boot│ │ Next.js   │ │ Next.js   │ │       │
                    │  │ └─────┬─────┘ └─────┬─────┘ └───────────┘ └───────────┘ │       │
                    │  │       │             │                                   │       │
                    │  │ ┌─────▼───────┐ ┌───▼──┐ ┌──────┐                       │       │
                    │  │ │salt-postgres│ │salt- │ │salt- │                       │       │
                    │  │ │  :5432      │ │redis │ │minio │                       │       │
                    │  │ │PostgreSQL 17│ │:6379 │ │:9000 │                       │       │
                    │  │ └─────────────┘ └──────┘ └──────┘                       │       │
                    │  └─────────────────────────────────────────────────────────┘       │
                    └────────────────────────────────────────────────────────────────────┘

3. Prerequisites (Already on EC2)

The following are already installed and running on the EC2 server:
Service | Status | Verify Command
Docker | Installed | docker --version
Docker Compose | Installed (bundled with Docker) | docker compose version
Nginx | Running (native systemd) | systemctl status nginx
SSL Certificate | Configured | certbot certificates
Git | Installed | git --version
Note: PostgreSQL, Redis, and MinIO do NOT need to be installed natively on EC2. They run as Docker containers via docker-compose.infra.yml. Only Docker, Nginx, and Git are required on the host.

4. Docker Setup for Automated Deployment

Docker is used to run all services (backend, web admin, PostgreSQL, Redis, MinIO) as containers on EC2. Nginx is the only native service — it handles SSL termination and reverse proxying to the Docker containers.

How It Works

GitHub Push → GitHub Actions Runs Tests → Tests Pass → SSH to EC2 → git pull + docker build + docker run

Verify Docker is Ready

# Check Docker is installed and running
docker --version
systemctl status docker

# Ensure the ubuntu user can run Docker without sudo
docker ps

# If "permission denied", add ubuntu to the docker group:
sudo usermod -aG docker ubuntu
# Then log out and log back in for the change to take effect

Docker User Permissions (Required for CI/CD)

Important: The GitHub Actions CI/CD pipeline SSHes into EC2 as the ubuntu user and runs docker build and docker run commands. The ubuntu user must be in the docker group to run these commands without sudo.
# Verify ubuntu is in the docker group
groups ubuntu
# Should show: ubuntu : ubuntu docker

# If not, add it:
sudo usermod -aG docker ubuntu

# Log out and back in, then verify
docker ps   # Should work without sudo

Docker Disk Space

Docker images accumulate over time with each deployment. Monitor disk space:

# Check Docker disk usage
docker system df

# Clean up unused images and containers
docker image prune -f        # Remove dangling images
docker system prune -f       # Remove all unused objects
Tip: The CI/CD pipeline runs docker image prune -f after each deployment to keep disk usage in check.

Backend Docker Build Process

The backend Dockerfile uses a multi-stage build (no pre-installed Maven or JDK required on EC2):

  1. Stage 1 (Builder): Uses eclipse-temurin:21-jdk-alpine to compile the Spring Boot source code with Maven inside the container → produces a JAR file
  2. Stage 2 (Runner): Uses eclipse-temurin:21-jre-alpine (smaller image) to run the compiled JAR

Maven, JDK, and all build tools are inside the Docker image — nothing needs to be installed on the EC2 host beyond Docker itself.
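The two stages described above can be sketched as a Dockerfile; this is an illustrative outline, not the repository's actual file, and it assumes the repo ships the Maven wrapper (mvnw + .mvn) and produces a single JAR under target/:

```dockerfile
# Stage 1 (builder): compile the Spring Boot app with Maven inside the image
FROM eclipse-temurin:21-jdk-alpine AS builder
WORKDIR /build
# Assumption: the repository includes the Maven wrapper files
COPY mvnw pom.xml ./
COPY .mvn .mvn
COPY src src
RUN ./mvnw -q -DskipTests package

# Stage 2 (runner): run the compiled JAR on the smaller JRE-only image
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/*.jar app.jar
EXPOSE 8091
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The builder stage and its toolchain are discarded from the final image, which keeps only the JRE and the JAR.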

Web Admin Docker Build Process

The web admin Dockerfile also uses a multi-stage build:

  1. Stage 1 (Builder): Uses node:20-alpine to install npm dependencies and build Next.js (standalone output)
  2. Stage 2 (Runner): Uses node:20-alpine to run the standalone Next.js server on port 8081

Node.js and npm are inside the Docker image — no Node.js installation needed on EC2.
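A sketch of the equivalent Next.js multi-stage Dockerfile follows; it assumes the standard standalone output layout (server.js plus .next/static), which matches the build described above but may differ in detail from the repository's actual file:

```dockerfile
# Stage 1 (builder): install dependencies and build the standalone bundle
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2 (runner): ship only the standalone server output
FROM node:20-alpine
WORKDIR /app
ENV PORT=8081
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 8081
CMD ["node", "server.js"]
```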

5. Step 1 — Create Deployment Directories

Create the directories where the application code and files will live:

sudo mkdir -p /opt/salvation/backend
sudo mkdir -p /opt/salvation/frontend
sudo mkdir -p /opt/salt_files
sudo mkdir -p /opt/backups
sudo chown -R ubuntu:ubuntu /opt/salvation /opt/salt_files /opt/backups
Directory | Purpose
/opt/salvation/backend | Spring Boot source code (cloned from GitHub)
/opt/salvation/frontend | Next.js web admin source code (cloned from GitHub)
/opt/salt_files | Legacy file storage (assignments, materials, uploads)
/opt/backups | Database backup files

6. Step 2 — Clone Repositories to EC2

GitHub Repositories

The College E-Learning platform is split across 3 independent repositories under the SGL2024 GitHub organization:

Repository | GitHub URL | EC2 Clone Path | Contents
Backend | https://github.com/SGL2024/Salvation-Army-Backend-main.git | /opt/salvation/backend | Spring Boot API, Dockerfile, deploy scripts, infrastructure docker-compose, Flyway migrations
Frontend | https://github.com/SGL2024/SaltElearning.git | /opt/salvation/frontend | Next.js web admin, Dockerfile, documentation HTML files
Mobile | https://github.com/SGL2024/mobile.git | /opt/salvation/mobile | Flutter student app + executive dashboard

Option A: Clone via HTTPS with Personal Access Token (Recommended)

If the repositories are private, you need a GitHub Personal Access Token (PAT):

  1. Go to GitHub → Settings → Developer settings → Personal access tokens → Tokens (classic)
  2. Click Generate new token (classic)
  3. Select scopes: repo (full control of private repositories)
  4. Copy the generated token (you won’t see it again)
# Configure Git to cache credentials (so you don't re-enter the token every time)
git config --global credential.helper store

# Clone backend repository
cd /opt/salvation/backend
git clone https://github.com/SGL2024/Salvation-Army-Backend-main.git .
# When prompted: Username = your-github-username, Password = your-PAT-token

# Clone web admin repository
cd /opt/salvation/frontend
git clone https://github.com/SGL2024/SaltElearning.git .

# Clone mobile repository (optional — only needed for Flutter web builds)
mkdir -p /opt/salvation/mobile
cd /opt/salvation/mobile
git clone https://github.com/SGL2024/mobile.git .

Option B: Clone via SSH Key

If you prefer SSH authentication:

# 1. Generate an SSH key on the EC2 server (if not already done)
ssh-keygen -t ed25519 -C "ec2-salt-deploy"
# Press Enter to accept default path (~/.ssh/id_ed25519)

# 2. Display the public key
cat ~/.ssh/id_ed25519.pub

# 3. Add the public key to GitHub:
#    Go to GitHub → Settings → SSH and GPG keys → New SSH key
#    Paste the public key and save

# 4. Test the SSH connection
ssh -T git@github.com
# Should show: "Hi SGL2024! You've successfully authenticated..."

# 5. Clone using SSH URLs
cd /opt/salvation/backend
git clone git@github.com:SGL2024/Salvation-Army-Backend-main.git .

cd /opt/salvation/frontend
git clone git@github.com:SGL2024/SaltElearning.git .

mkdir -p /opt/salvation/mobile
cd /opt/salvation/mobile
git clone git@github.com:SGL2024/mobile.git .

Verify Clones

# Check that each directory has the code
ls /opt/salvation/backend/pom.xml       # Should exist (Spring Boot)
ls /opt/salvation/frontend/package.json  # Should exist (Next.js)
ls /opt/salvation/mobile/pubspec.yaml    # Should exist (Flutter)

# Verify Git remotes
cd /opt/salvation/backend && git remote -v
cd /opt/salvation/frontend && git remote -v
CI/CD Note: The GitHub Actions CI/CD pipeline uses SSH to connect to EC2 and runs git pull origin main to update the code before each deployment. Make sure the ubuntu user has read access to the repositories.

7. Step 3 — Configure Environment Variables

Create the .env file for the backend container:

nano /opt/salvation/backend/.env

Add the following contents:

SPRING_DATASOURCE_URL=jdbc:postgresql://salt-postgres:5432/salvation
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=your_database_password
SPRING_REDIS_HOST=salt-redis
SPRING_REDIS_PORT=6379
Container networking: Since PostgreSQL and Redis run as Docker containers on the same salt-network, the backend connects to them using their container names (salt-postgres, salt-redis) instead of localhost or IP addresses. Docker DNS resolves these names automatically.
Security: Never commit the .env file to Git. It is already listed in .gitignore.

8. Step 4 — Start Infrastructure Containers (PostgreSQL, Redis, MinIO)

PostgreSQL, Redis, and MinIO all run as Docker containers defined in docker-compose.infra.yml (located in the backend repository). This file creates the containers, volumes, and the shared salt-network.

Copy the Production Database

Place the salvation.sql file in the backend directory so PostgreSQL can initialize from it on first startup:

# Copy salvation.sql to the backend directory
cp /path/to/salvation.sql /opt/salvation/backend/salvation.sql
Important: The salvation.sql file contains the production database. Handle it with care and never expose it publicly. PostgreSQL will only load this file on first startup (when the postgres_data volume is empty).

Start Infrastructure

cd /opt/salvation/backend

# Start PostgreSQL, Redis, and MinIO containers
docker compose -f docker-compose.infra.yml up -d

# Verify all three are running
docker compose -f docker-compose.infra.yml ps

Verify Infrastructure

# Check PostgreSQL is ready
docker exec salt-postgres pg_isready -U postgres -d salvation

# Check Redis is ready
docker exec salt-redis redis-cli ping
# Should return: PONG

# Check MinIO is ready
curl -s -o /dev/null -w "%{http_code}" http://localhost:9000/minio/health/live
# Should return: 200

What This Creates

Container | Image | Port | Volume
salt-postgres | postgres:17 | 5432 | postgres_data → /var/lib/postgresql/data
salt-redis | redis:7-alpine | 6379 | redis_data → /data
salt-minio | minio/minio:latest | 9000, 9001 | minio_data → /data
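The containers, volumes, and network in the table above map onto a Compose file along these lines; this is a hedged sketch of docker-compose.infra.yml (the real file lives in the backend repository and its exact keys may differ):

```yaml
# Sketch of docker-compose.infra.yml; environment values are placeholders
services:
  salt-postgres:
    image: postgres:17
    container_name: salt-postgres
    environment:
      POSTGRES_DB: salvation
      POSTGRES_PASSWORD: your_database_password
    ports: ["5432:5432"]
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: [salt-network]

  salt-redis:
    image: redis:7-alpine
    container_name: salt-redis
    ports: ["6379:6379"]
    volumes:
      - redis_data:/data
    networks: [salt-network]

  salt-minio:
    image: minio/minio:latest
    container_name: salt-minio
    command: server /data --console-address ":9001"
    ports: ["9000:9000", "9001:9001"]
    volumes:
      - minio_data:/data
    networks: [salt-network]

volumes:
  postgres_data:
  redis_data:
  minio_data:

networks:
  salt-network:
    name: salt-network
```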

Flyway migrations will run automatically when the backend container starts (Step 5), applying any pending schema changes.

The backend uses a dual-read adapter for files: it checks MinIO first, then falls back to /opt/salt_files on disk. New uploads go to MinIO.

Database Schema Management

The platform uses Hibernate (JPA) as the ORM and Flyway for database migrations. Together they ensure that JPA entity definitions stay synchronized with the PostgreSQL database schema across all environments.

How It Works in Docker

Database Initialization Flow

salvation.sql (first startup only) → PostgreSQL container (salt-postgres) → Flyway V1–V13 (on backend startup) → Hibernate validate (entities vs DB)
  1. PostgreSQL container starts — loads salvation.sql into an empty database on first run (35 legacy tables). Subsequent restarts skip this step because postgres_data volume already has data.
  2. Backend container starts — Flyway runs automatically and applies any pending migrations (V1–V13+). These add new tables, columns, indexes, and seed data.
  3. Hibernate validates — compares all JPA @Entity classes against the database schema. In production (ddl-auto=validate), startup fails if any entity field is missing from the DB.

DDL-Auto Settings by Environment

Profile | Setting | Behavior | Use Case
localhost | update | Auto-creates/alters tables from entity changes | Local development
staging | update | Auto-creates/alters tables from entity changes | Staging server
production | validate | Read-only check — fails if mismatch, never modifies DB | Production server
test | create | Fresh schema every test run (Testcontainers) | Unit/integration tests
Production safety: The production profile uses ddl-auto=validate. This means Hibernate will never auto-modify the production database. All schema changes must go through Flyway migrations.
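In Spring Boot, these per-profile settings can be expressed as a multi-document application.yml; this is a minimal sketch (property paths are standard Spring Boot, but the actual configuration files in the repository may be structured differently):

```yaml
# Default (production) document: Hibernate only validates, never alters
spring:
  jpa:
    hibernate:
      ddl-auto: validate
  flyway:
    enabled: true
---
# Activated only for the localhost profile: Hibernate may alter the local schema
spring:
  config:
    activate:
      on-profile: localhost
  jpa:
    hibernate:
      ddl-auto: update
```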

Developer Workflow: Adding New Tables or Columns

  1. Modify the @Entity class — add new fields, relationships, or create new entity classes.
  2. Run locally with localhost profile — Hibernate auto-creates the new columns/tables in your local DB.
  3. Generate a schema diff — run the built-in schema export tool to compare entities against the database:
    mvn spring-boot:run -Dspring-boot.run.profiles=localhost,schema-export
    This produces schema-diff.sql with the exact ALTER/CREATE statements needed.
  4. Create a Flyway migration — copy the relevant SQL to a new migration file:
    cp schema-diff.sql src/main/resources/db/migration/V14__your_description.sql
  5. Deploy — Flyway applies the migration on startup, then Hibernate validates the schema matches.
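Step 4 above ends with a new migration file. A minimal sketch of creating one, following the additive and idempotent conventions (the description and column name here are hypothetical; tbl_student is an existing table referenced elsewhere in this guide):

```shell
# Create a hypothetical V14 migration (name and column are illustrative)
mkdir -p src/main/resources/db/migration
cat > src/main/resources/db/migration/V14__add_student_middle_name.sql <<'SQL'
-- Additive and idempotent: safe to re-run, never drops or renames
ALTER TABLE tbl_student ADD COLUMN IF NOT EXISTS middle_name VARCHAR(100);
SQL
```

On the next deployment Flyway picks this file up automatically; no manual SQL needs to be run on the server.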

Docker Commands for Database Operations

# Check which Flyway migrations have been applied
docker exec salt-postgres psql -U postgres -d salvation \
  -c "SELECT version, description, installed_on FROM flyway_schema_history ORDER BY installed_rank;"

# Connect to database shell
docker exec -it salt-postgres psql -U postgres -d salvation

# View a table's structure
docker exec salt-postgres psql -U postgres -d salvation -c "\d tbl_student"

# List all tables
docker exec salt-postgres psql -U postgres -d salvation -c "\dt"

# Check Hibernate validation in backend logs
docker logs salt-backend-1 2>&1 | grep -i "schema"

# Repair Flyway (if a migration file was modified after being applied)
docker exec salt-backend-1 java -jar app.jar --spring.flyway.repair=true

Flyway Migration Conventions

Rule | Details
File naming | V{N}__{description}.sql (double underscore, next version number)
Next migration | V14__... (V1–V13 are completed)
Never modify | Never edit an existing migration file — always create a new one
Additive only | No DROP TABLE, no DROP COLUMN, no renames. Only ADD COLUMN IF NOT EXISTS
Idempotent | Use IF NOT EXISTS / IF EXISTS for safe re-runs
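The file-naming rule can be checked mechanically before committing; this small helper is a local sketch (it is not one of the deploy/ scripts) that validates the V{N}__{description}.sql pattern:

```shell
# Returns success only for names like V14__add_course_index.sql
valid_migration() {
  case "$(basename "$1")" in
    V[0-9]*__*.sql) return 0 ;;
    *) return 1 ;;
  esac
}

valid_migration "V14__add_course_index.sql" && echo "ok"
valid_migration "v14_bad_name.sql" || echo "rejected"
```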

Troubleshooting

Error | Cause | Fix
Schema-validation: missing column | Entity has a field that doesn't exist in DB | Create a Flyway migration with ALTER TABLE ADD COLUMN
Schema-validation: missing table | Entity references a table that doesn't exist | Create a Flyway migration with CREATE TABLE
Flyway checksum mismatch | A migration file was modified after being applied | Run flyway repair or restore original migration file
Backend won't start after entity change | Production validate mode rejects unmigrated changes | Add Flyway migration, rebuild, redeploy

9. Step 5 — Build & Start Backend Container

The Dockerfile uses a multi-stage build:

# Navigate to the backend directory
cd /opt/salvation/backend

# Build the Docker image (this compiles Spring Boot inside Docker)
docker build -t salt-backend:latest .

# Create logs directory on host
mkdir -p /opt/salvation/backend/logs

# Start the container
docker run -d \
  --name salt-backend \
  --restart unless-stopped \
  --network salt-network \
  --env-file .env \
  -e TZ=Africa/Nairobi \
  -p 8091:8091 \
  -v /opt/salt_files:/opt/salt_files \
  -v /opt/salvation/backend/logs:/app/logs \
  salt-backend:latest
Flag | Purpose
-d | Run in background (detached mode)
--restart unless-stopped | Auto-restart on crash or server reboot
--network salt-network | Join the shared Docker network (connects to salt-postgres, salt-redis, salt-minio)
--env-file .env | Load database and Redis connection settings
-e TZ=Africa/Nairobi | Set East Africa Time (UTC+3) inside container
-p 8091:8091 | Map port 8091 (host) to 8091 (container)
-v /opt/salt_files:... | Mount legacy file storage into container
-v .../logs:/app/logs | Persist application logs on host filesystem

Volume Summary

Host Path | Container Path | Purpose
/opt/salt_files | /opt/salt_files | Student assignments, reading materials, uploaded files (shared with legacy system)
/opt/salvation/backend/logs | /app/logs | Spring Boot application logs (persist across container restarts)

Wait about 15–20 seconds for Spring Boot to start, then verify:

# Check container is running
docker ps --filter "name=salt-backend"

# Check application logs
docker logs --tail 50 salt-backend

10. Step 6 — Build & Start Web Admin Containers

The web admin runs as 2 replicas on distinct ports, using Docker Compose:

Service | Container | Port
web-admin-1 | salt-web-admin-1 | 8081
web-admin-2 | salt-web-admin-2 | 8082

The Dockerfile uses a multi-stage build:

# Navigate to the web admin directory
cd /opt/salvation/frontend

# Build and start both replicas
docker compose up -d --build

# This builds the image once and starts two containers:
#   salt-web-admin-1 on port 8081
#   salt-web-admin-2 on port 8082

Each container runs the same Next.js image but listens on a different port via the PORT environment variable. Nginx load-balances traffic across both.
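The two-replica arrangement can be sketched in Compose roughly as follows; the service names, the build-once/reuse-image trick, and the keys shown are assumptions about the frontend repository's docker-compose.yml, not a copy of it:

```yaml
# Sketch: build the image once, run two containers on different PORTs
services:
  web-admin-1:
    build: .
    image: salt-web-admin:latest
    container_name: salt-web-admin-1
    environment:
      PORT: 8081
    ports: ["8081:8081"]
    networks: [salt-network]

  web-admin-2:
    image: salt-web-admin:latest   # reuses the image built above
    container_name: salt-web-admin-2
    depends_on: [web-admin-1]
    environment:
      PORT: 8082
    ports: ["8082:8082"]
    networks: [salt-network]

networks:
  salt-network:
    external: true
```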

Verify:

docker ps --filter "name=salt-web-admin"
docker logs --tail 20 salt-web-admin-1
docker logs --tail 20 salt-web-admin-2

11. Step 7 — Copy Docs to Nginx Web Root

The documentation HTML files are served directly by Nginx from /var/www/html/:

sudo cp -r /opt/salvation/frontend/docs/* /var/www/html/
sudo chown -R www-data:www-data /var/www/html/

This copies all user guides, technical documentation, and the documentation hub index page to the Nginx web root.

12. Step 8 — Configure Nginx Reverse Proxy

Nginx is the only native service on EC2. It handles SSL termination and routes traffic to the correct Docker container based on the URL path. The configuration file should be at:

/etc/nginx/sites-available/stagging.saltcollegeandresourcecentre.com.conf

Key routing rules (Nginx load-balances across replicas using upstream blocks):

URL Path | Upstream | Replicas
/ | Static files at /var/www/html/ | Nginx direct
/SaltElearning/ | frontend_backend | localhost:8081, localhost:8082
/SaltELearnAppApi/ | backend | localhost:8091, localhost:8092

Example Nginx upstream configuration:

upstream backend {
    server localhost:8091;
    server localhost:8092;
}

upstream frontend_backend {
    server localhost:8081;
    server localhost:8082;
}
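Matching location blocks route each path prefix to its upstream; the blocks below are an illustrative sketch (the proxy_set_header lines are conventional additions, not necessarily present in the server's actual config):

```nginx
location /SaltELearnAppApi/ {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /SaltElearning/ {
    proxy_pass http://frontend_backend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```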

After any Nginx config change:

# Test the configuration
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

13. Step 9 — Configure GitHub Secrets for CI/CD

In both GitHub repositories (Salvation-Army-Backend-main and SaltElearning), go to Settings → Secrets and variables → Actions and add these secrets:

Secret Name | Value | Description
EC2_SSH_PRIVATE_KEY | (your SSH private key) | The full contents of the private key file used to SSH into EC2
EC2_HOST | 3.13.144.24 | Staging server IP address
EC2_USERNAME | ubuntu | SSH username for the EC2 instance
For production: Create a separate repository environment called "production" with the production server IP (3.13.155.123).

14. Step 10 — Verify Deployment

Run these commands on the EC2 server to verify everything is working:

# 1. Check all containers are running
docker ps

# 2. Test infrastructure containers
docker exec salt-postgres pg_isready -U postgres -d salvation
docker exec salt-redis redis-cli ping

# 3. Test backend replicas
curl http://localhost:8091/SaltELearnAppApi/common/is_online
curl http://localhost:8092/SaltELearnAppApi/common/is_online

# 4. Test web admin replicas
curl -s -o /dev/null -w "%{http_code}" http://localhost:8081
curl -s -o /dev/null -w "%{http_code}" http://localhost:8082

# 5. Test through Nginx (public)
curl -s -o /dev/null -w "%{http_code}" https://stagging.saltcollegeandresourcecentre.com/
curl -s -o /dev/null -w "%{http_code}" https://stagging.saltcollegeandresourcecentre.com/SaltELearnAppApi/common/is_online

# 6. Check Nginx (the only native service)
systemctl status nginx
Expected results: All Docker containers show "Up", all curl commands return HTTP 200, Nginx shows "active (running)".

15. CI/CD Pipeline — Automated Testing & Deployment

Both repositories have GitHub Actions workflows (.github/workflows/deploy.yml) that automate testing and deployment. Deployment only proceeds if all tests pass.

No Docker Hub needed. Tests run on GitHub Actions runners (free for public repos). Deployment SSHes to EC2 and builds Docker images locally. The only requirement on EC2 is Docker + the cloned repos.

Pipeline Flow

Push to main → Run Automated Tests → All Tests Pass? → SSH to EC2 → git pull → docker build → docker run

What Triggers What?

Event | What Happens | Deploys?
Push to main | Runs tests → if pass, deploys to staging | Yes (staging)
Push to production | Runs tests → if pass, deploys to production | Yes (production)
Pull Request to main | Runs tests only — no deployment | No
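A condensed sketch of what such a deploy.yml can look like; the job names, the test command, and the appleboy/ssh-action choice are illustrative assumptions, not a copy of the repositories' actual workflow:

```yaml
# Sketch of .github/workflows/deploy.yml (backend flavor; details are assumed)
name: Deploy
on:
  push:
    branches: [main, production]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: ./mvnw -q test

  deploy:
    needs: test
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USERNAME }}
          key: ${{ secrets.EC2_SSH_PRIVATE_KEY }}
          script: |
            cd /opt/salvation/backend
            git pull origin main
            docker build -t salt-backend:latest .
```

Because deploy depends on test via needs, a failing test run blocks deployment entirely, which matches the triggers table above.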

Backend Pipeline — Spring Boot (Salvation-Army-Backend-main)

Workflow file: .github/workflows/deploy.yml

Automated Testing (runs on GitHub Actions runner):

Automated Deployment (after all tests pass):

  1. SSH into EC2 using EC2_SSH_PRIVATE_KEY secret
  2. cd /opt/salvation/backend
  3. git pull origin main (or production)
  4. docker build -t salt-backend:latest . — Maven compiles Spring Boot inside Docker (JDK 21 Alpine)
  5. Stop & remove old container: docker stop salt-backend && docker rm salt-backend
  6. Ensure Docker network: docker network create salt-network 2>/dev/null || true
  7. Start new container:
docker run -d \
  --name salt-backend \
  --restart unless-stopped \
  --network salt-network \
  --env-file .env \
  -e TZ=Africa/Nairobi \
  -p 8091:8091 \
  -v /opt/salt_files:/opt/salt_files \
  -v /opt/salvation/backend/logs:/app/logs \
  salt-backend:latest
  8. Health check: wait 15 seconds, verify container is running
  9. Clean up: docker image prune -f
On failure: The pipeline captures container logs (200 lines), container status, Docker image info, and disk space — all visible in the GitHub Actions run output.

Web Admin Pipeline — Next.js (SaltElearning)

Workflow file: .github/workflows/deploy.yml

Automated Testing (runs on GitHub Actions runner):

Automated Deployment (after all tests pass):

  1. SSH into EC2 using EC2_SSH_PRIVATE_KEY secret
  2. cd /opt/salvation/frontend
  3. git pull origin main (or production)
  4. Ensure Docker network: docker network create salt-network 2>/dev/null || true
  5. Build and start both replicas:
docker compose up -d --build
# Builds the image once and starts:
#   salt-web-admin-1 on port 8081
#   salt-web-admin-2 on port 8082
  6. Copy docs to Nginx (NOT Docker — served as static files):
sudo cp -r docs/* /var/www/html/
sudo chown -R www-data:www-data /var/www/html/
  7. Health check: wait 10 seconds, verify both containers are running
  8. Clean up: docker image prune -f
On failure: The pipeline captures container logs (200 lines), container status, and disk space.

Failure Logging

If a deployment fails, the pipeline automatically captures:

  - The last 200 lines of container logs
  - Container status (docker ps output)
  - Docker image info
  - Available disk space

These logs are visible in the GitHub Actions run output.

16. Port Reference

Port | Service | Type | Access
443 | Nginx HTTPS | Native (systemd) | Public (SSL)
80 | Nginx HTTP (redirects to 443) | Native (systemd) | Public
8091 | Spring Boot Backend — Replica 1 | Docker | Internal only (via Nginx)
8092 | Spring Boot Backend — Replica 2 | Docker | Internal only (via Nginx)
8081 | Next.js Web Admin — Replica 1 | Docker | Internal only (via Nginx)
8082 | Next.js Web Admin — Replica 2 | Docker | Internal only (via Nginx)
5432 | PostgreSQL 17 | Docker | Internal only (salt-network)
6379 | Redis 7 | Docker | Internal only (salt-network)
9000 | MinIO S3 API | Docker | Internal only (salt-network)
9001 | MinIO Web Console | Docker | Internal only (salt-network)
Firewall: Only ports 80 and 443 should be open in the EC2 security group for public access. All other ports are accessed internally by Docker containers on salt-network.

17. Container Management

View Container Status

docker ps                                    # Running containers
docker ps -a                                 # All containers (including stopped)

View Logs

docker logs --tail 100 salt-backend-1       # Backend replica 1
docker logs --tail 100 salt-backend-2       # Backend replica 2
docker logs --tail 100 salt-web-admin-1     # Web admin replica 1
docker logs --tail 100 salt-web-admin-2     # Web admin replica 2
docker logs --tail 100 salt-postgres        # PostgreSQL
docker logs -f salt-backend-1               # Follow logs in real-time

Restart a Container

docker restart salt-backend-1 salt-backend-2
docker restart salt-web-admin-1 salt-web-admin-2
docker restart salt-postgres

Manual Redeploy

# Backend (both replicas)
cd /opt/salvation/backend
git pull origin main
docker compose up -d --build
# Rebuilds and restarts backend-1 (:8091) and backend-2 (:8092)

# Web Admin (both replicas)
cd /opt/salvation/frontend
git pull origin main
docker compose up -d --build
# Rebuilds and restarts web-admin-1 (:8081) and web-admin-2 (:8082)

# Copy docs
sudo cp -r docs/* /var/www/html/

Rollback to Previous Version

# The deploy scripts tag the current image as :previous before each deployment
# To roll back:
docker stop salt-backend && docker rm salt-backend
docker run -d --name salt-backend --restart unless-stopped \
  --network salt-network --env-file .env \
  -e TZ=Africa/Nairobi \
  -p 8091:8091 -v /opt/salt_files:/opt/salt_files \
  -v /opt/salvation/backend/logs:/app/logs salt-backend:previous

Clean Up Disk Space

docker image prune -f                        # Remove unused images
docker system prune -f                       # Remove all unused Docker objects

Data Persistence & Volumes

All persistent data is stored in Docker named volumes or host bind mounts. Data survives container restarts, redeployments, and image rebuilds.

Data | Storage Method | Volume / Path | Backed Up?
PostgreSQL database | Docker named volume | postgres_data | Via docker exec salt-postgres pg_dump to /opt/backups/
Redis cache | Docker named volume | redis_data | Ephemeral (cache only)
MinIO files | Docker named volume | minio_data | Manual backup
Legacy files | Host bind mount | /opt/salt_files/ | Manual backup
Backend logs | Host bind mount | /opt/salvation/backend/logs/ | Rotated automatically
Static docs | Nginx static files | /var/www/html/ | Redeployed from Git

Database Backup

# Create a backup (from the PostgreSQL container)
docker exec salt-postgres pg_dump -U postgres -d salvation -F p \
  > /opt/backups/salvation_$(date +%Y%m%d).sql

# Restore from backup
cat /opt/backups/salvation_20260220.sql | \
  docker exec -i salt-postgres psql -U postgres -d salvation

# Automated backup (add to crontab)
# Run daily at 2 AM:
# 0 2 * * * docker exec salt-postgres pg_dump -U postgres -d salvation -F p > /opt/backups/salvation_$(date +\%Y\%m\%d).sql
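Timestamped dumps accumulate, so a retention sweep pairs naturally with the cron job above. A small sketch; the 14-day window is an assumption, adjust it to your backup policy:

```shell
# Delete backups older than N days (default 14; retention window is an assumption)
prune_backups() {
  local dir="$1" days="${2:-14}"
  find "$dir" -maxdepth 1 -name 'salvation_*.sql' -mtime +"$days" -delete
}

# Example: prune_backups /opt/backups 14
```

This could run from the same crontab, shortly after the nightly dump completes.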

18. Docker Compose Structure

The platform uses 3 separate Docker Compose files, each managing its own service independently:

File | Repository | Services | Volumes | Purpose
docker-compose.infra.yml | Backend (/opt/salvation/backend/) | PostgreSQL 17, Redis 7, MinIO | postgres_data, redis_data, minio_data + salvation.sql init | Shared infrastructure
docker-compose.yml | Backend (/opt/salvation/backend/) | Spring Boot backend (2 replicas: :8091, :8092) | backend_logs_1, backend_logs_2, salt_files | Backend API service
docker-compose.yml | Frontend (/opt/salvation/frontend/) | Next.js web admin (2 replicas: :8081, :8082) | None (stateless) | Frontend web application

All three share a Docker network called salt-network so containers can communicate by name (e.g., the backend connects to salt-postgres:5432).

Docker Compose Volumes

Volume | Defined In | Container Mount | Purpose
postgres_data | docker-compose.infra.yml | /var/lib/postgresql/data | PostgreSQL database files
redis_data | docker-compose.infra.yml | /data | Redis persistent data
minio_data | docker-compose.infra.yml | /data | MinIO file storage
backend_logs_1 | docker-compose.yml (backend) | /app/logs | Application log files (Replica 1)
backend_logs_2 | docker-compose.yml (backend) | /app/logs | Application log files (Replica 2)
salt_files | docker-compose.yml (backend) | /opt/salt_files | Legacy uploaded files (reading materials, assignments)

Docker Compose does NOT auto-load salvation.sql via docker-entrypoint-initdb.d. Instead, the database is initialized once using the setup script:

# First-time database setup (run once after starting the PostgreSQL container)
bash deploy/setup-db.sh --staging

Place salvation.sql in the backend directory (/opt/salvation/backend/) before running the setup script. The script creates the salvation database and loads the SQL dump. Subsequent container restarts skip this step because the postgres_data volume already has data.

Startup Order

# 1. Start infrastructure first (PostgreSQL, Redis, MinIO)
cd /opt/salvation/backend
docker compose -f docker-compose.infra.yml up -d

# 2. Start backend (connects to infrastructure via salt-network)
docker compose up -d

# 3. Start frontend
cd /opt/salvation/frontend
docker compose up -d

# Check all containers
docker ps
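Startup ordering can also be encoded inside a Compose file with a healthcheck and depends_on, so a dependent service waits until PostgreSQL actually accepts connections. Note this only works when both services live in the same Compose project; here infrastructure and backend are in separate files, so the manual ordering above is still required. A sketch under that assumption (names and intervals are illustrative):

```yaml
# Sketch: gate startup on a healthy PostgreSQL container (single-project case only)
services:
  postgres:
    image: postgres:17
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10

  backend-1:
    image: salt-backend:latest
    depends_on:
      postgres:
        condition: service_healthy   # start only after pg_isready succeeds
```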

Rebuild a Single Service

# Rebuild and restart only the backend
cd /opt/salvation/backend
docker compose up -d --build

# Rebuild and restart only the frontend
cd /opt/salvation/frontend
docker compose up -d --build

Timezone: All containers are configured with TZ=Africa/Nairobi (East Africa Time, UTC+3). Timestamps in logs, database records, and application output use EAT.

19. Docker Networking — All Containers on Same Network

All containers must be on the same Docker network (salt-network) so they can communicate with each other by container name. This applies whether containers are started via docker compose or individual docker run commands.

Create the Network

The network is created automatically by docker-compose.infra.yml when you start infrastructure. You can also create it manually (safe to run multiple times):

# Create the shared bridge network (idempotent)
docker network create salt-network 2>/dev/null || true

Why All Containers Must Share a Network

Since all services (PostgreSQL, Redis, MinIO, Backend, Frontend) run as Docker containers, they must share a network to communicate. Docker DNS resolves container names automatically within the same network.

| Connection | From | To | URL |
|------------|------|----|----|
| Database | salt-backend-1 / salt-backend-2 | salt-postgres | jdbc:postgresql://salt-postgres:5432/salvation |
| Cache | salt-backend-1 / salt-backend-2 | salt-redis | salt-redis:6379 |
| File Storage | salt-backend-1 / salt-backend-2 | salt-minio | http://salt-minio:9000 |
| API Calls | salt-web-admin-1 / salt-web-admin-2 | salt-backend-1 / salt-backend-2 | http://salt-backend-1:8091 (or http://salt-backend-2:8092) |

Container Names on the Network

| Container Name | Service | Internal Port |
|----------------|---------|---------------|
| salt-postgres | PostgreSQL 17 | 5432 |
| salt-redis | Redis 7 | 6379 |
| salt-minio | MinIO S3 | 9000 (API), 9001 (Console) |
| salt-backend-1 | Spring Boot API (Replica 1) | 8091 |
| salt-backend-2 | Spring Boot API (Replica 2) | 8092 |
| salt-web-admin-1 | Next.js Web Admin (Replica 1) | 8081 |
| salt-web-admin-2 | Next.js Web Admin (Replica 2) | 8082 |

Verify Network Connectivity

# List all containers on salt-network
docker network inspect salt-network --format '{{range .Containers}}{{.Name}} {{end}}'

# Test connectivity between containers
docker exec salt-backend-1 ping -c 2 salt-postgres
docker exec salt-backend-1 ping -c 2 salt-redis
docker exec salt-web-admin-1 ping -c 2 salt-backend-1

# Check which networks a container is connected to
docker inspect salt-backend-1 --format '{{range $k,$v := .NetworkSettings.Networks}}{{$k}} {{end}}'

20. Docker Swarm — Scaling Containers

Docker Swarm enables scaling, load balancing, and high availability for the platform services. It is built into Docker — no additional software needed.

Initialize Docker Swarm

# On the EC2 server (manager node)
docker swarm init

# Note the join token for adding worker nodes later:
docker swarm join-token worker

Prepare Docker Compose for Swarm

The backend docker-compose.yml defines 2 named replicas with distinct ports:

| Service | Container Name | Host Port | Container Port |
|---------|----------------|-----------|----------------|
| backend-1 | salt-backend-1 | 8091 | 8091 |
| backend-2 | salt-backend-2 | 8092 | 8092 |

Each replica runs the same Spring Boot image but listens on a different port via the SERVER_PORT environment variable. Nginx load-balances traffic across both.
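The per-replica port override can be sketched in Compose terms like this (the image name and exact keys are assumptions about the actual file, not copied from it):

```yaml
# Sketch: two replicas of the same image, each listening on its own port
services:
  backend-1:
    image: salt-backend:latest
    environment:
      - SERVER_PORT=8091   # Spring Boot reads this and overrides server.port
    ports:
      - "8091:8091"

  backend-2:
    image: salt-backend:latest
    environment:
      - SERVER_PORT=8092
    ports:
      - "8092:8092"
```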

Before deploying with Docker Swarm, remove the container_name directives (Swarm manages naming dynamically):

# Step 1: Remove fixed container_name lines from docker-compose.yml
# Swarm does not support container_name with service replicas
sed -i '/container_name: salt-backend/d' docker-compose.yml

# Step 2: Build images and start both backend services
# --build forces a fresh image build from the Dockerfile before starting
docker compose up -d --build
| Command | What It Does | Why It's Needed |
|---------|--------------|-----------------|
| sed -i '/container_name: salt-backend/d' | Deletes every line containing container_name: salt-backend from the compose file (matches both salt-backend-1 and salt-backend-2) | Swarm assigns container names dynamically; fixed names conflict with its service naming |
| docker compose up -d --build | Builds the Docker image from the Dockerfile, then starts both backend-1 and backend-2 in detached mode | Combines the build and start steps into one command, ensuring the running containers use the latest code |
How it works: The Dockerfile sets ENV SERVER_PORT=8091 as a default. Each compose service overrides this with its own SERVER_PORT value (8091 or 8092). Spring Boot picks up the environment variable automatically, overriding the server.port=8091 in application-staging.properties.
Verify after editing: Run cat docker-compose.yml to confirm the container_name lines were removed, then check both services are healthy: docker ps should show two backend containers on ports 8091 and 8092.
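The sed edit can be rehearsed safely against a throwaway copy before touching the real compose file (the file content below is illustrative):

```shell
# Sketch: rehearse the container_name removal on a throwaway file
cat > /tmp/compose-demo.yml <<'EOF'
services:
  backend-1:
    container_name: salt-backend-1
    image: salt-backend:latest
  backend-2:
    container_name: salt-backend-2
    image: salt-backend:latest
EOF

# Same pattern used on the real file: delete every matching line
sed -i '/container_name: salt-backend/d' /tmp/compose-demo.yml

# Verify: no fixed names remain, but the services themselves are untouched
! grep -q 'container_name' /tmp/compose-demo.yml && echo "no container_name lines remain"
```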

Create the Shared Network

# Create an overlay network for Swarm services
docker network create --driver overlay --attachable salt-network

Deploy Infrastructure Stack

# From backend directory
cd /opt/salvation/backend
docker stack deploy -c docker-compose.infra.yml salt-infra

# Check services
docker stack services salt-infra

Deploy Backend Stack

cd /opt/salvation/backend
docker stack deploy -c docker-compose.yml salt-backend

# 2 services: backend-1 (:8091) and backend-2 (:8092)
docker stack services salt-backend

Deploy Frontend Stack

cd /opt/salvation/frontend
docker stack deploy -c docker-compose.yml salt-frontend

# 2 services: web-admin-1 (:8081) and web-admin-2 (:8082)
docker stack services salt-frontend

Scaling Reference

| Service | Default Replicas | Scalable? | Notes |
|---------|------------------|-----------|-------|
| PostgreSQL | 1 | No (single instance) | Stateful; requires Patroni/pgpool for clustering |
| Redis | 1 | No (single instance) | Stateful; requires Redis Sentinel or Cluster |
| MinIO | 1 | No (single instance) | Stateful; requires distributed mode for scaling |
| Backend (Spring Boot) | 2 (ports 8091, 8092) | Yes | Stateless API; each replica on a distinct port, Nginx load-balances |
| Frontend (Next.js) | 2 (ports 8081, 8082) | Yes | Stateless web app; each replica on a distinct port, Nginx load-balances |
Important: When scaling the backend to multiple replicas, Nginx must be configured for load balancing (round-robin is the default). Each replica shares the same /opt/salt_files volume and connects to the same PostgreSQL container.
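A minimal sketch of the round-robin upstream configuration this implies (the upstream name and server_name are placeholders; the real Nginx config already installed on the server may differ):

```nginx
# Sketch: round-robin load balancing across the two backend replicas
upstream salt_backend {
    server 127.0.0.1:8091;   # salt-backend-1
    server 127.0.0.1:8092;   # salt-backend-2
}

server {
    listen 80;
    server_name api.example.com;   # placeholder

    location / {
        proxy_pass http://salt_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```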

Useful Swarm Commands

# View all stacks
docker stack ls

# View services in a stack
docker stack services salt-backend

# View tasks (containers) for a service
docker service ps salt-backend_backend-1

# View logs for a service
docker service logs salt-backend_backend-1 --tail 50

# Update a service (rolling update)
docker service update --image salt-backend:latest salt-backend_backend-1

# Remove a stack
docker stack rm salt-backend

Rolling Updates

Docker Swarm performs rolling updates by default. When you rebuild an image and update the service, it replaces containers one at a time with zero downtime:

# Rebuild the image
cd /opt/salvation/backend
docker build -t salt-backend:latest .

# Update each running service with the new image
docker service update --image salt-backend:latest salt-backend_backend-1
docker service update --image salt-backend:latest salt-backend_backend-2
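Rolling-update behavior can be tuned per service in the compose file's deploy section, which Swarm honors on docker stack deploy. A sketch (the values are illustrative, not taken from the actual file):

```yaml
# Sketch: tune Swarm rolling updates for a backend service
services:
  backend-1:
    image: salt-backend:latest
    deploy:
      replicas: 1
      update_config:
        parallelism: 1       # replace one task at a time
        delay: 10s           # wait between replacements
        order: start-first   # start the new task before stopping the old one
      restart_policy:
        condition: on-failure
```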

21. Troubleshooting

Container won’t start

# Check the logs for errors
docker logs --tail 200 salt-backend-1

# Common issues:
# - "Connection refused" to PostgreSQL → check salt-postgres container is running
# - "Unable to connect to Redis" → check salt-redis container is running
# - Port already in use → check if another process is using the port:
ss -tlnp | grep 8091

Backend can’t connect to PostgreSQL

# 1. Verify salt-postgres container is running
docker ps --filter "name=salt-postgres"

# 2. Verify both containers are on the same network
docker network inspect salt-network --format '{{range .Containers}}{{.Name}} {{end}}'

# 3. Test connectivity from backend to PostgreSQL
docker exec salt-backend-1 ping -c 2 salt-postgres

# 4. Verify .env has the correct container name
# Should be: SPRING_DATASOURCE_URL=jdbc:postgresql://salt-postgres:5432/salvation
# NOT: localhost or host.docker.internal

# 5. Check PostgreSQL logs
docker logs --tail 50 salt-postgres

Docker build fails

# Check disk space
df -h /

# Free up space
docker system prune -f

# Check Docker daemon
systemctl status docker

Nginx returns 502 Bad Gateway

# The upstream service (Docker container) is not running
docker ps  # Check if the container is running
docker logs salt-backend-1  # Check for startup errors

# Check Nginx error log
sudo tail -50 /var/log/nginx/error.log

GitHub Actions deployment fails

Check the GitHub Actions run output for detailed logs. The pipeline captures failure logs automatically, including container status, application logs, and disk space.

# Manually check on EC2 after a failed deploy
docker ps -a
docker logs --tail 200 salt-backend-1
df -h /
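Inside the workflow, the same diagnostics can be captured automatically with a step that runs only on failure. A sketch, assuming the deploy job executes commands on the EC2 host (the step name and structure are assumptions, not taken from the actual workflow):

```yaml
# Sketch: collect diagnostics when a previous deploy step fails
- name: Capture failure diagnostics
  if: failure()              # run only when an earlier step in the job failed
  run: |
    docker ps -a
    docker logs --tail 200 salt-backend-1 || true
    df -h /
```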

Infrastructure containers not starting

# Check infrastructure container status
docker compose -f /opt/salvation/backend/docker-compose.infra.yml ps

# Restart infrastructure
docker compose -f /opt/salvation/backend/docker-compose.infra.yml down
docker compose -f /opt/salvation/backend/docker-compose.infra.yml up -d

# Check individual container logs
docker logs salt-postgres
docker logs salt-redis
docker logs salt-minio