EC2 Server Setup, Docker Containers & CI/CD Pipeline
| Field | Value |
|---|---|
| Version | 1.0 |
| Last Updated | 20th February 2026 |
| Environment | Ubuntu 22.04 LTS on AWS EC2 |
| Audience | System Administrators & DevOps |
| Developed By | Sumba Group Limited |
The College E-Learning Platform uses a fully containerized deployment on AWS EC2. All application services and infrastructure run as Docker containers, with Nginx as the only native service handling SSL termination and reverse proxying.
| Service | Method | Port | Container / Location |
|---|---|---|---|
| Backend Replica 1 (Spring Boot) | Docker container | 8091 | salt-backend-1 |
| Backend Replica 2 (Spring Boot) | Docker container | 8092 | salt-backend-2 |
| Web Admin Replica 1 (Next.js) | Docker container | 8081 | salt-web-admin-1 |
| Web Admin Replica 2 (Next.js) | Docker container | 8082 | salt-web-admin-2 |
| PostgreSQL 17 | Docker container | 5432 | salt-postgres |
| Redis 7 | Docker container | 6379 | salt-redis |
| MinIO (S3 file storage) | Docker container | 9000/9001 | salt-minio |
| Nginx | Native (systemd) | 80/443 | System service (already installed on EC2) |
| Docs (static HTML) | Nginx static files | 80/443 | /var/www/html/ |
All containers share a single Docker bridge network (salt-network). The deploy/ folder in the backend repository contains scripts that automate most of these steps:
| Script | Purpose |
|---|---|
| deploy/setup-ec2.sh | One-time EC2 setup (Docker, Nginx, Git, directories, Docker network) |
| deploy/deploy-backend.sh | Deploy backend: SSH → git pull → docker build → docker run |
| deploy/deploy-web-admin.sh | Deploy web admin: SSH → git pull → docker build → docker run → copy docs |
| deploy/deploy-all.sh | Deploy both backend and web admin in one command |
| deploy/health-check.sh | Check status of all services (Docker containers, Nginx, ports, HTTP) |
| deploy/rollback.sh | Roll back to previous Docker image version |
| deploy/view-logs.sh | View logs for backend, web admin, or Nginx |
| deploy/backup-db.sh | Create timestamped PostgreSQL backup |
| deploy/restore-db.sh | Restore database from a backup file |
| deploy/ssl-setup.sh | Install Let’s Encrypt SSL certificate with Certbot |
| deploy/docker-dev.sh | Local development: start/stop Docker Compose services |
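As an illustration of the flow these scripts encode, here is a hedged sketch of what deploy-backend.sh might look like. The function names and the guard are invented; the docker run flags and the :previous tagging follow the descriptions elsewhere in this guide. The real script in deploy/ is authoritative.

```shell
#!/usr/bin/env bash
# Sketch only: illustrates the SSH → git pull → docker build → docker run flow.
set -euo pipefail

APP_DIR="${APP_DIR:-/opt/salvation/backend}"
IMAGE="salt-backend"

deploy_backend() {
  cd "$APP_DIR"
  git pull origin main
  # Keep the current image as :previous so rollback.sh can restore it
  docker tag "$IMAGE:latest" "$IMAGE:previous" || true
  docker build -t "$IMAGE:latest" .
  docker stop "$IMAGE" || true
  docker rm "$IMAGE" || true
  docker run -d --name "$IMAGE" --restart unless-stopped \
    --network salt-network --env-file .env \
    -e TZ=Africa/Nairobi -p 8091:8091 \
    -v /opt/salt_files:/opt/salt_files \
    -v "$APP_DIR/logs:/app/logs" \
    "$IMAGE:latest"
  docker image prune -f
}

# Guarded so the function can be sourced without side effects
if [[ "${1:-}" == "--run" ]]; then
  deploy_backend
fi
```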
Request Flow
Browser / Mobile App → Nginx (443/80) → /SaltELearnAppApi → Backend (8091, 8092)
┌─────────────────────────────────────────┐
│ AWS EC2 Instance │
│ │
Internet ──443──▶ │ Nginx (native, SSL termination) │
│ ├── / → /var/www/html│
│ ├── /SaltElearning/ → :8081,:8082 │
│ └── /SaltELearnAppApi/ → :8091,:8092 │
│ │
│ ┌── Docker (salt-network) ────────────────────────┐ │
│ │ │ │
│ │ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ │backend-1 │ │backend-2 │ │web-admin-1│ │web-admin-2│ │
│ │ │ :8091 │ │ :8092 │ │ :8081 │ │ :8082 │ │
│ │ │Spring Boot│ │Spring Boot│ │ Next.js │ │ Next.js │ │
│ │ └─────┬─────┘ └─────┬─────┘ └───────────┘ └───────────┘ │
│ │ │ │ │
│ │ ┌──────▼──────┐ ┌──────┐ ┌──────┐ │ │
│ │ │salt-postgres│ │salt- │ │salt- │ │ │
│ │ │ :5432 │ │redis │ │minio │ │ │
│ │ │PostgreSQL 17│ │:6379 │ │:9000 │ │ │
│ │ └─────────────┘ └──────┘ └──────┘ │ │
│ └────────────────────────────────────┘ │
└─────────────────────────────────────────┘
| Service | Status | Verify Command |
|---|---|---|
| Docker | Installed | docker --version |
| Docker Compose | Installed (bundled with Docker) | docker compose version |
| Nginx | Running (native systemd) | systemctl status nginx |
| SSL Certificate | Configured | certbot certificates |
| Git | Installed | git --version |
PostgreSQL, Redis, and MinIO are defined in docker-compose.infra.yml; only Docker, Nginx, and Git are required on the host. Docker runs all services (backend, web admin, PostgreSQL, Redis, MinIO) as containers on EC2. Nginx is the only native service: it handles SSL termination and reverse proxying to the Docker containers.
# Check Docker is installed and running
docker --version
systemctl status docker
# Ensure the ubuntu user can run Docker without sudo
docker ps
# If "permission denied", add ubuntu to the docker group:
sudo usermod -aG docker ubuntu
# Then log out and log back in for the change to take effect
The CI/CD pipeline connects over SSH as the ubuntu user and runs docker build and docker run commands. The ubuntu user must be in the docker group to run these commands without sudo.
# Verify ubuntu is in the docker group
groups ubuntu
# Should show: ubuntu : ubuntu docker
# If not, add it:
sudo usermod -aG docker ubuntu
# Log out and back in, then verify
docker ps # Should work without sudo
Docker images accumulate over time with each deployment. Monitor disk space:
# Check Docker disk usage
docker system df
# Clean up unused images and containers
docker image prune -f # Remove dangling images
docker system prune -f # Remove all unused objects
Run docker image prune -f after each deployment to keep disk usage in check.
The backend Dockerfile uses a multi-stage build (no pre-installed Maven or JDK required on EC2):
- Build stage: eclipse-temurin:21-jdk-alpine compiles the Spring Boot source code with Maven inside the container and produces a JAR file.
- Runtime stage: eclipse-temurin:21-jre-alpine (a smaller image) runs the compiled JAR.
Maven, the JDK, and all build tools are inside the Docker image; nothing needs to be installed on the EC2 host beyond Docker itself.
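The two stages could be sketched as a Dockerfile like the following. The base images and stage roles come from this guide; the repository layout (mvnw wrapper, target/*.jar) is assumed, and the Dockerfile in the backend repo is authoritative.

```dockerfile
# Sketch only: stage names and base images from this guide, file layout assumed.
# Stage 1: compile with Maven inside the container
FROM eclipse-temurin:21-jdk-alpine AS build
WORKDIR /app
COPY . .
RUN ./mvnw package -B -DskipTests

# Stage 2: run the JAR on the smaller JRE image
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENV SERVER_PORT=8091
EXPOSE 8091
ENTRYPOINT ["java", "-jar", "app.jar"]
```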
The web admin Dockerfile also uses a multi-stage build:
- Build stage: node:20-alpine installs npm dependencies and builds Next.js (standalone output).
- Runtime stage: node:20-alpine runs the standalone Next.js server on port 8081.
Node.js and npm are inside the Docker image; no Node.js installation is needed on EC2.
Create the directories where the application code and files will live:
sudo mkdir -p /opt/salvation/backend
sudo mkdir -p /opt/salvation/frontend
sudo mkdir -p /opt/salt_files
sudo mkdir -p /opt/backups
sudo chown -R ubuntu:ubuntu /opt/salvation /opt/salt_files /opt/backups
| Directory | Purpose |
|---|---|
| /opt/salvation/backend | Spring Boot source code (cloned from GitHub) |
| /opt/salvation/frontend | Next.js web admin source code (cloned from GitHub) |
| /opt/salt_files | Legacy file storage (assignments, materials, uploads) |
| /opt/backups | Database backup files |
The College E-Learning platform is split across 3 independent repositories under the SGL2024 GitHub organization:
| Repository | GitHub URL | EC2 Clone Path | Contents |
|---|---|---|---|
| Backend | https://github.com/SGL2024/Salvation-Army-Backend-main.git | /opt/salvation/backend | Spring Boot API, Dockerfile, deploy scripts, infrastructure docker-compose, Flyway migrations |
| Frontend | https://github.com/SGL2024/SaltElearning.git | /opt/salvation/frontend | Next.js web admin, Dockerfile, documentation HTML files |
| Mobile | https://github.com/SGL2024/mobile.git | /opt/salvation/mobile | Flutter student app + executive dashboard |
If the repositories are private, you need a GitHub Personal Access Token (PAT):
Create a token with the repo scope (full control of private repositories).
# Configure Git to cache credentials (so you don't re-enter the token every time)
git config --global credential.helper store
# Clone backend repository
cd /opt/salvation/backend
git clone https://github.com/SGL2024/Salvation-Army-Backend-main.git .
# When prompted: Username = your-github-username, Password = your-PAT-token
# Clone web admin repository
cd /opt/salvation/frontend
git clone https://github.com/SGL2024/SaltElearning.git .
# Clone mobile repository (optional — only needed for Flutter web builds)
mkdir -p /opt/salvation/mobile
cd /opt/salvation/mobile
git clone https://github.com/SGL2024/mobile.git .
If you prefer SSH authentication:
# 1. Generate an SSH key on the EC2 server (if not already done)
ssh-keygen -t ed25519 -C "ec2-salt-deploy"
# Press Enter to accept default path (~/.ssh/id_ed25519)
# 2. Display the public key
cat ~/.ssh/id_ed25519.pub
# 3. Add the public key to GitHub:
# Go to GitHub → Settings → SSH and GPG keys → New SSH key
# Paste the public key and save
# 4. Test the SSH connection
ssh -T git@github.com
# Should show: "Hi SGL2024! You've successfully authenticated..."
# 5. Clone using SSH URLs
cd /opt/salvation/backend
git clone git@github.com:SGL2024/Salvation-Army-Backend-main.git .
cd /opt/salvation/frontend
git clone git@github.com:SGL2024/SaltElearning.git .
mkdir -p /opt/salvation/mobile
cd /opt/salvation/mobile
git clone git@github.com:SGL2024/mobile.git .
# Check that each directory has the code
ls /opt/salvation/backend/pom.xml # Should exist (Spring Boot)
ls /opt/salvation/frontend/package.json # Should exist (Next.js)
ls /opt/salvation/mobile/pubspec.yaml # Should exist (Flutter)
# Verify Git remotes
cd /opt/salvation/backend && git remote -v
cd /opt/salvation/frontend && git remote -v
Run git pull origin main to update the code before each deployment. Make sure the ubuntu user has read access to the repositories.
Create the .env file for the backend container:
nano /opt/salvation/backend/.env
Add the following contents:
SPRING_DATASOURCE_URL=jdbc:postgresql://salt-postgres:5432/salvation
SPRING_DATASOURCE_USERNAME=postgres
SPRING_DATASOURCE_PASSWORD=your_database_password
SPRING_REDIS_HOST=salt-redis
SPRING_REDIS_PORT=6379
Because all containers share salt-network, the backend connects to PostgreSQL and Redis using their container names (salt-postgres, salt-redis) instead of localhost or IP addresses. Docker DNS resolves these names automatically.
Never commit the .env file to Git. It is already listed in .gitignore.
PostgreSQL, Redis, and MinIO all run as Docker containers defined in docker-compose.infra.yml (located in the backend repository). This file creates the containers, volumes, and the shared salt-network.
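A docker-compose.infra.yml matching the containers, volumes, and network in this guide could be sketched roughly as follows. Image tags, container names, and volume mounts come from the tables in this document; everything else (environment variables, the MinIO command, published ports) is an assumption, and the file in the backend repo is authoritative.

```yaml
# Sketch of docker-compose.infra.yml; not the actual file from the repository.
services:
  salt-postgres:
    image: postgres:17
    container_name: salt-postgres
    environment:
      POSTGRES_DB: salvation
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks: [salt-network]

  salt-redis:
    image: redis:7-alpine
    container_name: salt-redis
    volumes:
      - redis_data:/data
    networks: [salt-network]

  salt-minio:
    image: minio/minio:latest
    container_name: salt-minio
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
    networks: [salt-network]

volumes:
  postgres_data:
  redis_data:
  minio_data:

networks:
  salt-network:
    name: salt-network
```

No host ports are published in this sketch: the backend reaches these services by container name over salt-network, and admin access goes through docker exec as shown later in this guide.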
Place the salvation.sql file in the backend directory so the database can be initialized from it during first-time setup:
# Copy salvation.sql to the backend directory
cp /path/to/salvation.sql /opt/salvation/backend/salvation.sql
The salvation.sql file contains the production database. Handle it with care and never expose it publicly. It is loaded only once, during first-time setup (when the postgres_data volume is empty).
cd /opt/salvation/backend
# Start PostgreSQL, Redis, and MinIO containers
docker compose -f docker-compose.infra.yml up -d
# Verify all three are running
docker compose -f docker-compose.infra.yml ps
# Check PostgreSQL is ready
docker exec salt-postgres pg_isready -U postgres -d salvation
# Check Redis is ready
docker exec salt-redis redis-cli ping
# Should return: PONG
# Check MinIO is ready
curl -s -o /dev/null -w "%{http_code}" http://localhost:9000/minio/health/live
# Should return: 200
| Container | Image | Port | Volume |
|---|---|---|---|
| salt-postgres | postgres:17 | 5432 | postgres_data → /var/lib/postgresql/data |
| salt-redis | redis:7-alpine | 6379 | redis_data → /data |
| salt-minio | minio/minio:latest | 9000, 9001 | minio_data → /data |
Flyway migrations will run automatically when the backend container starts (Step 5), applying any pending schema changes.
Legacy files remain at /opt/salt_files on disk. New uploads go to MinIO.
The platform uses Hibernate (JPA) as the ORM and Flyway for database migrations. Together they ensure that JPA entity definitions stay synchronized with the PostgreSQL database schema across all environments.
On first run, the database is initialized from salvation.sql (35 legacy tables). Subsequent restarts skip this step because the postgres_data volume already has data.
On startup, Hibernate validates the @Entity classes against the database schema. In production (ddl-auto=validate), startup fails if any entity field is missing from the DB.
| Profile | Setting | Behavior | Use Case |
|---|---|---|---|
| localhost | update | Auto-creates/alters tables from entity changes | Local development |
| staging | update | Auto-creates/alters tables from entity changes | Staging server |
| production | validate | Read-only check — fails if mismatch, never modifies DB | Production server |
| test | create | Fresh schema every test run (Testcontainers) | Unit/integration tests |
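In Spring configuration terms, the production row corresponds to properties along these lines. The property keys are standard Spring Boot/Hibernate keys; the exact contents of the repo's profile files are not shown in this guide, so treat this as a sketch.

```properties
# application-production.properties (sketch, not the repository's actual file)
# Hibernate may only validate the schema, never change it
spring.jpa.hibernate.ddl-auto=validate
# All schema changes are applied by Flyway migrations at startup
spring.flyway.enabled=true
```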
Production runs with ddl-auto=validate, so Hibernate never auto-modifies the production database; all schema changes must go through Flyway migrations. The workflow for a schema change is:
1. Change the entity and run the backend locally with the localhost profile; Hibernate auto-creates the new columns/tables in your local DB.
2. Export the schema diff:
mvn spring-boot:run -Dspring-boot.run.profiles=localhost,schema-export
3. This produces schema-diff.sql with the exact ALTER/CREATE statements needed.
4. Copy it into a new migration: cp schema-diff.sql src/main/resources/db/migration/V14__your_description.sql
# Check which Flyway migrations have been applied
docker exec salt-postgres psql -U postgres -d salvation \
-c "SELECT version, description, installed_on FROM flyway_schema_history ORDER BY installed_rank;"
# Connect to database shell
docker exec -it salt-postgres psql -U postgres -d salvation
# View a table's structure
docker exec salt-postgres psql -U postgres -d salvation -c "\d tbl_student"
# List all tables
docker exec salt-postgres psql -U postgres -d salvation -c "\dt"
# Check Hibernate validation in backend logs
docker logs salt-backend-1 2>&1 | grep -i "schema"
# Repair Flyway (if a migration file was modified after being applied)
docker exec salt-backend-1 java -jar app.jar --spring.flyway.repair=true
| Rule | Details |
|---|---|
| File naming | V{N}__{description}.sql (double underscore, next version number) |
| Next migration | V14__... (V1–V13 are completed) |
| Never modify | Never edit an existing migration file — always create a new one |
| Additive only | No DROP TABLE, no DROP COLUMN, no renames. Only ADD COLUMN IF NOT EXISTS |
| Idempotent | Use IF NOT EXISTS / IF EXISTS for safe re-runs |
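Applying those rules, a hypothetical next migration might read as follows. The V14 number comes from the table above; the phone_number column and file name are invented purely for illustration.

```sql
-- V14__add_student_phone.sql (illustrative example, not a real migration)
-- Additive and idempotent: ADD COLUMN IF NOT EXISTS, no drops or renames.
ALTER TABLE tbl_student
    ADD COLUMN IF NOT EXISTS phone_number VARCHAR(20);
```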
| Error | Cause | Fix |
|---|---|---|
| Schema-validation: missing column | Entity has a field that doesn't exist in DB | Create a Flyway migration with ALTER TABLE ADD COLUMN |
| Schema-validation: missing table | Entity references a table that doesn't exist | Create a Flyway migration with CREATE TABLE |
| Flyway checksum mismatch | A migration file was modified after being applied | Run flyway repair or restore original migration file |
| Backend won't start after entity change | Production validate mode rejects unmigrated changes | Add Flyway migration, rebuild, redeploy |
The Dockerfile uses a multi-stage build:
# Navigate to the backend directory
cd /opt/salvation/backend
# Build the Docker image (this compiles Spring Boot inside Docker)
docker build -t salt-backend:latest .
# Create logs directory on host
mkdir -p /opt/salvation/backend/logs
# Start the container
docker run -d \
--name salt-backend \
--restart unless-stopped \
--network salt-network \
--env-file .env \
-e TZ=Africa/Nairobi \
-p 8091:8091 \
-v /opt/salt_files:/opt/salt_files \
-v /opt/salvation/backend/logs:/app/logs \
salt-backend:latest
| Flag | Purpose |
|---|---|
| -d | Run in background (detached mode) |
| --restart unless-stopped | Auto-restart on crash or server reboot |
| --network salt-network | Join the shared Docker network (connects to salt-postgres, salt-redis, salt-minio) |
| --env-file .env | Load database and Redis connection settings |
| -e TZ=Africa/Nairobi | Set East Africa Time (UTC+3) inside container |
| -p 8091:8091 | Map port 8091 (host) to 8091 (container) |
| -v /opt/salt_files:... | Mount legacy file storage into container |
| -v .../logs:/app/logs | Persist application logs on host filesystem |
| Host Path | Container Path | Purpose |
|---|---|---|
| /opt/salt_files | /opt/salt_files | Student assignments, reading materials, uploaded files (shared with legacy system) |
| /opt/salvation/backend/logs | /app/logs | Spring Boot application logs (persist across container restarts) |
Wait about 15–20 seconds for Spring Boot to start, then verify:
# Check container is running
docker ps --filter "name=salt-backend"
# Check application logs
docker logs --tail 50 salt-backend
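Instead of a fixed sleep, a small polling helper can wait until the service responds. This helper is not part of the deploy scripts; it is a sketch, and the health URL shown in the comment is the one used elsewhere in this guide.

```shell
#!/usr/bin/env bash
# Hypothetical helper: poll a URL until it returns the expected HTTP status.
wait_for_http() {
  local url="$1" want="${2:-200}" timeout="${3:-60}" waited=0
  while (( waited < timeout )); do
    local code
    # curl prints only the status code; 000 on connection failure
    code=$(curl -s -o /dev/null -w "%{http_code}" "$url" || echo 000)
    if [[ "$code" == "$want" ]]; then
      echo "up after ${waited}s"
      return 0
    fi
    sleep 2
    (( waited += 2 ))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}

# Example usage (health endpoint from this guide):
# wait_for_http http://localhost:8091/SaltELearnAppApi/common/is_online 200 60
```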
The web admin runs as 2 replicas on distinct ports, using Docker Compose:
| Service | Container | Port |
|---|---|---|
| web-admin-1 | salt-web-admin-1 | 8081 |
| web-admin-2 | salt-web-admin-2 | 8082 |
The Dockerfile uses a multi-stage build:
The runtime stage listens on the port set by the PORT env var.
# Navigate to the web admin directory
cd /opt/salvation/frontend
# Build and start both replicas
docker compose up -d --build
# This builds the image once and starts two containers:
# salt-web-admin-1 on port 8081
# salt-web-admin-2 on port 8082
Each container runs the same Next.js image but listens on a different port via the PORT environment variable. Nginx load-balances traffic across both.
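That two-replica pattern could be sketched in the frontend docker-compose.yml like this. The service names, container names, ports, and PORT variable match this guide; the rest of the file (build context, network wiring) is assumed, and the file in the frontend repo is authoritative.

```yaml
# Sketch: two replicas of the same image, differing only in PORT
services:
  web-admin-1:
    build: .
    container_name: salt-web-admin-1
    environment:
      PORT: "8081"
    ports:
      - "8081:8081"
    networks: [salt-network]

  web-admin-2:
    build: .
    container_name: salt-web-admin-2
    environment:
      PORT: "8082"
    ports:
      - "8082:8082"
    networks: [salt-network]

networks:
  salt-network:
    external: true
```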
Verify:
docker ps --filter "name=salt-web-admin"
docker logs --tail 20 salt-web-admin-1
docker logs --tail 20 salt-web-admin-2
The documentation HTML files are served directly by Nginx from /var/www/html/:
sudo cp -r /opt/salvation/frontend/docs/* /var/www/html/
sudo chown -R www-data:www-data /var/www/html/
This copies all user guides, technical documentation, and the documentation hub index page to the Nginx web root.
Nginx is the only native service on EC2. It handles SSL termination and routes traffic to the correct Docker container based on the URL path. The configuration file should be at:
/etc/nginx/sites-available/stagging.saltcollegeandresourcecentre.com.conf
Key routing rules (Nginx load-balances across replicas using upstream blocks):
| URL Path | Upstream | Replicas |
|---|---|---|
| / | Static files at /var/www/html/ | Nginx direct |
| /SaltElearning/ | frontend_backend | localhost:8081, localhost:8082 |
| /SaltELearnAppApi/ | backend | localhost:8091, localhost:8092 |
Example Nginx upstream configuration:
upstream backend {
server localhost:8091;
server localhost:8092;
}
upstream frontend_backend {
server localhost:8081;
server localhost:8082;
}
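The upstreams above are wired to URL paths with location blocks along these lines. The paths, server name, and web root come from this guide; the proxy headers and overall server block shape are typical values, not copied from the live configuration.

```nginx
server {
    listen 443 ssl;
    server_name stagging.saltcollegeandresourcecentre.com;

    # Static docs served directly by Nginx
    root /var/www/html;

    location /SaltElearning/ {
        proxy_pass http://frontend_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /SaltELearnAppApi/ {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Certbot adds the ssl_certificate directives when deploy/ssl-setup.sh runs.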
After any Nginx config change:
# Test the configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
In both GitHub repositories (Salvation-Army-Backend-main and SaltElearning), go to Settings → Secrets and variables → Actions and add these secrets:
| Secret Name | Value | Description |
|---|---|---|
EC2_SSH_PRIVATE_KEY | (your SSH private key) | The full contents of the private key file used to SSH into EC2 |
EC2_HOST | 3.13.144.24 | Staging server IP address |
EC2_USERNAME | ubuntu | SSH username for the EC2 instance |
For production, use the production server IP instead (3.13.155.123).
Run these commands on the EC2 server to verify everything is working:
# 1. Check all containers are running
docker ps
# 2. Test infrastructure containers
docker exec salt-postgres pg_isready -U postgres -d salvation
docker exec salt-redis redis-cli ping
# 3. Test backend replicas
curl http://localhost:8091/SaltELearnAppApi/common/is_online
curl http://localhost:8092/SaltELearnAppApi/common/is_online
# 4. Test web admin replicas
curl -s -o /dev/null -w "%{http_code}" http://localhost:8081
curl -s -o /dev/null -w "%{http_code}" http://localhost:8082
# 5. Test through Nginx (public)
curl -s -o /dev/null -w "%{http_code}" https://stagging.saltcollegeandresourcecentre.com/
curl -s -o /dev/null -w "%{http_code}" https://stagging.saltcollegeandresourcecentre.com/SaltELearnAppApi/common/is_online
# 6. Check Nginx (the only native service)
systemctl status nginx
Both repositories have GitHub Actions workflows (.github/workflows/deploy.yml) that automate testing and deployment. Deployment only proceeds if all tests pass.
| Event | What Happens | Deploys? |
|---|---|---|
| Push to main | Runs tests → if pass, deploys to staging | Yes (staging) |
| Push to production | Runs tests → if pass, deploys to production | Yes (production) |
| Pull Request to main | Runs tests only — no deployment | No |
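A deploy.yml implementing that trigger table could be shaped like this. The triggers, secrets, paths, and commands come from this guide; the job layout, action versions, and SSH plumbing are illustrative assumptions, and the real workflow files live in each repository.

```yaml
# Sketch of .github/workflows/deploy.yml (backend variant)
name: Deploy
on:
  push:
    branches: [main, production]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "21"
      - run: ./mvnw test -B
      - run: ./mvnw verify -B -Dskip.unit.tests=true

  deploy:
    needs: test
    if: github.event_name == 'push'   # PRs run tests only
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        env:
          KEY: ${{ secrets.EC2_SSH_PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$KEY" > ~/.ssh/id_ed25519 && chmod 600 ~/.ssh/id_ed25519
          ssh -o StrictHostKeyChecking=no \
            ${{ secrets.EC2_USERNAME }}@${{ secrets.EC2_HOST }} \
            "cd /opt/salvation/backend && git pull origin ${{ github.ref_name }} \
             && docker build -t salt-backend:latest . && docker image prune -f"
```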
Backend Pipeline (Salvation-Army-Backend-main)
Workflow file: .github/workflows/deploy.yml
Automated Testing (runs on GitHub Actions runner):
- Unit tests: ./mvnw test -B
- Integration tests: ./mvnw verify -B -Dskip.unit.tests=true (uses Testcontainers with PostgreSQL 17 + Redis 7)
Automated Deployment (after all tests pass):
1. SSH into EC2 using the EC2_SSH_PRIVATE_KEY secret
2. cd /opt/salvation/backend
3. git pull origin main (or production)
4. docker build -t salt-backend:latest . (Maven compiles Spring Boot inside Docker, JDK 21 Alpine)
5. docker stop salt-backend && docker rm salt-backend
6. docker network create salt-network 2>/dev/null || true
7. Start the new container:
docker run -d \
--name salt-backend \
--restart unless-stopped \
--network salt-network \
--env-file .env \
-e TZ=Africa/Nairobi \
-p 8091:8091 \
-v /opt/salt_files:/opt/salt_files \
-v /opt/salvation/backend/logs:/app/logs \
salt-backend:latest
Finally, the workflow runs docker image prune -f to clean up old images.
Web Admin Pipeline (SaltElearning)
Workflow file: .github/workflows/deploy.yml
Automated Testing (runs on GitHub Actions runner):
- npm ci
- npm run lint
- npm run build (Next.js standalone output)
Automated Deployment (after all tests pass):
1. SSH into EC2 using the EC2_SSH_PRIVATE_KEY secret
2. cd /opt/salvation/frontend
3. git pull origin main (or production)
4. docker network create salt-network 2>/dev/null || true
5. Rebuild and restart both replicas:
docker compose up -d --build
# Builds the image once and starts:
# salt-web-admin-1 on port 8081
# salt-web-admin-2 on port 8082
sudo cp -r docs/* /var/www/html/
sudo chown -R www-data:www-data /var/www/html/
Finally, the workflow runs docker image prune -f to clean up old images.
If a deployment fails, the pipeline automatically captures diagnostic output (container status, application logs, and disk space). These logs are visible in the GitHub Actions run output.
| Port | Service | Type | Access |
|---|---|---|---|
| 443 | Nginx HTTPS | Native (systemd) | Public (SSL) |
| 80 | Nginx HTTP (redirects to 443) | Native (systemd) | Public |
| 8091 | Spring Boot Backend — Replica 1 | Docker | Internal only (via Nginx) |
| 8092 | Spring Boot Backend — Replica 2 | Docker | Internal only (via Nginx) |
| 8081 | Next.js Web Admin — Replica 1 | Docker | Internal only (via Nginx) |
| 8082 | Next.js Web Admin — Replica 2 | Docker | Internal only (via Nginx) |
| 5432 | PostgreSQL 17 | Docker | Internal only (salt-network) |
| 6379 | Redis 7 | Docker | Internal only (salt-network) |
| 9000 | MinIO S3 API | Docker | Internal only (salt-network) |
| 9001 | MinIO Web Console | Docker | Internal only (salt-network) |
Only ports 80 and 443 are exposed publicly; the infrastructure ports (5432, 6379, 9000, 9001) are reachable only from containers on salt-network.
docker ps # Running containers
docker ps -a # All containers (including stopped)
docker logs --tail 100 salt-backend-1 # Backend replica 1
docker logs --tail 100 salt-backend-2 # Backend replica 2
docker logs --tail 100 salt-web-admin-1 # Web admin replica 1
docker logs --tail 100 salt-web-admin-2 # Web admin replica 2
docker logs --tail 100 salt-postgres # PostgreSQL
docker logs -f salt-backend-1 # Follow logs in real-time
docker restart salt-backend-1 salt-backend-2
docker restart salt-web-admin-1 salt-web-admin-2
docker restart salt-postgres
# Backend (both replicas)
cd /opt/salvation/backend
git pull origin main
docker compose up -d --build
# Rebuilds and restarts backend-1 (:8091) and backend-2 (:8092)
# Web Admin (both replicas)
cd /opt/salvation/frontend
git pull origin main
docker compose up -d --build
# Rebuilds and restarts web-admin-1 (:8081) and web-admin-2 (:8082)
# Copy docs
sudo cp -r docs/* /var/www/html/
# The deploy scripts tag the current image as :previous before each deployment
# To roll back:
docker stop salt-backend && docker rm salt-backend
docker run -d --name salt-backend --restart unless-stopped \
--network salt-network --env-file .env \
-e TZ=Africa/Nairobi \
-p 8091:8091 -v /opt/salt_files:/opt/salt_files \
-v /opt/salvation/backend/logs:/app/logs salt-backend:previous
docker image prune -f # Remove unused images
docker system prune -f # Remove all unused Docker objects
All persistent data is stored in Docker named volumes or host bind mounts. Data survives container restarts, redeployments, and image rebuilds.
| Data | Storage Method | Volume / Path | Backed Up? |
|---|---|---|---|
| PostgreSQL database | Docker named volume | postgres_data | Via docker exec salt-postgres pg_dump to /opt/backups/ |
| Redis cache | Docker named volume | redis_data | Ephemeral (cache only) |
| MinIO files | Docker named volume | minio_data | Manual backup |
| Legacy files | Host bind mount | /opt/salt_files/ | Manual backup |
| Backend logs | Host bind mount | /opt/salvation/backend/logs/ | Rotated automatically |
| Static docs | Nginx static files | /var/www/html/ | Redeployed from Git |
# Create a backup (from the PostgreSQL container)
docker exec salt-postgres pg_dump -U postgres -d salvation -F p \
> /opt/backups/salvation_$(date +%Y%m%d).sql
# Restore from backup
cat /opt/backups/salvation_20260220.sql | \
docker exec -i salt-postgres psql -U postgres -d salvation
# Automated backup (add to crontab)
# Run daily at 2 AM:
# 0 2 * * * docker exec salt-postgres pg_dump -U postgres -d salvation -F p > /opt/backups/salvation_$(date +\%Y\%m\%d).sql
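The cron entry above accumulates one dump per day forever. A hedged sketch of a backup-with-rotation script (not one of the deploy/ scripts; the 14-day retention is an assumed policy) could look like this:

```shell
#!/usr/bin/env bash
# Sketch: dump the database via docker exec, then prune old backups.
BACKUP_DIR="${BACKUP_DIR:-/opt/backups}"
RETENTION_DAYS="${RETENTION_DAYS:-14}"   # assumed policy, adjust to taste

backup_and_rotate() {
  local file="$BACKUP_DIR/salvation_$(date +%Y%m%d).sql"
  docker exec salt-postgres pg_dump -U postgres -d salvation -F p > "$file"
  # Remove dumps older than the retention window
  find "$BACKUP_DIR" -name 'salvation_*.sql' -mtime +"$RETENTION_DAYS" -delete
}

# Guarded so the function can be sourced without side effects
if [[ "${1:-}" == "--run" ]]; then
  backup_and_rotate
fi
```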
The platform uses 3 separate Docker Compose files, each managing its own service independently:
| File | Repository | Services | Volumes | Purpose |
|---|---|---|---|---|
| docker-compose.infra.yml | Backend (/opt/salvation/backend/) | PostgreSQL 17, Redis 7, MinIO | postgres_data, redis_data, minio_data + salvation.sql init | Shared infrastructure |
| docker-compose.yml | Backend (/opt/salvation/backend/) | Spring Boot backend (2 replicas: :8091, :8092) | backend_logs_1, backend_logs_2, salt_files | Backend API service |
| docker-compose.yml | Frontend (/opt/salvation/frontend/) | Next.js web admin (2 replicas: :8081, :8082) | None (stateless) | Frontend web application |
All three share a Docker network called salt-network so containers can communicate by name (e.g., the backend connects to salt-postgres:5432).
| Volume | Defined In | Container Mount | Purpose |
|---|---|---|---|
| postgres_data | docker-compose.infra.yml | /var/lib/postgresql/data | PostgreSQL database files |
| redis_data | docker-compose.infra.yml | /data | Redis persistent data |
| minio_data | docker-compose.infra.yml | /data | MinIO file storage |
| backend_logs_1 | docker-compose.yml (backend) | /app/logs | Application log files (Replica 1) |
| backend_logs_2 | docker-compose.yml (backend) | /app/logs | Application log files (Replica 2) |
| salt_files | docker-compose.yml (backend) | /opt/salt_files | Legacy uploaded files (reading materials, assignments) |
Docker Compose does NOT auto-load salvation.sql via docker-entrypoint-initdb.d. Instead, the database is initialized once using the setup script:
# First-time database setup (run once after starting the PostgreSQL container)
bash deploy/setup-db.sh --staging
Place salvation.sql in the backend directory (/opt/salvation/backend/) before running the setup script. The script creates the salvation database and loads the SQL dump. Subsequent container restarts skip this step because the postgres_data volume already has data.
# 1. Start infrastructure first (PostgreSQL, Redis, MinIO)
cd /opt/salvation/backend
docker compose -f docker-compose.infra.yml up -d
# 2. Start backend (connects to infrastructure via salt-network)
docker compose up -d
# 3. Start frontend
cd /opt/salvation/frontend
docker compose up -d
# Check all containers
docker ps
# Rebuild and restart only the backend
cd /opt/salvation/backend
docker compose up -d --build
# Rebuild and restart only the frontend
cd /opt/salvation/frontend
docker compose up -d --build
Every container sets TZ=Africa/Nairobi (East Africa Time, UTC+3). Timestamps in logs, database records, and application output will use EAT.
All containers must be on the same Docker network (salt-network) so they can communicate with each other by container name. This applies whether containers are started via docker compose or individual docker run commands.
The network is created automatically by docker-compose.infra.yml when you start infrastructure. You can also create it manually (safe to run multiple times):
# Create the shared bridge network (idempotent)
docker network create salt-network 2>/dev/null || true
Since all services (PostgreSQL, Redis, MinIO, Backend, Frontend) run as Docker containers, they must share a network to communicate. Docker DNS resolves container names automatically within the same network.
| Connection | From | To | URL |
|---|---|---|---|
| Database | salt-backend | salt-postgres | jdbc:postgresql://salt-postgres:5432/salvation |
| Cache | salt-backend | salt-redis | salt-redis:6379 |
| File Storage | salt-backend | salt-minio | http://salt-minio:9000 |
| API Calls | salt-web-admin | salt-backend | http://salt-backend:8091 |
| Container Name | Service | Internal Port |
|---|---|---|
| salt-postgres | PostgreSQL 17 | 5432 |
| salt-redis | Redis 7 | 6379 |
| salt-minio | MinIO S3 | 9000 (API), 9001 (Console) |
| salt-backend-1 | Spring Boot API (Replica 1) | 8091 |
| salt-backend-2 | Spring Boot API (Replica 2) | 8092 |
| salt-web-admin-1 | Next.js Web Admin (Replica 1) | 8081 |
| salt-web-admin-2 | Next.js Web Admin (Replica 2) | 8082 |
# List all containers on salt-network
docker network inspect salt-network --format '{{range .Containers}}{{.Name}} {{end}}'
# Test connectivity between containers
docker exec salt-backend ping -c 2 salt-postgres
docker exec salt-backend ping -c 2 salt-redis
docker exec salt-web-admin ping -c 2 salt-backend
# Check which networks a container is connected to
docker inspect salt-backend --format '{{range $k,$v := .NetworkSettings.Networks}}{{$k}} {{end}}'
Docker Swarm enables scaling, load balancing, and high availability for the platform services. It is built into Docker — no additional software needed.
# On the EC2 server (manager node)
docker swarm init
# Note the join token for adding worker nodes later:
docker swarm join-token worker
The backend docker-compose.yml defines 2 named replicas with distinct ports:
| Service | Container Name | Host Port | Container Port |
|---|---|---|---|
| backend-1 | salt-backend-1 | 8091 | 8091 |
| backend-2 | salt-backend-2 | 8092 | 8092 |
Each replica runs the same Spring Boot image but listens on a different port via the SERVER_PORT environment variable. Nginx load-balances traffic across both.
Before deploying with Docker Swarm, remove the container_name directives (Swarm manages naming dynamically):
# Step 1: Remove fixed container_name lines from docker-compose.yml
# Swarm does not support container_name with service replicas
sed -i '/container_name: salt-backend/d' docker-compose.yml
# Step 2: Build images and start both backend services
# --build forces a fresh image build from the Dockerfile before starting
docker compose up -d --build
| Command | What It Does | Why It’s Needed |
|---|---|---|
| sed -i '/container_name: salt-backend/d' | Deletes all lines containing container_name: salt-backend from the compose file (matches both salt-backend-1 and salt-backend-2) | Swarm requires dynamic container names — fixed names conflict with Swarm’s service naming |
| docker compose up -d --build | Builds the Docker image from the Dockerfile, then starts both backend-1 and backend-2 in detached mode | Combines the build and start steps into one command — ensures running containers use the latest code |
The Dockerfile sets ENV SERVER_PORT=8091 as a default. Each compose service overrides this with its own SERVER_PORT value (8091 or 8092). Spring Boot picks up the environment variable automatically, overriding the server.port=8091 in application-staging.properties.
Run cat docker-compose.yml to confirm the container_name lines were removed, then check that both services are healthy: docker ps should show two backend containers on ports 8091 and 8092.
# Create an overlay network for Swarm services
docker network create --driver overlay --attachable salt-network
# From backend directory
cd /opt/salvation/backend
docker stack deploy -c docker-compose.infra.yml salt-infra
# Check services
docker stack services salt-infra
cd /opt/salvation/backend
docker stack deploy -c docker-compose.yml salt-backend
# 2 services: backend-1 (:8091) and backend-2 (:8092)
docker stack services salt-backend
cd /opt/salvation/frontend
docker stack deploy -c docker-compose.yml salt-frontend
# 2 services: web-admin-1 (:8081) and web-admin-2 (:8082)
docker stack services salt-frontend
| Service | Default Replicas | Scalable? | Notes |
|---|---|---|---|
| PostgreSQL | 1 | No (single instance) | Stateful — requires Patroni/pgpool for clustering |
| Redis | 1 | No (single instance) | Stateful — requires Redis Sentinel or Cluster |
| MinIO | 1 | No (single instance) | Stateful — requires distributed mode for scaling |
| Backend (Spring Boot) | 2 (ports 8091, 8092) | Yes | Stateless API — each replica on a distinct port, Nginx load-balances |
| Frontend (Next.js) | 2 (ports 8081, 8082) | Yes | Stateless web app — each replica on a distinct port, Nginx load-balances |
Each backend replica mounts the same /opt/salt_files volume and connects to the same PostgreSQL container.
# View all stacks
docker stack ls
# View services in a stack
docker stack services salt-backend
# View tasks (containers) for a service
docker service ps salt-backend_backend
# View logs for a service
docker service logs salt-backend_backend --tail 50
# Update a service (rolling update)
docker service update --image salt-backend:latest salt-backend_backend
# Remove a stack
docker stack rm salt-backend
Docker Swarm performs rolling updates by default. When you rebuild an image and update the service, it replaces containers one at a time with zero downtime:
# Rebuild the image
cd /opt/salvation/backend
docker build -t salt-backend:latest .
# Update the running service with the new image
docker service update --image salt-backend:latest salt-backend_backend
# Check the logs for errors
docker logs --tail 200 salt-backend
# Common issues:
# - "Connection refused" to PostgreSQL → check salt-postgres container is running
# - "Unable to connect to Redis" → check salt-redis container is running
# - Port already in use → check if another process is using the port:
ss -tlnp | grep 8091
# 1. Verify salt-postgres container is running
docker ps --filter "name=salt-postgres"
# 2. Verify both containers are on the same network
docker network inspect salt-network --format '{{range .Containers}}{{.Name}} {{end}}'
# 3. Test connectivity from backend to PostgreSQL
docker exec salt-backend ping -c 2 salt-postgres
# 4. Verify .env has the correct container name
# Should be: SPRING_DATASOURCE_URL=jdbc:postgresql://salt-postgres:5432/salvation
# NOT: localhost or host.docker.internal
# 5. Check PostgreSQL logs
docker logs --tail 50 salt-postgres
# Check disk space
df -h /
# Free up space
docker system prune -f
# Check Docker daemon
systemctl status docker
# The upstream service (Docker container) is not running
docker ps # Check if the container is running
docker logs salt-backend # Check for startup errors
# Check Nginx error log
sudo tail -50 /var/log/nginx/error.log
Check the GitHub Actions run output for detailed logs. The pipeline captures failure logs automatically, including container status, application logs, and disk space.
# Manually check on EC2 after a failed deploy
docker ps -a
docker logs --tail 200 salt-backend
df -h /
# Check infrastructure container status
docker compose -f /opt/salvation/backend/docker-compose.infra.yml ps
# Restart infrastructure
docker compose -f /opt/salvation/backend/docker-compose.infra.yml down
docker compose -f /opt/salvation/backend/docker-compose.infra.yml up -d
# Check individual container logs
docker logs salt-postgres
docker logs salt-redis
docker logs salt-minio