SALT
Salvation Army SALT College of Africa

Microservice Deployment Guide

Docker Container Orchestration & Nginx Configuration

Document ID: SALT-MSD-2026-001
Version: 1.0
Date: 18th February 2026
Prepared By: Sumba Group Limited
Developed By: Sumba Group Limited
Classification: Internal — Confidential

Table of Contents

1. Overview
2. Prerequisites
3. Container Architecture
   3.1 Architecture Diagram
   3.2 Complete Container Inventory
4. Docker Compose Configuration
   4.1 Network & Volumes
   4.2 Infrastructure Services
   4.3 Spring Boot Service Template
   4.4 Python & Node.js Services
5. Nginx Reverse Proxy Configuration
6. Environment Setup
   6.1 Staging Environment (.env.staging)
   6.2 Production Environment (.env.production)
   6.3 Database Initialization
   6.4 Kafka Topic Creation
   6.5 Keycloak Realm Import
7. Deployment Steps
8. Health Checks & Monitoring
9. Troubleshooting
10. AWS Security Group & Port Exposure
   10.1 Internet-Facing Ports (Inbound Rules)
   10.2 Internal-Only Ports (No Inbound Rules)
   10.3 Recommended Security Group Configuration
11. Keycloak Setup & Configuration
   11.1 Initial Admin Setup
   11.2 Realm Configuration
   11.3 Client Applications
   11.4 Role Mapping & User Federation
   11.5 User Migration Strategy
12. MongoDB Setup & Configuration
   12.1 Database & Collections
   12.2 Indexes
   12.3 Chat Data Migration
   12.4 Backup & Maintenance
13. Quick Reference

1. Overview

This document provides comprehensive instructions for deploying the E-Learning Platform microservice architecture. The platform runs as 25 Docker containers orchestrated via Docker Compose on a single EC2 instance (Ubuntu 22.04), with Nginx as the SSL-terminating reverse proxy.

Deployment Summary

Attribute        | Detail
Total Containers | 25 (7 infrastructure + 3 platform + 14 business + 1 frontend)
Docker Network   | salt-network (bridge mode)
Timezone         | Africa/Nairobi (East Africa Time, UTC+3) — set on ALL containers
Reverse Proxy    | Nginx with Let's Encrypt SSL termination
Staging URL      | https://stagging.saltcollegeandresourcecentre.com
Staging IP       | 3.13.144.24
Production URL   | https://app.saltcollegeandresourcecentre.com
Production IP    | 3.13.155.123
Repository       | https://github.com/SGL2024/Salvation-Army-Backend-Microservice.git
Architecture Reference: For detailed service descriptions, Kafka topics, database ownership, and decomposition strategy, refer to the companion document: Microservice Architecture (SALT-MSA-2026-001).
Constraint: The production database salvation (PostgreSQL 17) must remain intact. All Spring Boot services connect to the same shared database. No table dropping, renaming, or schema-breaking changes are permitted.

2. Prerequisites

Ensure the following software and resources are available on the deployment host before proceeding.

Prerequisite   | Minimum Version | Purpose                                                                    | Required On
Docker         | 24+             | Container runtime for all 25 services                                      | EC2 host
Docker Compose | V2 (plugin)     | Multi-container orchestration (docker compose syntax)                      | EC2 host
Git            | 2.30+           | Clone the microservice repository                                          | EC2 host
Java 21 (JDK)  | 21 LTS          | Build Spring Boot services (not needed at runtime — Docker images include JRE) | Build machine only
Node.js        | 20 LTS          | Build chat-service and web-admin (not needed at runtime)                   | Build machine only
Python         | 3.12            | Build notification-service (not needed at runtime)                         | Build machine only
OpenSSL        | 3.0+            | SSL certificate management (Let's Encrypt)                                 | EC2 host
Nginx          | 1.24+           | Reverse proxy, SSL termination, route management (already installed on EC2) | EC2 host
RAM            | 8 GB+           | 25 containers require significant memory; 16 GB recommended for production | EC2 host
Disk Space     | 50 GB+          | Docker images, database volumes, MinIO file storage                        | EC2 host
EC2 Instance Type: Recommended t3.xlarge (4 vCPU, 16 GB RAM) or equivalent for staging/production. A t3.large (2 vCPU, 8 GB RAM) works for development but may experience memory pressure under full load.
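The version requirements above can be checked mechanically before deployment. A minimal sketch, assuming GNU `sort -V` is available (it is on Ubuntu 22.04); the `ver_ge` and `check` helpers are illustrative, and the remaining tools from the table follow the same pattern as the two shown:

```shell
#!/usr/bin/env bash
# Sketch: verify minimum tool versions before deployment.

# ver_ge ACTUAL MINIMUM — succeeds if ACTUAL >= MINIMUM (dotted version strings)
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

check() {  # check NAME ACTUAL MINIMUM — print OK/FAIL, never abort
  if ver_ge "$2" "$3"; then
    echo "OK   $1 $2 (>= $3)"
  else
    echo "FAIL $1 $2 (< $3)"
  fi
}

# Example invocations — extend with Node.js, Python, OpenSSL, Nginx as needed
check Docker "$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)" 24
check Git    "$(git --version 2>/dev/null | awk '{print $3}' || echo 0)" 2.30
```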

3. Container Architecture

3.1 Architecture Diagram

[Diagram] Nginx reverse proxy with SSL termination (port 80 redirected to 443, Let's Encrypt, gzip, proxy headers, WebSocket upgrade, 100 MB max upload) fronts the frontend clients: Next.js web admin (:8081), Flutter mobile (:8082), executive dashboard (:8082), and external API/QR-verify consumers. The platform tier (Keycloak :8180, Eureka :8761, API Gateway :8080 with routing, JWT validation, rate limiting, circuit breaker) routes into the service tier of 14 business services: 12 Spring Boot services (:8101-:8112), the FastAPI notification service (:8200), and the Node.js chat service (:8300), exchanging events over Apache Kafka :9092 (17 topics: student.*, enrollment.*, exam.*, notification.*, grading.*, chat.*) coordinated by Zookeeper :2181. The infrastructure tier comprises PostgreSQL 17 salvation DB (:5432, salt-postgres), the Keycloak PostgreSQL DB (:5433, salt-keycloak-db), Redis 7 (:6379), MongoDB 7 (:27017), MinIO (:9000/:9001), and Zookeeper (:2181). All 25 containers run on the salt-network bridge (Docker Compose V2, EC2 Ubuntu 22.04) with persistent volumes (postgres_data, keycloak_db_data, redis_data, mongodb_data, minio_data, kafka_data, zookeeper_data) and TZ=Africa/Nairobi.
Figure 1: Microservice Deployment Architecture — 25 containers on single salt-network bridge, Nginx SSL reverse proxy

3.2 Complete Container Inventory

All 25 containers with their images, port mappings, dependencies, health checks, and purpose.

Infrastructure Tier (7 containers)

#  | Container Name   | Image                           | Port(s)    | Depends On | Health Check              | Purpose
1  | salt-postgres    | postgres:17                     | 5432       | -          | pg_isready                | Main salvation database
2  | salt-keycloak-db | postgres:17                     | 5433       | -          | pg_isready                | Keycloak database (separate)
3  | salt-redis       | redis:7-alpine                  | 6379       | -          | redis-cli ping            | Caching & sessions
4  | salt-mongodb     | mongo:7                         | 27017      | -          | mongosh --eval            | Chat message storage
5  | salt-minio       | minio/minio:latest              | 9000, 9001 | -          | curl /minio/health/live   | S3-compatible file storage
6  | salt-zookeeper   | confluentinc/cp-zookeeper:7.6.0 | 2181       | -          | echo ruok                 | Kafka coordination
7  | salt-kafka       | confluentinc/cp-kafka:7.6.0     | 9092       | zookeeper  | kafka-broker-api-versions | Message broker

Platform Tier (3 containers)

#  | Container Name   | Image                          | Port(s) | Depends On       | Health Check          | Purpose
8  | salt-keycloak    | quay.io/keycloak/keycloak:24.0 | 8180    | keycloak-db      | curl /health          | OAuth2/OIDC identity provider
9  | salt-eureka      | salt-eureka:latest             | 8761    | -                | curl /actuator/health | Service discovery
10 | salt-api-gateway | salt-api-gateway:latest        | 8080    | eureka, keycloak | curl /actuator/health | API routing, JWT validation

Service Tier — Spring Boot (12 containers)

#  | Container Name       | Image                           | Port | Depends On                     | Health Check          | Purpose
11 | salt-student-svc     | salt-student-service:latest     | 8101 | postgres, eureka, kafka        | curl /actuator/health | Student management
12 | salt-enrollment-svc  | salt-enrollment-service:latest  | 8102 | postgres, eureka, kafka        | curl /actuator/health | Enrollment & approvals
13 | salt-curriculum-svc  | salt-curriculum-service:latest  | 8103 | postgres, eureka, kafka        | curl /actuator/health | Subjects & learning levels
14 | salt-exam-svc        | salt-exam-service:latest        | 8104 | postgres, eureka, kafka        | curl /actuator/health | Exam papers & marking
15 | salt-assignment-svc  | salt-assignment-service:latest  | 8105 | postgres, eureka, kafka, minio | curl /actuator/health | Assignments & materials
16 | salt-grading-svc     | salt-grading-service:latest     | 8106 | postgres, eureka, kafka        | curl /actuator/health | Grading & billing
17 | salt-certificate-svc | salt-certificate-service:latest | 8107 | postgres, eureka, kafka, minio | curl /actuator/health | PDF certificate generation
18 | salt-reporting-svc   | salt-reporting-service:latest   | 8108 | postgres, eureka               | curl /actuator/health | Analytics & reports
19 | salt-file-svc        | salt-file-service:latest        | 8109 | postgres, eureka, minio        | curl /actuator/health | File upload/download
20 | salt-config-svc      | salt-config-service:latest      | 8110 | postgres, eureka               | curl /actuator/health | System settings
21 | salt-suspension-svc  | salt-suspension-service:latest  | 8111 | postgres, eureka, kafka        | curl /actuator/health | Suspension & intake
22 | salt-chatbot-svc     | salt-chatbot-service:latest     | 8112 | postgres, eureka               | curl /actuator/health | FAQ chatbot

Service Tier — Polyglot (2 containers: Python, Node.js)

#  | Container Name        | Image                        | Port | Depends On                    | Health Check             | Purpose
23 | salt-notification-svc | salt-notification-svc:latest | 8200 | postgres, kafka, redis        | curl /health             | Email/SMS/FCM notifications
24 | salt-chat-svc         | salt-chat-svc:latest         | 8300 | mongodb, kafka, redis, eureka | curl /api/v2/chat/health | Real-time chat (WebSocket)

Frontend (1 container: Next.js)

#  | Container Name | Image                 | Port | Depends On  | Health Check | Purpose
25 | salt-web-admin | salt-web-admin:latest | 8081 | api-gateway | curl /       | Next.js web admin panel
Port Range Summary: Infrastructure 2181-27017 | Platform 8080-8761 | Spring Boot 8101-8112 | Notification 8200 | Chat 8300 | Frontend 8081
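Before composing, it is worth confirming that the inventory maps every container to a unique host port. A small sketch; the `PORTS` list is transcribed from the tables above (MinIO contributes two ports), and `dup_ports` is an illustrative helper:

```shell
#!/usr/bin/env bash
# Sketch: fail fast if two services would claim the same host port.

PORTS="5432 5433 6379 27017 9000 9001 2181 9092 \
8180 8761 8080 \
8101 8102 8103 8104 8105 8106 8107 8108 8109 8110 8111 8112 \
8200 8300 8081"

dup_ports() {
  # Print any port that appears more than once in $PORTS
  printf '%s\n' $PORTS | sort | uniq -d
}

if [ -z "$(dup_ports)" ]; then
  echo "No host-port conflicts in the inventory"
else
  echo "Conflicting ports: $(dup_ports)" >&2
fi
```

On the live host, `ss -tlnp` shows which of these ports are already bound before `docker compose up`.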

4. Docker Compose Configuration

4.1 Network & Volumes

All 25 containers share a single bridge network. Seven named volumes provide data persistence across container restarts.

# docker-compose.yml — Network & Volume definitions

networks:
  salt-network:
    driver: bridge
    name: salt-network

volumes:
  postgres_data:
    driver: local
  keycloak_db_data:
    driver: local
  redis_data:
    driver: local
  mongodb_data:
    driver: local
  minio_data:
    driver: local
  kafka_data:
    driver: local
  zookeeper_data:
    driver: local

4.2 Infrastructure Services

Infrastructure containers start first and must pass health checks before platform and business services launch.

# PostgreSQL 17 — Main salvation database
salt-postgres:
  image: postgres:17
  container_name: salt-postgres
  environment:
    POSTGRES_DB: salvation
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${POSTGRES_PORT:-5432}:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d salvation"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Keycloak Database (separate from salvation)
salt-keycloak-db:
  image: postgres:17
  container_name: salt-keycloak-db
  environment:
    POSTGRES_DB: keycloak
    POSTGRES_USER: ${KC_DB_USER:-keycloak}
    POSTGRES_PASSWORD: ${KC_DB_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${KC_DB_PORT:-5433}:5432"
  volumes:
    - keycloak_db_data:/var/lib/postgresql/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${KC_DB_USER:-keycloak}"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Redis 7 — Caching & sessions
salt-redis:
  image: redis:7-alpine
  container_name: salt-redis
  environment:
    TZ: Africa/Nairobi
  ports:
    - "${REDIS_PORT:-6379}:6379"
  volumes:
    - redis_data:/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# MongoDB 7 — Chat message storage
salt-mongodb:
  image: mongo:7
  container_name: salt-mongodb
  environment:
    MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER:-salt}
    MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${MONGO_PORT:-27017}:27017"
  volumes:
    - mongodb_data:/data/db
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# MinIO — S3-compatible file storage
salt-minio:
  image: minio/minio:latest
  container_name: salt-minio
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
    MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    TZ: Africa/Nairobi
  ports:
    - "${MINIO_API_PORT:-9000}:9000"
    - "${MINIO_CONSOLE_PORT:-9001}:9001"
  volumes:
    - minio_data:/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 15s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Zookeeper — Kafka coordination
salt-zookeeper:
  image: confluentinc/cp-zookeeper:7.6.0
  container_name: salt-zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
    TZ: Africa/Nairobi
  ports:
    - "${ZK_PORT:-2181}:2181"
  volumes:
    - zookeeper_data:/var/lib/zookeeper/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Kafka — Message broker
salt-kafka:
  image: confluentinc/cp-kafka:7.6.0
  container_name: salt-kafka
  depends_on:
    salt-zookeeper:
      condition: service_healthy
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: salt-zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://salt-kafka:9092  # resolvable only on salt-network (container-to-container)
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
    TZ: Africa/Nairobi
  ports:
    - "${KAFKA_PORT:-9092}:9092"
  volumes:
    - kafka_data:/var/lib/kafka/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
    interval: 15s
    timeout: 10s
    retries: 10
  restart: unless-stopped

4.3 Spring Boot Service Template

All 12 Spring Boot business services follow this template pattern. The service name, port, and specific dependencies vary.

# Template: Spring Boot microservice
salt-student-svc:
  image: salt-student-service:latest
  container_name: salt-student-svc
  depends_on:
    salt-postgres:
      condition: service_healthy
    salt-eureka:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
  environment:
    SPRING_PROFILES_ACTIVE: staging
    SERVER_PORT: 8101
    # Database
    SPRING_DATASOURCE_URL: jdbc:postgresql://salt-postgres:5432/salvation
    SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER}
    SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
    # Eureka
    EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE: http://salt-eureka:8761/eureka/
    EUREKA_INSTANCE_PREFER_IP_ADDRESS: "true"
    # Kafka
    SPRING_KAFKA_BOOTSTRAP_SERVERS: salt-kafka:9092
    # Redis
    SPRING_REDIS_HOST: salt-redis
    SPRING_REDIS_PORT: 6379
    # HikariCP (per-service pool)
    SPRING_DATASOURCE_HIKARI_MAXIMUM_POOL_SIZE: 20
    SPRING_DATASOURCE_HIKARI_MINIMUM_IDLE: 3
    # Timezone
    TZ: Africa/Nairobi
    JAVA_OPTS: "-Duser.timezone=Africa/Nairobi -Xms256m -Xmx512m"
  ports:
    - "8101:8101"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8101/actuator/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 60s
  restart: unless-stopped
Memory Tuning: Each Spring Boot service is configured with -Xms256m -Xmx512m. For 12 services, this requires ~6 GB minimum. Adjust based on available RAM. The gateway and Eureka server use -Xms128m -Xmx256m.
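The ~6 GB figure follows directly from the per-service caps. A trivial sketch of the arithmetic (the `heap_budget_mb` helper is illustrative only, and the totals exclude off-heap JVM overhead):

```shell
#!/usr/bin/env bash
# Sketch: worst-case JVM heap budget implied by the -Xmx settings above.

heap_budget_mb() {  # heap_budget_mb COUNT XMX_MB — total max heap in MB
  echo $(( $1 * $2 ))
}

echo "12 business services x 512 MB = $(heap_budget_mb 12 512) MB"
echo "gateway + Eureka     x 256 MB = $(heap_budget_mb 2 256) MB"
```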

Services that require MinIO add the following environment variables:

    # MinIO (for file-dependent services: assignment, certificate, file-storage)
    MINIO_ENDPOINT: http://salt-minio:9000
    MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
    MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    MINIO_BUCKET: salt-files
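Since the 12 Spring Boot blocks differ mainly in name and port, repetitive compose fragments can be stamped out from a name:port map. A hedged sketch: `emit_service` and the generated `services.generated.yml` are illustrative, not part of the repository's deploy scripts, and the fragments omit the full environment shown in 4.3:

```shell
#!/usr/bin/env bash
# Sketch: generate minimal per-service compose fragments from the 4.3 template.
# Name:port pairs mirror the inventory in Section 3.2.

SERVICES="student:8101 enrollment:8102 curriculum:8103 exam:8104 \
assignment:8105 grading:8106 certificate:8107 reporting:8108 \
file:8109 config:8110 suspension:8111 chatbot:8112"

emit_service() {  # emit_service NAME PORT — print a minimal compose fragment
cat <<EOF
  salt-$1-svc:
    image: salt-$1-service:latest
    container_name: salt-$1-svc
    environment:
      SERVER_PORT: $2
    ports:
      - "$2:$2"
    networks:
      - salt-network
EOF
}

for entry in $SERVICES; do
  emit_service "${entry%%:*}" "${entry##*:}"
done > services.generated.yml
```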

4.4 Python & Node.js Services

Notification Service (FastAPI)

# FastAPI Notification Service (Python 3.12)
salt-notification-svc:
  image: salt-notification-svc:latest
  container_name: salt-notification-svc
  depends_on:
    salt-postgres:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
    salt-redis:
      condition: service_healthy
  environment:
    DATABASE_URL: postgresql+asyncpg://${POSTGRES_USER}:${POSTGRES_PASSWORD}@salt-postgres:5432/salvation
    KAFKA_BOOTSTRAP_SERVERS: salt-kafka:9092
    REDIS_URL: redis://salt-redis:6379/0
    NOTIFICATION_CHANNEL: ${NOTIFICATION_CHANNEL:-EMAIL}
    SMTP_HOST: ${SMTP_HOST:-smtp.office365.com}
    SMTP_PORT: ${SMTP_PORT:-587}
    SMTP_USER: ${SMTP_USER}
    SMTP_PASSWORD: ${SMTP_PASSWORD}
    TWILIO_ACCOUNT_SID: ${TWILIO_ACCOUNT_SID}
    TWILIO_AUTH_TOKEN: ${TWILIO_AUTH_TOKEN}
    TWILIO_PHONE_NUMBER: ${TWILIO_PHONE_NUMBER}
    FCM_SERVER_KEY: ${FCM_SERVER_KEY}
    TZ: Africa/Nairobi
  ports:
    - "8200:8200"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8200/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 30s
  restart: unless-stopped

Chat Service (Node.js)

# Node.js Chat Service (Socket.IO + MongoDB)
salt-chat-svc:
  image: salt-chat-svc:latest
  container_name: salt-chat-svc
  depends_on:
    salt-mongodb:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
    salt-redis:
      condition: service_healthy
    salt-eureka:
      condition: service_healthy
  environment:
    NODE_ENV: production
    PORT: 8300
    MONGODB_URI: mongodb://${MONGO_USER}:${MONGO_PASSWORD}@salt-mongodb:27017/salt_chat?authSource=admin
    KAFKA_BROKERS: salt-kafka:9092
    REDIS_URL: redis://salt-redis:6379/1
    EUREKA_HOST: salt-eureka
    EUREKA_PORT: 8761
    TZ: Africa/Nairobi
  ports:
    - "8300:8300"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8300/api/v2/chat/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 20s
  restart: unless-stopped

5. Nginx Reverse Proxy Configuration

The following Nginx configuration provides SSL termination, route-based proxying to all services, WebSocket support, gzip compression, and security headers. Save this file to /etc/nginx/sites-available/salt-microservices and symlink to sites-enabled.

Server Names: Update server_name to match your deployment:
• Staging: stagging.saltcollegeandresourcecentre.com
• Production: app.saltcollegeandresourcecentre.com
# /etc/nginx/sites-available/salt-microservices
# Microservice Deployment — Nginx Reverse Proxy
# Last updated: 18th February 2026

# ─────────────────────────────────────────────
# Upstream Definitions
# ─────────────────────────────────────────────

upstream api_gateway {
    server 127.0.0.1:8080;
    keepalive 32;
}

upstream frontend_backend {
    server 127.0.0.1:8081;
    keepalive 16;
}

upstream mobile_backend {
    server 127.0.0.1:8082;
    keepalive 16;
}

upstream jenkins_backend {
    server 127.0.0.1:8090;
    keepalive 8;
}

# ─────────────────────────────────────────────
# HTTP → HTTPS Redirect
# ─────────────────────────────────────────────

server {
    listen 80;
    listen [::]:80;
    server_name stagging.saltcollegeandresourcecentre.com;

    # Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # Redirect all other HTTP traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

# ─────────────────────────────────────────────
# HTTPS Server Block
# ─────────────────────────────────────────────

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name stagging.saltcollegeandresourcecentre.com;

    # ── SSL Configuration ──────────────────────
    ssl_certificate     /etc/letsencrypt/live/stagging.saltcollegeandresourcecentre.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stagging.saltcollegeandresourcecentre.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5:!RC4;
    ssl_prefer_server_ciphers on;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # ── Security Headers ───────────────────────
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # ── Gzip Compression ──────────────────────
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/rss+xml
        image/svg+xml;

    # ── Client Settings ───────────────────────
    client_max_body_size 100m;
    proxy_read_timeout   300s;
    proxy_connect_timeout 60s;
    proxy_send_timeout   300s;

    # ── Common Proxy Headers ──────────────────
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # ── Landing Page ──────────────────────────
    # Static files served directly
    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    # ── Next.js Web Admin ─────────────────────
    location /SaltElearning/ {
        proxy_pass http://frontend_backend/SaltElearning/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # ── API Gateway (All Microservices) ───────
    # The API Gateway routes to individual services
    # via Eureka service discovery (lb://service-name)
    location /SaltELearnAppApi/ {
        proxy_pass http://api_gateway/SaltELearnAppApi/;
        proxy_http_version 1.1;
    }

    # ── Flutter Student Mobile ────────────────
    location /student/ {
        proxy_pass http://mobile_backend/student/;
        proxy_http_version 1.1;
    }

    # ── Flutter Executive Dashboard ───────────
    location /dashboard/ {
        proxy_pass http://mobile_backend/dashboard/;
        proxy_http_version 1.1;
    }

    # ── WebSocket (Chat Service via Gateway) ──
    location /ws/ {
        proxy_pass http://api_gateway/ws/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout          86400s;
        proxy_send_timeout          86400s;
    }

    # ── Jenkins CI/CD ─────────────────────────
    location /jenkins/ {
        proxy_pass http://jenkins_backend/jenkins/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # ── Error Pages ───────────────────────────
    error_page 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
        internal;
    }
}

Enable the Configuration

# Symlink to sites-enabled
sudo ln -sf /etc/nginx/sites-available/salt-microservices \
            /etc/nginx/sites-enabled/salt-microservices

# Remove default site if present
sudo rm -f /etc/nginx/sites-enabled/default

# Test configuration syntax
sudo nginx -t

# Reload Nginx (zero-downtime)
sudo systemctl reload nginx
SSL Certificates: Obtain certificates using Certbot:
sudo certbot certonly --webroot -w /var/www/html -d stagging.saltcollegeandresourcecentre.com
Certbot auto-renews via systemd timer. Verify with sudo certbot renew --dry-run.
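Between renewals, the `notAfter` line from `openssl x509 -enddate` can be parsed to see how many days the certificate has left. A sketch assuming GNU `date` (as on Ubuntu 22.04); `days_left` is a hypothetical helper:

```shell
#!/usr/bin/env bash
# Sketch: report days until certificate expiry (path matches Section 5).

days_left() {  # days_left "notAfter=May 19 12:00:00 2026 GMT" — print whole days
  local end epoch now
  end=${1#notAfter=}
  epoch=$(date -d "$end" +%s)   # GNU date, as shipped on Ubuntu 22.04
  now=$(date +%s)
  echo $(( (epoch - now) / 86400 ))
}

CERT=/etc/letsencrypt/live/stagging.saltcollegeandresourcecentre.com/fullchain.pem
if [ -r "$CERT" ]; then
  echo "Days until expiry: $(days_left "$(openssl x509 -enddate -noout -in "$CERT")")"
fi
```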

Route Summary

URL Path           | Upstream         | Internal Port | Service
/                  | Static files     | -             | Landing page (/var/www/html)
/SaltElearning/    | frontend_backend | 8081          | Next.js Web Admin
/SaltELearnAppApi/ | api_gateway      | 8080          | API Gateway → All Microservices
/student/          | mobile_backend   | 8082          | Flutter Student App
/dashboard/        | mobile_backend   | 8082          | Flutter Executive Dashboard
/ws/               | api_gateway      | 8080          | WebSocket proxy → Chat Service
/jenkins/          | jenkins_backend  | 8090          | Jenkins CI/CD
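After reloading Nginx, the routes above can be smoke-tested with curl. A sketch; `status_of` is an illustrative helper, and which path behind each prefix is worth probing (a health endpoint versus the prefix root) depends on your deployment:

```shell
#!/usr/bin/env bash
# Sketch: print the HTTP status code for each proxied route.

BASE="${BASE:-https://stagging.saltcollegeandresourcecentre.com}"

status_of() {  # status_of PATH — print the HTTP status code (000 on failure)
  curl -k -s -o /dev/null -w '%{http_code}' "$BASE$1"
}

# Uncomment on the host once Nginx is reloaded:
# for p in / /SaltElearning/ /SaltELearnAppApi/ /student/ /dashboard/; do
#   echo "$(status_of "$p")  $p"
# done
```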

6. Environment Setup

6.1 Staging Environment (.env.staging)

Copy this template to .env on the staging server. All containers read from this single file.

# ===================================================================
# Microservice — Staging Environment (.env.staging)
# Server: stagging.saltcollegeandresourcecentre.com (3.13.144.24)
# ===================================================================

# ── PostgreSQL (salvation database) ──────────
POSTGRES_USER=salt_admin
POSTGRES_PASSWORD=<CHANGE_ME>
POSTGRES_PORT=5432

# ── Keycloak Database ────────────────────────
KC_DB_USER=keycloak
KC_DB_PASSWORD=<CHANGE_ME>
KC_DB_PORT=5433

# ── Keycloak Admin ───────────────────────────
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=<CHANGE_ME>
KC_DB_URL=jdbc:postgresql://salt-keycloak-db:5432/keycloak
KC_HOSTNAME=stagging.saltcollegeandresourcecentre.com

# ── Redis ────────────────────────────────────
REDIS_PORT=6379

# ── MongoDB ──────────────────────────────────
MONGO_USER=salt
MONGO_PASSWORD=<CHANGE_ME>
MONGO_PORT=27017

# ── MinIO ────────────────────────────────────
MINIO_ACCESS_KEY=salt-minio-admin
MINIO_SECRET_KEY=<CHANGE_ME>
MINIO_API_PORT=9000
MINIO_CONSOLE_PORT=9001

# ── Kafka ────────────────────────────────────
KAFKA_PORT=9092
ZK_PORT=2181

# ── Spring Boot Common ──────────────────────
SPRING_PROFILES_ACTIVE=staging
SPRING_JPA_DDL_AUTO=none

# ── Email (SMTP — Office 365) ───────────────
SMTP_HOST=smtp.office365.com
SMTP_PORT=587
SMTP_USER=noreply.salt.africa@saitco.onmicrosoft.com
SMTP_PASSWORD=<CHANGE_ME>
EMAIL_FROM=noreply.salt.africa@saitco.onmicrosoft.com
EMAIL_DISPLAY_NAME=College of Africa E-Learning

# ── SMS (Twilio) ─────────────────────────────
TWILIO_ACCOUNT_SID=<CHANGE_ME>
TWILIO_AUTH_TOKEN=<CHANGE_ME>
TWILIO_PHONE_NUMBER=<CHANGE_ME>

# ── Firebase Cloud Messaging ─────────────────
FCM_SERVER_KEY=<CHANGE_ME>

# ── Notification Channel ─────────────────────
NOTIFICATION_CHANNEL=EMAIL

# ── Eureka ───────────────────────────────────
EUREKA_PORT=8761
GATEWAY_PORT=8080
Security: Never commit .env files to Git. They are gitignored. Store secrets in a vault or encrypted backup.
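A pre-deployment guard can enforce that every placeholder was actually filled in before any container starts. A sketch; `env_placeholders` is a hypothetical helper, not part of the repository's deploy scripts:

```shell
#!/usr/bin/env bash
# Sketch: detect unfilled <CHANGE_ME> placeholders in an env file.

env_placeholders() {  # env_placeholders FILE — print line:variable for each offender
  grep -n '<CHANGE_ME>' "$1" | cut -d= -f1
}

# In a deploy script, abort on any hit:
#   [ -z "$(env_placeholders .env)" ] || { echo "unfilled placeholders in .env" >&2; exit 1; }
```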

6.2 Production Environment (.env.production)

Production differs from staging in these key areas:

Variable               | Staging                                   | Production
KC_HOSTNAME            | stagging.saltcollegeandresourcecentre.com | app.saltcollegeandresourcecentre.com
SPRING_PROFILES_ACTIVE | staging                                   | production
SPRING_JPA_DDL_AUTO    | none                                      | validate
JAVA_OPTS              | -Xms256m -Xmx512m                         | -Xms512m -Xmx1g
HIKARI_MAX_POOL        | 20                                        | 30
Kafka replication      | 1                                         | 3 (if multi-broker)
All <CHANGE_ME>        | Staging passwords                         | Production passwords (different!)
Important: Production and staging must use different passwords for all services. Never reuse staging credentials in production.

6.3 Database Initialization

The salvation database must be initialized once with the production dump. Flyway migrations run automatically on Spring Boot startup.

# First-time database setup (staging Docker)
bash deploy/setup-db.sh --staging

# Verify database is accessible
docker exec -it salt-postgres psql -U salt_admin -d salvation -c "\dt tbl_*"

# Flyway applies V1-V16+ migrations automatically
# when Spring Boot services start — no manual action needed
One-Time Only: setup-db.sh loads salvation.sql once. It is idempotent — running again will skip if the database already exists. Use --force flag only to drop and recreate (destructive!).

6.4 Kafka Topic Creation

Kafka auto-creation is disabled (KAFKA_AUTO_CREATE_TOPICS_ENABLE=false). Create topics manually after Kafka starts.

# Wait for Kafka to be healthy
docker compose exec salt-kafka bash

# Create all 17 topics (3 partitions, replication-factor 1 for staging)
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic student.registered --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic student.updated --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.submitted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.approved --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.rejected --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.created --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.submitted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.marked --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.failed-3x --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic assignment.uploaded --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic assignment.marked --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic material.uploaded --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic grading.calculated --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic grading.level-completed --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic suspension.created --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic suspension.lifted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic chat.message --partitions 3 --replication-factor 1

# Verify all topics created
kafka-topics --list --bootstrap-server localhost:9092
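Beyond listing topics, the broker's output can be diffed against the expected set so a missed `kafka-topics --create` is caught immediately. A sketch; `missing_topics` is an illustrative helper:

```shell
#!/usr/bin/env bash
# Sketch: verify the 17 expected topics against `kafka-topics --list` output.

EXPECTED="student.registered student.updated enrollment.submitted \
enrollment.approved enrollment.rejected exam.created exam.submitted \
exam.marked exam.failed-3x assignment.uploaded assignment.marked \
material.uploaded grading.calculated grading.level-completed \
suspension.created suspension.lifted chat.message"

missing_topics() {  # missing_topics "actual topic list" — print absent topics
  list=" $(echo $1) "    # normalize newlines to spaces
  for t in $EXPECTED; do
    case "$list" in
      *" $t "*) : ;;     # present on the broker
      *) echo "$t" ;;    # missing — needs kafka-topics --create
    esac
  done
}

# On the host:
# missing_topics "$(docker compose exec salt-kafka kafka-topics --list --bootstrap-server localhost:9092)"
```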

6.5 Keycloak Realm Import

Import the realm configuration after Keycloak starts for the first time.

# Option 1: Auto-import on startup (mount realm file)
# Add to salt-keycloak service in docker-compose.yml:
#   volumes:
#     - ./keycloak/salt-realm.json:/opt/keycloak/data/import/salt-realm.json
#   command: start --import-realm

# Option 2: Manual import via Admin Console
# 1. Access Keycloak admin: https://stagging.saltcollegeandresourcecentre.com/auth
# 2. Login with KEYCLOAK_ADMIN credentials
# 3. Create realm → Import → Upload salt-realm.json

# Option 3: CLI import
docker compose exec salt-keycloak \
  /opt/keycloak/bin/kc.sh import --file /opt/keycloak/data/import/salt-realm.json

Realm Configuration Summary

Setting           | Value
Realm Name        | salt
Client IDs        | salt-web-admin, salt-mobile, salt-exec-dashboard, salt-gateway
Roles             | admin (level 5), tutor (level 8), eto (level 9), student (level 12)
Access Token TTL  | 15 minutes
Refresh Token TTL | 24 hours
Password Policy   | BCrypt hashing (compatible with existing tbl_sys_users hashes)

7. Deployment Steps

Follow these steps in order. Infrastructure must be healthy before starting platform services, and platform services must be healthy before starting business services.

Step 1: Clone the Repository

cd /opt
git clone https://github.com/SGL2024/Salvation-Army-Backend-Microservice.git salt-microservices
cd salt-microservices

Step 2: Copy Environment File

# For staging deployment
cp .env.staging .env

# Edit and fill in all <CHANGE_ME> placeholders
nano .env

Step 3: Build All Docker Images

# Build all service images (this may take 10-15 minutes)
bash deploy/docker-dev.sh build

# Or build individually:
docker compose build salt-eureka
docker compose build salt-api-gateway
docker compose build salt-student-svc
# ... (repeat for each service)

Step 4: Start Infrastructure Containers

# Start infrastructure tier first (7 containers)
bash deploy/docker-dev.sh start-infra

# This starts: postgres, keycloak-db, redis, mongodb, minio, zookeeper, kafka
# Wait for all health checks to pass (~30-60 seconds)

Step 5: Wait for Health Checks

# Verify all infrastructure is healthy
docker compose ps

# Expected: all 7 infrastructure containers show "healthy"
# If Kafka shows "starting", wait 30 more seconds — it takes longer

# Quick health verification
docker exec salt-postgres pg_isready -U salt_admin -d salvation
docker exec salt-redis redis-cli ping
docker exec salt-kafka kafka-broker-api-versions --bootstrap-server localhost:9092
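The ad-hoc checks above can be wrapped in a generic retry helper so scripts block until a dependency is actually healthy rather than sleeping for a fixed interval. A sketch; `wait_for` is a hypothetical utility, not part of `deploy/docker-dev.sh`:

```shell
#!/usr/bin/env bash
# Sketch: retry a health command until it succeeds or attempts run out.

wait_for() {  # wait_for RETRIES DELAY CMD... — returns 0 on success, 1 on timeout
  local tries=$1 delay=$2 i=0
  shift 2
  while [ "$i" -lt "$tries" ]; do
    "$@" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example (host): block up to ~60 s until the main database answers
# wait_for 30 2 docker exec salt-postgres pg_isready -U salt_admin -d salvation
```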

Step 6: Create Kafka Topics

# Enter Kafka container and create all 17 topics
docker compose exec salt-kafka bash -c '
  for topic in student.registered student.updated \
    enrollment.submitted enrollment.approved enrollment.rejected \
    exam.created exam.submitted exam.marked exam.failed-3x \
    assignment.uploaded assignment.marked material.uploaded \
    grading.calculated grading.level-completed \
    suspension.created suspension.lifted chat.message; do
    kafka-topics --create --bootstrap-server localhost:9092 \
      --topic $topic --partitions 3 --replication-factor 1 2>/dev/null
    echo "Created topic: $topic"
  done
'

# Verify
docker compose exec salt-kafka kafka-topics --list --bootstrap-server localhost:9092

Step 7: Initialize Database

# First-time only — load salvation.sql
bash deploy/setup-db.sh --staging

# Create MinIO bucket
docker compose exec salt-minio mc alias set local http://localhost:9000 \
  ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY}
docker compose exec salt-minio mc mb local/salt-files --ignore-existing

Step 8: Start Platform Services

# Start Keycloak, Eureka, API Gateway (3 containers)
bash deploy/docker-dev.sh start-platform

# Wait for Eureka to be healthy (~30 seconds)
curl -s http://localhost:8761/actuator/health | grep -o '"status":"UP"'

# Wait for Keycloak (~60 seconds for first startup)
curl -s http://localhost:8180/health | grep -o '"status":"UP"'

# Wait for API Gateway
curl -s http://localhost:8080/actuator/health | grep -o '"status":"UP"'

Step 9: Start Business Services

# Start all 14 business services + frontend (15 containers)
bash deploy/docker-dev.sh start-services

# Spring Boot services take 30-90 seconds each to start
# Monitor startup progress:
docker compose logs -f --tail=50 salt-student-svc salt-enrollment-svc salt-exam-svc

Step 10: Configure Nginx

# Copy Nginx config (see Section 5)
sudo cp nginx/salt-microservices /etc/nginx/sites-available/salt-microservices
sudo ln -sf /etc/nginx/sites-available/salt-microservices \
            /etc/nginx/sites-enabled/salt-microservices
sudo rm -f /etc/nginx/sites-enabled/default

# Test and reload
sudo nginx -t && sudo systemctl reload nginx

Step 11: Verify All Services Registered in Eureka

# Check Eureka registry — all services should appear
curl -s http://localhost:8761/eureka/apps | grep -o '<name>[^<]*</name>'

# Expected output (14 business services + gateway):
# STUDENT-SERVICE
# ENROLLMENT-SERVICE
# CURRICULUM-SERVICE
# EXAM-SERVICE
# ASSIGNMENT-SERVICE
# GRADING-SERVICE
# CERTIFICATE-SERVICE
# REPORTING-SERVICE
# FILE-SERVICE
# CONFIG-SERVICE
# SUSPENSION-SERVICE
# CHATBOT-SERVICE
# NOTIFICATION-SERVICE
# CHAT-SERVICE
# API-GATEWAY

Step 12: Test API Gateway Routing

# Test through Nginx (no ports in URL)
curl -s https://stagging.saltcollegeandresourcecentre.com/SaltELearnAppApi/actuator/health

# Test individual service routes via gateway
curl -s http://localhost:8080/api/v2/students/health
curl -s http://localhost:8080/api/v2/enrollment/health
curl -s http://localhost:8080/api/v2/exam/health

# Verify container status
docker compose ps
Full Startup Time: From cold start to all 25 containers healthy: approximately 3-5 minutes on a t3.xlarge instance. Spring Boot services account for most of the startup time due to JVM initialization and Flyway migrations.
Startup Order Enforcement: Docker Compose depends_on with condition: service_healthy ensures correct ordering. If a service fails to start, check its dependencies first.

8. Health Checks & Monitoring

Eureka Dashboard

The Eureka service discovery dashboard shows all registered services, their status, and instance details.

| Access Point | URL (Internal) | Purpose |
| --- | --- | --- |
| Eureka Dashboard | http://localhost:8761 | Service registry UI — shows all registered microservices |
| Eureka Apps API | http://localhost:8761/eureka/apps | XML/JSON list of all registered services |

Spring Boot Actuator Endpoints

Every Spring Boot service (12 business + Eureka + Gateway) exposes these actuator endpoints:

| Endpoint | Description | Example |
| --- | --- | --- |
| /actuator/health | Overall health status with dependency checks (DB, Redis, Kafka) | curl http://localhost:8101/actuator/health |
| /actuator/info | Service version, build info, git commit | curl http://localhost:8101/actuator/info |
| /actuator/metrics | JVM stats, HTTP request metrics, custom counters | curl http://localhost:8101/actuator/metrics |
| /actuator/prometheus | Prometheus-compatible metrics export | curl http://localhost:8101/actuator/prometheus |

Docker Health Check Configuration

| Parameter | Infrastructure | Platform | Business Services |
| --- | --- | --- | --- |
| interval | 10s | 15s | 30s |
| timeout | 5s | 10s | 10s |
| retries | 5 | 5 | 5 |
| start_period | 30s | 60s | |

The start_period gives Spring Boot services time to complete JVM startup and Flyway migrations before health checks begin failing.
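A representative healthcheck stanza for a business service, mirroring the values in the table above (this is an illustrative sketch; it assumes curl is available in the service image, and the business-tier start_period value is an assumption since the table omits it):

```yaml
# Illustrative healthcheck for a Spring Boot business service (port 8101).
healthcheck:
  test: ["CMD-SHELL", "curl -fsS http://localhost:8101/actuator/health | grep -q UP"]
  interval: 30s
  timeout: 10s
  retries: 5
  start_period: 120s   # assumed value — long enough for JVM startup + Flyway
```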

Log Monitoring

# Tail logs for a specific service
docker compose logs -f --tail=100 salt-student-svc

# Tail logs for all services (very verbose)
docker compose logs -f --tail=20

# Tail logs for all business services
docker compose logs -f salt-student-svc salt-enrollment-svc salt-exam-svc \
  salt-assignment-svc salt-grading-svc salt-certificate-svc

# Check for errors across all services
docker compose logs --since="10m" | grep -i "error\|exception\|failed"

# Export logs to file
docker compose logs --no-color > /tmp/salt-logs-$(date +%Y%m%d).txt

Management Consoles

| Console | Internal URL | Credentials | Purpose |
| --- | --- | --- | --- |
| MinIO Console | http://localhost:9001 | MINIO_ACCESS_KEY / MINIO_SECRET_KEY | File storage management, bucket browsing |
| Keycloak Admin | http://localhost:8180 | KEYCLOAK_ADMIN / KEYCLOAK_ADMIN_PASSWORD | User management, realm config, client settings |
| Eureka Dashboard | http://localhost:8761 | None (unauthenticated) | Service registry, instance health |
Production Monitoring: For production deployments, add Prometheus + Grafana containers to the Docker Compose stack for persistent metrics, dashboards, and alerting.
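If Prometheus is added, a scrape configuration along these lines would collect the /actuator/prometheus metrics exposed above (the job name and target list are illustrative assumptions, not part of the current stack):

```yaml
# Illustrative prometheus.yml fragment — scrapes Spring Boot actuator metrics
# over the salt-network bridge using container DNS names.
scrape_configs:
  - job_name: salt-business-services
    metrics_path: /actuator/prometheus
    static_configs:
      - targets:
          - salt-student-svc:8101
          - salt-enrollment-svc:8102
          - salt-exam-svc:8104
```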

9. Troubleshooting

Common Issues

| Problem | Cause | Fix |
| --- | --- | --- |
| Service not registering with Eureka | Eureka server not ready when the service attempts registration | Verify depends_on with condition: service_healthy in docker-compose.yml. Check Eureka health: curl http://localhost:8761/actuator/health. Restart the service: docker compose restart salt-student-svc |
| Kafka connection refused | Kafka or Zookeeper not fully started; Kafka takes 15-30s after Zookeeper | Wait 30 seconds after infrastructure start. Verify: docker compose exec salt-kafka kafka-broker-api-versions --bootstrap-server localhost:9092. Check Zookeeper: docker compose logs salt-zookeeper |
| Database connection pool exhausted | Too many services with large HikariCP pools; PostgreSQL max_connections exceeded | Reduce HIKARI_MAX_POOL per service to 15-20. Increase PostgreSQL max_connections to 300+. Check active connections: docker exec salt-postgres psql -U salt_admin -d salvation -c "SELECT count(*) FROM pg_stat_activity;" |
| 502 Bad Gateway | Target service not started or crashed; Nginx cannot reach upstream | Check docker compose ps for unhealthy containers. Check service logs: docker compose logs salt-api-gateway. Wait for start_period to elapse. Verify port mapping. |
| File upload fails | MinIO bucket salt-files does not exist | Create bucket: docker compose exec salt-minio mc mb local/salt-files --ignore-existing. Verify MinIO health: curl http://localhost:9000/minio/health/live |
| Chat not connecting (WebSocket) | Nginx not proxying WebSocket Upgrade headers | Verify Nginx /ws/ location has proxy_set_header Upgrade $http_upgrade and Connection "upgrade". Check chat service: curl http://localhost:8300/api/v2/chat/health |
| Keycloak startup failure | Keycloak DB not ready or credentials mismatch | Check salt-keycloak-db is healthy. Verify KC_DB_USER and KC_DB_PASSWORD match. Check logs: docker compose logs salt-keycloak |
| Out of memory (OOM) | All 25 containers exceed available RAM | Reduce -Xmx per service. Use docker stats to identify memory-hungry containers. Consider scaling to a larger instance (t3.xlarge recommended). |
| Flyway migration failure | Schema mismatch or corrupted migration history | Check flyway_schema_history table. Never modify existing migrations. Create a new V{N}__fix.sql migration. Run: docker compose logs salt-student-svc \| grep -i flyway |
| Services registering then deregistering | Health check failing after initial registration; Eureka evicts unhealthy instances | Increase start_period. Check /actuator/health for specific dependency failures (DB, Redis, Kafka). Verify environment variables are correct. |
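For the WebSocket issue, the relevant Nginx stanza looks roughly like the following sketch (the path and upstream address are assumptions; Section 5 holds the authoritative configuration):

```nginx
# Illustrative /ws/ location with the Upgrade headers Socket.IO requires.
location /ws/ {
    proxy_pass http://localhost:8300;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 3600s;   # keep long-lived chat sockets open
}
```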

Diagnostic Commands

# Check all container statuses
docker compose ps

# View resource usage (CPU, memory) per container
docker stats --no-stream

# Inspect a specific container
docker inspect salt-student-svc | grep -A 5 "Health"

# View Docker network details
docker network inspect salt-network

# List all Kafka consumer groups
docker compose exec salt-kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 --list

# Check Kafka consumer lag
docker compose exec salt-kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --group notification-svc-group --describe

# PostgreSQL active connections per service
docker exec salt-postgres psql -U salt_admin -d salvation -c \
  "SELECT application_name, count(*) FROM pg_stat_activity GROUP BY application_name ORDER BY count DESC;"

# Force restart a single service
docker compose restart salt-exam-svc

# Rebuild and restart a single service
docker compose up -d --build salt-exam-svc

10. AWS Security Group & Port Exposure

This section guides the AWS Security Group configuration for the EC2 instance. Only 3 ports should be exposed to the internet. All other ports are internal to the Docker salt-network bridge and must NOT be accessible from outside the host.

Critical Security Rule: Never expose database, cache, message broker, or internal service ports to the internet. All inter-service communication happens over the Docker bridge network (salt-network) and does not need host-level port exposure.

10.1 Internet-Facing Ports (Inbound Rules)

These are the only ports that must be open in the AWS Security Group inbound rules.

| Port | Protocol | Source | Service | Purpose |
| --- | --- | --- | --- | --- |
| 443 | TCP | 0.0.0.0/0 | Nginx (HTTPS) | All client traffic — SSL-terminated reverse proxy to all services |
| 80 | TCP | 0.0.0.0/0 | Nginx (HTTP) | HTTP → HTTPS redirect only (301 to port 443) |
| 22 | TCP | Your IP / VPN CIDR | SSH | Server administration — restrict to known IPs only |
SSH Restriction: Never set SSH source to 0.0.0.0/0. Restrict to your office IP, VPN CIDR, or use AWS Systems Manager Session Manager instead of SSH.

10.2 Internal-Only Ports (No Inbound Rules)

These ports are used for container-to-container communication on salt-network. They should NOT have AWS Security Group inbound rules. Even though docker-compose.yml maps them to the host (for development convenience), in production they must be firewalled.

| Port | Container | Service | Why Internal |
| --- | --- | --- | --- |
| 2181 | salt-zookeeper | Zookeeper | Kafka coordination only — no external clients |
| 5432 | salt-postgres | PostgreSQL (salvation) | Database — accessed by Spring Boot services via Docker DNS |
| 5433 | salt-keycloak-db | PostgreSQL (keycloak) | Keycloak database — only Keycloak connects to it |
| 6379 | salt-redis | Redis | Cache — accessed by services via Docker DNS |
| 8080 | salt-api-gateway | API Gateway | Accessed by Nginx upstream, not directly by clients |
| 8081 | salt-web-admin | Next.js Web Admin | Accessed by Nginx upstream, not directly by clients |
| 8101 | salt-student-svc | Student Service | Accessed only by API Gateway via Eureka discovery |
| 8102 | salt-enrollment-svc | Enrollment Service | Accessed only by API Gateway via Eureka discovery |
| 8103 | salt-curriculum-svc | Curriculum Service | Accessed only by API Gateway via Eureka discovery |
| 8104 | salt-exam-svc | Exam Service | Accessed only by API Gateway via Eureka discovery |
| 8105 | salt-assignment-svc | Assignment Service | Accessed only by API Gateway via Eureka discovery |
| 8106 | salt-grading-svc | Grading Service | Accessed only by API Gateway via Eureka discovery |
| 8107 | salt-certificate-svc | Certificate Service | Accessed only by API Gateway via Eureka discovery |
| 8108 | salt-reporting-svc | Reporting Service | Accessed only by API Gateway via Eureka discovery |
| 8109 | salt-file-svc | File Service | Accessed only by API Gateway via Eureka discovery |
| 8110 | salt-config-svc | Config Service | Accessed only by API Gateway via Eureka discovery |
| 8111 | salt-suspension-svc | Suspension Service | Accessed only by API Gateway via Eureka discovery |
| 8112 | salt-chatbot-svc | Chatbot Service | Accessed only by API Gateway via Eureka discovery |
| 8180 | salt-keycloak | Keycloak | Accessed by Nginx upstream at /auth path |
| 8200 | salt-notification-svc | Notification Service | Accessed only by API Gateway via Eureka discovery |
| 8300 | salt-chat-svc | Chat Service | WebSocket via Nginx upstream at /ws path |
| 8761 | salt-eureka | Eureka Server | Service discovery — only consumed by internal services |
| 9000 | salt-minio | MinIO S3 API | File storage — accessed by services via Docker DNS |
| 9001 | salt-minio | MinIO Console | Admin console — access via SSH tunnel if needed |
| 9092 | salt-kafka | Kafka Broker | Event bus — only consumed by internal services |
| 27017 | salt-mongodb | MongoDB | Chat database — only chat-service connects to it |

10.3 Recommended Security Group Configuration

Create an AWS Security Group named salt-microservices-sg with these exact rules:

Inbound Rules

| Type | Protocol | Port Range | Source | Description |
| --- | --- | --- | --- | --- |
| HTTPS | TCP | 443 | 0.0.0.0/0 | Platform HTTPS traffic |
| HTTP | TCP | 80 | 0.0.0.0/0 | HTTP redirect to HTTPS |
| SSH | TCP | 22 | Your-Office-IP/32 | Admin SSH access (restrict!) |

Outbound Rules

| Type | Protocol | Port Range | Destination | Description |
| --- | --- | --- | --- | --- |
| All traffic | All | All | 0.0.0.0/0 | Allow outbound (SMTP, Twilio API, FCM, etc.) |
Production Hardening: For production, also consider:
  • Remove host port mappings from docker-compose.yml for internal services (only expose to salt-network)
  • Use expose: instead of ports: for services that don't need host-level access
  • Set up AWS CloudWatch for monitoring and alerting
  • Enable AWS WAF (Web Application Firewall) for additional protection on port 443
  • Use VPC Flow Logs to monitor network traffic patterns
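The first two hardening points amount to a small per-service change in docker-compose.yml; a sketch:

```yaml
# Before (development) — the port is mapped to the EC2 host:
#   ports:
#     - "8101:8101"
# After (production) — the port is reachable only by other containers
# on salt-network; Nginx and the gateway still reach it via Docker DNS:
services:
  salt-student-svc:
    expose:
      - "8101"
```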

Accessing Internal Services for Debugging

If you need to access internal service ports (e.g., Eureka dashboard, MinIO console, database) from your workstation, use SSH tunneling instead of opening Security Group rules:

# SSH tunnel to Eureka dashboard (access at localhost:8761)
ssh -L 8761:localhost:8761 ubuntu@3.13.144.24

# SSH tunnel to MinIO console (access at localhost:9001)
ssh -L 9001:localhost:9001 ubuntu@3.13.144.24

# SSH tunnel to PostgreSQL (connect at localhost:5432)
ssh -L 5432:localhost:5432 ubuntu@3.13.144.24

# Multiple tunnels in one command
ssh -L 8761:localhost:8761 -L 9001:localhost:9001 -L 5432:localhost:5432 ubuntu@3.13.144.24
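The same tunnels can be kept in ~/.ssh/config so a single `ssh salt-staging` opens all of them (the host alias and key path below are assumptions):

```
Host salt-staging
    HostName 3.13.144.24
    User ubuntu
    IdentityFile ~/.ssh/salt-staging.pem
    LocalForward 8761 localhost:8761
    LocalForward 9001 localhost:9001
    LocalForward 5432 localhost:5432
```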

11. Keycloak Setup & Configuration

Keycloak 24 provides centralized OAuth2/OIDC authentication for all platform clients. This section details the complete setup process, from initial admin creation to user migration from the existing tbl_sys_users table.

Architecture: Keycloak replaces the legacy JWT authentication (JwtTokenUtil + per-service validation). The API Gateway validates tokens via Keycloak, and individual services receive pre-validated claims.

11.1 Initial Admin Setup

Step 1: Verify Keycloak Container is Running

# Check Keycloak is healthy
docker compose ps salt-keycloak

# View Keycloak startup logs
docker compose logs -f salt-keycloak --tail=50

# Expected: "Running the server in production mode" or "Listening on: http://0.0.0.0:8080"

Step 2: Access Admin Console

# Keycloak is accessible via Nginx at:
# https://stagging.saltcollegeandresourcecentre.com/auth

# Initial admin credentials (from .env):
# Username: ${KEYCLOAK_ADMIN}       (default: admin)
# Password: ${KEYCLOAK_ADMIN_PASSWORD}

# IMPORTANT: Change the admin password after first login!

Step 3: Docker Compose Environment Variables

# Keycloak container environment (docker-compose.yml)
salt-keycloak:
  image: quay.io/keycloak/keycloak:24.0
  container_name: salt-keycloak
  environment:
    KC_DB: postgres
    KC_DB_URL: jdbc:postgresql://salt-keycloak-db:5432/keycloak
    KC_DB_USERNAME: ${KC_DB_USER:-keycloak}
    KC_DB_PASSWORD: ${KC_DB_PASSWORD}
    KC_HOSTNAME: stagging.saltcollegeandresourcecentre.com
    KC_HOSTNAME_PATH: /auth
    KC_HTTP_RELATIVE_PATH: /auth
    KC_PROXY_HEADERS: xforwarded
    KC_HTTP_ENABLED: "true"
    KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN:-admin}
    KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    TZ: Africa/Nairobi
  command: start --optimized --import-realm
  depends_on:
    salt-keycloak-db:
      condition: service_healthy
  volumes:
    - ./keycloak/salt-realm.json:/opt/keycloak/data/import/salt-realm.json:ro
  ports:
    - "8180:8080"
  networks:
    - salt-network

.env Variables for Keycloak

| Variable | Example Value | Description |
| --- | --- | --- |
| KC_DB_USER | keycloak | Keycloak database username |
| KC_DB_PASSWORD | <strong-password> | Keycloak database password |
| KC_DB_PORT | 5433 | Host port for Keycloak PostgreSQL |
| KEYCLOAK_ADMIN | admin | Keycloak admin username |
| KEYCLOAK_ADMIN_PASSWORD | <strong-password> | Keycloak admin password |
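When filling in the two password placeholders, random secrets can be generated on the server rather than invented by hand. A hedged example (assumes openssl is installed, which it is by default on Ubuntu 22.04):

```shell
# Generate strong random values for the two password variables and
# append them to .env. Review the file afterwards; never commit it.
KC_DB_PASSWORD=$(openssl rand -base64 24)
KEYCLOAK_ADMIN_PASSWORD=$(openssl rand -base64 24)
printf 'KC_DB_PASSWORD=%s\nKEYCLOAK_ADMIN_PASSWORD=%s\n' \
  "$KC_DB_PASSWORD" "$KEYCLOAK_ADMIN_PASSWORD" >> .env
```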

11.2 Realm Configuration

The salt realm must be configured with settings that match the existing platform behavior.

Realm Settings

| Setting | Value | Reason |
| --- | --- | --- |
| Realm Name | salt | Namespace for all platform authentication |
| Display Name | College of Africa E-Learning | Shown on login pages |
| Enabled | true | Active realm |
| Registration Allowed | false | Students register through the API, not Keycloak directly |
| Forgot Password | true | Enable password reset flow |
| Remember Me | true | Persistent login sessions |
| Login With Email | true | Students can login with email or admission number |
| Duplicate Emails | false | Each email must be unique |
| Internationalization | en, fr, sw, pt | 4 platform languages |
| Default Locale | en | English as default |

Token Settings

| Setting | Value | Reason |
| --- | --- | --- |
| Access Token Lifespan | 15 minutes | Short-lived for security; clients use refresh tokens |
| Refresh Token Lifespan | 24 hours | One full day before re-authentication required |
| Client Session Idle | 30 minutes | Matches existing idle timeout in tbl_sys_settings |
| Client Session Max | 24 hours | Maximum session duration |
| SSO Session Idle | 30 minutes | SSO idle timeout |
| SSO Session Max | 24 hours | Maximum SSO session |

Password Policy

# Keycloak password policy (set in Realm → Authentication → Password Policy):
# - hashAlgorithm(bcrypt)    ← CRITICAL: must match existing $2a$10$ hashes
# - length(6)                ← minimum 6 characters (matches current platform policy)
# - notUsername               ← password cannot equal username
BCrypt Compatibility: The existing passwords in tbl_sys_users.password use BCrypt with cost factor 10 ($2a$10$...). Keycloak 24 supports BCrypt natively. When migrating users, their existing password hashes can be imported directly — no re-hashing needed.

11.3 Client Applications

Create 4 OAuth2 clients in the salt realm, one for each application that authenticates against Keycloak.

| Client ID | Client Type | Protocol | Root URL | Valid Redirect URIs | Web Origins |
| --- | --- | --- | --- | --- | --- |
| salt-web-admin | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/SaltElearning | /SaltElearning/* | + |
| salt-mobile | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/student | /student/*, com.salt.mobile:/callback | + |
| salt-exec-dashboard | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/dashboard | /dashboard/* | + |
| salt-gateway | Confidential | openid-connect | (none) | (none) | (none) |

Client Configuration Details

# salt-web-admin (Public client — Next.js SPA)
Client ID:              salt-web-admin
Access Type:            public
Standard Flow:          enabled
Direct Access Grants:   enabled  (for admin login forms)
PKCE:                   S256 (recommended for public clients)

# salt-mobile (Public client — Flutter native app)
Client ID:              salt-mobile
Access Type:            public
Standard Flow:          enabled
Direct Access Grants:   enabled  (for mobile login)
PKCE:                   S256
# Note: "com.salt.mobile:/callback" enables deep-linking for native app

# salt-gateway (Confidential client — API Gateway)
Client ID:              salt-gateway
Access Type:            confidential
Service Accounts:       enabled  (for backend-to-backend auth)
Standard Flow:          disabled (gateway doesn't serve login forms)
Client Secret:          ${KC_GATEWAY_SECRET}  (store in .env)

API Gateway Integration

# API Gateway application.yml — Spring Security OAuth2 Resource Server
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: http://salt-keycloak:8080/auth/realms/salt
          jwk-set-uri: http://salt-keycloak:8080/auth/realms/salt/protocol/openid-connect/certs

11.4 Role Mapping & User Federation

The platform uses 4 roles with numeric access levels. These must be mapped to Keycloak realm roles.

Realm Roles

| Keycloak Role | Access Level | Role Name | Description |
| --- | --- | --- | --- |
| salt-admin | 5 | Admin | Full platform management, user approvals, system settings |
| salt-tutor | 8 | Tutor | Subject teaching, exam creation, assignment marking |
| salt-eto | 9 | ETO | Education Training Officer — territory-level oversight |
| salt-student | 12 | Student | Learning, exams, assignments, chatbot |

Custom Token Claims (Protocol Mappers)

Add these protocol mappers to include platform-specific claims in the JWT access token:

| Mapper Name | Mapper Type | User Attribute | Token Claim Name | Claim JSON Type |
| --- | --- | --- | --- | --- |
| access_level | User Attribute | access_level | access_level | int |
| territory_id | User Attribute | territory_id | territory_id | long |
| admission_no | User Attribute | admission_no | admission_no | String |
| student_id | User Attribute | student_id | student_id | long |
| full_name | User Attribute | full_name | full_name | String |
Token Claim Example: After configuration, a decoded JWT access token will contain:
{
  "sub": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "realm_access": { "roles": ["salt-student"] },
  "access_level": 12,
  "territory_id": 5,
  "admission_no": "2024049",
  "student_id": 1234,
  "full_name": "John Doe",
  "preferred_username": "2024049",
  "email": "john.doe@example.com"
}
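To inspect these claims from the command line, the payload segment of a token can be base64url-decoded. A hedged sketch using only coreutils (decode_jwt_payload is an illustrative helper, not a platform script; jq is optional for pretty-printing):

```shell
#!/usr/bin/env bash
# decode_jwt_payload <token>: print the JSON payload segment of a JWT.
# The payload is the second dot-separated field, base64url-encoded
# without padding, so padding is restored before decoding.
decode_jwt_payload() {
  local seg
  seg=$(printf '%s' "$1" | cut -d. -f2)
  case $(( ${#seg} % 4 )) in
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | tr '_-' '/+' | base64 -d
  echo
}

# Usage against a real token (ACCESS_TOKEN is a placeholder variable):
# decode_jwt_payload "$ACCESS_TOKEN" | jq .access_level
```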

11.5 User Migration Strategy

Existing users in tbl_sys_users must be migrated to Keycloak while preserving their BCrypt password hashes. Two approaches are supported:

Option A: Bulk Import (Recommended for Initial Deployment)

# 1. Export users from PostgreSQL in Keycloak partial-import format
#    (the partialImport endpoint expects a top-level "users" array)
docker compose exec salt-postgres psql -U salt_admin -d salvation -c "
  SELECT json_build_object('users', json_agg(json_build_object(
    'username', u.username,
    'email', u.email_address,
    'firstName', u.f_name,
    'lastName', u.l_name,
    'enabled', u.enabled,
    'credentials', json_build_array(json_build_object(
      'type', 'password',
      'hashedSaltedValue', u.password,
      'algorithm', 'bcrypt'
    )),
    'attributes', json_build_object(
      'access_level', ARRAY[u.level_id::text],
      'territory_id', ARRAY[u.user_territory_id::text],
      'admission_no', ARRAY[COALESCE(u.admissionno, '')],
      'student_id', ARRAY[COALESCE(s.id::text, '')]
    ),
    'realmRoles', ARRAY[
      CASE u.level_id
        WHEN 5 THEN 'salt-admin'
        WHEN 8 THEN 'salt-tutor'
        WHEN 9 THEN 'salt-eto'
        WHEN 12 THEN 'salt-student'
      END
    ]
  )))
  FROM tbl_sys_users u
  LEFT JOIN tbl_student s ON s.user_id = u.id
  WHERE u.deletion_date IS NULL
" -t > /tmp/keycloak-users.json

# 2. Import into Keycloak using the partial import API
#    (get_admin_token is a placeholder for a helper that obtains an admin
#     access token from the master realm, e.g. via the admin-cli client)
curl -X POST \
  http://localhost:8180/auth/admin/realms/salt/partialImport \
  -H "Authorization: Bearer $(get_admin_token)" \
  -H "Content-Type: application/json" \
  -d @/tmp/keycloak-users.json

Option B: Federated User Storage (Zero-Downtime Migration)

# Use Keycloak's User Storage SPI to read users from tbl_sys_users at login time.
# This allows gradual migration — users are synced on first login.
# Note: Keycloak does not bundle a JDBC federation provider; a custom User
# Storage SPI JAR must be deployed to /opt/keycloak/providers first.
#
# Configuration in Keycloak Admin Console:
# 1. Go to User Federation → Add provider → (the deployed JDBC provider)
# 2. JDBC URL: jdbc:postgresql://salt-postgres:5432/salvation
# 3. Username/Password: same as SPRING_DATASOURCE credentials
# 4. User query: SELECT username, password, email_address, f_name, l_name, enabled
#                FROM tbl_sys_users WHERE username = ?
# 5. Import users: ON  (copy to Keycloak DB on first login)
# 6. Sync mode: IMPORT  (one-way from PostgreSQL to Keycloak)
Migration Note: During migration, both authentication systems (legacy JWT and Keycloak) should run in parallel. The API Gateway can be configured to accept tokens from either issuer. Once all users have been migrated, disable the legacy JWT path.

12. MongoDB Setup & Configuration

MongoDB 7 stores real-time chat messages and conversation data for the chat-service (Node.js + Socket.IO). It replaces the existing PostgreSQL tbl_chat table for new messages while maintaining backward compatibility with historical data.

12.1 Database & Collections

Connection Details

| Setting | Value |
| --- | --- |
| Container | salt-mongodb |
| Docker DNS | salt-mongodb:27017 |
| Database Name | salt_chat |
| Auth Database | admin |
| Username | ${MONGO_USER} (default: salt) |
| Password | ${MONGO_PASSWORD} |
| Connection String | mongodb://${MONGO_USER}:${MONGO_PASSWORD}@salt-mongodb:27017/salt_chat?authSource=admin |

Collections

| Collection | Purpose | Estimated Size |
| --- | --- | --- |
| messages | Individual chat messages (text, file refs, reactions) | ~500K documents/year |
| conversations | Conversation metadata (participants, type, last activity) | ~10K documents |
| read_receipts | Per-user read position in each conversation | ~50K documents |
| online_status | User online/offline/last-seen tracking (TTL-indexed) | ~5K documents |

Document Schemas

// messages collection
{
  _id: ObjectId,
  conversationId: ObjectId,     // ref → conversations._id
  senderId: Long,               // ref → tbl_sys_users.id
  senderName: String,           // denormalized for display
  content: String,              // message text
  type: String,                 // "text" | "file" | "image" | "system"
  fileUrl: String,              // MinIO file path (if type=file/image)
  replyTo: ObjectId,            // ref → messages._id (threaded replies)
  readBy: [Long],               // array of user IDs who read this
  createdAt: ISODate,           // message timestamp (EAT)
  updatedAt: ISODate            // edit timestamp
}

// conversations collection
{
  _id: ObjectId,
  type: String,                 // "direct" | "group" | "subject" | "territory"
  name: String,                 // group/subject name (null for direct)
  participants: [Long],         // array of tbl_sys_users.id
  subjectId: Long,              // ref → tbl_subjects.id (for subject chats)
  territoryId: Long,            // ref → tbl_territory.id (for territory chats)
  lastMessage: {                // denormalized latest message
    content: String,
    senderId: Long,
    createdAt: ISODate
  },
  createdAt: ISODate,
  updatedAt: ISODate
}

// read_receipts collection
{
  _id: ObjectId,
  conversationId: ObjectId,
  userId: Long,
  lastReadMessageId: ObjectId,
  lastReadAt: ISODate
}

// online_status collection (TTL-indexed)
{
  _id: Long,                    // tbl_sys_users.id
  status: String,               // "online" | "away" | "offline"
  lastSeen: ISODate,
  socketId: String,             // Socket.IO connection ID
  updatedAt: ISODate            // TTL trigger (expires after 5 min idle)
}

12.2 Indexes

Create these indexes after the database is initialized to ensure query performance.

# Connect to MongoDB and create indexes
docker compose exec salt-mongodb mongosh \
  --username ${MONGO_USER} --password ${MONGO_PASSWORD} \
  --authenticationDatabase admin salt_chat --eval '

// messages indexes
db.messages.createIndex({ conversationId: 1, createdAt: -1 });
db.messages.createIndex({ senderId: 1, createdAt: -1 });
db.messages.createIndex({ conversationId: 1, senderId: 1 });
db.messages.createIndex({ createdAt: 1 }, { expireAfterSeconds: 31536000 }); // 1 year TTL (optional)

// conversations indexes
db.conversations.createIndex({ participants: 1 });
db.conversations.createIndex({ type: 1, subjectId: 1 });
db.conversations.createIndex({ type: 1, territoryId: 1 });
db.conversations.createIndex({ "lastMessage.createdAt": -1 });
db.conversations.createIndex(
  { type: 1, participants: 1 },
  { unique: true, partialFilterExpression: { type: "direct" } }
);

// read_receipts indexes
db.read_receipts.createIndex({ conversationId: 1, userId: 1 }, { unique: true });

// online_status indexes (TTL: auto-delete after 5 minutes of no update)
db.online_status.createIndex({ updatedAt: 1 }, { expireAfterSeconds: 300 });

print("All indexes created successfully");
'

Index Rationale

| Index | Collection | Query Pattern |
| --- | --- | --- |
| { conversationId: 1, createdAt: -1 } | messages | Load messages in a conversation, newest first (main chat view) |
| { senderId: 1, createdAt: -1 } | messages | Find all messages by a user (admin moderation) |
| { participants: 1 } | conversations | Find all conversations a user is in (conversation list) |
| { type: 1, subjectId: 1 } | conversations | Find the chat room for a specific subject |
| { conversationId: 1, userId: 1 } | read_receipts | Check read position for unread badge count |
| TTL on updatedAt | online_status | Auto-cleanup stale online entries (disconnected without logout) |

12.3 Chat Data Migration

Existing chat messages in PostgreSQL tbl_chat can be migrated to MongoDB for a unified chat experience.

# Step 1: Export existing chats from PostgreSQL
docker compose exec salt-postgres psql -U salt_admin -d salvation -c "
  COPY (
    SELECT json_build_object(
      'senderId', c.sender_id,
      'senderName', CONCAT(u.f_name, ' ', u.l_name),
      'content', c.message,
      'type', 'text',
      'createdAt', json_build_object('\$date', c.created_date),
      'subjectId', c.subject_id,
      'territoryId', u.user_territory_id
    )
    FROM tbl_chat c
    JOIN tbl_sys_users u ON c.sender_id = u.id
    WHERE c.deletion_date IS NULL
    ORDER BY c.created_date
  ) TO '/tmp/chat_export.jsonl'
" 2>/dev/null

# Step 2: Copy to MongoDB container
docker cp salt-postgres:/tmp/chat_export.jsonl /tmp/chat_export.jsonl
docker cp /tmp/chat_export.jsonl salt-mongodb:/tmp/chat_export.jsonl

# Step 3: Import into MongoDB
docker compose exec salt-mongodb mongoimport \
  --uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
  --collection messages \
  --file /tmp/chat_export.jsonl

# Step 4: Create conversation documents from migrated messages
docker compose exec salt-mongodb mongosh \
  --username ${MONGO_USER} --password ${MONGO_PASSWORD} \
  --authenticationDatabase admin salt_chat --eval '
  // Group messages by subject and create conversation documents
  db.messages.aggregate([
    { $group: {
      _id: "$subjectId",
      participants: { $addToSet: "$senderId" },
      lastMessage: { $last: { content: "$content", senderId: "$senderId", createdAt: "$createdAt" }},
      firstDate: { $min: "$createdAt" }
    }},
    { $project: {
      type: "subject",
      subjectId: "$_id",
      participants: 1,
      lastMessage: 1,
      createdAt: "$firstDate",
      updatedAt: new Date()
    }}
  ]).forEach(doc => {
    var conv = db.conversations.insertOne(doc);
    db.messages.updateMany(
      { subjectId: doc.subjectId },
      { $set: { conversationId: conv.insertedId }}
    );
  });
  print("Migration complete: conversations created and messages linked");
'
Post-Migration: After migration, the tbl_chat table in PostgreSQL should remain intact (read-only) for audit purposes. New messages are written exclusively to MongoDB via the chat-service.

12.4 Backup & Maintenance

Backup Commands

# Full backup of salt_chat database
docker compose exec salt-mongodb mongodump \
  --uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
  --out /tmp/mongo_backup_$(date +%Y%m%d)

# Copy backup to host
docker cp salt-mongodb:/tmp/mongo_backup_$(date +%Y%m%d) ./backups/mongodb/

# Restore from backup
docker compose exec salt-mongodb mongorestore \
  --uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/?authSource=admin" \
  /tmp/mongo_backup_20260218/
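The maintenance table below references deploy/backup-mongodb.sh via cron. The repository's actual script is not reproduced here, but a minimal sketch could combine the dump above with a 30-day retention sweep (the script shape, function names, and paths are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical deploy/backup-mongodb.sh sketch: nightly dump + retention.
set -euo pipefail

BACKUP_ROOT=${BACKUP_ROOT:-/opt/salt-microservices/backups/mongodb}

dump_salt_chat() {
  local stamp
  stamp=$(date +%Y%m%d)
  docker compose exec -T salt-mongodb mongodump \
    --uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
    --out "/tmp/mongo_backup_${stamp}"
  docker cp "salt-mongodb:/tmp/mongo_backup_${stamp}" "${BACKUP_ROOT}/"
}

prune_old_backups() {
  # Remove backup directories older than 30 days.
  find "${BACKUP_ROOT}" -maxdepth 1 -type d -name 'mongo_backup_*' -mtime +30 \
    -exec rm -rf {} +
}

# Intended cron usage (see the maintenance table):
#   mkdir -p "${BACKUP_ROOT}" && dump_salt_chat && prune_old_backups
```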

Monitoring Commands

# Check database stats
docker compose exec salt-mongodb mongosh \
  --username ${MONGO_USER} --password ${MONGO_PASSWORD} \
  --authenticationDatabase admin salt_chat --eval '
  printjson(db.stats());
  print("Messages: " + db.messages.countDocuments());
  print("Conversations: " + db.conversations.countDocuments());
  print("Online users: " + db.online_status.countDocuments({ status: "online" }));
'

# Check index usage
docker compose exec salt-mongodb mongosh \
  --username ${MONGO_USER} --password ${MONGO_PASSWORD} \
  --authenticationDatabase admin salt_chat --eval '
  db.messages.aggregate([{ $indexStats: {} }]).forEach(printjson);
'

Scheduled Maintenance

| Task | Frequency | Command |
|---|---|---|
| Backup | Daily (cron) | `0 2 * * * bash /opt/salt-microservices/deploy/backup-mongodb.sh` |
| Compact | Weekly | `db.runCommand({ compact: "messages" })` |
| Index rebuild | Monthly | `db.messages.reIndex()` (standalone mongod only) |
| Old backup cleanup | Weekly | `find /opt/salt-microservices/backups/mongodb -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} +` |

Note: the cleanup command uses `-exec rm -rf` rather than `-delete`, because mongodump produces directories and `find -delete` cannot remove non-empty directories.
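The `backup-mongodb.sh` script named in the cron entry is not shown in this guide. A minimal sketch, assuming the container name, credentials, and backup path used elsewhere in this section:

```shell
#!/usr/bin/env bash
# Sketch of deploy/backup-mongodb.sh (the script named in the cron entry above).
# Assumes MONGO_USER and MONGO_PASSWORD are exported, e.g. from .env.staging.
set -euo pipefail

BACKUP_ROOT=/opt/salt-microservices/backups/mongodb
STAMP=$(date +%Y%m%d)

# Dump salt_chat inside the container, then copy the dump to the host
docker compose exec -T salt-mongodb mongodump \
  --uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
  --out "/tmp/mongo_backup_${STAMP}"
mkdir -p "${BACKUP_ROOT}"
docker cp "salt-mongodb:/tmp/mongo_backup_${STAMP}" "${BACKUP_ROOT}/"

# Remove the in-container copy and prune host backups older than 30 days
# (rm -rf via -exec, because dumps are directories and -delete would fail)
docker compose exec -T salt-mongodb rm -rf "/tmp/mongo_backup_${STAMP}"
find "${BACKUP_ROOT}" -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} +
```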

13. Quick Reference

Complete Port Reference (All 25 Containers)

| Port | Container | Service | Technology |
|---|---|---|---|
| 2181 | salt-zookeeper | Zookeeper | Confluent CP |
| 5432 | salt-postgres | Main Database | PostgreSQL 17 |
| 5433 | salt-keycloak-db | Keycloak Database | PostgreSQL 17 |
| 6379 | salt-redis | Cache & Sessions | Redis 7 Alpine |
| 8080 | salt-api-gateway | API Gateway | Spring Cloud Gateway |
| 8081 | salt-web-admin | Web Admin | Next.js |
| 8101 | salt-student-svc | Student Management | Spring Boot |
| 8102 | salt-enrollment-svc | Enrollment & Approvals | Spring Boot |
| 8103 | salt-curriculum-svc | Curriculum & Subjects | Spring Boot |
| 8104 | salt-exam-svc | Examinations | Spring Boot |
| 8105 | salt-assignment-svc | Assignments & Tutor | Spring Boot |
| 8106 | salt-grading-svc | Grading & Billing | Spring Boot |
| 8107 | salt-certificate-svc | Certificates & Transcripts | Spring Boot |
| 8108 | salt-reporting-svc | Reporting & Analytics | Spring Boot |
| 8109 | salt-file-svc | File Storage | Spring Boot |
| 8110 | salt-config-svc | System Configuration | Spring Boot |
| 8111 | salt-suspension-svc | Suspension & Intake | Spring Boot |
| 8112 | salt-chatbot-svc | Chatbot & FAQ | Spring Boot |
| 8180 | salt-keycloak | Identity Provider | Keycloak 24 |
| 8200 | salt-notification-svc | Notifications | FastAPI (Python) |
| 8300 | salt-chat-svc | Real-time Chat | Node.js + Socket.IO |
| 8761 | salt-eureka | Service Discovery | Eureka Server |
| 9000 | salt-minio | MinIO S3 API | MinIO |
| 9001 | salt-minio | MinIO Console | MinIO |
| 9092 | salt-kafka | Message Broker | Kafka |
| 27017 | salt-mongodb | Chat Database | MongoDB 7 |

Useful Commands Cheat Sheet

| Action | Command |
|---|---|
| Start infrastructure only | `bash deploy/docker-dev.sh start-infra` |
| Start platform services | `bash deploy/docker-dev.sh start-platform` |
| Start business services | `bash deploy/docker-dev.sh start-services` |
| Start everything | `bash deploy/docker-dev.sh start-all` |
| Stop everything | `docker compose down` |
| Stop and remove volumes | `docker compose down -v` |
| View container status | `docker compose ps` |
| View resource usage | `docker stats --no-stream` |
| Tail service logs | `docker compose logs -f --tail=100 <service-name>` |
| Restart a single service | `docker compose restart <service-name>` |
| Rebuild and restart | `docker compose up -d --build <service-name>` |
| Open database shell | `bash deploy/docker-dev.sh db-shell` |
| Open Redis CLI | `docker compose exec salt-redis redis-cli` |
| List Kafka topics | `docker compose exec salt-kafka kafka-topics --list --bootstrap-server localhost:9092` |
| Check Eureka services | `curl -s http://localhost:8761/eureka/apps` |
| Backup database | `bash deploy/backup-db.sh --staging` |
| Full reset (DANGER) | `bash deploy/docker-dev.sh reset` |
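For a quick post-deployment smoke test, the Spring Boot business services can be probed in one loop. This is a convenience sketch, not part of docker-dev.sh; it assumes each service exposes the standard Spring Boot Actuator `/actuator/health` endpoint on the host port listed in the port reference above.

```shell
# Probe each Spring Boot business service's health endpoint (ports 8101-8112
# per the port reference). ASSUMES Actuator is enabled on every service.
for port in 8101 8102 8103 8104 8105 8106 8107 8108 8109 8110 8111 8112; do
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    --max-time 3 "http://localhost:${port}/actuator/health" || true)
  printf 'port %s -> HTTP %s\n' "${port}" "${status}"
done
```

A healthy service returns HTTP 200; `000` means the port did not answer at all.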

Related Documentation

| Document | ID | Description |
|---|---|---|
| Microservice Architecture | SALT-MSA-2026-001 | Service catalog, Kafka topics, database ownership, decomposition roadmap |
| Technical Documentation | | Monolith backend reference, entity descriptions, API endpoints |
| Monolith Deployment Guide | | Monolith Docker Compose, staging/production profiles |

Document ID: SALT-MSD-2026-001 | Version 1.0 | 18th February 2026

© 2021-2026 College E-Learning — Developed by Sumba Group Limited for Salvation Army SALT College of Africa