Docker Container Orchestration & Nginx Configuration
| Document ID: | SALT-MSD-2026-001 |
| Version: | 1.0 |
| Date: | 18th February 2026 |
| Prepared By: | Sumba Group Limited |
| Developed By: | Sumba Group Limited |
| Classification: | Internal — Confidential |
This document provides comprehensive instructions for deploying the E-Learning Platform microservice architecture. The platform runs as 25 Docker containers orchestrated via Docker Compose on a single EC2 instance (Ubuntu 22.04), with Nginx as the SSL-terminating reverse proxy.
| Attribute | Detail |
|---|---|
| Total Containers | 25 (7 infrastructure + 3 platform + 14 business + 1 frontend) |
| Docker Network | salt-network (bridge mode) |
| Timezone | Africa/Nairobi (East Africa Time, UTC+3) — set on ALL containers |
| Reverse Proxy | Nginx with Let's Encrypt SSL termination |
| Staging URL | https://stagging.saltcollegeandresourcecentre.com |
| Staging IP | 3.13.144.24 |
| Production URL | https://app.saltcollegeandresourcecentre.com |
| Production IP | 3.13.155.123 |
| Repository | https://github.com/SGL2024/Salvation-Army-Backend-Microservice.git |
The existing salvation database (PostgreSQL 17) must remain intact. All Spring Boot services connect to the same shared database; no table dropping, renaming, or schema-breaking changes are permitted.

Ensure the following software and resources are available on the deployment host before proceeding.
| Prerequisite | Minimum Version | Purpose | Required On |
|---|---|---|---|
| Docker | 24+ | Container runtime for all 25 services | EC2 host |
| Docker Compose | V2 (plugin) | Multi-container orchestration (docker compose syntax) | EC2 host |
| Git | 2.30+ | Clone the microservice repository | EC2 host |
| Java 21 (JDK) | 21 LTS | Build Spring Boot services (not needed at runtime — Docker images include JRE) | Build machine only |
| Node.js | 20 LTS | Build chat-service and web-admin (not needed at runtime) | Build machine only |
| Python | 3.12 | Build notification-service (not needed at runtime) | Build machine only |
| OpenSSL | 3.0+ | SSL certificate management (Let's Encrypt) | EC2 host |
| Nginx | 1.24+ | Reverse proxy, SSL termination, route management (already installed on EC2) | EC2 host |
| RAM | 8 GB+ | 25 containers require significant memory; 16 GB recommended for production | EC2 host |
| Disk Space | 50 GB+ | Docker images, database volumes, MinIO file storage | EC2 host |
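The prerequisite checks above can be scripted. This is a minimal sketch for the EC2 host (the tool list is an assumption; extend it with Java, Node.js, and Python on the build machine):

```shell
# check_prereqs — report which prerequisite tools are present on this host
check_prereqs() {
  echo "Prerequisite check:"
  for tool in docker git openssl nginx; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "  $tool: found"
    else
      echo "  $tool: MISSING"
    fi
  done
  # Docker Compose V2 ships as a docker CLI plugin, not a separate binary
  if docker compose version >/dev/null 2>&1; then
    echo "  docker compose: found (V2 plugin)"
  else
    echo "  docker compose: MISSING (install the compose plugin)"
  fi
}
check_prereqs
```

Run it before the first deployment; any `MISSING` line must be resolved first.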
Recommended instance size: t3.xlarge (4 vCPU, 16 GB RAM) or equivalent for staging/production. A t3.large (2 vCPU, 8 GB RAM) works for development but may experience memory pressure under full load.

The tables below list all 25 containers with their images, port mappings, dependencies, health checks, and purpose.
| # | Container Name | Image | Port(s) | Depends On | Health Check | Purpose |
|---|---|---|---|---|---|---|
| 1 | salt-postgres | postgres:17 | 5432 | — | pg_isready | Main salvation database |
| 2 | salt-keycloak-db | postgres:17 | 5433 | — | pg_isready | Keycloak database (separate) |
| 3 | salt-redis | redis:7-alpine | 6379 | — | redis-cli ping | Caching & sessions |
| 4 | salt-mongodb | mongo:7 | 27017 | — | mongosh --eval | Chat message storage |
| 5 | salt-minio | minio/minio:latest | 9000, 9001 | — | curl /minio/health/live | S3-compatible file storage |
| 6 | salt-zookeeper | confluentinc/cp-zookeeper:7.6.0 | 2181 | — | echo ruok | Kafka coordination |
| 7 | salt-kafka | confluentinc/cp-kafka:7.6.0 | 9092 | zookeeper | kafka-broker-api-versions | Message broker |
| # | Container Name | Image | Port(s) | Depends On | Health Check | Purpose |
|---|---|---|---|---|---|---|
| 8 | salt-keycloak | quay.io/keycloak/keycloak:24.0 | 8180 | keycloak-db | curl /health | OAuth2/OIDC identity provider |
| 9 | salt-eureka | salt-eureka:latest | 8761 | — | curl /actuator/health | Service discovery |
| 10 | salt-api-gateway | salt-api-gateway:latest | 8080 | eureka, keycloak | curl /actuator/health | API routing, JWT validation |
| # | Container Name | Image | Port | Depends On | Health Check | Purpose |
|---|---|---|---|---|---|---|
| 11 | salt-student-svc | salt-student-service:latest | 8101 | postgres, eureka, kafka | curl /actuator/health | Student management |
| 12 | salt-enrollment-svc | salt-enrollment-service:latest | 8102 | postgres, eureka, kafka | curl /actuator/health | Enrollment & approvals |
| 13 | salt-curriculum-svc | salt-curriculum-service:latest | 8103 | postgres, eureka, kafka | curl /actuator/health | Subjects & learning levels |
| 14 | salt-exam-svc | salt-exam-service:latest | 8104 | postgres, eureka, kafka | curl /actuator/health | Exam papers & marking |
| 15 | salt-assignment-svc | salt-assignment-service:latest | 8105 | postgres, eureka, kafka, minio | curl /actuator/health | Assignments & materials |
| 16 | salt-grading-svc | salt-grading-service:latest | 8106 | postgres, eureka, kafka | curl /actuator/health | Grading & billing |
| 17 | salt-certificate-svc | salt-certificate-service:latest | 8107 | postgres, eureka, kafka, minio | curl /actuator/health | PDF certificate generation |
| 18 | salt-reporting-svc | salt-reporting-service:latest | 8108 | postgres, eureka | curl /actuator/health | Analytics & reports |
| 19 | salt-file-svc | salt-file-service:latest | 8109 | postgres, eureka, minio | curl /actuator/health | File upload/download |
| 20 | salt-config-svc | salt-config-service:latest | 8110 | postgres, eureka | curl /actuator/health | System settings |
| 21 | salt-suspension-svc | salt-suspension-service:latest | 8111 | postgres, eureka, kafka | curl /actuator/health | Suspension & intake |
| 22 | salt-chatbot-svc | salt-chatbot-service:latest | 8112 | postgres, eureka | curl /actuator/health | FAQ chatbot |
| # | Container Name | Image | Port | Depends On | Health Check | Purpose |
|---|---|---|---|---|---|---|
| 23 | salt-notification-svc | salt-notification-svc:latest | 8200 | postgres, kafka, redis | curl /health | Email/SMS/FCM notifications |
| 24 | salt-chat-svc | salt-chat-svc:latest | 8300 | mongodb, kafka, redis, eureka | curl /api/v2/chat/health | Real-time chat (WebSocket) |
| # | Container Name | Image | Port | Depends On | Health Check | Purpose |
|---|---|---|---|---|---|---|
| 25 | salt-web-admin | salt-web-admin:latest | 8081 | api-gateway | curl / | Next.js web admin panel |
All 25 containers share a single bridge network. Seven named volumes provide data persistence across container restarts.
```yaml
# docker-compose.yml — Network & Volume definitions
networks:
  salt-network:
    driver: bridge
    name: salt-network

volumes:
  postgres_data:
    driver: local
  keycloak_db_data:
    driver: local
  redis_data:
    driver: local
  mongodb_data:
    driver: local
  minio_data:
    driver: local
  kafka_data:
    driver: local
  zookeeper_data:
    driver: local
```
Infrastructure containers start first and must pass health checks before platform and business services launch.
```yaml
# PostgreSQL 17 — Main salvation database
salt-postgres:
  image: postgres:17
  container_name: salt-postgres
  environment:
    POSTGRES_DB: salvation
    POSTGRES_USER: ${POSTGRES_USER}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${POSTGRES_PORT:-5432}:5432"
  volumes:
    - postgres_data:/var/lib/postgresql/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d salvation"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Keycloak Database (separate from salvation)
salt-keycloak-db:
  image: postgres:17
  container_name: salt-keycloak-db
  environment:
    POSTGRES_DB: keycloak
    POSTGRES_USER: ${KC_DB_USER:-keycloak}
    POSTGRES_PASSWORD: ${KC_DB_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${KC_DB_PORT:-5433}:5432"
  volumes:
    - keycloak_db_data:/var/lib/postgresql/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${KC_DB_USER:-keycloak}"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Redis 7 — Caching & sessions
salt-redis:
  image: redis:7-alpine
  container_name: salt-redis
  environment:
    TZ: Africa/Nairobi
  ports:
    - "${REDIS_PORT:-6379}:6379"
  volumes:
    - redis_data:/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# MongoDB 7 — Chat message storage
salt-mongodb:
  image: mongo:7
  container_name: salt-mongodb
  environment:
    MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER:-salt}
    MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    TZ: Africa/Nairobi
  ports:
    - "${MONGO_PORT:-27017}:27017"
  volumes:
    - mongodb_data:/data/db
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# MinIO — S3-compatible file storage
salt-minio:
  image: minio/minio:latest
  container_name: salt-minio
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
    MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    TZ: Africa/Nairobi
  ports:
    - "${MINIO_API_PORT:-9000}:9000"
    - "${MINIO_CONSOLE_PORT:-9001}:9001"
  volumes:
    - minio_data:/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 15s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Zookeeper — Kafka coordination
salt-zookeeper:
  image: confluentinc/cp-zookeeper:7.6.0
  container_name: salt-zookeeper
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
    TZ: Africa/Nairobi
  ports:
    - "${ZK_PORT:-2181}:2181"
  volumes:
    - zookeeper_data:/var/lib/zookeeper/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "echo ruok | nc localhost 2181 | grep imok"]
    interval: 10s
    timeout: 5s
    retries: 5
  restart: unless-stopped

# Kafka — Message broker
salt-kafka:
  image: confluentinc/cp-kafka:7.6.0
  container_name: salt-kafka
  depends_on:
    salt-zookeeper:
      condition: service_healthy
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: salt-zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://salt-kafka:9092
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
    TZ: Africa/Nairobi
  ports:
    - "${KAFKA_PORT:-9092}:9092"
  volumes:
    - kafka_data:/var/lib/kafka/data
  networks:
    - salt-network
  healthcheck:
    test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
    interval: 15s
    timeout: 10s
    retries: 10
  restart: unless-stopped
```
All 12 Spring Boot business services follow this template pattern. The service name, port, and specific dependencies vary.
```yaml
# Template: Spring Boot microservice
salt-student-svc:
  image: salt-student-service:latest
  container_name: salt-student-svc
  depends_on:
    salt-postgres:
      condition: service_healthy
    salt-eureka:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
  environment:
    SPRING_PROFILES_ACTIVE: staging
    SERVER_PORT: 8101
    # Database
    SPRING_DATASOURCE_URL: jdbc:postgresql://salt-postgres:5432/salvation
    SPRING_DATASOURCE_USERNAME: ${POSTGRES_USER}
    SPRING_DATASOURCE_PASSWORD: ${POSTGRES_PASSWORD}
    # Eureka
    EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE: http://salt-eureka:8761/eureka/
    EUREKA_INSTANCE_PREFER_IP_ADDRESS: "true"
    # Kafka
    SPRING_KAFKA_BOOTSTRAP_SERVERS: salt-kafka:9092
    # Redis
    SPRING_REDIS_HOST: salt-redis
    SPRING_REDIS_PORT: 6379
    # HikariCP (per-service pool)
    SPRING_DATASOURCE_HIKARI_MAXIMUM_POOL_SIZE: 20
    SPRING_DATASOURCE_HIKARI_MINIMUM_IDLE: 3
    # Timezone
    TZ: Africa/Nairobi
    JAVA_OPTS: "-Duser.timezone=Africa/Nairobi -Xms256m -Xmx512m"
  ports:
    - "8101:8101"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8101/actuator/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 60s
  restart: unless-stopped
```
Each business service is capped at -Xms256m -Xmx512m; for 12 services this requires roughly 6 GB minimum. Adjust based on available RAM. The gateway and Eureka server use -Xms128m -Xmx256m.

Services that require MinIO add these additional environment variables:
```yaml
# MinIO (for file-dependent services: assignment, certificate, file-storage)
MINIO_ENDPOINT: http://salt-minio:9000
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
MINIO_BUCKET: salt-files
```
```yaml
# FastAPI Notification Service (Python 3.12)
salt-notification-svc:
  image: salt-notification-svc:latest
  container_name: salt-notification-svc
  depends_on:
    salt-postgres:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
    salt-redis:
      condition: service_healthy
  environment:
    DATABASE_URL: postgresql+asyncpg://${POSTGRES_USER}:${POSTGRES_PASSWORD}@salt-postgres:5432/salvation
    KAFKA_BOOTSTRAP_SERVERS: salt-kafka:9092
    REDIS_URL: redis://salt-redis:6379/0
    NOTIFICATION_CHANNEL: ${NOTIFICATION_CHANNEL:-EMAIL}
    SMTP_HOST: ${SMTP_HOST:-smtp.office365.com}
    SMTP_PORT: ${SMTP_PORT:-587}
    SMTP_USER: ${SMTP_USER}
    SMTP_PASSWORD: ${SMTP_PASSWORD}
    TWILIO_ACCOUNT_SID: ${TWILIO_ACCOUNT_SID}
    TWILIO_AUTH_TOKEN: ${TWILIO_AUTH_TOKEN}
    TWILIO_PHONE_NUMBER: ${TWILIO_PHONE_NUMBER}
    FCM_SERVER_KEY: ${FCM_SERVER_KEY}
    TZ: Africa/Nairobi
  ports:
    - "8200:8200"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8200/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 30s
  restart: unless-stopped
```
```yaml
# Node.js Chat Service (Socket.IO + MongoDB)
salt-chat-svc:
  image: salt-chat-svc:latest
  container_name: salt-chat-svc
  depends_on:
    salt-mongodb:
      condition: service_healthy
    salt-kafka:
      condition: service_healthy
    salt-redis:
      condition: service_healthy
    salt-eureka:
      condition: service_healthy
  environment:
    NODE_ENV: production
    PORT: 8300
    MONGODB_URI: mongodb://${MONGO_USER}:${MONGO_PASSWORD}@salt-mongodb:27017/salt_chat?authSource=admin
    KAFKA_BROKERS: salt-kafka:9092
    REDIS_URL: redis://salt-redis:6379/1
    EUREKA_HOST: salt-eureka
    EUREKA_PORT: 8761
    TZ: Africa/Nairobi
  ports:
    - "8300:8300"
  networks:
    - salt-network
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8300/api/v2/chat/health"]
    interval: 30s
    timeout: 10s
    retries: 5
    start_period: 20s
  restart: unless-stopped
```
The following Nginx configuration provides SSL termination, route-based proxying to all services, WebSocket support, gzip compression, and security headers. Save this file to /etc/nginx/sites-available/salt-microservices and symlink it into sites-enabled. Update server_name to match your deployment: stagging.saltcollegeandresourcecentre.com (staging) or app.saltcollegeandresourcecentre.com (production).

```nginx
# /etc/nginx/sites-available/salt-microservices
# Microservice Deployment — Nginx Reverse Proxy
# Last updated: 18th February 2026

# ─────────────────────────────────────────────
# Upstream Definitions
# ─────────────────────────────────────────────
upstream api_gateway {
    server 127.0.0.1:8080;
    keepalive 32;
}

upstream frontend_backend {
    server 127.0.0.1:8081;
    keepalive 16;
}

upstream mobile_backend {
    server 127.0.0.1:8082;
    keepalive 16;
}

upstream jenkins_backend {
    server 127.0.0.1:8090;
    keepalive 8;
}

# ─────────────────────────────────────────────
# HTTP → HTTPS Redirect
# ─────────────────────────────────────────────
server {
    listen 80;
    listen [::]:80;
    server_name stagging.saltcollegeandresourcecentre.com;

    # Let's Encrypt ACME challenge
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # Redirect all other HTTP traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

# ─────────────────────────────────────────────
# HTTPS Server Block
# ─────────────────────────────────────────────
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name stagging.saltcollegeandresourcecentre.com;

    # ── SSL Configuration ──────────────────────
    ssl_certificate     /etc/letsencrypt/live/stagging.saltcollegeandresourcecentre.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stagging.saltcollegeandresourcecentre.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!RC4;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # ── Security Headers ───────────────────────
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # ── Gzip Compression ──────────────────────
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/rss+xml
        image/svg+xml;

    # ── Client Settings ───────────────────────
    client_max_body_size 100m;
    proxy_read_timeout 300s;
    proxy_connect_timeout 60s;
    proxy_send_timeout 300s;

    # ── Common Proxy Headers ──────────────────
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # ── Landing Page ──────────────────────────
    # Static files served directly
    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    # ── Next.js Web Admin ─────────────────────
    location /SaltElearning/ {
        proxy_pass http://frontend_backend/SaltElearning/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # ── API Gateway (All Microservices) ───────
    # The API Gateway routes to individual services
    # via Eureka service discovery (lb://service-name)
    location /SaltELearnAppApi/ {
        proxy_pass http://api_gateway/SaltELearnAppApi/;
        proxy_http_version 1.1;
    }

    # ── Flutter Student Mobile ────────────────
    location /student/ {
        proxy_pass http://mobile_backend/student/;
        proxy_http_version 1.1;
    }

    # ── Flutter Executive Dashboard ───────────
    location /dashboard/ {
        proxy_pass http://mobile_backend/dashboard/;
        proxy_http_version 1.1;
    }

    # ── WebSocket (Chat Service via Gateway) ──
    location /ws/ {
        proxy_pass http://api_gateway/ws/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }

    # ── Jenkins CI/CD ─────────────────────────
    location /jenkins/ {
        proxy_pass http://jenkins_backend/jenkins/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # ── Error Pages ───────────────────────────
    error_page 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/html;
        internal;
    }
}
```
```bash
# Symlink to sites-enabled
sudo ln -sf /etc/nginx/sites-available/salt-microservices \
  /etc/nginx/sites-enabled/salt-microservices

# Remove default site if present
sudo rm -f /etc/nginx/sites-enabled/default

# Test configuration syntax
sudo nginx -t

# Reload Nginx (zero-downtime)
sudo systemctl reload nginx

# Obtain the Let's Encrypt certificate (webroot method)
sudo certbot certonly --webroot -w /var/www/html -d stagging.saltcollegeandresourcecentre.com

# Verify automatic renewal
sudo certbot renew --dry-run
```

| URL Path | Upstream | Internal Port | Service |
|---|---|---|---|
| / | Static files | — | Landing page (/var/www/html) |
| /SaltElearning/ | frontend_backend | 8081 | Next.js Web Admin |
| /SaltELearnAppApi/ | api_gateway | 8080 | API Gateway → All Microservices |
| /student/ | mobile_backend | 8082 | Flutter Student App |
| /dashboard/ | mobile_backend | 8082 | Flutter Executive Dashboard |
| /ws/ | api_gateway | 8080 | WebSocket proxy → Chat Service |
| /jenkins/ | jenkins_backend | 8090 | Jenkins CI/CD |
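After Nginx reloads, the public routes can be smoke-tested in one pass. A sketch; it defaults to the staging host (override BASE_URL for production), and the expected status per path depends on each service's routing:

```shell
# smoke_routes — print the HTTP status code for each public route
smoke_routes() {
  local base="${BASE_URL:-https://stagging.saltcollegeandresourcecentre.com}"
  for path in / /SaltElearning/ /SaltELearnAppApi/actuator/health /student/ /dashboard/; do
    # 000 indicates curl could not reach the host at all
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$base$path" 2>/dev/null || echo 000)
    echo "$base$path -> $code"
  done
}
smoke_routes
```

A column of 502s usually means the upstream containers are not up yet; a column of 000s means DNS or the Nginx listener itself is the problem.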
Copy this template to .env on the staging server. All containers read from this single file.
```env
# ===================================================================
# Microservice — Staging Environment (.env.staging)
# Server: stagging.saltcollegeandresourcecentre.com (3.13.144.24)
# ===================================================================

# ── PostgreSQL (salvation database) ──────────
POSTGRES_USER=salt_admin
POSTGRES_PASSWORD=<CHANGE_ME>
POSTGRES_PORT=5432

# ── Keycloak Database ────────────────────────
KC_DB_USER=keycloak
KC_DB_PASSWORD=<CHANGE_ME>
KC_DB_PORT=5433

# ── Keycloak Admin ───────────────────────────
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=<CHANGE_ME>
KC_DB_URL=jdbc:postgresql://salt-keycloak-db:5432/keycloak
KC_HOSTNAME=stagging.saltcollegeandresourcecentre.com

# ── Redis ────────────────────────────────────
REDIS_PORT=6379

# ── MongoDB ──────────────────────────────────
MONGO_USER=salt
MONGO_PASSWORD=<CHANGE_ME>
MONGO_PORT=27017

# ── MinIO ────────────────────────────────────
MINIO_ACCESS_KEY=salt-minio-admin
MINIO_SECRET_KEY=<CHANGE_ME>
MINIO_API_PORT=9000
MINIO_CONSOLE_PORT=9001

# ── Kafka ────────────────────────────────────
KAFKA_PORT=9092
ZK_PORT=2181

# ── Spring Boot Common ──────────────────────
SPRING_PROFILES_ACTIVE=staging
SPRING_JPA_DDL_AUTO=none

# ── Email (SMTP — Office 365) ───────────────
SMTP_HOST=smtp.office365.com
SMTP_PORT=587
SMTP_USER=noreply.salt.africa@saitco.onmicrosoft.com
SMTP_PASSWORD=<CHANGE_ME>
EMAIL_FROM=noreply.salt.africa@saitco.onmicrosoft.com
EMAIL_DISPLAY_NAME=College of Africa E-Learning

# ── SMS (Twilio) ─────────────────────────────
TWILIO_ACCOUNT_SID=<CHANGE_ME>
TWILIO_AUTH_TOKEN=<CHANGE_ME>
TWILIO_PHONE_NUMBER=<CHANGE_ME>

# ── Firebase Cloud Messaging ─────────────────
FCM_SERVER_KEY=<CHANGE_ME>

# ── Notification Channel ─────────────────────
NOTIFICATION_CHANNEL=EMAIL

# ── Eureka ───────────────────────────────────
EUREKA_PORT=8761
GATEWAY_PORT=8080
```
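Before starting the stack, it is worth failing fast on any placeholder that was left unfilled. A minimal sketch:

```shell
# check_env_placeholders — fail if any <CHANGE_ME> placeholder remains
check_env_placeholders() {
  local file="${1:-.env}"
  if [ ! -f "$file" ]; then
    echo "$file not found"
    return 1
  fi
  if grep -n "CHANGE_ME" "$file"; then
    echo "ERROR: unresolved placeholders in $file"
    return 1
  fi
  echo "OK: no placeholders left in $file"
}
check_env_placeholders .env
```

This could also run as the first step of a deploy script so a half-configured .env never reaches `docker compose up`.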
Never commit .env files to Git — they are gitignored. Store secrets in a vault or an encrypted backup.

Production differs from staging in these key areas:
| Variable | Staging | Production |
|---|---|---|
| KC_HOSTNAME | stagging.saltcollegeandresourcecentre.com | app.saltcollegeandresourcecentre.com |
| SPRING_PROFILES_ACTIVE | staging | production |
| SPRING_JPA_DDL_AUTO | none | validate |
| JAVA_OPTS | -Xms256m -Xmx512m | -Xms512m -Xmx1g |
| HIKARI_MAX_POOL | 20 | 30 |
| Kafka replication | 1 | 3 (if multi-broker) |
| All <CHANGE_ME> | Staging passwords | Production passwords (different!) |
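When promoting staging to production, a quick structural diff of the two env files catches missed overrides. A sketch (assumes both files exist side by side; comments and blank lines are ignored):

```shell
# diff_env FILE_A FILE_B — show variable lines that differ between two env files
diff_env() {
  diff <(grep -Ev '^[[:space:]]*(#|$)' "$1" | sort) \
       <(grep -Ev '^[[:space:]]*(#|$)' "$2" | sort) || true
}
diff_env .env.staging .env.production
```

Every row in the table above should appear in the diff output; a variable that does not appear was probably never changed for production.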
The salvation database must be initialized once with the production dump. Flyway migrations run automatically on Spring Boot startup.
```bash
# First-time database setup (staging Docker)
bash deploy/setup-db.sh --staging

# Verify database is accessible
docker exec -it salt-postgres psql -U salt_admin -d salvation -c "\dt tbl_*"

# Flyway applies V1-V16+ migrations automatically
# when Spring Boot services start — no manual action needed
```
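Once the services have started, the applied migrations can be listed directly. A sketch, assuming Flyway's default history table name (flyway_schema_history):

```shell
# show_migrations — list the most recent Flyway migrations applied to salvation
show_migrations() {
  docker exec salt-postgres psql -U salt_admin -d salvation -c \
    "SELECT version, description, installed_on FROM flyway_schema_history ORDER BY installed_rank DESC LIMIT 10;" \
    2>/dev/null || echo "salt-postgres not reachable (is the stack running?)"
}
show_migrations
```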
setup-db.sh loads salvation.sql once and is idempotent — running it again skips loading if the database already exists. Use the --force flag only to drop and recreate (destructive!).

Kafka topic auto-creation is disabled (KAFKA_AUTO_CREATE_TOPICS_ENABLE=false), so topics must be created manually after Kafka starts.
```bash
# Wait for Kafka to be healthy, then enter the container
docker compose exec salt-kafka bash

# Create all 17 topics (3 partitions, replication-factor 1 for staging)
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic student.registered --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic student.updated --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.submitted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.approved --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic enrollment.rejected --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.created --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.submitted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.marked --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic exam.failed-3x --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic assignment.uploaded --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic assignment.marked --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic material.uploaded --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic grading.calculated --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic grading.level-completed --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic suspension.created --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic suspension.lifted --partitions 3 --replication-factor 1
kafka-topics --create --bootstrap-server localhost:9092 \
  --topic chat.message --partitions 3 --replication-factor 1

# Verify all topics created
kafka-topics --list --bootstrap-server localhost:9092
```
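Beyond `--list`, `--describe` confirms the partition and replication settings for an individual topic. A sketch, run from the host:

```shell
# describe_topic — show partition/replication details for one topic
describe_topic() {
  docker compose exec salt-kafka kafka-topics --describe \
    --bootstrap-server localhost:9092 --topic "$1" 2>/dev/null \
    || echo "could not describe topic $1 (is Kafka healthy?)"
}
describe_topic student.registered
```

Each topic should report PartitionCount: 3 and ReplicationFactor: 1 on staging.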
Import the realm configuration after Keycloak starts for the first time.
```bash
# Option 1: Auto-import on startup (mount realm file)
# Add to the salt-keycloak service in docker-compose.yml:
#   volumes:
#     - ./keycloak/salt-realm.json:/opt/keycloak/data/import/salt-realm.json
#   command: start --import-realm

# Option 2: Manual import via Admin Console
#   1. Access Keycloak admin: https://stagging.saltcollegeandresourcecentre.com/auth
#   2. Login with KEYCLOAK_ADMIN credentials
#   3. Create realm → Import → Upload salt-realm.json

# Option 3: CLI import
docker compose exec salt-keycloak \
  /opt/keycloak/bin/kc.sh import --file /opt/keycloak/data/import/salt-realm.json
```
| Setting | Value |
|---|---|
| Realm Name | salt |
| Client IDs | salt-web-admin, salt-mobile, salt-exec-dashboard, salt-gateway |
| Roles | admin (level 5), tutor (level 8), eto (level 9), student (level 12) |
| Access Token TTL | 15 minutes |
| Refresh Token TTL | 24 hours |
| Password Policy | BCrypt hashing (compatible with existing tbl_sys_users hashes) |
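A quick way to confirm the realm imported correctly is to fetch its OIDC discovery document — Keycloak serves a .well-known endpoint for every realm. A sketch; the base URL assumes the internal Keycloak port 8180, and on some setups the path may carry an /auth prefix:

```shell
# check_realm — fetch the salt realm's OIDC discovery document
check_realm() {
  local base="${KEYCLOAK_BASE:-http://localhost:8180}"
  curl -sf --max-time 5 "$base/realms/salt/.well-known/openid-configuration" 2>/dev/null \
    || echo "realm 'salt' not reachable at $base (import the realm first)"
}
check_realm
```

A JSON document whose issuer ends in /realms/salt confirms the realm exists and Keycloak is serving tokens.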
Follow these steps in order. Infrastructure must be healthy before starting platform services, and platform services must be healthy before starting business services.
```bash
cd /opt
git clone https://github.com/SGL2024/Salvation-Army-Backend-Microservice.git salt-microservices
cd salt-microservices

# For staging deployment
cp .env.staging .env

# Edit and fill in all placeholders
nano .env
```
```bash
# Build all service images (this may take 10-15 minutes)
bash deploy/docker-dev.sh build

# Or build individually:
docker compose build salt-eureka
docker compose build salt-api-gateway
docker compose build salt-student-svc
# ... (repeat for each service)
```
```bash
# Start infrastructure tier first (7 containers)
bash deploy/docker-dev.sh start-infra
# This starts: postgres, keycloak-db, redis, mongodb, minio, zookeeper, kafka
# Wait for all health checks to pass (~30-60 seconds)

# Verify all infrastructure is healthy
docker compose ps
# Expected: all 7 infrastructure containers show "healthy"
# If Kafka shows "starting", wait 30 more seconds — it takes longer

# Quick health verification
docker exec salt-postgres pg_isready -U salt_admin -d salvation
docker exec salt-redis redis-cli ping
docker exec salt-kafka kafka-broker-api-versions --bootstrap-server localhost:9092
```
```bash
# Enter Kafka container and create all 17 topics
docker compose exec salt-kafka bash -c '
for topic in student.registered student.updated \
  enrollment.submitted enrollment.approved enrollment.rejected \
  exam.created exam.submitted exam.marked exam.failed-3x \
  assignment.uploaded assignment.marked material.uploaded \
  grading.calculated grading.level-completed \
  suspension.created suspension.lifted chat.message; do
  kafka-topics --create --bootstrap-server localhost:9092 \
    --topic $topic --partitions 3 --replication-factor 1 2>/dev/null
  echo "Created topic: $topic"
done
'

# Verify
docker compose exec salt-kafka kafka-topics --list --bootstrap-server localhost:9092
```
```bash
# First-time only — load salvation.sql
bash deploy/setup-db.sh --staging

# Create MinIO bucket
docker compose exec salt-minio mc alias set local http://localhost:9000 \
  ${MINIO_ACCESS_KEY} ${MINIO_SECRET_KEY}
docker compose exec salt-minio mc mb local/salt-files --ignore-existing
```
```bash
# Start Keycloak, Eureka, API Gateway (3 containers)
bash deploy/docker-dev.sh start-platform

# Wait for Eureka to be healthy (~30 seconds)
curl -s http://localhost:8761/actuator/health | grep -o '"status":"UP"'

# Wait for Keycloak (~60 seconds for first startup)
curl -s http://localhost:8180/health | grep -o '"status":"UP"'

# Wait for API Gateway
curl -s http://localhost:8080/actuator/health | grep -o '"status":"UP"'
```
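Rather than re-running those curl checks by hand, a small polling helper can block until each endpoint comes up or a retry budget is exhausted. A sketch (the attempt count and interval are arbitrary defaults):

```shell
# wait_for URL [ATTEMPTS] — poll a health URL until it responds or gives up
wait_for() {
  local url=$1 attempts=${2:-20} i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -sf --max-time 5 "$url" >/dev/null 2>&1; then
      echo "UP: $url"
      return 0
    fi
    sleep "${WAIT_INTERVAL:-5}"
    i=$((i + 1))
  done
  echo "TIMEOUT after $attempts attempts: $url"
  return 1
}

# Example usage on the EC2 host once the platform tier is starting:
# wait_for http://localhost:8761/actuator/health   # Eureka
# wait_for http://localhost:8180/health            # Keycloak
# wait_for http://localhost:8080/actuator/health   # API Gateway
```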
```bash
# Start all 14 business services + frontend (15 containers)
bash deploy/docker-dev.sh start-services

# Spring Boot services take 30-90 seconds each to start
# Monitor startup progress:
docker compose logs -f --tail=50 salt-student-svc salt-enrollment-svc salt-exam-svc
```
```bash
# Copy Nginx config (see Section 5)
sudo cp nginx/salt-microservices /etc/nginx/sites-available/salt-microservices
sudo ln -sf /etc/nginx/sites-available/salt-microservices \
  /etc/nginx/sites-enabled/salt-microservices
sudo rm -f /etc/nginx/sites-enabled/default

# Test and reload
sudo nginx -t && sudo systemctl reload nginx
```
```bash
# Check Eureka registry — all services should appear
curl -s http://localhost:8761/eureka/apps | grep -o '<name>[^<]*</name>'
# Expected output (14 business services + gateway):
#   STUDENT-SERVICE
#   ENROLLMENT-SERVICE
#   CURRICULUM-SERVICE
#   EXAM-SERVICE
#   ASSIGNMENT-SERVICE
#   GRADING-SERVICE
#   CERTIFICATE-SERVICE
#   REPORTING-SERVICE
#   FILE-SERVICE
#   CONFIG-SERVICE
#   SUSPENSION-SERVICE
#   CHATBOT-SERVICE
#   NOTIFICATION-SERVICE
#   CHAT-SERVICE
#   API-GATEWAY

# Test through Nginx (no ports in URL)
curl -s https://stagging.saltcollegeandresourcecentre.com/SaltELearnAppApi/actuator/health

# Test individual service routes via gateway
curl -s http://localhost:8080/api/v2/students/health
curl -s http://localhost:8080/api/v2/enrollment/health
curl -s http://localhost:8080/api/v2/exam/health

# Verify container status
docker compose ps
```
depends_on with condition: service_healthy ensures correct startup ordering; if a service fails to start, check its dependencies first.

The Eureka service discovery dashboard shows all registered services, their status, and instance details.
| Access Point | URL (Internal) | Purpose |
|---|---|---|
| Eureka Dashboard | http://localhost:8761 | Service registry UI — shows all registered microservices |
| Eureka Apps API | http://localhost:8761/eureka/apps | XML/JSON list of all registered services |
Every Spring Boot service (12 business + Eureka + Gateway) exposes these actuator endpoints:
| Endpoint | Description | Example |
|---|---|---|
| /actuator/health | Overall health status with dependency checks (DB, Redis, Kafka) | curl http://localhost:8101/actuator/health |
| /actuator/info | Service version, build info, git commit | curl http://localhost:8101/actuator/info |
| /actuator/metrics | JVM stats, HTTP request metrics, custom counters | curl http://localhost:8101/actuator/metrics |
| /actuator/prometheus | Prometheus-compatible metrics export | curl http://localhost:8101/actuator/prometheus |
| Parameter | Infrastructure | Platform | Business Services |
|---|---|---|---|
| interval | 10s | 15s | 30s |
| timeout | 5s | 10s | 10s |
| retries | 5 | 5 | 5 |
| start_period | — | 30s | 60s |
The start_period gives Spring Boot services time to complete JVM startup and Flyway migrations before failed health checks start counting toward the retry limit.
```bash
# Tail logs for a specific service
docker compose logs -f --tail=100 salt-student-svc

# Tail logs for all services (very verbose)
docker compose logs -f --tail=20

# Tail logs for all business services
docker compose logs -f salt-student-svc salt-enrollment-svc salt-exam-svc \
  salt-assignment-svc salt-grading-svc salt-certificate-svc

# Check for errors across all services
docker compose logs --since="10m" | grep -i "error\|exception\|failed"

# Export logs to file
docker compose logs --no-color > /tmp/salt-logs-$(date +%Y%m%d).txt
```
| Console | Internal URL | Credentials | Purpose |
|---|---|---|---|
| MinIO Console | http://localhost:9001 | MINIO_ACCESS_KEY / MINIO_SECRET_KEY | File storage management, bucket browsing |
| Keycloak Admin | http://localhost:8180 | KEYCLOAK_ADMIN / KEYCLOAK_ADMIN_PASSWORD | User management, realm config, client settings |
| Eureka Dashboard | http://localhost:8761 | None (unauthenticated) | Service registry, instance health |
| Problem | Cause | Fix |
|---|---|---|
| Service not registering with Eureka | Eureka server not ready when the service attempts registration | Verify depends_on with condition: service_healthy in docker-compose.yml. Check Eureka health: curl http://localhost:8761/actuator/health. Restart the service: docker compose restart salt-student-svc |
| Kafka connection refused | Kafka or Zookeeper not fully started; Kafka takes 15-30s after Zookeeper | Wait 30 seconds after infrastructure start. Verify: docker compose exec salt-kafka kafka-broker-api-versions --bootstrap-server localhost:9092. Check Zookeeper: docker compose logs salt-zookeeper |
| Database connection pool exhausted | Too many services with large HikariCP pools; PostgreSQL max_connections exceeded | Reduce HIKARI_MAX_POOL per service to 15-20. Increase PostgreSQL max_connections to 300+. Check active connections: docker exec salt-postgres psql -U salt_admin -d salvation -c "SELECT count(*) FROM pg_stat_activity;" |
| 502 Bad Gateway | Target service not started or crashed; Nginx cannot reach upstream | Check docker compose ps for unhealthy containers. Check service logs: docker compose logs salt-api-gateway. Wait for start_period to elapse. Verify port mapping. |
| File upload fails | MinIO bucket salt-files does not exist | Create bucket: docker compose exec salt-minio mc mb local/salt-files --ignore-existing. Verify MinIO health: curl http://localhost:9000/minio/health/live |
| Chat not connecting (WebSocket) | Nginx not proxying WebSocket Upgrade headers | Verify Nginx /ws/ location has proxy_set_header Upgrade $http_upgrade and Connection "upgrade". Check chat service: curl http://localhost:8300/api/v2/chat/health |
| Keycloak startup failure | Keycloak DB not ready or credentials mismatch | Check salt-keycloak-db is healthy. Verify KC_DB_USER and KC_DB_PASSWORD match. Check logs: docker compose logs salt-keycloak |
| Out of memory (OOM) | All 25 containers exceed available RAM | Reduce -Xmx per service. Use docker stats to identify memory-hungry containers. Consider scaling to a larger instance (t3.xlarge recommended). |
| Flyway migration failure | Schema mismatch or corrupted migration history | Check flyway_schema_history table. Never modify existing migrations. Create a new V{N}__fix.sql migration. Run: docker compose logs salt-student-svc \| grep -i flyway |
| Services registering then deregistering | Health check failing after initial registration; Eureka evicts unhealthy instances | Increase start_period. Check /actuator/health for specific dependency failures (DB, Redis, Kafka). Verify environment variables are correct. |
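The pool-exhaustion row is easy to sanity-check with arithmetic: with 14 business services each holding a HikariCP pool, total peak demand must stay below PostgreSQL's max_connections. The numbers below are illustrative, not measured from this deployment.

```shell
# Rough connection budget (illustrative numbers)
services=14          # business services sharing the salvation database
pool=20              # HIKARI_MAX_POOL per service
reserve=10           # superuser/maintenance connections to keep free
demand=$((services * pool + reserve))
echo "peak demand: $demand"    # prints: peak demand: 290
# max_connections must exceed this figure, hence the 300+ suggestion above
```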
# Check all container statuses
docker compose ps
# View resource usage (CPU, memory) per container
docker stats --no-stream
# Inspect a specific container
docker inspect salt-student-svc | grep -A 5 "Health"
# View Docker network details
docker network inspect salt-network
# List all Kafka consumer groups
docker compose exec salt-kafka kafka-consumer-groups \
--bootstrap-server localhost:9092 --list
# Check Kafka consumer lag
docker compose exec salt-kafka kafka-consumer-groups \
--bootstrap-server localhost:9092 \
--group notification-svc-group --describe
# PostgreSQL active connections per service
docker exec salt-postgres psql -U salt_admin -d salvation -c \
"SELECT application_name, count(*) FROM pg_stat_activity GROUP BY application_name ORDER BY count DESC;"
# Force restart a single service
docker compose restart salt-exam-svc
# Rebuild and restart a single service
docker compose up -d --build salt-exam-svc
This section guides the AWS Security Group configuration for the EC2 instance. Only 3 ports should be exposed to the internet. All other ports are internal to the Docker salt-network bridge and must NOT be accessible from outside the host.
The web-admin frontend container communicates over the internal Docker network (salt-network) and does not need host-level port exposure.

These are the only ports that must be open in the AWS Security Group inbound rules.
| Port | Protocol | Source | Service | Purpose |
|---|---|---|---|---|
| 443 | TCP | 0.0.0.0/0 | Nginx (HTTPS) | All client traffic — SSL-terminated reverse proxy to all services |
| 80 | TCP | 0.0.0.0/0 | Nginx (HTTP) | HTTP → HTTPS redirect only (301 to port 443) |
| 22 | TCP | Your IP / VPN CIDR | SSH | Server administration — restrict to known IPs only |
Never leave SSH (port 22) open to 0.0.0.0/0. Restrict it to your office IP or VPN CIDR, or use AWS Systems Manager Session Manager instead of SSH.

These ports are used for container-to-container communication on salt-network. They should NOT have AWS Security Group inbound rules. Even though docker-compose.yml maps them to the host (for development convenience), in production they must be firewalled.
| Port | Container | Service | Why Internal |
|---|---|---|---|
| 2181 | salt-zookeeper | Zookeeper | Kafka coordination only — no external clients |
| 5432 | salt-postgres | PostgreSQL (salvation) | Database — accessed by Spring Boot services via Docker DNS |
| 5433 | salt-keycloak-db | PostgreSQL (keycloak) | Keycloak database — only Keycloak connects to it |
| 6379 | salt-redis | Redis | Cache — accessed by services via Docker DNS |
| 8080 | salt-api-gateway | API Gateway | Accessed by Nginx upstream, not directly by clients |
| 8081 | salt-web-admin | Next.js Web Admin | Accessed by Nginx upstream, not directly by clients |
| 8101 | salt-student-svc | Student Service | Accessed only by API Gateway via Eureka discovery |
| 8102 | salt-enrollment-svc | Enrollment Service | Accessed only by API Gateway via Eureka discovery |
| 8103 | salt-curriculum-svc | Curriculum Service | Accessed only by API Gateway via Eureka discovery |
| 8104 | salt-exam-svc | Exam Service | Accessed only by API Gateway via Eureka discovery |
| 8105 | salt-assignment-svc | Assignment Service | Accessed only by API Gateway via Eureka discovery |
| 8106 | salt-grading-svc | Grading Service | Accessed only by API Gateway via Eureka discovery |
| 8107 | salt-certificate-svc | Certificate Service | Accessed only by API Gateway via Eureka discovery |
| 8108 | salt-reporting-svc | Reporting Service | Accessed only by API Gateway via Eureka discovery |
| 8109 | salt-file-svc | File Service | Accessed only by API Gateway via Eureka discovery |
| 8110 | salt-config-svc | Config Service | Accessed only by API Gateway via Eureka discovery |
| 8111 | salt-suspension-svc | Suspension Service | Accessed only by API Gateway via Eureka discovery |
| 8112 | salt-chatbot-svc | Chatbot Service | Accessed only by API Gateway via Eureka discovery |
| 8180 | salt-keycloak | Keycloak | Accessed by Nginx upstream at /auth path |
| 8200 | salt-notification-svc | Notification Service | Accessed only by API Gateway via Eureka discovery |
| 8300 | salt-chat-svc | Chat Service | WebSocket via Nginx upstream at /ws path |
| 8761 | salt-eureka | Eureka Server | Service discovery — only consumed by internal services |
| 9000 | salt-minio | MinIO S3 API | File storage — accessed by services via Docker DNS |
| 9001 | salt-minio | MinIO Console | Admin console — access via SSH tunnel if needed |
| 9092 | salt-kafka | Kafka Broker | Event bus — only consumed by internal services |
| 27017 | salt-mongodb | MongoDB | Chat database — only chat-service connects to it |
Create an AWS Security Group named salt-microservices-sg with these exact rules:
| Type | Protocol | Port Range | Source | Description |
|---|---|---|---|---|
| HTTPS | TCP | 443 | 0.0.0.0/0 | Platform HTTPS traffic |
| HTTP | TCP | 80 | 0.0.0.0/0 | HTTP redirect to HTTPS |
| SSH | TCP | 22 | Your-Office-IP/32 | Admin SSH access (restrict!) |
| Type | Protocol | Port Range | Destination | Description |
|---|---|---|---|---|
| All traffic | All | All | 0.0.0.0/0 | Allow outbound (SMTP, Twilio API, FCM, etc.) |
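The same rules can be scripted with the AWS CLI. The sketch below prints each call instead of executing it so the rules can be reviewed first; SG_ID is a placeholder for the GroupId returned by create-security-group, and 203.0.113.10/32 stands in for your real admin IP.

```shell
run() { printf '+ %s\n' "$*"; }   # dry-run wrapper; remove after review to execute

run aws ec2 create-security-group \
  --group-name salt-microservices-sg \
  --description "Salt microservices EC2 host"
run aws ec2 authorize-security-group-ingress --group-id SG_ID \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
run aws ec2 authorize-security-group-ingress --group-id SG_ID \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
run aws ec2 authorize-security-group-ingress --group-id SG_ID \
  --protocol tcp --port 22 --cidr 203.0.113.10/32   # placeholder admin IP
```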
For production hardening:
- Remove host port mappings from docker-compose.yml for internal services (only expose them on salt-network).
- Use expose: instead of ports: for services that don't need host-level access.

If you need to access internal service ports (e.g., Eureka dashboard, MinIO console, database) from your workstation, use SSH tunneling instead of opening Security Group rules:
# SSH tunnel to Eureka dashboard (access at localhost:8761)
ssh -L 8761:localhost:8761 ubuntu@3.13.144.24
# SSH tunnel to MinIO console (access at localhost:9001)
ssh -L 9001:localhost:9001 ubuntu@3.13.144.24
# SSH tunnel to PostgreSQL (connect at localhost:5432)
ssh -L 5432:localhost:5432 ubuntu@3.13.144.24
# Multiple tunnels in one command
ssh -L 8761:localhost:8761 -L 9001:localhost:9001 -L 5432:localhost:5432 ubuntu@3.13.144.24
Keycloak 24 provides centralized OAuth2/OIDC authentication for all platform clients. This section details the complete setup process, from initial admin creation to user migration from the existing tbl_sys_users table.
# Check Keycloak is healthy
docker compose ps salt-keycloak
# View Keycloak startup logs
docker compose logs -f salt-keycloak --tail=50
# Expected: "Running the server in production mode" or "Listening on: http://0.0.0.0:8080"
# Keycloak is accessible via Nginx at:
# https://stagging.saltcollegeandresourcecentre.com/auth
# Initial admin credentials (from .env):
# Username: ${KEYCLOAK_ADMIN} (default: admin)
# Password: ${KEYCLOAK_ADMIN_PASSWORD}
# IMPORTANT: Change the admin password after first login!
# Keycloak container environment (docker-compose.yml)
salt-keycloak:
image: quay.io/keycloak/keycloak:24.0
container_name: salt-keycloak
environment:
KC_DB: postgres
KC_DB_URL: jdbc:postgresql://salt-keycloak-db:5432/keycloak
KC_DB_USERNAME: ${KC_DB_USER:-keycloak}
KC_DB_PASSWORD: ${KC_DB_PASSWORD}
KC_HOSTNAME: stagging.saltcollegeandresourcecentre.com
KC_HOSTNAME_PATH: /auth
KC_HTTP_RELATIVE_PATH: /auth
KC_PROXY_HEADERS: xforwarded
KC_HTTP_ENABLED: "true"
KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN:-admin}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
TZ: Africa/Nairobi
command: start --import-realm  # --optimized skips the build phase, so runtime KC_DB settings would be ignored
depends_on:
salt-keycloak-db:
condition: service_healthy
volumes:
- ./keycloak/salt-realm.json:/opt/keycloak/data/import/salt-realm.json:ro
ports:
- "8180:8080"
networks:
- salt-network
| Variable | Example Value | Description |
|---|---|---|
| KC_DB_USER | keycloak | Keycloak database username |
| KC_DB_PASSWORD | <strong-password> | Keycloak database password |
| KC_DB_PORT | 5433 | Host port for Keycloak PostgreSQL |
| KEYCLOAK_ADMIN | admin | Keycloak admin username |
| KEYCLOAK_ADMIN_PASSWORD | <strong-password> | Keycloak admin password |
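Collected into .env form, the variables above look like this fragment. The values are placeholders; generate real secrets (for example with openssl rand -base64 24) before deploying.

```
# Keycloak section of .env (placeholder values, do not deploy as-is)
KC_DB_USER=keycloak
KC_DB_PASSWORD=change-me-strong-password
KC_DB_PORT=5433
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=change-me-strong-password
```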
The salt realm must be configured with settings that match the existing platform behavior.
| Setting | Value | Reason |
|---|---|---|
| Realm Name | salt | Namespace for all platform authentication |
| Display Name | College of Africa E-Learning | Shown on login pages |
| Enabled | true | Active realm |
| Registration Allowed | false | Students register through the API, not Keycloak directly |
| Forgot Password | true | Enable password reset flow |
| Remember Me | true | Persistent login sessions |
| Login With Email | true | Students can login with email or admission number |
| Duplicate Emails | false | Each email must be unique |
| Internationalization | en, fr, sw, pt | 4 platform languages |
| Default Locale | en | English as default |
| Setting | Value | Reason |
|---|---|---|
| Access Token Lifespan | 15 minutes | Short-lived for security; clients use refresh tokens |
| Refresh Token Lifespan | 24 hours | One full day before re-authentication required |
| Client Session Idle | 30 minutes | Matches existing idle timeout in tbl_sys_settings |
| Client Session Max | 24 hours | Maximum session duration |
| SSO Session Idle | 30 minutes | SSO idle timeout |
| SSO Session Max | 24 hours | Maximum SSO session |
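In a realm export (salt-realm.json), these lifespans appear in seconds. The keys below are standard Keycloak realm-representation field names with the table's values converted; note that the 24-hour refresh window is governed by the SSO session max lifespan rather than a dedicated refresh-token field.

```json
{
  "realm": "salt",
  "accessTokenLifespan": 900,
  "ssoSessionIdleTimeout": 1800,
  "ssoSessionMaxLifespan": 86400,
  "clientSessionIdleTimeout": 1800,
  "clientSessionMaxLifespan": 86400
}
```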
# Keycloak password policy (set in Realm → Authentication → Password Policy):
# - hashAlgorithm(bcrypt) ← CRITICAL: must match existing $2a$10$ hashes
# - length(6) ← minimum 6 characters (matches current platform policy)
# - notUsername ← password cannot equal username
Existing password hashes in tbl_sys_users.password use BCrypt with cost factor 10 ($2a$10$...). Keycloak 24 supports BCrypt natively. When migrating users, their existing password hashes can be imported directly — no re-hashing needed.

Create 4 OAuth2 clients in the salt realm, one for each application that authenticates against Keycloak.
| Client ID | Client Type | Protocol | Root URL | Valid Redirect URIs | Web Origins |
|---|---|---|---|---|---|
| salt-web-admin | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/SaltElearning | /SaltElearning/* | + |
| salt-mobile | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/student | /student/*, com.salt.mobile:/callback | + |
| salt-exec-dashboard | Public | openid-connect | https://stagging.saltcollegeandresourcecentre.com/dashboard | /dashboard/* | + |
| salt-gateway | Confidential | openid-connect | — | — | — |
# salt-web-admin (Public client — Next.js SPA)
Client ID: salt-web-admin
Access Type: public
Standard Flow: enabled
Direct Access Grants: enabled (for admin login forms)
PKCE: S256 (recommended for public clients)
# salt-mobile (Public client — Flutter native app)
Client ID: salt-mobile
Access Type: public
Standard Flow: enabled
Direct Access Grants: enabled (for mobile login)
PKCE: S256
# Note: "com.salt.mobile:/callback" enables deep-linking for native app
# salt-gateway (Confidential client — API Gateway)
Client ID: salt-gateway
Access Type: confidential
Service Accounts: enabled (for backend-to-backend auth)
Standard Flow: disabled (gateway doesn't serve login forms)
Client Secret: ${KC_GATEWAY_SECRET} (store in .env)
# API Gateway application.yml — Spring Security OAuth2 Resource Server
spring:
security:
oauth2:
resourceserver:
jwt:
issuer-uri: http://salt-keycloak:8080/auth/realms/salt
jwk-set-uri: http://salt-keycloak:8080/auth/realms/salt/protocol/openid-connect/certs
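Because KC_HTTP_RELATIVE_PATH is /auth, every realm URL keeps the /auth prefix. The sketch below derives the issuer and token endpoint so they can be compared against application.yml; the commented curl line is illustrative only (it needs a running Keycloak, and the client/user values are placeholders).

```shell
KC_BASE="http://salt-keycloak:8080/auth"
ISSUER="$KC_BASE/realms/salt"
TOKEN_URL="$ISSUER/protocol/openid-connect/token"
echo "$ISSUER"      # must match issuer-uri in the gateway's application.yml
echo "$TOKEN_URL"
# curl -s -X POST "$TOKEN_URL" \
#   -d grant_type=password -d client_id=salt-web-admin \
#   -d username=<user> -d password=<password>
```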
The platform uses 4 roles with numeric access levels. These must be mapped to Keycloak realm roles.
| Keycloak Role | Access Level | Role Name | Description |
|---|---|---|---|
| salt-admin | 5 | Admin | Full platform management, user approvals, system settings |
| salt-tutor | 8 | Tutor | Subject teaching, exam creation, assignment marking |
| salt-eto | 9 | ETO | Education Training Officer — territory-level oversight |
| salt-student | 12 | Student | Learning, exams, assignments, chatbot |
Add these protocol mappers to include platform-specific claims in the JWT access token:
| Mapper Name | Mapper Type | User Attribute | Token Claim Name | Claim JSON Type |
|---|---|---|---|---|
| access_level | User Attribute | access_level | access_level | int |
| territory_id | User Attribute | territory_id | territory_id | long |
| admission_no | User Attribute | admission_no | admission_no | String |
| student_id | User Attribute | student_id | student_id | long |
| full_name | User Attribute | full_name | full_name | String |
{
"sub": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"realm_access": { "roles": ["salt-student"] },
"access_level": 12,
"territory_id": 5,
"admission_no": "2024049",
"student_id": 1234,
"full_name": "John Doe",
"preferred_username": "2024049",
"email": "john.doe@example.com"
}

Existing users in tbl_sys_users must be migrated to Keycloak while preserving their BCrypt password hashes. Two approaches are supported:
# 1. Export users from PostgreSQL to JSON
docker compose exec salt-postgres psql -U salt_admin -d salvation -c "
SELECT json_agg(json_build_object(
'username', u.username,
'email', u.email_address,
'firstName', u.f_name,
'lastName', u.l_name,
'enabled', u.enabled,
'credentials', json_build_array(json_build_object(
'type', 'password',
'hashedSaltedValue', u.password,
'algorithm', 'bcrypt'
)),
'attributes', json_build_object(
'access_level', ARRAY[u.level_id::text],
'territory_id', ARRAY[u.user_territory_id::text],
'admission_no', ARRAY[COALESCE(u.admissionno, '')],
'student_id', ARRAY[COALESCE(s.id::text, '')]
),
'realmRoles', ARRAY[
CASE u.level_id
WHEN 5 THEN 'salt-admin'
WHEN 8 THEN 'salt-tutor'
WHEN 9 THEN 'salt-eto'
WHEN 12 THEN 'salt-student'
END
]
))
FROM tbl_sys_users u
LEFT JOIN tbl_student s ON s.user_id = u.id
WHERE u.deletion_date IS NULL
" -t > /tmp/keycloak-users.json
# 2. Import into Keycloak using partial import API
curl -X POST \
http://localhost:8180/auth/admin/realms/salt/partialImport \
-H "Authorization: Bearer $(get_admin_token)" \
-H "Content-Type: application/json" \
-d @/tmp/keycloak-users.json
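The bulk import assumes every exported hash really is BCrypt cost 10. A quick format spot-check on a sampled value catches accidental plaintext or differently hashed rows; the hash below is a dummy placeholder, not a real credential.

```shell
# Dummy bcrypt-format string for illustration only (not a real credential)
hash='$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy'
case "$hash" in
  '$2a$10$'*) echo "bcrypt, cost 10" ;;
  *)          echo "unexpected format - do not import" ;;
esac
```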
# Use Keycloak's User Storage SPI to read users from tbl_sys_users at login time.
# This allows gradual migration — users are synced on first login.
#
# Configuration in Keycloak Admin Console:
# 1. Go to User Federation → Add provider → jdbc
# 2. JDBC URL: jdbc:postgresql://salt-postgres:5432/salvation
# 3. Username/Password: same as SPRING_DATASOURCE credentials
# 4. User query: SELECT username, password, email_address, f_name, l_name, enabled
# FROM tbl_sys_users WHERE username = ?
# 5. Import users: ON (copy to Keycloak DB on first login)
# 6. Sync mode: IMPORT (one-way from PostgreSQL to Keycloak)
MongoDB 7 stores real-time chat messages and conversation data for the chat-service (Node.js + Socket.IO). It replaces the existing PostgreSQL tbl_chat table for new messages while maintaining backward compatibility with historical data.
| Setting | Value |
|---|---|
| Container | salt-mongodb |
| Docker DNS | salt-mongodb:27017 |
| Database Name | salt_chat |
| Auth Database | admin |
| Username | ${MONGO_USER} (default: salt) |
| Password | ${MONGO_PASSWORD} |
| Connection String | mongodb://${MONGO_USER}:${MONGO_PASSWORD}@salt-mongodb:27017/salt_chat?authSource=admin |
| Collection | Purpose | Estimated Size |
|---|---|---|
| messages | Individual chat messages (text, file refs, reactions) | ~500K documents/year |
| conversations | Conversation metadata (participants, type, last activity) | ~10K documents |
| read_receipts | Per-user read position in each conversation | ~50K documents |
| online_status | User online/offline/last-seen tracking (TTL-indexed) | ~5K documents |
// messages collection
{
_id: ObjectId,
conversationId: ObjectId, // ref → conversations._id
senderId: Long, // ref → tbl_sys_users.id
senderName: String, // denormalized for display
content: String, // message text
type: String, // "text" | "file" | "image" | "system"
fileUrl: String, // MinIO file path (if type=file/image)
replyTo: ObjectId, // ref → messages._id (threaded replies)
readBy: [Long], // array of user IDs who read this
createdAt: ISODate, // message timestamp (EAT)
updatedAt: ISODate // edit timestamp
}
// conversations collection
{
_id: ObjectId,
type: String, // "direct" | "group" | "subject" | "territory"
name: String, // group/subject name (null for direct)
participants: [Long], // array of tbl_sys_users.id
subjectId: Long, // ref → tbl_subjects.id (for subject chats)
territoryId: Long, // ref → tbl_territory.id (for territory chats)
lastMessage: { // denormalized latest message
content: String,
senderId: Long,
createdAt: ISODate
},
createdAt: ISODate,
updatedAt: ISODate
}
// read_receipts collection
{
_id: ObjectId,
conversationId: ObjectId,
userId: Long,
lastReadMessageId: ObjectId,
lastReadAt: ISODate
}
// online_status collection (TTL-indexed)
{
_id: Long, // tbl_sys_users.id
status: String, // "online" | "away" | "offline"
lastSeen: ISODate,
socketId: String, // Socket.IO connection ID
updatedAt: ISODate // TTL trigger (expires after 5 min idle)
}
Create these indexes after the database is initialized to ensure query performance.
# Connect to MongoDB and create indexes
docker compose exec salt-mongodb mongosh \
--username ${MONGO_USER} --password ${MONGO_PASSWORD} \
--authenticationDatabase admin salt_chat --eval '
// messages indexes
db.messages.createIndex({ conversationId: 1, createdAt: -1 });
db.messages.createIndex({ senderId: 1, createdAt: -1 });
db.messages.createIndex({ conversationId: 1, senderId: 1 });
db.messages.createIndex({ createdAt: 1 }, { expireAfterSeconds: 31536000 }); // 1 year TTL (optional)
// conversations indexes
db.conversations.createIndex({ participants: 1 });
db.conversations.createIndex({ type: 1, subjectId: 1 });
db.conversations.createIndex({ type: 1, territoryId: 1 });
db.conversations.createIndex({ "lastMessage.createdAt": -1 });
db.conversations.createIndex(
{ type: 1, participants: 1 },
{ unique: true, partialFilterExpression: { type: "direct" } }
);
// read_receipts indexes
db.read_receipts.createIndex({ conversationId: 1, userId: 1 }, { unique: true });
// online_status indexes (TTL: auto-delete after 5 minutes of no update)
db.online_status.createIndex({ updatedAt: 1 }, { expireAfterSeconds: 300 });
print("All indexes created successfully");
'
| Index | Collection | Query Pattern |
|---|---|---|
| { conversationId: 1, createdAt: -1 } | messages | Load messages in a conversation, newest first (main chat view) |
| { senderId: 1, createdAt: -1 } | messages | Find all messages by a user (admin moderation) |
| { participants: 1 } | conversations | Find all conversations a user is in (conversation list) |
| { type: 1, subjectId: 1 } | conversations | Find the chat room for a specific subject |
| { conversationId: 1, userId: 1 } | read_receipts | Check read position for unread badge count |
| TTL on updatedAt | online_status | Auto-cleanup stale online entries (disconnected without logout) |
Existing chat messages in PostgreSQL tbl_chat can be migrated to MongoDB for a unified chat experience.
# Step 1: Export existing chats from PostgreSQL
docker compose exec salt-postgres psql -U salt_admin -d salvation -c "
COPY (
SELECT json_build_object(
'senderId', c.sender_id,
'senderName', CONCAT(u.f_name, ' ', u.l_name),
'content', c.message,
'type', 'text',
'createdAt', json_build_object('\$date', c.created_date),
'subjectId', c.subject_id,
'territoryId', u.user_territory_id
)
FROM tbl_chat c
JOIN tbl_sys_users u ON c.sender_id = u.id
WHERE c.deletion_date IS NULL
ORDER BY c.created_date
) TO '/tmp/chat_export.jsonl'
" 2>/dev/null
# Step 2: Copy to MongoDB container
docker cp salt-postgres:/tmp/chat_export.jsonl /tmp/chat_export.jsonl
docker cp /tmp/chat_export.jsonl salt-mongodb:/tmp/chat_export.jsonl
# Step 3: Import into MongoDB
docker compose exec salt-mongodb mongoimport \
--uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
--collection messages \
--file /tmp/chat_export.jsonl
# Step 4: Create conversation documents from migrated messages
docker compose exec salt-mongodb mongosh \
--username ${MONGO_USER} --password ${MONGO_PASSWORD} \
--authenticationDatabase admin salt_chat --eval '
// Group messages by subject and create conversation documents
db.messages.aggregate([
{ $group: {
_id: "$subjectId",
participants: { $addToSet: "$senderId" },
lastMessage: { $last: { content: "$content", senderId: "$senderId", createdAt: "$createdAt" }},
firstDate: { $min: "$createdAt" }
}},
{ $project: {
type: "subject",
subjectId: "$_id",
participants: 1,
lastMessage: 1,
createdAt: "$firstDate",
updatedAt: new Date()
}}
]).forEach(doc => {
var conv = db.conversations.insertOne(doc);
db.messages.updateMany(
{ subjectId: doc.subjectId },
{ $set: { conversationId: conv.insertedId }}
);
});
print("Migration complete: conversations created and messages linked");
'
The existing tbl_chat table in PostgreSQL should remain intact (read-only) for audit purposes. New messages are written exclusively to MongoDB via the chat-service.

# Full backup of salt_chat database
docker compose exec salt-mongodb mongodump \
--uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/salt_chat?authSource=admin" \
--out /tmp/mongo_backup_$(date +%Y%m%d)
# Copy backup to host
docker cp salt-mongodb:/tmp/mongo_backup_$(date +%Y%m%d) ./backups/mongodb/
# Restore from backup
docker compose exec salt-mongodb mongorestore \
--uri "mongodb://${MONGO_USER}:${MONGO_PASSWORD}@localhost:27017/?authSource=admin" \
/tmp/mongo_backup_20260218/
# Check database stats
docker compose exec salt-mongodb mongosh \
--username ${MONGO_USER} --password ${MONGO_PASSWORD} \
--authenticationDatabase admin salt_chat --eval '
printjson(db.stats());
print("Messages: " + db.messages.countDocuments());
print("Conversations: " + db.conversations.countDocuments());
print("Online users: " + db.online_status.countDocuments({ status: "online" }));
'
# Check index usage
docker compose exec salt-mongodb mongosh \
--username ${MONGO_USER} --password ${MONGO_PASSWORD} \
--authenticationDatabase admin salt_chat --eval '
db.messages.aggregate([{ $indexStats: {} }]).forEach(printjson);
'
| Task | Frequency | Command |
|---|---|---|
| Backup | Daily (cron) | 0 2 * * * bash /opt/salt-microservices/deploy/backup-mongodb.sh |
| Compact | Weekly | db.runCommand({ compact: "messages" }) |
| Index rebuild | Monthly | db.messages.reIndex() |
| Old backup cleanup | Weekly | find /opt/salt-microservices/backups/mongodb -mtime +30 -delete |
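The weekly cleanup task relies on find's -mtime filter. The sketch below exercises the same retention logic against a throwaway directory so it can be verified safely; paths and ages are illustrative, -exec rm -rf is used so non-empty mongodump directories are removed cleanly, and touch -d assumes GNU coreutils (as on Ubuntu).

```shell
backup_root=$(mktemp -d)                        # stand-in for backups/mongodb
mkdir -p "$backup_root/mongo_backup_old" "$backup_root/mongo_backup_new"
touch -d '40 days ago' "$backup_root/mongo_backup_old"   # simulate a stale backup
find "$backup_root" -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} +
ls "$backup_root"                               # prints: mongo_backup_new
rm -rf "$backup_root"                           # clean up the demo directory
```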
| Port | Container | Service | Technology |
|---|---|---|---|
| 2181 | salt-zookeeper | Zookeeper | Confluent CP |
| 5432 | salt-postgres | Main Database | PostgreSQL 17 |
| 5433 | salt-keycloak-db | Keycloak Database | PostgreSQL 17 |
| 6379 | salt-redis | Cache & Sessions | Redis 7 Alpine |
| 8080 | salt-api-gateway | API Gateway | Spring Cloud Gateway |
| 8081 | salt-web-admin | Web Admin | Next.js |
| 8101 | salt-student-svc | Student Management | Spring Boot |
| 8102 | salt-enrollment-svc | Enrollment & Approvals | Spring Boot |
| 8103 | salt-curriculum-svc | Curriculum & Subjects | Spring Boot |
| 8104 | salt-exam-svc | Examinations | Spring Boot |
| 8105 | salt-assignment-svc | Assignments & Tutor | Spring Boot |
| 8106 | salt-grading-svc | Grading & Billing | Spring Boot |
| 8107 | salt-certificate-svc | Certificates & Transcripts | Spring Boot |
| 8108 | salt-reporting-svc | Reporting & Analytics | Spring Boot |
| 8109 | salt-file-svc | File Storage | Spring Boot |
| 8110 | salt-config-svc | System Configuration | Spring Boot |
| 8111 | salt-suspension-svc | Suspension & Intake | Spring Boot |
| 8112 | salt-chatbot-svc | Chatbot & FAQ | Spring Boot |
| 8180 | salt-keycloak | Identity Provider | Keycloak 24 |
| 8200 | salt-notification-svc | Notifications | FastAPI (Python) |
| 8300 | salt-chat-svc | Real-time Chat | Node.js + Socket.IO |
| 8761 | salt-eureka | Service Discovery | Eureka Server |
| 9000 | salt-minio | MinIO S3 API | MinIO |
| 9001 | salt-minio | MinIO Console | MinIO |
| 9092 | salt-kafka | Message Broker | Kafka |
| 27017 | salt-mongodb | Chat Database | MongoDB 7 |
| Action | Command |
|---|---|
| Start infrastructure only | bash deploy/docker-dev.sh start-infra |
| Start platform services | bash deploy/docker-dev.sh start-platform |
| Start business services | bash deploy/docker-dev.sh start-services |
| Start everything | bash deploy/docker-dev.sh start-all |
| Stop everything | docker compose down |
| Stop and remove volumes | docker compose down -v |
| View container status | docker compose ps |
| View resource usage | docker stats --no-stream |
| Tail service logs | docker compose logs -f --tail=100 <service-name> |
| Restart a single service | docker compose restart <service-name> |
| Rebuild and restart | docker compose up -d --build <service-name> |
| Open database shell | bash deploy/docker-dev.sh db-shell |
| Open Redis CLI | docker compose exec salt-redis redis-cli |
| List Kafka topics | docker compose exec salt-kafka kafka-topics --list --bootstrap-server localhost:9092 |
| Check Eureka services | curl -s http://localhost:8761/eureka/apps |
| Backup database | bash deploy/backup-db.sh --staging |
| Full reset (DANGER) | bash deploy/docker-dev.sh reset |
| Document | ID | Description |
|---|---|---|
| Microservice Architecture | SALT-MSA-2026-001 | Service catalog, Kafka topics, database ownership, decomposition roadmap |
| Technical Documentation | — | Monolith backend reference, entity descriptions, API endpoints |
| Monolith Deployment Guide | — | Monolith Docker Compose, staging/production profiles |
Document ID: SALT-MSD-2026-001 | Version 1.0 | 18th February 2026
© 2021-2026 College E-Learning — Developed by Sumba Group Limited for Salvation Army SALT College of Africa