Application Stack
⚠️ DYNAMIC STACK - PIPELINE MANAGED
This stack is HIGHLY DYNAMIC and changes continuously through automated CI/CD pipelines.
This documentation describes deployment patterns and infrastructure requirements only.
❌ DO NOT manually deploy services
❌ DO NOT edit configurations here
❌ DO NOT expect static service definitions
✅ All applications are managed by CI/CD pipelines
✅ Service configurations live in their own Git repositories
✅ Deployments happen automatically through GitOps
See individual service repositories for current deployment manifests and application configurations.
Overview
The Application Stack contains the NDP microservices that implement Egypt's National Digital Prescription platform.
Stack Architecture
Deployment Model
CI/CD Pipeline Flow
GitOps Workflow
Application Deployment Pattern
Template Reference Only
The following templates show the standard pattern that services should follow.
These are NOT actual deployments. Each service has its own specific configuration in its Git repository that may differ based on its requirements.
Use these as a reference when creating new services, not as authoritative deployment configurations.
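When stamping out manifests for a new service from these templates, the `{{ ... }}` placeholders can be substituted with a small script. A minimal sketch; the `renderTemplate` helper and the example values are illustrative and not part of any actual pipeline:

```javascript
// Replace {{ key }} placeholders in a template string with supplied values.
// Throws if a placeholder has no corresponding value, so missing
// configuration fails fast instead of producing a broken manifest.
function renderTemplate(template, values) {
  return template.replace(/\{\{\s*([\w-]+)\s*\}\}/g, (match, key) => {
    if (!(key in values)) {
      throw new Error(`No value provided for placeholder: ${key}`);
    }
    return values[key];
  });
}

// Example: stamp out a label block for a hypothetical service
const manifest = renderTemplate(
  "app: {{ service-name }}\nversion: {{ version }}",
  { "service-name": "ndp-prescription-service", version: "1.4.2" }
);
// manifest === "app: ndp-prescription-service\nversion: 1.4.2"
```

In practice Helm or Kustomize (as used by the service repositories) handles this substitution; the sketch only shows the mechanism the templates below assume.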
Standard Service Template
All NDP services follow this standard Kubernetes deployment pattern:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ service-name }}
  namespace: applications
  labels:
    app: {{ service-name }}
    version: {{ version }}
    managed-by: pipeline
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ service-name }}
  template:
    metadata:
      labels:
        app: {{ service-name }}
        version: {{ version }}
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: {{ service-name }}
      containers:
        - name: {{ service-name }}
          image: {{ registry }}/{{ service-name }}:{{ version }}
          ports:
            - containerPort: 8080
              name: http
          env:
            - name: SERVICE_NAME
              value: {{ service-name }}
            - name: ENVIRONMENT
              value: {{ environment }}
            - name: CONSUL_ADDR
              value: "consul.discovery-stack:8500"
            - name: VAULT_ADDR
              value: "http://vault.discovery-stack:8200"
          envFrom:
            - configMapRef:
                name: {{ service-name }}-config
            - secretRef:
                name: {{ service-name }}-secrets
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: "250m"
              memory: 512Mi
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          startupProbe:
            httpGet:
              path: /health/startup
              port: 8080
            initialDelaySeconds: 0
            periodSeconds: 5
            failureThreshold: 30
```

Service Mesh Integration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ service-name }}
  namespace: applications
  annotations:
    consul.hashicorp.com/service-name: "{{ service-name }}"
    consul.hashicorp.com/service-tags: "ndp,api,{{ version }}"
    consul.hashicorp.com/service-port: "8080"
  labels:
    app: {{ service-name }}
spec:
  type: ClusterIP
  selector:
    app: {{ service-name }}
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
```

Ingress Route
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: {{ service-name }}
  namespace: applications
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`api.healthflow.eg`) && PathPrefix(`/api/v1/{{ service-path }}`)
      kind: Rule
      services:
        - name: {{ service-name }}
          port: 8080
      middlewares:
        - name: auth-middleware
        - name: rate-limit
        - name: cors-all
  tls:
    certResolver: letsencrypt
```

Namespace Structure
Applications Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: applications
  labels:
    name: applications
    stack: applications
    managed-by: pipeline
```

Resource Quotas
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: applications
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    persistentvolumeclaims: "10"
    services.loadbalancers: "0"
```

Network Policies
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network-policy
  namespace: applications
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: gateway-stack
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: data-stack
    - to:
        - namespaceSelector:
            matchLabels:
              name: discovery-stack
    - to:
        - namespaceSelector:
            matchLabels:
              name: monitoring-stack
```

NDP Services Overview
Pipeline-Managed Services
Each service below is deployed and managed by its own CI/CD pipeline.
This table is for reference only. For actual deployment status, configurations, and documentation:
- Check the service's Git repository
- View the CI/CD pipeline status
- Check ArgoCD/Flux dashboard
- Review service-specific documentation in the service repo
DO NOT rely on this documentation for current service states or configurations.
| Service | Purpose | Repository | Pipeline Status |
|---|---|---|---|
| Prescription Service | Create and manage prescriptions | ndp-prescription-service | 🔄 Managed by pipeline |
| Dispense Service | Track medication dispensing | ndp-dispense-service | 🔄 Managed by pipeline |
| Patient Registry | Master patient index | ndp-patient-registry | 🔄 Managed by pipeline |
| HPR Registry | Healthcare provider registry | ndp-hpr-registry | 🔄 Managed by pipeline |
| Pharmacy Registry | Pharmacy master data | ndp-pharmacy-registry | 🔄 Managed by pipeline |
| Medicine Directory | National drug database | ndp-medicine-directory | 🔄 Managed by pipeline |
| Audit Service | Compliance and audit logging | ndp-audit-service | 🔄 Managed by pipeline |
| Notification Service | SMS/Email/Push notifications | ndp-notification-service | 🔄 Managed by pipeline |
| D2D Service | Drug-drug interaction checking | ndp-d2d-service | 📋 Planned |
Where to Find Service Information
- Source Code: `https://repo.local/healthflow/<service-name>`
- Deployment Status: CI/CD dashboard or GitOps tool (ArgoCD/Flux)
- Current Configuration: service repository `/k8s` or `/helm` directory
- Documentation: service repository `/docs` directory
- API Specs: service repository `/api` directory
Pipeline Integration
Environment Variables Required
Each service pipeline requires these environment variables:
```shell
# Container Registry
REGISTRY_URL=registry.healthflow.eg
REGISTRY_USERNAME=<from-vault>
REGISTRY_PASSWORD=<from-vault>

# Kubernetes Cluster
KUBE_CONFIG=<from-vault>
KUBE_NAMESPACE=applications

# Service Configuration
SERVICE_NAME=<service-name>
VERSION_TAG=${CI_COMMIT_SHA:0:8}

# Vault Integration
VAULT_ADDR=https://vault.healthflow.eg
VAULT_TOKEN=<from-pipeline-secret>
VAULT_SECRET_PATH=secret/ndp/${SERVICE_NAME}
```

GitHub Actions Example
```yaml
name: Deploy to Kubernetes
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build Docker Image
        run: |
          # Authenticate before pushing to the private registry
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login ${{ secrets.REGISTRY_URL }} \
            -u ${{ secrets.REGISTRY_USERNAME }} --password-stdin
          docker build -t ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }} .
          docker push ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }}
      - name: Deploy to Kubernetes
        uses: azure/k8s-deploy@v4
        with:
          manifests: |
            k8s/deployment.yaml
            k8s/service.yaml
            k8s/ingress.yaml
          images: |
            ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }}
          namespace: applications
```

ArgoCD Application
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ service-name }}
  namespace: argocd
spec:
  project: ndp
  source:
    repoURL: https://repo.local/healthflow/{{ service-name }}
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: applications
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
```

Service Communication
Service Discovery via Consul
Services discover each other using Consul DNS:

```yaml
# Application environment variables
PATIENT_REGISTRY_URL: http://patient-registry.service.consul:8080
HPR_REGISTRY_URL: http://hpr-registry.service.consul:8080
MEDICINE_DIRECTORY_URL: http://medicine-directory.service.consul:8080
```

Event-Driven Communication
Services publish events to Kafka:

```yaml
# Kafka configuration
KAFKA_BROKERS: kafka.data-stack:9092
KAFKA_TOPIC_PREFIX: ndp.events
KAFKA_CONSUMER_GROUP: ${SERVICE_NAME}
```

Secrets from Vault
Services fetch secrets from Vault at runtime:

```shell
# Vault agent sidecar injects secrets
vault agent -config=/vault/config/agent.hcl
```

Monitoring & Observability
Metrics
All services expose Prometheus metrics:
```yaml
# Prometheus scrape annotations
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
  prometheus.io/path: "/metrics"
```

Logging
All services send logs to Loki via Promtail:
Structured logging format (JSON):

```json
{
  "timestamp": "2024-04-12T18:20:00Z",
  "level": "info",
  "service": "prescription-service",
  "trace_id": "abc123",
  "message": "Prescription created",
  "prescription_id": "rx-12345"
}
```

Tracing
Distributed tracing with correlation IDs:
```
# Request headers
X-Trace-ID: <uuid>
X-Request-ID: <uuid>
X-User-ID: <user-id>
```

Resource Requirements
Dynamic Allocation
Resource requirements vary per service and are defined in individual service repositories. The following are baseline estimates for planning purposes.
Actual resource allocation is managed by:
- Horizontal Pod Autoscaler (HPA)
- Vertical Pod Autoscaler (VPA)
- Service-specific resource limits in Git
Monitor actual usage and adjust in service repositories.
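One lightweight way to compare observed usage against configured requests is to parse `kubectl top pods` output. A minimal sketch, assuming the default three-column layout (`NAME`, `CPU(cores)`, `MEMORY(bytes)`); the helper name and sample data are illustrative, not part of any pipeline:

```javascript
// Parse the tabular output of `kubectl top pods -n applications`
// into objects with numeric CPU (millicores) and memory (Mi) values.
function parseKubectlTop(output) {
  return output
    .trim()
    .split("\n")
    .slice(1) // drop the header row
    .map((line) => {
      const [name, cpu, memory] = line.trim().split(/\s+/);
      return {
        name,
        cpuMillicores: parseInt(cpu, 10), // "412m"  -> 412
        memoryMi: parseInt(memory, 10),   // "637Mi" -> 637
      };
    });
}

// Hypothetical sample output for two pods
const sample = [
  "NAME                           CPU(cores)   MEMORY(bytes)",
  "prescription-service-abc123    412m         637Mi",
  "dispense-service-def456        180m         301Mi",
].join("\n");

const usage = parseKubectlTop(sample);
// usage[0] -> { name: "prescription-service-abc123", cpuMillicores: 412, memoryMi: 637 }
```

Feeding these numbers into a report that flags pods running near their requests is one way to decide when a service repository's resource limits need adjusting.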
Baseline Per Service
| Resource | Minimum | Typical | High Load |
|---|---|---|---|
| CPU Request | 250m | 500m | 1000m |
| CPU Limit | 500m | 1000m | 2000m |
| Memory Request | 256Mi | 512Mi | 1Gi |
| Memory Limit | 512Mi | 1Gi | 2Gi |
| Replicas | 2 | 3 | 5+ |
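The per-service baselines above can be rolled up into a cluster-wide estimate. A quick planning sketch; the profile numbers mirror the table and nothing here reflects live cluster state:

```javascript
// Per-service baseline profiles, mirroring the table above
// (CPU in millicores, memory in Mi).
const profiles = {
  typical:  { cpuRequestM: 500,  cpuLimitM: 1000, memRequestMi: 512,  memLimitMi: 1024, replicas: 3 },
  highLoad: { cpuRequestM: 1000, cpuLimitM: 2000, memRequestMi: 1024, memLimitMi: 2048, replicas: 5 },
};

// Aggregate requests/limits for a given number of services on one profile.
function estimateCluster(serviceCount, profile) {
  const pods = serviceCount * profile.replicas;
  return {
    pods,
    cpuRequestCores: (pods * profile.cpuRequestM) / 1000,
    cpuLimitCores: (pods * profile.cpuLimitM) / 1000,
    memRequestGi: (pods * profile.memRequestMi) / 1024,
    memLimitGi: (pods * profile.memLimitMi) / 1024,
  };
}

const est = estimateCluster(10, profiles.typical);
// 10 services x 3 replicas = 30 pods -> 15 request cores, 30 limit cores,
// 15 Gi requested memory, 30 Gi memory limit
```

These figures are consistent with the rough cluster-wide estimate given below; actual numbers come from the resource specs in each service repository.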
Cluster-Wide Application Resources
Rough Estimate
For 8-10 active microservices:
- CPU: ~12-20 cores (with autoscaling)
- Memory: ~24-40 GB (with autoscaling)
- Storage: ~50-100 GB (logs, configs)
This excludes infrastructure services (databases, monitoring, etc.).
Deployment Best Practices
1. Health Checks
Every service must implement:
```
// Liveness: Is the process running?
GET /health/live
Response: 200 OK

// Readiness: Can the service accept traffic?
GET /health/ready
Response: 200 OK (if DB/dependencies available)

// Startup: Has the service finished initialization?
GET /health/startup
Response: 200 OK (after warmup)
```

2. Graceful Shutdown
```javascript
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, starting graceful shutdown");
  // Stop accepting new requests
  server.close();
  // Complete in-flight requests (max 30s); both helpers are app-provided
  await Promise.race([waitForInFlightRequests(), sleep(30000)]);
  // Close connections
  await database.close();
  await cache.close();
  process.exit(0);
});
```

3. Configuration Management
- Environment-specific configs in ConfigMaps
- Secrets in Vault (never in Git)
- Feature flags for gradual rollouts
- Version compatibility checks on startup
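A startup-time configuration loader can enforce most of these rules in one place. A minimal sketch, assuming the environment variable names used elsewhere in this stack; `FEATURE_FLAGS` and the `loadConfig` helper itself are illustrative, not a defined convention:

```javascript
// Load service configuration from environment variables with explicit
// defaults, failing fast on required values so misconfiguration is
// caught at startup rather than on the first request.
function loadConfig(env = process.env) {
  const required = (key) => {
    if (!env[key]) throw new Error(`Missing required env var: ${key}`);
    return env[key];
  };
  return {
    serviceName: required("SERVICE_NAME"),
    environment: env.ENVIRONMENT || "development",
    consulAddr: env.CONSUL_ADDR || "consul.discovery-stack:8500",
    vaultAddr: env.VAULT_ADDR || "http://vault.discovery-stack:8200",
    // Hypothetical: feature flags as a comma-separated list, e.g. "new-ui,fast-path"
    featureFlags: new Set((env.FEATURE_FLAGS || "").split(",").filter(Boolean)),
  };
}

const cfg = loadConfig({ SERVICE_NAME: "prescription-service", FEATURE_FLAGS: "new-ui" });
// cfg.featureFlags.has("new-ui") === true
```

In the deployment pattern above, these variables arrive via the pod's `env`/`envFrom` entries (ConfigMap for non-sensitive values, Vault-backed secrets for the rest).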
4. Rolling Updates
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```

5. Auto-Scaling
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ service-name }}
  namespace: applications
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ service-name }}
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

Troubleshooting
Service Not Starting
```shell
# Check pod status
kubectl get pods -n applications -l app=<service-name>

# Check pod events
kubectl describe pod -n applications <pod-name>

# Check logs
kubectl logs -n applications <pod-name> --tail=100

# Check previous crashed container
kubectl logs -n applications <pod-name> --previous
```

Service Not Receiving Traffic
```shell
# Check service endpoints
kubectl get endpoints -n applications <service-name>

# Check ingress route
kubectl get ingressroute -n applications <service-name> -o yaml

# Test service internally
kubectl run curl --image=curlimages/curl -i --rm --restart=Never -- \
  curl http://<service-name>.applications:8080/health
```

Secrets Not Available
```shell
# Check Vault status
kubectl exec -n discovery-stack vault-0 -- vault status

# Check service account token
kubectl exec -n applications <pod-name> -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Test Vault authentication (the token must be read inside the pod,
# so run the command substitution in the pod's shell)
kubectl exec -n applications <pod-name> -- sh -c \
  'curl -s http://vault.discovery-stack:8200/v1/auth/kubernetes/login \
    -d "{\"jwt\":\"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\",\"role\":\"ndp-app\"}"'
```

CI/CD Checklist
Before deploying a new service:
- [ ] Service repository created with standard structure
- [ ] Dockerfile optimized and secure
- [ ] Kubernetes manifests templated (Helm/Kustomize)
- [ ] Health check endpoints implemented
- [ ] Metrics endpoint exposed
- [ ] Structured logging configured
- [ ] CI/CD pipeline configured
- [ ] Service account created in K8s
- [ ] Vault policies configured
- [ ] Consul service registration configured
- [ ] Resource limits defined
- [ ] HPA configured
- [ ] Network policies applied
- [ ] Monitoring dashboards created
- [ ] Alert rules defined
- [ ] Documentation updated
Next Steps
For service-specific documentation, refer to individual service repositories:
- Prescription Service Documentation
- Dispense Service Documentation
- Patient Registry Documentation
- HPR Registry Documentation
- Pharmacy Registry Documentation
- Medicine Directory Documentation
- Audit Service Documentation
Key Takeaways
Critical Reminders
🚨 This Stack is Pipeline-Managed
- Applications are AUTOMATICALLY deployed: manual kubectl deployments will be overwritten
- Service manifests live in SERVICE repos, not in this infrastructure documentation
- Configuration is environment-specific, managed through GitOps in service repos
- Secrets live ONLY in Vault: never in Git, never in manifests
- Monitoring is mandatory: all services must expose metrics and structured logs
- Auto-scaling is configured per service: HPA settings live in service repositories
- Documentation lives with the CODE: update docs in service repos, not here
📋 What This Documentation IS
- ✅ Infrastructure patterns and standards
- ✅ Deployment guidelines and best practices
- ✅ Integration patterns (Consul, Vault, Kafka)
- ✅ Monitoring and observability requirements
- ✅ Resource allocation guidelines
❌ What This Documentation IS NOT
- ❌ Current service configurations
- ❌ Actual deployment manifests
- ❌ Service-specific details
- ❌ Real-time deployment status
- ❌ API documentation
🔍 Where to Find Current Information
- Live Deployments:
kubectl get pods -n applications - Service Status: ArgoCD/Flux dashboard
- Service Configs: Service Git repository
- API Documentation: Service repository
/docs - Build Status: CI/CD pipeline dashboard
Documentation Update Notice
Keep This Updated
If deployment patterns or infrastructure requirements change:
- Update this infrastructure documentation
- Communicate changes to all service teams
- Update service repository templates
- Update CI/CD pipeline templates
Last Updated: 2026-01-12
Maintained By: DevOps Team