
Application Stack

⚠️ DYNAMIC STACK - PIPELINE MANAGED

This stack is HIGHLY DYNAMIC and changes continuously through automated CI/CD pipelines.

This documentation describes deployment patterns and infrastructure requirements only.

❌ DO NOT manually deploy services
❌ DO NOT edit configurations here
❌ DO NOT expect static service definitions

✅ All applications are managed by CI/CD pipelines
✅ Service configurations live in their own Git repositories
✅ Deployments happen automatically through GitOps

See individual service repositories for current deployment manifests and application configurations.

Overview

The Application Stack contains the NDP microservices that implement Egypt's National Digital Prescription platform.

Stack Architecture

[Diagrams: deployment model, CI/CD pipeline flow, and GitOps workflow]

Application Deployment Pattern

Template Reference Only

The following templates show the standard pattern that services should follow.

These are NOT actual deployments. Each service has its own specific configuration in its Git repository that may differ based on its requirements.

Use these as a reference when creating new services, not as authoritative deployment configurations.

Standard Service Template

All NDP services follow this standard Kubernetes deployment pattern:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ service-name }}
  namespace: applications
  labels:
    app: {{ service-name }}
    version: {{ version }}
    managed-by: pipeline
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ service-name }}
  template:
    metadata:
      labels:
        app: {{ service-name }}
        version: {{ version }}
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: {{ service-name }}
      containers:
      - name: {{ service-name }}
        image: {{ registry }}/{{ service-name }}:{{ version }}
        ports:
        - containerPort: 8080
          name: http
        env:
        - name: SERVICE_NAME
          value: {{ service-name }}
        - name: ENVIRONMENT
          value: {{ environment }}
        - name: CONSUL_ADDR
          value: "consul.discovery-stack:8500"
        - name: VAULT_ADDR
          value: "http://vault.discovery-stack:8200"
        envFrom:
        - configMapRef:
            name: {{ service-name }}-config
        - secretRef:
            name: {{ service-name }}-secrets
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "250m"
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /health/startup
            port: 8080
          initialDelaySeconds: 0
          periodSeconds: 5
          failureThreshold: 30

Service Mesh Integration

yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ service-name }}
  namespace: applications
  annotations:
    consul.hashicorp.com/service-name: "{{ service-name }}"
    consul.hashicorp.com/service-tags: "ndp,api,{{ version }}"
    consul.hashicorp.com/service-port: "8080"
  labels:
    app: {{ service-name }}
spec:
  type: ClusterIP
  selector:
    app: {{ service-name }}
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http

Ingress Route

yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: {{ service-name }}
  namespace: applications
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`api.healthflow.eg`) && PathPrefix(`/api/v1/{{ service-path }}`)
      kind: Rule
      services:
        - name: {{ service-name }}
          port: 8080
      middlewares:
        - name: auth-middleware
        - name: rate-limit
        - name: cors-all
  tls:
    certResolver: letsencrypt

Namespace Structure

Applications Namespace

yaml
apiVersion: v1
kind: Namespace
metadata:
  name: applications
  labels:
    name: applications
    stack: applications
    managed-by: pipeline

Resource Quotas

yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: applications
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    persistentvolumeclaims: "10"
    services.loadbalancers: "0"

Network Policies

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-network-policy
  namespace: applications
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: gateway-stack
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: data-stack
    - to:
        - namespaceSelector:
            matchLabels:
              name: discovery-stack
    - to:
        - namespaceSelector:
            matchLabels:
              name: monitoring-stack

NDP Services Overview

Pipeline-Managed Services

Each service below is deployed and managed by its own CI/CD pipeline.

This table is for reference only. For actual deployment status, configurations, and documentation:

  1. Check the service's Git repository
  2. View the CI/CD pipeline status
  3. Check ArgoCD/Flux dashboard
  4. Review service-specific documentation in the service repo

DO NOT rely on this documentation for current service states or configurations.

| Service | Purpose | Repository | Pipeline Status |
| --- | --- | --- | --- |
| Prescription Service | Create and manage prescriptions | ndp-prescription-service | 🔄 Managed by pipeline |
| Dispense Service | Track medication dispensing | ndp-dispense-service | 🔄 Managed by pipeline |
| Patient Registry | Master patient index | ndp-patient-registry | 🔄 Managed by pipeline |
| HPR Registry | Healthcare provider registry | ndp-hpr-registry | 🔄 Managed by pipeline |
| Pharmacy Registry | Pharmacy master data | ndp-pharmacy-registry | 🔄 Managed by pipeline |
| Medicine Directory | National drug database | ndp-medicine-directory | 🔄 Managed by pipeline |
| Audit Service | Compliance and audit logging | ndp-audit-service | 🔄 Managed by pipeline |
| Notification Service | SMS/Email/Push notifications | ndp-notification-service | 🔄 Managed by pipeline |
| D2D Service | Drug-drug interaction checking | ndp-d2d-service | 📋 Planned |

Where to Find Service Information

  • Source Code: https://repo.local/healthflow/<service-name>
  • Deployment Status: CI/CD dashboard or GitOps tool (ArgoCD/Flux)
  • Current Configuration: Service repository /k8s or /helm directory
  • Documentation: Service repository /docs directory
  • API Specs: Service repository /api directory

Pipeline Integration

Environment Variables Required

Each service pipeline requires these environment variables:

bash
# Container Registry
REGISTRY_URL=registry.healthflow.eg
REGISTRY_USERNAME=<from-vault>
REGISTRY_PASSWORD=<from-vault>

# Kubernetes Cluster
KUBE_CONFIG=<from-vault>
KUBE_NAMESPACE=applications

# Service Configuration
SERVICE_NAME=<service-name>
VERSION_TAG=${CI_COMMIT_SHA:0:8}

# Vault Integration
VAULT_ADDR=https://vault.healthflow.eg
VAULT_TOKEN=<from-pipeline-secret>
VAULT_SECRET_PATH=secret/ndp/${SERVICE_NAME}

GitHub Actions Example

yaml
name: Deploy to Kubernetes

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build Docker Image
        run: |
          docker build -t ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }} .
          docker push ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }}

      - name: Deploy to Kubernetes
        uses: azure/k8s-deploy@v4
        with:
          manifests: |
            k8s/deployment.yaml
            k8s/service.yaml
            k8s/ingress.yaml
          images: |
            ${{ secrets.REGISTRY_URL }}/${{ env.SERVICE_NAME }}:${{ github.sha }}
          namespace: applications

ArgoCD Application

yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ service-name }}
  namespace: argocd
spec:
  project: ndp
  source:
    repoURL: https://repo.local/healthflow/{{ service-name }}
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: applications
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false

Service Communication

Service Discovery via Consul

Services discover each other using Consul DNS:

yaml
# Application environment variables
PATIENT_REGISTRY_URL: http://patient-registry.service.consul:8080
HPR_REGISTRY_URL: http://hpr-registry.service.consul:8080
MEDICINE_DIRECTORY_URL: http://medicine-directory.service.consul:8080
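As a hedged illustration (the helper name and the env-var-with-DNS-fallback behavior are assumptions, not taken from any service repository), a service might resolve these peer URLs like this:

```javascript
// serviceUrl: resolve a peer service's base URL.
// Prefers an explicit environment variable (e.g. PATIENT_REGISTRY_URL)
// and falls back to the Consul DNS naming convention shown above.
// NOTE: the helper name and fallback behavior are illustrative assumptions.
function serviceUrl(serviceName, env = process.env) {
  const envKey = serviceName.toUpperCase().replace(/-/g, "_") + "_URL";
  if (env[envKey]) return env[envKey];
  // Consul DNS convention: <service>.service.consul
  return `http://${serviceName}.service.consul:8080`;
}

// Example: explicit env var wins; otherwise Consul DNS is used.
console.log(serviceUrl("patient-registry", {}));
console.log(serviceUrl("patient-registry", {
  PATIENT_REGISTRY_URL: "http://localhost:3001",
}));
```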

Event-Driven Communication

Services publish events to Kafka:

yaml
# Kafka configuration
KAFKA_BROKERS: kafka.data-stack:9092
KAFKA_TOPIC_PREFIX: ndp.events
KAFKA_CONSUMER_GROUP: ${SERVICE_NAME}
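The configuration above implies a naming convention for topics. As a sketch (the `<prefix>.<entity>.<action>` topic scheme and the envelope fields are illustrative assumptions, not taken from any service repository), an event could be built like this:

```javascript
// buildEvent: construct a Kafka topic name and message envelope from
// the KAFKA_TOPIC_PREFIX convention above. The topic scheme and the
// envelope fields are illustrative assumptions.
function buildEvent(prefix, entity, action, payload) {
  return {
    topic: `${prefix}.${entity}.${action}`,
    message: {
      key: payload.id, // partition by entity ID for per-entity ordering
      value: JSON.stringify({
        event: `${entity}.${action}`,
        occurred_at: new Date().toISOString(),
        data: payload,
      }),
    },
  };
}

const evt = buildEvent("ndp.events", "prescription", "created", { id: "rx-12345" });
console.log(evt.topic); // "ndp.events.prescription.created"
```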

Secrets from Vault

Services fetch secrets from Vault at runtime:

bash
# Vault agent sidecar injects secrets
vault agent -config=/vault/config/agent.hcl
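The agent.hcl referenced above is service-specific. As a hedged sketch only (the sink path, template paths, and Kubernetes auth mount path are assumptions; the `ndp-app` role matches the Vault role used in the troubleshooting section), it might look like:

```hcl
# Sketch of /vault/config/agent.hcl -- paths and template are illustrative.
auto_auth {
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "ndp-app"
    }
  }
  sink "file" {
    config = {
      path = "/vault/secrets/.vault-token"
    }
  }
}

vault {
  address = "http://vault.discovery-stack:8200"
}

template {
  source      = "/vault/config/db-creds.tpl"
  destination = "/vault/secrets/db-creds.env"
}
```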

Monitoring & Observability

Metrics

All services expose Prometheus metrics:

yaml
# Prometheus scrape annotations
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
  prometheus.io/path: "/metrics"

Logging

All services send structured JSON logs to Loki via Promtail:

json
{
  "timestamp": "2024-04-12T18:20:00Z",
  "level": "info",
  "service": "prescription-service",
  "trace_id": "abc123",
  "message": "Prescription created",
  "prescription_id": "rx-12345"
}
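A minimal helper for emitting log lines in this shape (the helper name and field-merging behavior are illustrative assumptions; each service implements its own logger):

```javascript
// logEvent: emit one structured JSON log line per event, matching the
// shape shown above. One JSON object per line is what Promtail expects.
// NOTE: helper name and field handling are illustrative assumptions.
function logEvent(level, message, fields = {}) {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    service: process.env.SERVICE_NAME || "unknown",
    message,
    ...fields, // e.g. trace_id, prescription_id
  };
  console.log(JSON.stringify(entry));
  return entry;
}

logEvent("info", "Prescription created", {
  trace_id: "abc123",
  prescription_id: "rx-12345",
});
```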

Tracing

Distributed tracing with correlation IDs:

text
# Request headers
X-Trace-ID: <uuid>
X-Request-ID: <uuid>
X-User-ID: <user-id>

Resource Requirements

Dynamic Allocation

Resource requirements vary per service and are defined in individual service repositories. The following are baseline estimates for planning purposes.

Actual resource allocation is managed by:

  • Horizontal Pod Autoscaler (HPA)
  • Vertical Pod Autoscaler (VPA)
  • Service-specific resource limits in Git

Monitor actual usage and adjust in service repositories.

Baseline Per Service

| Resource | Minimum | Typical | High Load |
| --- | --- | --- | --- |
| CPU Request | 250m | 500m | 1000m |
| CPU Limit | 500m | 1000m | 2000m |
| Memory Request | 256Mi | 512Mi | 1Gi |
| Memory Limit | 512Mi | 1Gi | 2Gi |
| Replicas | 2 | 3 | 5+ |

Cluster-Wide Application Resources

Rough Estimate

For 8-10 active microservices:

  • CPU: ~12-20 cores (with autoscaling)
  • Memory: ~24-40 GB (with autoscaling)
  • Storage: ~50-100 GB (logs, configs)

This estimate excludes infrastructure services (databases, monitoring, etc.).
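These ranges can be sanity-checked against the "Typical" column of the baseline table; a rough calculation (service count and per-service figures are planning assumptions only):

```javascript
// Back-of-envelope check for 10 services at the "Typical" baseline:
// 3 replicas each, 500m CPU request, 1Gi memory limit per replica.
const services = 10;
const replicas = 3;
const cpuCores = services * replicas * 0.5; // 500m requests -> cores
const memGi = services * replicas * 1;      // 1Gi limits -> GiB
console.log(cpuCores, memGi); // 15 30
```

The result (15 cores, 30 GiB) lands inside the quoted 12-20 core and 24-40 GB ranges; autoscaling to 5+ replicas pushes toward the upper bounds.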

Deployment Best Practices

1. Health Checks

Every service must implement:

text
// Liveness: Is the process running?
GET /health/live
Response: 200 OK

// Readiness: Can the service accept traffic?
GET /health/ready
Response: 200 OK (if DB/dependencies available)

// Startup: Has the service finished initialization?
GET /health/startup
Response: 200 OK (after warmup)

2. Graceful Shutdown

javascript
process.on("SIGTERM", async () => {
  console.log("SIGTERM received, starting graceful shutdown");

  // Stop accepting new requests
  server.close();

  // Complete in-flight requests (max 30s)
  await Promise.race([waitForInFlightRequests(), sleep(30000)]);

  // Close connections
  await database.close();
  await cache.close();

  process.exit(0);
});

3. Configuration Management

  • Environment-specific configs in ConfigMaps
  • Secrets in Vault (never in Git)
  • Feature flags for gradual rollouts
  • Version compatibility checks on startup
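For illustration, a ConfigMap matching the `{{ service-name }}-config` reference in the standard deployment template might look like this (keys and values are placeholders; real configuration lives in each service repository):

```yaml
# Illustrative only -- actual keys are defined per service in Git.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prescription-service-config   # example service name
  namespace: applications
data:
  LOG_LEVEL: "info"
  HTTP_PORT: "8080"
  KAFKA_BROKERS: "kafka.data-stack:9092"
```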

4. Rolling Updates

yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

5. Auto-Scaling

yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ service-name }}
  namespace: applications
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ service-name }}
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80

Troubleshooting

Service Not Starting

bash
# Check pod status
kubectl get pods -n applications -l app=<service-name>

# Check pod events
kubectl describe pod -n applications <pod-name>

# Check logs
kubectl logs -n applications <pod-name> --tail=100

# Check previous crashed container
kubectl logs -n applications <pod-name> --previous

Service Not Receiving Traffic

bash
# Check service endpoints
kubectl get endpoints -n applications <service-name>

# Check ingress route
kubectl get ingressroute -n applications <service-name> -o yaml

# Test service internally
kubectl run curl --image=curlimages/curl -i --rm --restart=Never -- \
  curl http://<service-name>.applications:8080/health/ready

Secrets Not Available

bash
# Check Vault status
kubectl exec -n discovery-stack vault-0 -- vault status

# Check service account token
kubectl exec -n applications <pod-name> -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Test Vault authentication (run the whole command inside the pod with
# sh -c so the token is read from the pod, not from your workstation)
kubectl exec -n applications <pod-name> -- sh -c \
  'curl -s http://vault.discovery-stack:8200/v1/auth/kubernetes/login \
  -d "{\"jwt\":\"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\",\"role\":\"ndp-app\"}"'

CI/CD Checklist

Before deploying a new service:

  • [ ] Service repository created with standard structure
  • [ ] Dockerfile optimized and secure
  • [ ] Kubernetes manifests templated (Helm/Kustomize)
  • [ ] Health check endpoints implemented
  • [ ] Metrics endpoint exposed
  • [ ] Structured logging configured
  • [ ] CI/CD pipeline configured
  • [ ] Service account created in K8s
  • [ ] Vault policies configured
  • [ ] Consul service registration configured
  • [ ] Resource limits defined
  • [ ] HPA configured
  • [ ] Network policies applied
  • [ ] Monitoring dashboards created
  • [ ] Alert rules defined
  • [ ] Documentation updated

Next Steps

For service-specific documentation, refer to the individual service repositories listed in the services table above.

Key Takeaways

Critical Reminders

🚨 This Stack is Pipeline-Managed

  1. Applications are AUTOMATICALLY deployed - Manual kubectl deployments will be overwritten
  2. Service manifests live in SERVICE repos - Not in this infrastructure documentation
  3. Configuration is environment-specific - Managed through GitOps in service repos
  4. Secrets are ONLY in Vault - Never in Git, never in manifests
  5. Monitoring is mandatory - All services must expose metrics and structured logs
  6. Auto-scaling is configured per service - HPA settings in service repositories
  7. Documentation lives with CODE - Update docs in service repos, not here

📋 What This Documentation IS

  • ✅ Infrastructure patterns and standards
  • ✅ Deployment guidelines and best practices
  • ✅ Integration patterns (Consul, Vault, Kafka)
  • ✅ Monitoring and observability requirements
  • ✅ Resource allocation guidelines

❌ What This Documentation IS NOT

  • ❌ Current service configurations
  • ❌ Actual deployment manifests
  • ❌ Service-specific details
  • ❌ Real-time deployment status
  • ❌ API documentation

🔍 Where to Find Current Information

  • Live Deployments: kubectl get pods -n applications
  • Service Status: ArgoCD/Flux dashboard
  • Service Configs: Service Git repository
  • API Documentation: Service repository /docs
  • Build Status: CI/CD pipeline dashboard

Documentation Update Notice

Keep This Updated

If deployment patterns or infrastructure requirements change:

  1. Update this infrastructure documentation
  2. Communicate changes to all service teams
  3. Update service repository templates
  4. Update CI/CD pipeline templates

Last Updated: 2026-01-12
Maintained By: DevOps Team