High-Level Architecture

System Overview

The HealthFlow NDP Infrastructure is built on a multi-layered Kubernetes architecture designed for high availability, scalability, and security.

Architecture Diagram

Critical Service Flow: Prescription Creation

Critical Service Flow: Dispensing

Stack Dependencies

Deployment Order

The stacks must be deployed in the following sequence:

  1. Gateway Stack - Provides ingress and routing
  2. Discovery Stack - Enables service registration and secrets
  3. Monitoring Stack - Observability foundation
  4. Data Stack - Persistence layer
  5. Application Stack - NDP microservices
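If the stacks are managed with a GitOps tool such as Argo CD (an assumption — the source does not name a deployment tool), this ordering can be enforced declaratively with sync-wave annotations instead of manual sequencing. A sketch for the first stack, with hypothetical names and a placeholder repository URL:

```yaml
# Hypothetical Argo CD Application for the Gateway stack.
# Lower sync waves deploy first: gateway=0, discovery=1,
# monitoring=2, data=3, applications=4.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gateway-stack              # hypothetical name
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: ndp-infrastructure      # hypothetical project
  source:
    repoURL: https://git.example.com/healthflow/ndp-stacks  # placeholder URL
    path: stacks/gateway
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: gateway
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The remaining stacks would carry sync-wave values 1 through 4, reproducing the sequence above.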

Key Architectural Principles

1. Microservices Architecture

  • Each service is independently deployable
  • Services communicate via REST APIs and message queues
  • Service discovery via Consul
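With Consul on Kubernetes, service registration is typically enabled per pod through the connect-inject annotation. A minimal sketch — the service name, namespace, and image are illustrative, not taken from the actual NDP manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prescription-service       # hypothetical NDP microservice
  namespace: applications
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prescription-service
  template:
    metadata:
      labels:
        app: prescription-service
      annotations:
        # Injects a Consul sidecar and registers the pod in the catalog
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: prescription-service
          image: registry.example.com/ndp/prescription-service:1.0.0  # placeholder
          ports:
            - containerPort: 8080
```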

2. Defense in Depth Security

  • Network policies isolate stack communication
  • Vault manages all secrets and credentials
  • mTLS between services via service mesh
  • RBAC for Kubernetes resources
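The "network policies isolate stack communication" principle usually means default-deny ingress plus explicit allows. A sketch for the data stack namespace — the namespace and label names are assumptions:

```yaml
# Allow ingress to data-stack pods only from the application
# stack's namespace; combined with a default-deny policy, all
# other cross-stack traffic is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apps-to-data
  namespace: data
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              stack: applications  # assumed namespace label
      ports:
        - protocol: TCP
          port: 5432               # PostgreSQL
```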

3. Observability First

  • Centralized logging with Loki
  • Metrics collection with Prometheus
  • Distributed tracing support
  • Real-time alerting
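Assuming the monitoring stack runs the Prometheus Operator (a common but unconfirmed setup), metrics collection for the NDP services could be declared with a ServiceMonitor; label and port names here are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ndp-services
  namespace: monitoring
spec:
  namespaceSelector:
    matchNames:
      - applications
  selector:
    matchLabels:
      monitoring: enabled          # assumed label on NDP services
  endpoints:
    - port: metrics                # named port exposing /metrics
      interval: 30s
```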

4. High Availability

  • Multi-replica deployments
  • Pod anti-affinity rules
  • Health checks and auto-healing
  • Rolling updates with zero downtime
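The bullets above map onto a Deployment spec roughly as follows; the service name and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dispensing-service         # hypothetical NDP microservice
spec:
  replicas: 3                      # multi-replica deployment
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # zero-downtime rollout
      maxSurge: 1
  selector:
    matchLabels:
      app: dispensing-service
  template:
    metadata:
      labels:
        app: dispensing-service
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: dispensing-service
              topologyKey: kubernetes.io/hostname  # spread replicas across nodes
      containers:
        - name: dispensing-service
          image: registry.example.com/ndp/dispensing-service:1.0.0  # placeholder
```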

5. Scalability

  • Horizontal Pod Autoscaling (HPA)
  • Cluster autoscaling support
  • Database read replicas
  • Redis caching layer
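A Horizontal Pod Autoscaler for one of the services might look like this (target name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: prescription-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prescription-service     # hypothetical service
  minReplicas: 3                   # matches the HA minimum below
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```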

Network Architecture

Resource Requirements

Resource Estimates

The following resource requirements are rough estimates based on typical workloads. Actual requirements will vary based on:

  • Transaction volume and concurrent users
  • Data retention policies
  • Number of microservices deployed
  • Monitoring and logging verbosity
  • High availability configuration

Recommendation: Start with these baseline specs and scale based on actual monitoring data and performance metrics.

Minimum Cluster Specification

| Component     | CPU       | Memory | Storage | Replicas |
|---------------|-----------|--------|---------|----------|
| Control Plane | 4 cores   | 8 GB   | 100 GB  | 3        |
| Worker Nodes  | 8 cores   | 16 GB  | 200 GB  | 3+       |
| **Total**     | 28+ cores | 56+ GB | 700+ GB | 9+       |

Per Stack Requirements (Estimated)

| Stack        | CPU      | Memory | Storage | Notes                       |
|--------------|----------|--------|---------|-----------------------------|
| Gateway      | 3 cores  | 4 GB   | 50 GB   | Adjust based on traffic     |
| Discovery    | 4 cores  | 6 GB   | 100 GB  | Vault backend storage       |
| Monitoring   | 6 cores  | 12 GB  | 200 GB  | Depends on retention period |
| Data         | 8 cores  | 24 GB  | 500 GB  | Scale with data growth      |
| Applications | 12 cores | 24 GB  | 100 GB  | Varies by service count     |
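The per-stack figures above are cluster-level estimates; on individual workloads they are enforced through container requests and limits. A fragment with illustrative values:

```yaml
# Container-level resource fragment; tune per service based on
# observed usage rather than copying these placeholder values.
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```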

High Availability Strategy

Database HA

  • PostgreSQL with streaming replication
  • MySQL with primary-replica replication
  • Redis Sentinel for automatic failover

Application HA

  • Minimum 3 replicas per service
  • Pod anti-affinity across nodes
  • Liveness and readiness probes
  • Circuit breakers for external calls
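The liveness and readiness probes listed above might be declared as follows in a container spec; the endpoint paths, port, and timings are assumptions:

```yaml
containers:
  - name: prescription-service     # hypothetical service
    image: registry.example.com/ndp/prescription-service:1.0.0  # placeholder
    livenessProbe:
      httpGet:
        path: /healthz             # assumed health endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3          # restart after 3 consecutive failures
    readinessProbe:
      httpGet:
        path: /ready               # assumed readiness endpoint
        port: 8080
      periodSeconds: 5             # removed from load balancing when failing
```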

Infrastructure HA

  • Multi-node Consul cluster
  • Distributed Vault with HA backend
  • Traefik deployed on all worker nodes
  • GlusterFS or CephFS for shared persistent storage
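A distributed Vault with an HA backend can be configured through the official Vault Helm chart. The sketch below uses integrated Raft storage as one possible HA backend (the actual backend choice is not specified in this document; replica count is illustrative):

```yaml
# values.yaml fragment for the official vault Helm chart
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true    # integrated storage instead of an external backend
```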

Next Steps