High-Level Architecture
System Overview
The HealthFlow NDP Infrastructure is built on a multi-layered Kubernetes architecture designed for high availability, scalability, and security.
Architecture Diagram
Critical Service Flow: Prescription Creation
Critical Service Flow: Dispensing
Stack Dependencies
Deployment Order
The stacks must be deployed in the following sequence:
1. Gateway Stack - Provides ingress and routing
2. Discovery Stack - Enables service registration and secrets
3. Monitoring Stack - Observability foundation
4. Data Stack - Persistence layer
5. Application Stack - NDP microservices
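If the stacks are deployed through a GitOps controller such as Argo CD (an assumption; the document does not name a deployment tool), the ordering above can be encoded with sync-wave annotations rather than enforced manually. The sketch below shows a hypothetical `Application` for the Gateway stack at wave 1; the repository URL, project name, and paths are illustrative placeholders.

```yaml
# Hypothetical Argo CD Application for the Gateway stack.
# Lower sync-wave values deploy first, so Gateway (wave 1) precedes
# Discovery (wave 2), Monitoring (wave 3), and so on.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gateway-stack
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  project: healthflow-ndp            # illustrative project name
  source:
    repoURL: https://git.example.com/healthflow/infrastructure.git
    path: stacks/gateway
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: gateway
```

The same annotation with values `"2"` through `"5"` on the remaining stacks reproduces the full deployment sequence.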
Key Architectural Principles
1. Microservices Architecture
- Each service is independently deployable
- Services communicate via REST APIs and message queues
- Service discovery via Consul
2. Defense in Depth Security
- Network policies isolate stack communication
- Vault manages all secrets and credentials
- mTLS between services via service mesh
- RBAC for Kubernetes resources
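The "network policies isolate stack communication" principle can be sketched as a default-deny ingress rule that only admits traffic from the Gateway stack. This is a minimal example, assuming stack namespaces carry a `stack` label; the namespace and label names are illustrative.

```yaml
# Deny all ingress to application pods except from the Gateway stack.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway-only
  namespace: applications        # hypothetical namespace for the Application stack
spec:
  podSelector: {}                # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              stack: gateway     # assumes stack namespaces are labelled by stack name
```

Analogous policies between the other stacks enforce the layering, while service-to-service authentication is handled by the mesh's mTLS.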
3. Observability First
- Centralized logging with Loki
- Metrics collection with Prometheus
- Distributed tracing support
- Real-time alerting
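As a concrete instance of the alerting principle, a Prometheus Operator `PrometheusRule` can page when a service stops reporting. This sketch assumes the Prometheus Operator CRDs are installed with the Monitoring stack; the rule and namespace names are illustrative.

```yaml
# Fire a critical alert when any scrape target in the application
# namespace has been down for five minutes.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ndp-service-alerts
  namespace: monitoring
spec:
  groups:
    - name: availability
      rules:
        - alert: ServiceDown
          expr: up{namespace="applications"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "NDP service {{ $labels.job }} has been down for 5 minutes"
```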
4. High Availability
- Multi-replica deployments
- Pod anti-affinity rules
- Health checks and auto-healing
- Rolling updates with zero downtime
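The four high-availability points above map directly onto fields of a single Deployment manifest. The following is a minimal sketch, assuming a hypothetical `prescription-service` with HTTP health endpoints at `/healthz` and `/readyz`; the image registry and paths are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prescription-service         # hypothetical NDP service
spec:
  replicas: 3                        # multi-replica deployment
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0              # zero downtime: replace pods only after the new one is ready
      maxSurge: 1
  selector:
    matchLabels:
      app: prescription-service
  template:
    metadata:
      labels:
        app: prescription-service
    spec:
      affinity:
        podAntiAffinity:             # spread replicas across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: prescription-service
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: registry.example.com/ndp/prescription-service:1.0.0
          livenessProbe:             # auto-healing: restart unhealthy containers
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:            # gate traffic until the pod can serve
            httpGet:
              path: /readyz
              port: 8080
```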
5. Scalability
- Horizontal Pod Autoscaling (HPA)
- Cluster autoscaling support
- Database read replicas
- Redis caching layer
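An HPA for the hypothetical `prescription-service` Deployment illustrates the autoscaling point; the 70% CPU target and replica bounds are example values to be tuned against real traffic.

```yaml
# Scale between 3 and 10 replicas, targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: prescription-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prescription-service
  minReplicas: 3                 # never drop below the HA minimum
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that the HPA's CPU utilization target is computed against container resource requests, so accurate requests (see the per-stack table above) are a prerequisite for sensible scaling.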
Network Architecture
Resource Requirements
Resource Estimates
The following resource requirements are rough estimates based on typical workloads. Actual requirements will vary based on:
- Transaction volume and concurrent users
- Data retention policies
- Number of microservices deployed
- Monitoring and logging verbosity
- High availability configuration
Recommendation: Start with these baseline specs and scale based on actual monitoring data and performance metrics.
Minimum Cluster Specification
| Component | CPU (per node) | Memory (per node) | Storage (per node) | Nodes |
|---|---|---|---|---|
| Control Plane | 4 cores | 8 GB | 100 GB | 3 |
| Worker Nodes | 8 cores | 16 GB | 200 GB | 3+ |
| Total | 36+ cores | 72+ GB | 900+ GB | 6+ |
Per Stack Requirements (Estimated)
| Stack | CPU | Memory | Storage | Notes |
|---|---|---|---|---|
| Gateway | 3 cores | 4 GB | 50 GB | Adjust based on traffic |
| Discovery | 4 cores | 6 GB | 100 GB | Vault backend storage |
| Monitoring | 6 cores | 12 GB | 200 GB | Depends on retention period |
| Data | 8 cores | 24 GB | 500 GB | Scale with data growth |
| Applications | 12 cores | 24 GB | 100 GB | Varies by service count |
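Where each stack lives in its own namespace (an assumption consistent with the network isolation described above), the per-stack budgets can be enforced with a `ResourceQuota`. This sketch encodes the Data stack row of the table; the namespace name is illustrative.

```yaml
# Cap aggregate resource requests in the Data stack namespace
# at the table's estimated budget.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: data-stack-quota
  namespace: data
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 24Gi
    requests.storage: 500Gi    # sum of PersistentVolumeClaim requests
```

Quotas turn the estimates into guardrails: a stack that outgrows its budget fails loudly at scheduling time instead of starving its neighbours.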
High Availability Strategy
Database HA
- PostgreSQL with streaming replication
- MySQL with primary/replica replication
- Redis Sentinel for automatic failover
Application HA
- Minimum 3 replicas per service
- Pod anti-affinity across nodes
- Liveness and readiness probes
- Circuit breakers for external calls
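To keep the "minimum 3 replicas" guarantee intact during node drains and cluster upgrades, a `PodDisruptionBudget` can be paired with each service. A minimal sketch for the hypothetical `prescription-service`:

```yaml
# Allow voluntary disruptions (e.g. node drains) to evict at most
# one replica at a time, keeping two of three pods available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: prescription-service-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: prescription-service
```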
Infrastructure HA
- Multi-node Consul cluster
- Distributed Vault with HA backend
- Traefik deployed on all worker nodes
- GlusterFS or CephFS for shared persistent storage
Next Steps
- Service-Level Architecture - Detailed service interactions
- Network Architecture - In-depth network design
- Gateway Stack - Start with infrastructure deployment