Breaking Bad at Infrastructure: A Multi-Tenant Kubernetes Journey
By Heffenberg | January 6, 2026
"I am the one who deploys."
You know what they say in the meth business—wait, wrong profession. In the infrastructure business, they say: "You're either cooking your own infrastructure, or you're buying it from someone else." And let me tell you, I chose to cook.
My name is Heffenberg, and over the past few months, I've built something beautiful. Something pure. A multi-tenant Kubernetes platform that deploys faster than Walter White could say "Stay out of my territory."
This isn't just another "here's my k8s setup" blog post. This is the story of how I went from a simple home lab to a production-grade, multi-tenant platform that would make even Gus Fring nod in approval. Let me break it down for you—and unlike Walter's blue crystal, this recipe is open source.
The Empire Begins: Understanding the Territory
Every good empire starts with a plan. Mine began with a simple question: "How do I deploy multiple isolated application instances without building the same infrastructure over and over again?"
The answer? Multi-tenancy with shared infrastructure.
Here's the vision I cooked up:
┌───────────────────────────────────────────────────────────────┐
│                       Heffenberg Empire                       │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐      │
│   │   Tenant 1   │   │   Tenant 2   │   │   Tenant N   │      │
│   │    (demo)    │   │    (prod)    │   │  (staging)   │      │
│   └──────┬───────┘   └──────┬───────┘   └──────┬───────┘      │
│          │                  │                  │              │
│          └──────────────────┴──────────────────┘              │
│                             │                                 │
│  ┌──────────────────────────▼──────────────────────────────┐  │
│  │              Shared Infrastructure Layer                │  │
│  ├─────────────────────────────────────────────────────────┤  │
│  │  PostgreSQL  │  MSSQL  │  Redis  │  RabbitMQ            │  │
│  │  MinIO       │  Keycloak (Identity)                     │  │
│  └──────────────────────────┬──────────────────────────────┘  │
│                             │                                 │
│  ┌──────────────────────────▼──────────────────────────────┐  │
│  │            HashiCorp Vault (Secrets Empire)             │  │
│  └─────────────────────────────────────────────────────────┘  │
│                                                               │
└───────────────────────────────────────────────────────────────┘
Each tenant gets its own isolated namespace and its own application instances, but they all share the same infrastructure. Efficiency. Scalability. Purity.
The Chemistry: Building Blocks
PostgreSQL: The Base Compound
First, I needed a rock-solid database foundation. Enter Zalando PostgreSQL Operator.
This isn't your grandfather's PostgreSQL. This is high-availability, automated failover, continuous archiving to MinIO, point-in-time recovery goodness. Three nodes. Automatic leadership election. Backup retention for 30 days.
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: postgres-cluster
  namespace: app-infra-db
spec:
  numberOfInstances: 3
  volume:
    size: 50Gi
    storageClass: vsphere-rw-retain
  backup:
    target: prefer-standby
    retentionPolicy: "30d"
Each tenant gets their own database on this cluster. No need to spin up separate PostgreSQL instances for each tenant—that would be like cooking meth in separate RVs when you have a perfectly good industrial lab. Wasteful.
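Provisioning one of those per-tenant databases can be sketched as plain `psql` against the cluster's primary pod. This is a rough sketch, not the repo's actual tooling, and the role/database naming convention here is hypothetical:

```shell
# Create an isolated database and owner role for a new tenant on the shared
# cluster (pod name follows the Zalando operator's <cluster>-0 convention;
# the tenant_* naming scheme is illustrative).
TENANT=demo
TENANT_PASS=$(openssl rand -base64 24)
kubectl exec -n app-infra-db postgres-cluster-0 -- psql -U postgres -c \
  "CREATE ROLE tenant_${TENANT} LOGIN PASSWORD '${TENANT_PASS}';"
kubectl exec -n app-infra-db postgres-cluster-0 -- psql -U postgres -c \
  "CREATE DATABASE tenant_${TENANT} OWNER tenant_${TENANT};"
```

The Zalando operator can also declare users and databases directly in the `postgresql` manifest (its `users:` and `databases:` fields), which keeps tenant provisioning declarative instead of imperative.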
Redis: The Catalyst
Every good operation needs speed. Redis is my catalyst—caching, session management, temporary data storage. Deployed in the app-infra-cache namespace, shared across tenants but logically isolated with key prefixes.
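Key-prefix isolation is a convention, not an enforcement mechanism, so it's worth seeing what it looks like in practice. A sketch, with illustrative key names and pod name:

```shell
# Logical tenant isolation by key prefix: every key is namespaced as
# tenant:<name>:<key> (key names and the redis-0 pod name are illustrative).
kubectl exec -n app-infra-cache redis-0 -- \
  redis-cli SET "tenant:demo:session:abc123" "heffenberg" EX 3600
kubectl exec -n app-infra-cache redis-0 -- \
  redis-cli --scan --pattern "tenant:demo:*"
```

For harder isolation, Redis 6+ ACLs can restrict a user to its own prefix, e.g. `ACL SETUSER demo on >password ~tenant:demo:* +@all`.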
kubectl apply -k infrastructure/02-redis/
Simple. Fast. Effective. Like a well-timed thermite reaction.
RabbitMQ: The Distribution Network
Messages need to flow. Events need to propagate. RabbitMQ is my distribution network—a three-node cluster handling all inter-service communication.
Each tenant gets their own virtual host. Isolated queues. Isolated exchanges. But all running on the same robust infrastructure.
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-cluster
  namespace: app-infra-messaging
spec:
  replicas: 3
  persistence:
    storage: 20Gi
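Wiring up one of those per-tenant virtual hosts might look like this (a sketch: the user and password handling is illustrative, and in this setup the credentials would come from Vault rather than the command line):

```shell
# Per-tenant vhost with permissions scoped to that vhost only. The pod name
# follows the cluster operator's <name>-server-N convention.
kubectl exec -n app-infra-messaging rabbitmq-cluster-server-0 -- \
  rabbitmqctl add_vhost tenant-demo
kubectl exec -n app-infra-messaging rabbitmq-cluster-server-0 -- \
  rabbitmqctl add_user demo-app "changeme-from-vault"
# configure / write / read permissions, limited to the tenant's vhost
kubectl exec -n app-infra-messaging rabbitmq-cluster-server-0 -- \
  rabbitmqctl set_permissions -p tenant-demo demo-app ".*" ".*" ".*"
```

A user with permissions only on `tenant-demo` cannot even see queues in another tenant's vhost, which is exactly the isolation boundary described above.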
MinIO: The Warehouse
Product images. User uploads. Database backups. All stored in MinIO—my S3-compatible object storage warehouse.
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: app-infra-storage
spec:
  ports:
    - port: 9000
      name: api
    - port: 9001
      name: console
Each tenant gets their own buckets: tenant-demo-images, tenant-demo-uploads, tenant-demo-backups. Separation of concerns. Data sovereignty. Control.
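Creating that bucket set for a tenant is a few lines with the MinIO client (`mc`). A sketch, assuming the in-cluster service DNS name from the manifest above; the alias name and credential variables are illustrative:

```shell
# Point mc at the in-cluster MinIO endpoint (credentials are illustrative;
# in this setup they'd be pulled from Vault).
mc alias set infra http://minio.app-infra-storage.svc.cluster.local:9000 \
  "$MINIO_USER" "$MINIO_PASS"

# One bucket per concern, per tenant
for suffix in images uploads backups; do
  mc mb "infra/tenant-demo-${suffix}"
done

mc ls infra
```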
HashiCorp Vault: The Secret Empire
This is where it gets interesting. Every empire needs secrets. Vault is my secret empire—centralized, secured, audited.
No hardcoded passwords. No ConfigMaps full of credentials. Everything dynamic. Everything rotated. Everything traceable.
vault kv put secret/app-infra/shared/postgres \
  host="postgres.app-infra-db.svc.cluster.local" \
  port=5432 \
  username="postgres" \
  password="$(openssl rand -base64 32)"
When a tenant spins up, it authenticates with Vault using Kubernetes service accounts, retrieves its credentials, and injects them at runtime. Zero trust. Maximum security.
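The Kubernetes-service-account flow described above takes only a handful of Vault commands to wire up. A sketch, with hypothetical policy, role, and secret-path names:

```shell
# Enable Kubernetes auth and point it at the cluster's API server
vault auth enable kubernetes
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc"

# Policy: the demo tenant may read only its own secrets (path is illustrative)
vault policy write tenant-demo - <<'EOF'
path "secret/data/app-infra/tenants/demo/*" {
  capabilities = ["read"]
}
EOF

# Role: bind the tenant's service account + namespace to that policy
vault write auth/kubernetes/role/tenant-demo \
  bound_service_account_names=demo-app \
  bound_service_account_namespaces=app-tenant-demo \
  policies=tenant-demo \
  ttl=1h
```

A pod running as `demo-app` in `app-tenant-demo` can then exchange its service-account token for a short-lived Vault token scoped to exactly that policy, and nothing more.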
The Cook: Deployment Process
Phase 1: Setting Up the Lab (Namespaces)
First, create your namespace structure. Clean. Organized. Professional.
# Infrastructure namespaces (app-01 cluster)
kubectl config use-context app-01
kubectl apply -k infrastructure/00-namespaces/
Created:
- app-infra-db - PostgreSQL territory
- app-infra-cache - Redis cache
- app-infra-messaging - RabbitMQ message bus
- app-infra-storage - MinIO object storage
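For a feel of what one of those kustomize overlays might contain, here's a sketch of a namespace manifest paired with a ResourceQuota so no single component can starve the cluster. The label and quota values are illustrative, not the repo's actual manifests:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: app-infra-db
  labels:
    app.kubernetes.io/part-of: app-infra
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: db-quota
  namespace: app-infra-db
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 32Gi
EOF
```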
Phase 2: Deploy Infrastructure (The Foundation)
Now we lay the foundation. One component at a time. Methodical. Precise.
# Deploy PostgreSQL
kubectl apply -k infrastructure/01-postgres/
kubectl wait --for=condition=ready pod -l app=postgres -n app-infra-db --timeout=300s
# Deploy Redis
kubectl apply -k infrastructure/02-redis/
kubectl wait --for=condition=ready pod -l app=redis -n app-infra-cache --timeout=60s
# Deploy RabbitMQ
kubectl apply -k infrastructure/03-rabbitmq/
kubectl wait --for=condition=ready pod -l app=rabbitmq -n app-infra-messaging --timeout=180s
# Deploy MinIO
kubectl apply -k infrastructure/04-minio/
kubectl wait --for=condition=ready pod -l app=minio -n app-infra-storage --timeout=60s
Infrastructure up. Components healthy. Ready to cook.
Phase 3: Configure Vault (The Secret Recipe)
Store your infrastructure credentials in Vault. This is your secret recipe. Guard it well.
# Add PostgreSQL credentials
./scripts/add-postgres-to-vault.ps1 -RootToken $VAULT_TOKEN
# Add RabbitMQ credentials
./scripts/add-rabbitmq-to-vault.ps1 -RootToken $VAULT_TOKEN
# Add Harbor registry credentials
./scripts/add-harbor-to-vault.ps1 -RootToken $VAULT_TOKEN -Username admin -Password "YourPassword"
Credentials secured. Vault configured. Time to build the product.
Phase 4: Build Application Images (The Product)
Every empire needs a product. Mine is BlazingBlog—a full-featured blog platform built with Blazor and .NET 8.
./scripts/build-app-images.ps1 `
  -SourcePath "D:\vsproj\blazingblogv2-template" `
  -Registry "harbor-02.fcs-cloud.com/library" `
  -Tag "latest" `
  -Push
Simple. Elegant. Effective. Two main components:
- blazingblog - Blazor Server application (blog engine, admin interface, content management)
- identity - Keycloak integration for authentication and user management
The beauty? Each tenant gets their own isolated blog instance. Same codebase. Different branding. Different content. Different databases.
Images pushed to Harbor. Ready for deployment.
Phase 5: Deploy Your First Tenant (The Distribution)
Now for the magic. Deploying a complete tenant with one command.
# Switch to management cluster
kubectl config use-context management
# Deploy tenant "demo"
./scripts/phase5-deploy-tenant.ps1 -TenantName "demo" -RootToken $VAULT_TOKEN
Watch the automation unfold:
- ✅ Vault configuration - Creates tenant policies, roles, authentication
- ✅ Namespace creation - app-tenant-demo namespace ready
- ✅ Service account - Kubernetes identity for Vault auth
- ✅ Image pull secret - Harbor credentials from Vault
- ✅ Connection secrets - SQL Server, MinIO, Redis configs from Vault
- ✅ Service deployment - Blog application and identity services deployed
- ✅ Gateway configuration - Cilium Gateway API with dedicated LoadBalancer IP
- ✅ Health checks - Waits for all pods to be ready
NAME                                READY   STATUS    RESTARTS   AGE
demo-blazingblog-7d9f8b6c5d-x7k2m   1/1     Running   0          2m
demo-identity-6b8c9d5f4-p9h3n       1/1     Running   0          2m
Two pods. One blog instance. Complete isolation. Beautiful.
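Stripped of its error handling, the core of what that deployment script automates could be sketched as raw commands like these (paths and names are illustrative, not the script's actual internals):

```shell
# The essential steps of a tenant rollout, expressed as raw kubectl/vault calls
TENANT=demo
kubectl create namespace "app-tenant-${TENANT}"
kubectl create serviceaccount "${TENANT}-app" -n "app-tenant-${TENANT}"

# Bind the tenant's service account to its Vault policy
vault write "auth/kubernetes/role/tenant-${TENANT}" \
  bound_service_account_names="${TENANT}-app" \
  bound_service_account_namespaces="app-tenant-${TENANT}" \
  policies="tenant-${TENANT}"

# Apply the tenant's manifests (deployments, gateway, secrets) and wait
kubectl apply -k "tenants/${TENANT}/"
kubectl wait --for=condition=ready pod --all \
  -n "app-tenant-${TENANT}" --timeout=300s
```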
The Distribution Network: Accessing Your Empire
Port Forward (Quick Access)
kubectl port-forward -n app-tenant-demo svc/demo-blazingblog 8080:8080
Access your blog at http://localhost:8080. Write posts. Upload images to MinIO. Watch data persist in SQL Server. Watch Redis cache your content. Watch your empire grow, one blog post at a time.
LoadBalancer (External Access)
For production access, use the Cilium Gateway API with BGP-advertised LoadBalancer:
kubectl get gateway demo-gateway -n app-tenant-demo
Access at https://demo.fcs-cloud.com. Your tenant is live. Your empire is operational.
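A minimal Gateway plus HTTPRoute pair for the demo tenant might look like the sketch below. The TLS secret name is hypothetical; `gatewayClassName: cilium` is Cilium's standard class name:

```shell
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
  namespace: app-tenant-demo
spec:
  gatewayClassName: cilium
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        certificateRefs:
          - name: demo-tls   # TLS cert secret (name illustrative)
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
  namespace: app-tenant-demo
spec:
  parentRefs:
    - name: demo-gateway
  hostnames:
    - demo.fcs-cloud.com
  rules:
    - backendRefs:
        - name: demo-blazingblog
          port: 8080
EOF
```

Cilium then advertises the Gateway's LoadBalancer IP over BGP, which is how each tenant ends up with its own dedicated address.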
Cilium Gateway API (Professional Distribution)
For the full setup, Cilium Gateway API handles routing with BGP load balancing:
demo.fcs-cloud.com → Gateway IP 10.42.255.20 → app-tenant-demo/demo-blazingblog
prod.fcs-cloud.com → Gateway IP 10.42.255.21 → app-tenant-prod/prod-blazingblog
heffenberg.fcs-cloud.com → Gateway IP 10.42.255.22 → app-tenant-heffenberg/heffenberg-blazingblog
TLS termination. Kubernetes-native routing. BGP load balancing. Professional-grade distribution.
Expanding the Empire: Deploy More Tenants
The beauty of this system? Scaling is trivial.
# Production tenant
./scripts/phase5-deploy-tenant.ps1 -TenantName "prod" -RootToken $VAULT_TOKEN
# Staging tenant
./scripts/phase5-deploy-tenant.ps1 -TenantName "staging" -RootToken $VAULT_TOKEN
# Dev tenant
./scripts/phase5-deploy-tenant.ps1 -TenantName "dev" -RootToken $VAULT_TOKEN
Each tenant:
- Gets its own namespace
- Gets its own service account
- Gets its own Vault policies
- Shares the same infrastructure
- Runs in complete isolation
Four tenants. One infrastructure. Maximum efficiency.
Like running four separate operations from the same industrial lab. Walter White would be proud.
The Quality Control: Monitoring & Observability
Any good cook knows: you need to monitor your process.
Metrics (Prometheus)
Every component exports metrics:
- PostgreSQL: Database performance, connections, queries
- Redis: Cache hit rates, memory usage
- RabbitMQ: Queue depths, message rates
- Application: HTTP requests, latency, errors
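Those metrics can be spot-checked straight from the Prometheus HTTP API. A sketch; the Prometheus service URL and the exact metric names are illustrative (they depend on your exporters):

```shell
PROM=http://prometheus.monitoring.svc.cluster.local:9090

# p95 request latency for the demo tenant's pods over the last 5 minutes
curl -sG "${PROM}/api/v1/query" --data-urlencode \
  'query=histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket{namespace="app-tenant-demo"}[5m])))'

# Redis cache hit ratio (redis_exporter metric names)
curl -sG "${PROM}/api/v1/query" --data-urlencode \
  'query=rate(redis_keyspace_hits_total[5m]) / (rate(redis_keyspace_hits_total[5m]) + rate(redis_keyspace_misses_total[5m]))'
```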
Tracing (Jaeger)
Distributed tracing across the entire stack:
BlazingBlog → SQL Server (42ms)
    ↓
Redis Cache (3ms)
    ↓
MinIO (18ms) - Image uploads
    ↓
Identity API → Keycloak (24ms)
See exactly where time is spent. Optimize ruthlessly.
Logging (Loki)
Centralized logs with context:
- Tenant ID: Which tenant generated this log
- Service name: Which microservice
- Trace ID: Correlation with traces
- Log level: Info, Warning, Error
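Querying that context with Loki's `logcli` might look like the sketch below. The label and field names are illustrative; they depend on how your log shipper is configured:

```shell
# Pull the demo tenant's error-level logs from the last hour
# (assumes JSON-structured logs with a "level" field)
logcli query --since=1h \
  '{namespace="app-tenant-demo"} | json | level="Error"'

# Correlate logs with a distributed trace (trace ID is illustrative)
logcli query --since=1h \
  '{namespace="app-tenant-demo"} |= "traceID=4bf92f35"'
```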
Filter. Search. Debug. Control.
Lessons from the Lab: What I Learned
1. Automation is King
That tenant deployment script? It saves me 2 hours per tenant. When you're deploying 10 tenants, that's 20 hours saved. Time is money. Automation is profit.
2. Secrets Management is Non-Negotiable
Never, EVER hardcode credentials. Vault saved me countless times when credentials needed rotation. One command, all tenants updated.
3. Shared Infrastructure is Efficient
Running separate SQL Server instances for each tenant? That's 4 GB RAM per tenant. Shared infrastructure with isolated databases? 100 MB RAM per tenant for application pods. 97.5% reduction.
4. Documentation Saves Lives
Future Heffenberg thanks past Heffenberg for writing everything down. Six months later, I can still deploy a tenant in 5 minutes because the runbook is clear.
5. Multi-Cluster Architecture is Already Here
Shared infrastructure in app-01 cluster. Tenant management and xds-edge (Envoy control plane) in management cluster. Each application cluster hosts its own tenant applications with dedicated infrastructure.
Separation of concerns. Clean boundaries. Resilience through isolation.
The future? Support for multiple application clusters. Scale horizontally by adding new app clusters. Route traffic intelligently based on geography, load, or tenant tier. One management plane, N application clusters. That's the vision.
The Empire's Future: What's Next?
Phase 2: GitOps with ArgoCD
Manual kubectl commands are so 2024. Next up: GitOps. Push to Git, ArgoCD deploys automatically. Full audit trail. Rollback capability.
Phase 3: Multi-Region Deployment
One datacenter is good. Three datacenters is better. Active-active deployment across regions. Global load balancing. CDN integration.
Phase 4: Advanced Security
OPA for policy enforcement. Falco for runtime security. Trivy for image scanning. CIS benchmarks for compliance.
Phase 5: Self-Service Portal
Web UI for tenant provisioning. Click a button, get a tenant. No scripts. No commands. Just pure, beautiful automation.
Breaking Bad at Scale
Walter White built an empire with chemistry. I built mine with Kubernetes.
His product was pure. Mine is pure infrastructure as code.
His distribution network spanned continents. Mine spans clusters.
His operations were automated and efficient. So are mine.
The difference? My empire is open source. My recipe is documented. My methods are reproducible.
You want to build your own empire? The recipe is here:
git clone https://github.com/hughfl/app-infra.git
cd app-infra
./scripts/phase1-deploy-namespaces.ps1
./scripts/phase4-deploy-infrastructure.ps1
./scripts/phase5-deploy-tenant.ps1
Three scripts. One empire. Pure infrastructure.
The Final Product
After months of work, here's what I have:
✅ Multi-tenant platform - Deploy unlimited tenants
✅ Shared infrastructure - PostgreSQL, Redis, RabbitMQ, MinIO
✅ Secrets management - HashiCorp Vault integration
✅ Automated deployment - One command per tenant
✅ Full observability - Metrics, traces, logs
✅ Production-ready - High availability, backups, security
✅ Open source - Fully documented, fully reproducible
Lab infrastructure? Check.
Successfully deployed tenants? Check.
Empire built? Check.
"Say My Name"
"You're Heffenberg."
"You're goddamn right."
Now go forth and build your own infrastructure empire. The recipe is here. The tools are available. The only thing missing is you.
And remember: in the infrastructure game, you're either cooking, or you're getting cooked.
I chose to cook.
Heffenberg | Lab Infrastructure Architect
"Tread lightly."
Resources
- GitHub Repository - Full source code
- Architecture Guide - Detailed design docs
- Deployment Guide - Step-by-step deployment
- Customization Guide - Adapt for your apps
Star the repo. Fork the code. Build your empire.
Disclaimer: No laws were broken in the making of this infrastructure. Unlike Walter White, Heffenberg operates entirely within legal and ethical boundaries. This is about infrastructure, not illicit substances.
