# Deploy ZITADEL on Kubernetes

This guide takes you from zero to a running ZITADEL instance on Kubernetes, then shows you how to harden it for production.
## Stage 1 — Quickstart
The ZITADEL chart ships with an optional PostgreSQL subchart, so a single `helm install` deploys the database alongside ZITADEL. You still need an ingress controller running in your cluster.
### Prerequisites

No cluster yet? k3d is a quick way to spin one up locally — it runs k3s in Docker, with Traefik as the built-in ingress controller:

```shell
k3d cluster create zitadel --port "80:80@loadbalancer"
```

### Install the full stack
```shell
mkdir zitadel-helm && cd zitadel-helm &&
curl -fsSLO https://raw.githubusercontent.com/zitadel/zitadel-charts/main/examples/0-quickstart/quickstart-values.yaml
```

Edit quickstart-values.yaml before installing. For a local k3d or k3s cluster, these values usually work as-is:
```yaml
zitadel:
  configmapConfig:
    ExternalDomain: localhost
    ExternalPort: 80
ingress:
  className: traefik
login:
  ingress:
    className: traefik
```

If you are using a different ingress controller, replace both className values with that controller's IngressClass name.
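To see which IngressClass names are available in your cluster, you can list them with standard kubectl:

```shell
kubectl get ingressclass
```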
```shell
helm repo add zitadel https://charts.zitadel.com &&
helm repo update &&
helm upgrade --install zitadel zitadel/zitadel --values quickstart-values.yaml --wait
```

That's it. Visit http://localhost/ui/console?login_hint=zitadel-admin@zitadel.localhost and log in with the password Password1!.
The masterkey encrypts sensitive data at rest. The quickstart-values.yaml file contains a dev placeholder masterkey — generate a real one before any non-local deployment:
```shell
tr -dc A-Za-z0-9 </dev/urandom | head -c 32
```

Once ZITADEL has been initialized with a masterkey, it cannot be changed without losing access to encrypted data.
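ZITADEL expects the masterkey to be exactly 32 bytes long, so a quick sanity check on the generated value is worthwhile:

```shell
# Generate a 32-character alphanumeric masterkey and confirm its length
masterkey="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"
echo "${#masterkey}"   # prints 32
```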
The stack runs: your ingress controller → ZITADEL API (Go) + ZITADEL Login (Next.js) → PostgreSQL (bundled). See HTTP/2 for requirements on how the ingress controller must forward traffic to ZITADEL pods.
### Swap out components
The bundled PostgreSQL subchart is for quickstart use only. You can replace it independently:
| Component | How to replace |
|---|---|
| Database | Set postgresql.enabled: false and configure ZITADEL_DATABASE_POSTGRES_DSN pointing to your own PostgreSQL. Use the postgres maintenance database so ZITADEL can create its own database during initialization |
| Ingress controller | Update ingress.className (and login.ingress.className) to match your controller's IngressClass |
| Caching | Add Redis by configuring zitadel.configmapConfig.Caches (see Caching) |
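As a sketch, the DSN from the table above can be assembled from its parts — all values here are placeholders; adjust user, password, host, and sslmode to your environment:

```shell
# Assemble a PostgreSQL DSN for ZITADEL_DATABASE_POSTGRES_DSN (placeholder values)
DB_USER="zitadel"
DB_PASS="your-secure-password"   # URL-encode this if it contains special characters
DB_HOST="postgres.database.svc.cluster.local"
DB_PORT="5432"
# Connect to the `postgres` maintenance database so ZITADEL can create its own
DSN="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/postgres?sslmode=verify-full"
echo "$DSN"
```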
## Stage 2 — Production Cluster
For production you need:
- A Kubernetes cluster (1.30+)
- An ingress controller (Traefik, NGINX, or cloud-native)
- A domain with DNS configured
- TLS certificates (via cert-manager, ACME, or manually)
- PostgreSQL 14+ (managed service or in-cluster)
- Helm 3.x+
PostgreSQL compatibility: ZITADEL supports PostgreSQL 14–18. When using PostgreSQL 18, ensure you run ZITADEL v4.11.0 or newer. For the most up-to-date compatibility matrix and configuration details, see Database requirements.
The ZITADEL management console requires end-to-end HTTP/2 support. Ensure your ingress controller is configured to forward HTTP/2 (h2c) traffic to ZITADEL pods.
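With ingress-nginx, for example, this is typically expressed as an annotation on the ingress — a sketch only; the annotation shown is ingress-nginx's, so verify against your controller's documentation:

```yaml
ingress:
  annotations:
    # Instruct ingress-nginx to use HTTP/2 (gRPC framing) toward the backend pods
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
```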
### Create secrets

```shell
# Masterkey — generate once, store safely, cannot be changed after first run
kubectl create secret generic zitadel-masterkey \
  --from-literal=masterkey="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32)"

# Database credentials — store the full DSN
kubectl create secret generic zitadel-db-credentials \
  --from-literal=dsn="postgresql://zitadel:your-secure-password@postgres.database.svc.cluster.local:5432/postgres?sslmode=verify-full"
```

### Configure values
Save the following as values.yaml. Replace zitadel.example.com with your domain:
```yaml
replicaCount: 2

zitadel:
  masterkeySecretName: zitadel-masterkey
  env:
    - name: ZITADEL_DATABASE_POSTGRES_DSN
      valueFrom:
        secretKeyRef:
          name: zitadel-db-credentials
          key: dsn
  configmapConfig:
    ExternalDomain: "zitadel.example.com"
    ExternalSecure: true
    ExternalPort: 443
    TLS:
      Enabled: false
    FirstInstance:
      Org:
        Human:
          UserName: "admin"
          Email: "admin@example.com"
          Password: "YourSecurePassword123!"
          PasswordChangeRequired: true

ingress:
  enabled: true
  className: traefik
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
  hosts:
    - host: zitadel.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: zitadel-tls
      hosts:
        - zitadel.example.com

login:
  ingress:
    enabled: true
    className: traefik
    annotations:
      traefik.ingress.kubernetes.io/router.entrypoints: websecure
      traefik.ingress.kubernetes.io/router.tls: "true"
      traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
    hosts:
      - host: zitadel.example.com
        paths:
          - path: /ui/v2/login
            pathType: Prefix
    tls:
      - secretName: zitadel-tls
        hosts:
          - zitadel.example.com

podDisruptionBudget:
  enabled: true
  minAvailable: 1
```

### Install
```shell
helm repo add zitadel https://charts.zitadel.com && helm repo update
helm install zitadel zitadel/zitadel --values values.yaml --wait
```

### Verify
Watch the pods come up:
```shell
kubectl get pods --watch
```

You should see the zitadel-init and zitadel-setup jobs complete, followed by the zitadel deployment pods becoming Ready.
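If you prefer a scriptable check over watching, `kubectl wait` can block until the jobs and the rollout finish — the job and deployment names below assume the release is named zitadel:

```shell
kubectl wait --for=condition=complete job/zitadel-init job/zitadel-setup --timeout=10m
kubectl rollout status deployment/zitadel --timeout=10m
```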
Check the Helm release status:
```shell
helm status zitadel
```

Access the console at https://zitadel.example.com/ui/console.
## What's next
Review the Production Checklist before going live, then use the detailed guides for each concern:
| Guide | Description |
|---|---|
| Configuration | All configmap and secret options, autoscaling, security contexts |
| Ingress | Traefik, NGINX, and cloud-native ingress setup |
| Database | PostgreSQL TLS modes, credentials, and connection pooling |
| Operations | Upgrades, manual and automatic scaling, resource limits |
| Caching | Redis/Valkey caching configuration |
| Observability | Traces (OTLP), Prometheus metrics, and log collection |
| Uninstalling | Remove ZITADEL from your cluster |