This guide covers the Kubernetes deployment path for date-website, with k3s on Hetzner Cloud and Backblaze B2 as the expected object storage provider.
Use this together with:
- charts/date-website/values-hetzner.yaml for the Hetzner k3s baseline
- charts/date-website/values-backblaze-b2.example.yaml for B2 media and PostgreSQL backup storage
- charts/date-website/values-kk.example.yaml, charts/date-website/values-biocum.example.yaml, or charts/date-website/values-pulterit.example.yaml for association-specific overrides
- README.md for local development and Docker Compose workflows

The first production k3s target is expected to be:
- hetzner-k3s or the Terraform k3s module
- hcloud-volumes storage class for in-cluster persistent workloads

The chart can render either Kubernetes Ingress or Gateway API resources. The Hetzner values use Gateway API:
```yaml
ingress:
  enabled: false
gateway:
  enabled: true
  className: traefik
```
Do not manually mount a Hetzner volume for PostgreSQL. Let the Hetzner CSI driver provision it from the chart’s PVC by using storageClass: hcloud-volumes.
Use either values-k3s.yaml or values-hetzner.yaml as the cluster-specific storage preset, not both. values-k3s.yaml is for a generic k3s cluster using local-path; values-hetzner.yaml is for Hetzner k3s using hcloud-volumes and includes the resource sizing used for the planned CX33 worker.
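As a sketch, the storage selection the two presets make boils down to a single values key; the exact key path below is an assumption about the chart's schema, so check the preset files for the real structure:

```yaml
# Sketch only: consult values-k3s.yaml / values-hetzner.yaml for the
# actual key paths. The point is that only the storage class differs.
postgresql:
  persistence:
    storageClass: hcloud-volumes   # values-hetzner.yaml; values-k3s.yaml would use local-path
```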
The chart lives in:
charts/date-website/
It is published as an OCI Helm chart to GHCR by .github/workflows/helm_chart.yaml whenever chart files change on main, and can also be published manually through the workflow dispatch button. Bump charts/date-website/Chart.yaml version whenever chart templates or values change; production deploys should pin that immutable chart version.
Current chart reference:
oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website
The chart deploys:
- postgresql.enabled=false
- Gateway and HTTPRoute, or a Traefik-compatible Ingress

The base chart defaults web.migrateOnStartup: false to avoid migration races when web.replicaCount is increased. For the current single-worker Hetzner setup, values-hetzner.yaml overrides this to true and disables the migration Job. If the web deployment is scaled above one replica, move migrations out of web startup and into a controlled migration step.
The public route sends WebSocket traffic to the ASGI service with asgi.wsPath, which defaults to /ws.
The application container security context drops capabilities and prevents privilege escalation, but it does not set runAsNonRoot: true yet. The current application Dockerfile does not define a non-root USER, so forcing runAsNonRoot in the chart would break the image. Add a non-root user to the image first, then tighten the chart default.
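A minimal sketch of the image change that would unblock runAsNonRoot, assuming a Debian-based Python image; the user name app, UID 1000, and the /code path (mentioned elsewhere in this guide) are illustrative choices, not the project's actual Dockerfile content:

```dockerfile
# Sketch: add a dedicated non-root user to the application image.
RUN groupadd --gid 1000 app \
    && useradd --uid 1000 --gid app --no-create-home app \
    && chown -R app:app /code
USER app
```

Once the image runs as this user, the chart default could be tightened to runAsNonRoot: true with a matching runAsUser.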
Run one Helm release per association. Do not route date, kk, biocum, and pulterit through the same release, because each release needs its own PROJECT_NAME, Django URL configuration, static/template paths, hosts, media prefixes, database, and backup prefix.
Examples:
```shell
helm upgrade --install date \
  oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website \
  --version 0.2.0 \
  --namespace date \
  --create-namespace \
  -f charts/date-website/values-hetzner.yaml \
  -f charts/date-website/values-backblaze-b2.example.yaml \
  --set secret.existingSecret=date-website-prod-secrets \
  --set database.external.host='<bastion-private-ip-or-dns>' \
  --set image.tag='<release-tag>'

helm upgrade --install kk \
  oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website \
  --version 0.2.0 \
  --namespace kk \
  --create-namespace \
  -f charts/date-website/values-hetzner.yaml \
  -f charts/date-website/values-backblaze-b2.example.yaml \
  -f charts/date-website/values-kk.example.yaml \
  --set secret.existingSecret=kk-website-prod-secrets \
  --set database.external.host='<bastion-private-ip-or-dns>' \
  --set image.tag='<release-tag>'

helm upgrade --install biocum \
  oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website \
  --version 0.2.0 \
  --namespace biocum \
  --create-namespace \
  -f charts/date-website/values-hetzner.yaml \
  -f charts/date-website/values-backblaze-b2.example.yaml \
  -f charts/date-website/values-biocum.example.yaml \
  --set secret.existingSecret=biocum-website-prod-secrets \
  --set database.external.host='<bastion-private-ip-or-dns>' \
  --set image.tag='<release-tag>'

helm upgrade --install pulterit \
  oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website \
  --version 0.2.0 \
  --namespace pulterit \
  --create-namespace \
  -f charts/date-website/values-hetzner.yaml \
  -f charts/date-website/values-backblaze-b2.example.yaml \
  -f charts/date-website/values-pulterit.example.yaml \
  --set secret.existingSecret=pulterit-website-prod-secrets \
  --set database.external.host='<bastion-private-ip-or-dns>' \
  --set image.tag='<release-tag>'
```
Each release should have separate django.projectName, django.allowedHosts, django.allowedOrigins, ingress.hosts, media bucket names or prefixes, backup bucket name or prefix, and Kubernetes Secret. ingress.hosts is also used as the HTTPRoute hostname source when Gateway API is enabled. Keeping separate namespaces is optional, but it makes secrets, PVCs, and operational commands harder to mix up.
If several associations share one B2 bucket, keep unique media locations such as date/media, kk/media, biocum/media, and pulterit/media. If they use separate B2 buckets, still keep distinct backup prefixes such as date-website/postgresql, kk-website/postgresql, biocum-website/postgresql, and pulterit-website/postgresql.
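The per-association naming conventions above can be captured in a tiny shell helper. This is a hypothetical sketch following the naming used in this guide, not chart API; "kk" stands in for any association short name:

```shell
# Hypothetical helper: derive the per-association names this guide uses,
# so one release never reuses another association's secret, media prefix,
# or backup prefix.
assoc="kk"
secret_name="${assoc}-website-prod-secrets"
media_prefix="${assoc}/media"
backup_prefix="${assoc}-website/postgresql"
echo "$secret_name $media_prefix $backup_prefix"
```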
Backblaze B2 should be configured through the S3-compatible API. Backblaze documents the endpoint format in its S3-compatible API guide; use the endpoint for the bucket region:
https://s3.<region>.backblazeb2.com
Example:
https://s3.us-west-000.backblazeb2.com
B2 uses v4 signatures for the S3-compatible API. The chart example sets:
```yaml
signatureVersion: "s3v4"
addressingStyle: "path"
```
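Putting the endpoint and signing settings together, a B2 media section might look like the following sketch; the key paths, endpointUrl in particular, are assumptions about the chart's values schema, so verify them against the B2 example values file:

```yaml
# Sketch only: check values-backblaze-b2.example.yaml for the real keys.
media:
  s3:
    enabled: true
    endpointUrl: "https://s3.us-west-000.backblazeb2.com"
    signatureVersion: "s3v4"
    addressingStyle: "path"
```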
Prefer separate B2 buckets for private media, public media, and database backups:
```yaml
media:
  s3:
    privateBucketName: "date-website-private"
    publicBucketName: "date-website-public"
backups:
  objectStorage:
    bucketName: "date-website-backups"
```
This avoids depending on per-object ACL behavior. Treat public/private access as a bucket-level decision in B2.
When media.s3.enabled: true, the chart does not mount the local media PVC. Uploaded media goes to B2 instead of /code/media.
When backups.objectStorage.enabled: true and backups.persistence.enabled: false, the backup CronJob writes the dump to an emptyDir, uploads the compressed dump to B2, and does not allocate a separate Hetzner backup volume.
Prefer a pre-created Kubernetes Secret for production values:
```shell
kubectl create namespace date-website
kubectl -n date-website create secret generic date-website-prod-secrets \
  --from-literal=SECRET_KEY='<django-secret-key>' \
  --from-literal=DB_PASSWORD='<postgres-password>' \
  --from-literal=AWS_ACCESS_KEY_ID='<b2-media-application-key-id>' \
  --from-literal=AWS_SECRET_ACCESS_KEY='<b2-media-application-key>' \
  --from-literal=OBJECT_STORAGE_ACCESS_KEY_ID='<b2-backup-application-key-id>' \
  --from-literal=OBJECT_STORAGE_SECRET_ACCESS_KEY='<b2-backup-application-key>' \
  --from-literal=EMAIL_HOST_USER='<smtp-user>' \
  --from-literal=EMAIL_HOST_PASSWORD='<smtp-password>' \
  --from-literal=CF_TURNSTILE_SECRET_KEY='<turnstile-secret>'
```
Use separate B2 application keys for media and backups if possible. The media key should only access the media buckets; the backup key should only access the backup bucket.
Do not commit real secrets to values files. If secret.existingSecret is not set, the chart-created Secret requires real django.secretKey and database.password values at render time.
When secret.existingSecret is used, pod checksum annotations cannot detect changes inside that external Secret. After rotating an external Secret, restart the affected workloads:
```shell
kubectl -n <namespace> rollout restart deploy/<release>-date-website-web
kubectl -n <namespace> rollout restart deploy/<release>-date-website-asgi
kubectl -n <namespace> rollout restart deploy/<release>-date-website-celery
```
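The three restart targets follow one naming pattern, so a small loop can generate them; this is a sketch using "date" as a placeholder release name, and the generated names assume the chart's deployment naming stays `<release>-date-website-<component>`:

```shell
# Sketch: build the list of deployments to restart after rotating an
# external Secret. RELEASE is a placeholder for the Helm release name.
RELEASE="date"
names=$(for component in web asgi celery; do
  printf 'deploy/%s-date-website-%s\n' "$RELEASE" "$component"
done)
echo "$names"
# Then, for example: kubectl -n <namespace> rollout restart $names
```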
Copy the B2 example values into an environment-specific private values file before production use, or override the bucket names and endpoint from CI/CD.
Typical install or upgrade:
```shell
helm upgrade --install date-website \
  oci://ghcr.io/datateknologerna-vid-abo-akademi/charts/date-website \
  --version 0.2.0 \
  --namespace date-website \
  --create-namespace \
  -f charts/date-website/values-hetzner.yaml \
  -f charts/date-website/values-backblaze-b2.example.yaml \
  --set secret.existingSecret=date-website-prod-secrets \
  --set database.external.host='<bastion-private-ip-or-dns>' \
  --set image.tag='<release-tag>'
```
Set image.tag to a release tag, an immutable commit SHA, or the prod production alias. Take care not to deploy the qa tag to production by accident.
For KK, Biologica, or Pulterit, layer the matching association values file after the B2 values file so the association-specific hosts, PROJECT_NAME, media paths, and backup prefix override the default date example.
Check Kubernetes resources:
```shell
kubectl -n date-website get pods
kubectl -n date-website get gateway,httproute
kubectl -n date-website get pvc
kubectl -n date-website logs deploy/date-website-web
```
Check Django probes through the public host after DNS and TLS are configured:
```shell
curl -fsS https://<host>/healthz/
curl -fsS https://<host>/readyz/
```
/healthz/ only checks that the app process responds. /readyz/ checks database and cache access.
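These two endpoints map naturally onto Kubernetes probe semantics. A sketch of how the chart could wire them; the port name and the absence of timing overrides are assumptions, not the chart's actual template:

```yaml
# Sketch: liveness checks process health only, readiness also checks
# database and cache access, matching the endpoint split above.
livenessProbe:
  httpGet:
    path: /healthz/
    port: http
readinessProbe:
  httpGet:
    path: /readyz/
    port: http
```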
The Hetzner values file enables the backup CronJob by default:
```yaml
backups:
  enabled: true
  schedule: "17 2 * * *"
  retentionDays: 14
```
With the B2 override, the CronJob uploads compressed PostgreSQL dumps to:
s3://<backup-bucket>/date-website/postgresql/
To trigger one backup manually:
```shell
kubectl -n date-website create job \
  --from=cronjob/date-website-postgresql-backup \
  date-website-postgresql-backup-manual-$(date -u +%Y%m%d%H%M%S)
```
Then inspect the job logs and confirm the object exists in B2:
```shell
kubectl -n date-website logs job/<manual-backup-job-name>
```
For object-storage uploads, prefer a pinned backup image that already contains both pg_dump and the AWS CLI, and keep backups.objectStorage.installAwsCli: false. The B2 example values intentionally keep runtime package installation disabled, so production deployments using object-storage uploads should override backups.image to a backup image with both tools installed. The backup job defaults to the PostgreSQL image’s non-root UID/GID and sets backups.podSecurityContext.fsGroup so the mounted backup directory is writable. If you temporarily set installAwsCli: true with an Alpine image, the backup container needs a root-capable backups.securityContext because apk add --no-cache aws-cli requires package-install privileges.
retentionDays prunes old dump files only from the local /backups directory. When the B2 override uses backups.persistence.enabled: false, this local retention is only for the temporary emptyDir; remote B2 objects are not pruned by the CronJob. Configure a B2 bucket lifecycle rule for date-website/postgresql/ if remote backup retention should be automatic.
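For automatic remote pruning, a B2 lifecycle rule scoped to the backup prefix can mirror the 14-day local retention. The fragment below is a sketch in B2's lifecycle-rule JSON format; verify the field semantics against Backblaze's lifecycle documentation before applying it through the B2 web UI or CLI:

```json
[
  {
    "fileNamePrefix": "date-website/postgresql/",
    "daysFromUploadingToHiding": 14,
    "daysFromHidingToDeleting": 1
  }
]
```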
- values-hetzner.yaml resource requests are based on observed Docker production usage: web is the largest process at roughly 300-450Mi, Celery sits around 250Mi, ASGI around 90Mi, and idle Postgres/Redis are much smaller. Revisit requests after sustained Kubernetes traffic, especially after larger event registrations or admin uploads.
- Redis runs without persistence in values-hetzner.yaml to avoid a separate 10Gi Hetzner volume for cache and broker data. This means queued Celery tasks can be lost if Redis restarts.
- PostgreSQL data lives on a CSI-provisioned hcloud-volumes volume. B2 backups protect against bad migrations and volume corruption, but they do not remove the need to monitor and test restores.