Plex, Sonarr, Radarr, and friends deployed on Kubernetes with the bjw-s app-template Helm chart, shared NFS storage, and Traefik ingress.
This builds on the homelab infrastructure post. MetalLB, Traefik, and the NFS CSI driver should already be running.
Problem
You want a media server stack (streaming, automation, downloads) that runs on Kubernetes instead of bare Docker. Individual containers are easy. Coordinating eight services with shared storage, health checks, ingress routing, and resource limits across a cluster takes more structure.
Solution
Use bjw-s/app-template for every app. It’s a generic Helm chart that deploys any container image with a consistent values structure. One chart, one pattern, eight apps.
Full source: k8s-media-stack
Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                Clients (TV, Phone, Browser)                 │
│            │                          │                     │
│     <PLEX_IP>:32400        <TRAEFIK_IP> (Traefik)           │
│            │                     *.media.lan                │
│            ▼                          │                     │
│       ┌─────────┐    ┌────────────────┴────────────────────┐│
│       │  Plex   │    │ Sonarr │ Radarr │ Prowlarr │ qBit   ││
│       │  (LB)   │    │ Bazarr │ Overseerr │ Tautulli       ││
│       └────┬────┘    └────────────────┬────────────────────┘│
│            │                          │                     │
│            └─────────────┬────────────┘                     │
│                          │                                  │
│                  ┌───────┴───────┐                          │
│                  │   NFS (RWX)   │                          │
│                  │   Synology    │                          │
│                  └───────────────┘                          │
└─────────────────────────────────────────────────────────────┘
```
Plex gets a dedicated LoadBalancer IP for direct client access (apps need to reach it directly for streaming). Everything else routes through Traefik via hostname-based ingress.
The Stack
| App | Purpose | Port | Access |
|---|---|---|---|
| Plex | Media streaming | 32400 | LoadBalancer (direct IP) |
| Sonarr | TV show automation | 8989 | Ingress: sonarr.media.lan |
| Radarr | Movie automation | 7878 | Ingress: radarr.media.lan |
| Prowlarr | Indexer management | 9696 | Ingress: prowlarr.media.lan |
| qBittorrent | Torrent client | 8080 | Ingress: qbit.media.lan |
| Bazarr | Subtitle automation | 6767 | Ingress: bazarr.media.lan |
| Overseerr | Media requests | 5055 | Ingress: overseerr.media.lan |
| Tautulli | Plex statistics | 8181 | Ingress: tautulli.media.lan |
| Homepage | Dashboard | 3000 | Ingress: home.media.lan |
Storage Strategy
“Shared state is the root of all evil, except when it’s media files. Then it’s the root of all efficiency.” - DevOps Storage Philosophy
Shared Media Volume
All apps that touch media files mount the same NFS volume at /data. This is critical for
hardlinking. When Sonarr “moves” a completed download into the media library, it creates a
hardlink instead of copying if source and destination are on the same filesystem. No duplicate
files, no wasted disk space.
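A quick way to confirm hardlinking actually happened after the first import (the file names below are placeholders; the check assumes a shell is available in the Sonarr container):

```bash
# If the import was hardlinked, both paths report the same inode and a link count of 2.
# Paths are hypothetical examples under the shared /data mount.
kubectl exec -n media deploy/sonarr -- stat -c '%i %h %n' \
  "/data/downloads/complete/Some.Show.S01E01.mkv" \
  "/data/media/tv/Some Show/Season 01/Some.Show.S01E01.mkv"
```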
```yaml
# storage/media-data.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-data-pv
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: media-data-pv
    volumeAttributes:
      server: "<NAS_IP>"
      share: "<NFS_DATA_PATH>" # e.g. /volume1/nfs01/data
  mountOptions:
    - nfsvers=3
    - nolock
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-data
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Ti
  volumeName: media-data-pv
```
NFS Directory Structure
```
<NFS_DATA_PATH>/
├── media/
│   ├── movies/      ← Plex + Radarr library
│   ├── tv/          ← Plex + Sonarr library
│   └── music/       ← Plex library
└── downloads/
    ├── complete/    ← qBittorrent finished
    └── incomplete/  ← qBittorrent in-progress
```
Create these on your NAS before deploying. The deploy script in the repo handles this via SSH.
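If you prefer to create them by hand, a minimal sketch over SSH (assumes your own NAS user, IP, and export path, and that the remote shell supports brace expansion):

```bash
# Hypothetical one-liner; substitute your NAS credentials and paths.
ssh admin@<NAS_IP> \
  "mkdir -p <NFS_DATA_PATH>/media/{movies,tv,music} <NFS_DATA_PATH>/downloads/{complete,incomplete}"
```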
Per-App Config Volumes
Each app gets its own dynamically provisioned PVC via the nfs-appdata StorageClass (from
the infrastructure post). Config, databases, and app state live
here:
```yaml
persistence:
  config:
    type: persistentVolumeClaim
    accessMode: ReadWriteOnce
    size: 2Gi
    storageClass: nfs-appdata
    globalMounts:
      - path: /config
```
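After an app deploys, it's worth confirming the dynamic claim actually bound (exact PVC names depend on the release name and persistence key):

```bash
# Each config PVC should show STATUS Bound against the nfs-appdata StorageClass.
kubectl get pvc -n media
```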
App Pattern: bjw-s/app-template
Every app follows the same Helm values structure. Here’s Sonarr as an example:
```yaml
# apps/sonarr/values.yaml
controllers:
  sonarr:
    strategy: Recreate
    containers:
      app:
        image:
          repository: linuxserver/sonarr
          tag: latest
          pullPolicy: IfNotPresent
        env:
          PUID: "1000"
          PGID: "1000"
          TZ: "America/New_York"
        probes:
          liveness:
            enabled: true
            custom: true
            spec:
              httpGet:
                path: /ping
                port: 8989
              periodSeconds: 10
          readiness:
            enabled: true
            custom: true
            spec:
              httpGet:
                path: /ping
                port: 8989
              periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

service:
  app:
    controller: sonarr
    ports:
      http:
        port: 8989

ingress:
  app:
    className: traefik
    hosts:
      - host: sonarr.media.lan
        paths:
          - path: /
            service:
              identifier: app
              port: http

persistence:
  config:
    type: persistentVolumeClaim
    accessMode: ReadWriteOnce
    size: 2Gi
    storageClass: nfs-appdata
    globalMounts:
      - path: /config
  data:
    type: persistentVolumeClaim
    existingClaim: media-data
    globalMounts:
      - path: /data
```
Key decisions in this pattern:
- strategy: Recreate - SQLite databases don't handle concurrent writers. Kill the old pod before starting the new one.
- PUID/PGID - LinuxServer images support these to match NFS file ownership.
- Custom health probes - Each app exposes a lightweight endpoint. Default probes are too generic. A quick check of the probe endpoint follows this list.
- Two persistence entries - /config is per-app (dynamic PVC), /data is shared (static PVC).
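A sanity check of the probe endpoint from inside the cluster, using a throwaway curl pod (the image choice is just an example; Sonarr's /ping should answer without an API key):

```bash
kubectl run -n media curl-probe --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://sonarr.media.svc.cluster.local:8989/ping
```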
Plex: The Exception
Plex needs a dedicated LoadBalancer IP because streaming clients connect directly (not through an HTTP reverse proxy). The service config differs from the others:
```yaml
service:
  app:
    controller: plex
    type: LoadBalancer
    annotations:
      metallb.universe.tf/loadBalancerIPs: "<PLEX_IP>"
    ports:
      http:
        port: 32400

persistence:
  # ...same pattern, plus:
  transcode:
    type: emptyDir
    globalMounts:
      - path: /transcode
```
The transcode volume uses emptyDir (local node storage) for Plex’s transcoding scratch
space. NFS is too slow for real-time transcoding I/O.
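If the worker node has memory to spare, the same scratch mount can be RAM-backed instead; a hedged variant, assuming your chart version exposes the medium and sizeLimit keys:

```yaml
transcode:
  type: emptyDir
  medium: Memory    # tmpfs; usage counts against the pod's memory limit
  sizeLimit: 1Gi    # keep this below Plex's memory limit
  globalMounts:
    - path: /transcode
```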
For first-run setup, grab a claim token from plex.tv/claim, set PLEX_CLAIM in the env, deploy once, complete the wizard, then remove the claim and redeploy. It's single-use.

qBittorrent: Dual Service
qBittorrent needs two services: one for the web UI (through Traefik) and one for incoming torrent peer connections (direct LoadBalancer):
```yaml
service:
  app:
    controller: qbittorrent
    ports:
      http:
        port: 8080
  bittorrent:
    controller: qbittorrent
    type: LoadBalancer
    annotations:
      metallb.universe.tf/loadBalancerIPs: "<QBIT_IP>"
    ports:
      torrent-tcp:
        port: 6881
        protocol: TCP
      torrent-udp:
        port: 6881
        protocol: UDP
```
Without the LoadBalancer for peer traffic, qBittorrent can only make outbound connections. Inbound peers improve download speeds significantly.
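Two things worth checking after deploy: the peer service should have an external IP, and qBittorrent's incoming connection port (under its Options → Connection settings) should match 6881. Service naming can vary by chart version, so the grep below is deliberately loose:

```bash
# Expect TYPE LoadBalancer, EXTERNAL-IP <QBIT_IP>, and ports 6881/TCP plus 6881/UDP.
kubectl get svc -n media | grep -i bittorrent
```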
Deploy
Helm Repos
```bash
helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/
helm repo update
```
Namespace and Storage
```bash
kubectl apply -f foundation/namespace.yaml
kubectl apply -f storage/media-data.yaml
```
Apps
Each app is a standalone Helm release:
```bash
helm upgrade --install sonarr bjw-s/app-template \
  -n media -f apps/sonarr/values.yaml --wait

helm upgrade --install radarr bjw-s/app-template \
  -n media -f apps/radarr/values.yaml --wait

# Repeat for each app...
```
Or deploy all at once with the script:
```bash
./deploy.sh apps
```
Verify
```bash
helm list -n media
kubectl get pods -n media
kubectl get ingress -n media
```
All pods should be Running. All ingresses should show Traefik’s external IP.
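To block until everything settles rather than polling by hand (the timeout is arbitrary):

```bash
kubectl wait --for=condition=Ready pods --all -n media --timeout=300s
```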
DNS
Point your hostnames at the Traefik IP. In Pi-hole, a wildcard record handles everything:
*.media.lan → <TRAEFIK_IP>
Or add individual records per service.
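One common way to get the wildcard is a dnsmasq drop-in on the Pi-hole host, since wildcards typically aren't available through the Local DNS UI (the file location can vary by Pi-hole version):

```
# /etc/dnsmasq.d/05-media-lan.conf
# Resolves media.lan and every subdomain to Traefik.
address=/media.lan/<TRAEFIK_IP>
```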
Post-Deploy Configuration
Connection Order
Configure the apps in this order. Each step depends on the previous:
| Step | App | Action |
|---|---|---|
| 1 | Plex | Complete setup wizard, add libraries (/data/media/movies, /data/media/tv) |
| 2 | qBittorrent | Set download paths (/data/downloads/complete, /data/downloads/incomplete) |
| 3 | Sonarr / Radarr | Add qBittorrent as download client, set root folders |
| 4 | Prowlarr | Add indexers, connect to Sonarr + Radarr via API keys |
| 5 | Bazarr | Connect to Sonarr + Radarr, add subtitle providers |
| 6 | Overseerr | Sign in with Plex, connect Sonarr + Radarr |
When wiring the apps to each other, use the in-cluster service DNS names: <app>.media.svc.cluster.local:<port> (e.g., qbittorrent.media.svc.cluster.local:8080).

Resource Budget
| App | CPU req/limit | RAM req/limit |
|---|---|---|
| Plex | 500m / 2000m | 512Mi / 2Gi |
| Sonarr | 100m / 500m | 256Mi / 512Mi |
| Radarr | 100m / 500m | 256Mi / 512Mi |
| Prowlarr | 50m / 250m | 128Mi / 256Mi |
| qBittorrent | 100m / 500m | 256Mi / 512Mi |
| Bazarr | 50m / 250m | 128Mi / 256Mi |
| Overseerr | 50m / 250m | 128Mi / 256Mi |
| Tautulli | 50m / 250m | 128Mi / 256Mi |
| Homepage | 50m / 200m | 64Mi / 128Mi |
| Total requests | 1050m | ~1.8Gi |
Fits within 2 workers (2 vCPU, 4 GB RAM each).
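To compare those requests against what the apps actually use once they're running (requires metrics-server in the cluster):

```bash
kubectl top pods -n media
```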
Upgrades
Every app is a standard Helm release:
```bash
# Update an app's values, then:
helm upgrade sonarr bjw-s/app-template -n media -f apps/sonarr/values.yaml

# Or update the chart version:
helm upgrade sonarr bjw-s/app-template -n media -f apps/sonarr/values.yaml --version 4.6.2

# Check all releases:
helm list -n media
```
Back up app configs before upgrading. These apps store databases and settings in /config.
A quick way to snapshot them:
```bash
kubectl exec -n media deploy/sonarr -- tar czf /tmp/config-backup.tar.gz /config
kubectl cp media/sonarr-<pod-id>:/tmp/config-backup.tar.gz ./sonarr-backup.tar.gz
```
Or, if your NAS supports snapshots, take a snapshot of the nfs-appdata share before running
helm upgrade. Much faster and covers all apps at once.
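A hedged sketch that loops the same tar-and-copy backup over every release, assuming release names match the app names and that the chart applies the standard app.kubernetes.io/instance label:

```bash
for app in sonarr radarr prowlarr qbittorrent bazarr overseerr tautulli; do
  # Resolve the running pod for this release, then archive and pull its /config.
  pod=$(kubectl get pod -n media -l app.kubernetes.io/instance="$app" \
        -o jsonpath='{.items[0].metadata.name}')
  kubectl exec -n media "$pod" -- tar czf /tmp/config-backup.tar.gz /config
  kubectl cp "media/$pod:/tmp/config-backup.tar.gz" "./$app-backup.tar.gz"
done
```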
Teardown
```bash
./teardown.sh
```
Removes all Helm releases and Kubernetes resources. NFS data on the NAS is preserved.
Common Issues
| Symptom | Cause | Fix |
|---|---|---|
| Pod stuck ContainerCreating | NFS mount failure | Check NAS IP/export, verify CSI driver pods running |
| App crashes on startup | Config volume permissions | Verify PUID/PGID match NFS ownership |
| Hardlinks fail (files copied instead) | Source/dest on different filesystems | Both paths must be under the same PVC mount (/data) |
| Plex not reachable | Missing LoadBalancer IP | Check MetalLB, verify loadBalancerIPs annotation |
| Ingress returns 404 | DNS not pointing at Traefik | Verify DNS record and ingressClassName: traefik |
| qBittorrent slow downloads | No inbound peer connections | Check the bittorrent LoadBalancer service has an external IP |
What’s Next
The stack is running. Next step: migrate from deploy.sh to Flux.
Same Helm charts, same values, but Flux watches the Git repo and applies changes automatically.
No more SSH-ing in to run helm upgrade.
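For a sense of where that lands, a hedged sketch of a Flux HelmRelease that would replace the helm upgrade call for Sonarr (the API version, interval, and a bjw-s HelmRepository in flux-system are assumptions):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sonarr
  namespace: media
spec:
  interval: 30m
  chart:
    spec:
      chart: app-template
      sourceRef:
        kind: HelmRepository
        name: bjw-s
        namespace: flux-system
  # values: identical to apps/sonarr/values.yaml
```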