Plex, Sonarr, Radarr, and friends deployed on Kubernetes with the bjw-s app-template Helm chart, shared NFS storage, and Traefik ingress.

This builds on the homelab infrastructure post. MetalLB, Traefik, and the NFS CSI driver should already be running.


Problem

You want a media server stack (streaming, automation, downloads) that runs on Kubernetes instead of bare Docker. Individual containers are easy. Coordinating eight services with shared storage, health checks, ingress routing, and resource limits across a cluster takes more structure.

Solution

Use bjw-s/app-template for every app. It’s a generic Helm chart that deploys any container image with a consistent values structure. One chart, one pattern, eight apps.

Full source: k8s-media-stack


Architecture

┌─────────────────────────────────────────────────────────────┐
│  Clients (TV, Phone, Browser)                               │
│       │                          │                          │
│   <PLEX_IP>:32400          <TRAEFIK_IP> (Traefik)           │
│       │                     *.media.lan                     │
│       ▼                          │                          │
│  ┌─────────┐    ┌────────────────┴────────────────────┐     │
│  │  Plex   │    │ Sonarr │ Radarr │ Prowlarr │ qBit   │     │
│  │  (LB)   │    │ Bazarr │ Overseerr │ Tautulli       │     │
│  └────┬────┘    └────────────────┬────────────────────┘     │
│       │                          │                          │
│       └──────────┬───────────────┘                          │
│                  │                                          │
│          ┌───────┴───────┐                                  │
│          │  NFS (RWX)    │                                  │
│          │  Synology     │                                  │
│          └───────────────┘                                  │
└─────────────────────────────────────────────────────────────┘

Plex gets a dedicated LoadBalancer IP for direct client access (apps need to reach it directly for streaming). Everything else routes through Traefik via hostname-based ingress.


The Stack

App           Purpose               Port    Access
Plex          Media streaming       32400   LoadBalancer (direct IP)
Sonarr        TV show automation    8989    Ingress: sonarr.media.lan
Radarr        Movie automation      7878    Ingress: radarr.media.lan
Prowlarr      Indexer management    9696    Ingress: prowlarr.media.lan
qBittorrent   Torrent client        8080    Ingress: qbit.media.lan
Bazarr        Subtitle automation   6767    Ingress: bazarr.media.lan
Overseerr     Media requests        5055    Ingress: overseerr.media.lan
Tautulli      Plex statistics       8181    Ingress: tautulli.media.lan
Homepage      Dashboard             3000    Ingress: home.media.lan

Storage Strategy

“Shared state is the root of all evil, except when it’s media files. Then it’s the root of all efficiency.” - DevOps Storage Philosophy

Shared Media Volume

All apps that touch media files mount the same NFS volume at /data. This is critical for hardlinking. When Sonarr “moves” a completed download into the media library, it creates a hardlink instead of copying if source and destination are on the same filesystem. No duplicate files, no wasted disk space.

# storage/media-data.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-data-pv
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: media-data-pv
    volumeAttributes:
      server: "<NAS_IP>"
      share: "<NFS_DATA_PATH>"           # e.g. /volume1/nfs01/data
  mountOptions:
    - nfsvers=3
    - nolock
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-data
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Ti
  volumeName: media-data-pv
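
Once the stack is up, you can confirm hardlinking is actually happening: a completed download and its imported library copy should share an inode. A quick check from any pod that mounts /data (the deployment name and file paths below are placeholders):

# Compare inodes from a pod that mounts /data (Sonarr shown here)
kubectl exec -n media deploy/sonarr -- stat -c '%i %n' \
    /data/downloads/complete/<file> \
    /data/media/tv/<show>/<file>
# Identical inode numbers on both lines mean a hardlink -- no duplicate copy on disk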

NFS Directory Structure

<NFS_DATA_PATH>/
├── media/
│   ├── movies/       ← Plex + Radarr library
│   ├── tv/           ← Plex + Sonarr library
│   └── music/        ← Plex library
└── downloads/
    ├── complete/     ← qBittorrent finished
    └── incomplete/   ← qBittorrent in-progress

Create these on your NAS before deploying. The deploy script in the repo handles this via SSH.
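
If you'd rather create them by hand, a one-liner over SSH works (NAS user and paths are placeholders):

ssh <NAS_USER>@<NAS_IP> "mkdir -p \
    <NFS_DATA_PATH>/media/movies <NFS_DATA_PATH>/media/tv <NFS_DATA_PATH>/media/music \
    <NFS_DATA_PATH>/downloads/complete <NFS_DATA_PATH>/downloads/incomplete"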

Per-App Config Volumes

Each app gets its own dynamically provisioned PVC via the nfs-appdata StorageClass (from the infrastructure post). Config, databases, and app state live here:

persistence:
  config:
    type: persistentVolumeClaim
    accessMode: ReadWriteOnce
    size: 2Gi
    storageClass: nfs-appdata
    globalMounts:
      - path: /config
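
Once the apps are installed, a quick check confirms the dynamic provisioning worked (PVC names follow the chart's release-plus-key convention, e.g. sonarr-config, so yours may differ slightly):

kubectl get pvc -n media
# Expect one Bound config claim per app plus the shared media-data claim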

App Pattern: bjw-s/app-template

Every app follows the same Helm values structure. Here’s Sonarr as an example:

# apps/sonarr/values.yaml
controllers:
  sonarr:
    strategy: Recreate
    containers:
      app:
        image:
          repository: linuxserver/sonarr
          tag: latest
          pullPolicy: IfNotPresent
        env:
          PUID: "1000"
          PGID: "1000"
          TZ: "America/New_York"
        probes:
          liveness:
            enabled: true
            custom: true
            spec:
              httpGet:
                path: /ping
                port: 8989
              periodSeconds: 10
          readiness:
            enabled: true
            custom: true
            spec:
              httpGet:
                path: /ping
                port: 8989
              periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

service:
  app:
    controller: sonarr
    ports:
      http:
        port: 8989

ingress:
  app:
    className: traefik
    hosts:
      - host: sonarr.media.lan
        paths:
          - path: /
            service:
              identifier: app
              port: http

persistence:
  config:
    type: persistentVolumeClaim
    accessMode: ReadWriteOnce
    size: 2Gi
    storageClass: nfs-appdata
    globalMounts:
      - path: /config
  data:
    type: persistentVolumeClaim
    existingClaim: media-data
    globalMounts:
      - path: /data

Key decisions in this pattern:

  • strategy: Recreate - SQLite databases don’t handle concurrent writers. Kill the old pod before starting the new one.
  • PUID/PGID - LinuxServer images support these to match NFS file ownership.
  • Custom health probes - Each app exposes a lightweight endpoint. Default probes are too generic.
  • Two persistence entries - /config is per-app (dynamic PVC), /data is shared (static PVC).
💡 Tip
The bjw-s chart docs have the full values reference: app-template values. Most homelab containers from LinuxServer follow this same pattern.

Plex: The Exception

Plex needs a dedicated LoadBalancer IP because streaming clients connect directly (not through an HTTP reverse proxy). The service config differs from the others:

service:
  app:
    controller: plex
    type: LoadBalancer
    annotations:
      metallb.universe.tf/loadBalancerIPs: "<PLEX_IP>"
    ports:
      http:
        port: 32400

persistence:
  # ...same pattern, plus:
  transcode:
    type: emptyDir
    globalMounts:
      - path: /transcode

The transcode volume uses emptyDir (local node storage) for Plex’s transcoding scratch space. NFS is too slow for real-time transcoding I/O.

💡 Tip
First-time setup requires a claim token from plex.tv/claim (valid for 4 minutes, single-use). Set PLEX_CLAIM in the env, deploy once, complete the setup wizard, then remove the claim and redeploy.
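
A minimal sketch of that first-deploy env block, assuming the LinuxServer Plex image (which reads PLEX_CLAIM); the token value is a placeholder:

# apps/plex/values.yaml (excerpt) -- remove PLEX_CLAIM after the wizard is done
controllers:
  plex:
    containers:
      app:
        env:
          TZ: "America/New_York"
          PLEX_CLAIM: "<CLAIM_TOKEN>"   # from plex.tv/claim; single-use, expires in 4 minutes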

qBittorrent: Dual Service

qBittorrent needs two services: one for the web UI (through Traefik) and one for incoming torrent peer connections (direct LoadBalancer):

service:
  app:
    controller: qbittorrent
    ports:
      http:
        port: 8080
  bittorrent:
    controller: qbittorrent
    type: LoadBalancer
    annotations:
      metallb.universe.tf/loadBalancerIPs: "<QBIT_IP>"
    ports:
      torrent-tcp:
        port: 6881
        protocol: TCP
      torrent-udp:
        port: 6881
        protocol: UDP

Without the LoadBalancer for peer traffic, qBittorrent can only make outbound connections. Inbound peers improve download speeds significantly.
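
Two things to verify after deploying: the peer service actually got its external IP, and qBittorrent's listening port (Settings → Connection in the web UI) is set to 6881 to match the service:

kubectl get svc -n media | grep LoadBalancer
# The qBittorrent peer service should show <QBIT_IP> under EXTERNAL-IP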


Deploy

Helm Repos

helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts/
helm repo update

Namespace and Storage

kubectl apply -f foundation/namespace.yaml
kubectl apply -f storage/media-data.yaml

Apps

Each app is a standalone Helm release:

helm upgrade --install sonarr bjw-s/app-template \
    -n media -f apps/sonarr/values.yaml --wait

helm upgrade --install radarr bjw-s/app-template \
    -n media -f apps/radarr/values.yaml --wait

# Repeat for each app...

Or deploy all at once with the script:

./deploy.sh apps
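
The script is essentially a loop over the app directories. A minimal sketch of the idea (the real deploy.sh in the repo also handles the namespace, storage, and NAS directories):

#!/usr/bin/env bash
set -euo pipefail
for app in plex sonarr radarr prowlarr qbittorrent bazarr overseerr tautulli homepage; do
    helm upgrade --install "$app" bjw-s/app-template \
        -n media -f "apps/$app/values.yaml" --wait
done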

Verify

helm list -n media
kubectl get pods -n media
kubectl get ingress -n media

All pods should be Running. All ingresses should show Traefik’s external IP.


DNS

Point your hostnames at the Traefik IP. In Pi-hole, a wildcard record handles everything:

*.media.lan  →  <TRAEFIK_IP>

Or add individual records per service.
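
The wildcard is a one-line dnsmasq rule; in Pi-hole it can go in a custom config file (the file name is arbitrary):

# /etc/dnsmasq.d/99-media-lan.conf on the Pi-hole host
address=/media.lan/<TRAEFIK_IP>

Reload with pihole restartdns afterwards.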


Post-Deploy Configuration

Connection Order

Configure the apps in this order. Each step depends on the previous:

Step   App               Action
1      Plex              Complete setup wizard, add libraries (/data/media/movies, /data/media/tv)
2      qBittorrent       Set download paths (/data/downloads/complete, /data/downloads/incomplete)
3      Sonarr / Radarr   Add qBittorrent as download client, set root folders
4      Prowlarr          Add indexers, connect to Sonarr + Radarr via API keys
5      Bazarr            Connect to Sonarr + Radarr, add subtitle providers
6      Overseerr         Sign in with Plex, connect Sonarr + Radarr
💡 Tip
Inter-service communication uses Kubernetes DNS. When configuring download clients or API connections between apps, use the internal service address: <app>.media.svc.cluster.local:<port> (e.g., qbittorrent.media.svc.cluster.local:8080).
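
To sanity-check resolution from inside the cluster, a throwaway pod works (the image and pod name are arbitrary):

kubectl run -n media dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup qbittorrent.media.svc.cluster.local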

Resource Budget

App              CPU req/limit   RAM req/limit
Plex             500m / 2000m    512Mi / 2Gi
Sonarr           100m / 500m     256Mi / 512Mi
Radarr           100m / 500m     256Mi / 512Mi
Prowlarr         50m / 250m      128Mi / 256Mi
qBittorrent      100m / 500m     256Mi / 512Mi
Bazarr           50m / 250m      128Mi / 256Mi
Overseerr        50m / 250m      128Mi / 256Mi
Tautulli         50m / 250m      128Mi / 256Mi
Homepage         50m / 200m      64Mi / 128Mi
Total requests   1050m           ~1.8Gi

Fits within 2 workers (2 vCPU, 4 GB RAM each).

💡 Tip
Plex software transcoding is CPU-heavy. 2 vCPU handles roughly one transcode stream. For better performance, set Plex clients to Direct Play or pass through an Intel iGPU from Proxmox for hardware transcoding.
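
If you go the iGPU route, the Intel GPU device plugin exposes the card as a schedulable resource. A rough sketch of how the Plex container could request it (assumes the intel-gpu-plugin DaemonSet is installed and the iGPU is passed through to the worker VM):

# apps/plex/values.yaml (excerpt) -- under controllers.plex.containers.app
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 2000m
    memory: 2Gi
    gpu.intel.com/i915: 1   # resource name published by the Intel GPU device plugin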

Upgrades

Every app is a standard Helm release:

# Update an app's values, then:
helm upgrade sonarr bjw-s/app-template -n media -f apps/sonarr/values.yaml

# Or update the chart version:
helm upgrade sonarr bjw-s/app-template -n media -f apps/sonarr/values.yaml --version 4.6.2

# Check all releases:
helm list -n media
💡 Tip

Back up app configs before upgrading. These apps store databases and settings in /config. A quick way to snapshot them:

kubectl exec -n media deploy/sonarr -- tar czf /tmp/config-backup.tar.gz /config
kubectl cp media/sonarr-<pod-id>:/tmp/config-backup.tar.gz ./sonarr-backup.tar.gz

Or, if your NAS supports snapshots, take a snapshot of the nfs-appdata share before running helm upgrade. Much faster and covers all apps at once.


Teardown

./teardown.sh

Removes all Helm releases and Kubernetes resources. NFS data on the NAS is preserved.


Common Issues

Symptom                                 Cause                                  Fix
Pod stuck ContainerCreating             NFS mount failure                      Check NAS IP/export, verify CSI driver pods running
App crashes on startup                  Config volume permissions              Verify PUID/PGID match NFS ownership
Hardlinks fail (files copied instead)   Source/dest on different filesystems   Both paths must be under the same PVC mount (/data)
Plex not reachable                      Missing LoadBalancer IP                Check MetalLB, verify loadBalancerIPs annotation
Ingress returns 404                     DNS not pointing at Traefik            Verify DNS record and ingressClassName: traefik
qBittorrent slow downloads              No inbound peer connections            Check the bittorrent LoadBalancer service has an external IP
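
Most of these are diagnosed with the same handful of commands:

kubectl describe pod -n media <pod-name>              # events: mount failures, probe errors, OOMKills
kubectl logs -n media deploy/<app> --previous         # logs from the previous (crashed) container
kubectl get events -n media --sort-by=.lastTimestamp  # recent events in the namespace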

What’s Next

The stack is running. Next step: migrate from deploy.sh to Flux. Same Helm charts, same values, but Flux watches the Git repo and applies changes automatically. No more SSH-ing in to run helm upgrade.
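
As a preview, a release like Sonarr under Flux would look roughly like this (a sketch, assuming Flux v2's HelmRelease API and a HelmRepository named bjw-s in flux-system):

# Sketch only -- not part of the current deploy flow
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: sonarr
  namespace: media
spec:
  interval: 15m
  chart:
    spec:
      chart: app-template
      version: "4.x"
      sourceRef:
        kind: HelmRepository
        name: bjw-s
        namespace: flux-system
  values:
    # same content as apps/sonarr/values.yaml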


References