Performance testing for your homelab. Measure network bandwidth, NFS throughput, storage IOPS, and CPU performance. Establish baselines before things break.
“Why is Plex buffering?” - Questions answered by having performance baselines
Why Benchmark Your Homelab#
Ran a media stack for eight months. Plex started buffering during peak hours. Was it:
- Network congestion?
- NFS too slow?
- Proxmox CPU bottleneck?
- Storage IOPS maxed out?
Had no baseline. Spent three evenings guessing. Finally found the issue: 100 Mbps link negotiation on one interface (should’ve been 1 Gbps). Would’ve been obvious with network benchmarks.
Benchmarking gives you:
- Performance baselines - Know what “normal” looks like
- Bottleneck identification - Find weak links before they cause problems
- Upgrade justification - Prove you need 10 GbE (or prove you don’t)
- Troubleshooting data - “It used to get 900 Mbps, now it’s 100 Mbps” beats “it feels slow”
This post covers practical benchmarks for homelab scenarios: network throughput, NFS storage, disk IOPS, CPU performance.
Test Environment#
My setup (adjust commands for yours):
┌─────────────────────────────────────────────────────────────┐
│ Network: 1 Gbps managed switch (EdgeSwitch 24) │
│ │
│ Nodes: │
│ • Proxmox VE (192.168.2.10) - Intel i7, 32 GB │
│ • Synology NAS (192.168.2.129) - Celeron, 8 GB │
│ • Talos K8s CP (192.168.2.70) - 2 vCPU, 4 GB │
│ • Talos K8s W1 (192.168.2.80) - 2 vCPU, 4 GB │
│ • Talos K8s W2 (192.168.2.81) - 2 vCPU, 4 GB │
│ • EdgeRouter X (192.168.2.1) - Router │
│ │
│ Storage: Synology 4-bay NAS (RAID 5, NFS export) │
└─────────────────────────────────────────────────────────────┘
Network Performance (iperf3)#
Tests: TCP/UDP bandwidth between nodes. Identifies link speed issues, switch bottlenecks, and NIC problems.
Install iperf3#
Proxmox / Linux:
apt update && apt install iperf3
Synology:
Via SSH:
sudo synopkg install iperf3
# Or use Docker
docker run -d --name iperf3-server --network host mlabbe/iperf3 -s
Kubernetes (for testing pod → NAS bandwidth):
kubectl run iperf3-client -n default --rm -it --image=mlabbe/iperf3 -- sh
Test: Proxmox ↔ Synology#
On Synology (server):
iperf3 -s
On Proxmox (client):
iperf3 -c 192.168.2.129 -t 30 -i 5
Flags:
- `-t 30` - Test duration (30 seconds)
- `-i 5` - Report interval (every 5 seconds)
Expected results (1 Gbps network):
[ ID] Interval Transfer Bitrate
[ 5] 0.00-30.00 sec 3.28 GBytes 940 Mbits/sec sender
[ 5] 0.00-30.00 sec 3.28 GBytes 939 Mbits/sec receiver
What’s normal:
- 900-940 Mbps: Perfect (TCP overhead ~6%)
- 700-900 Mbps: Good (might have light interference)
- 400-700 Mbps: Poor (duplex mismatch, cable issue, congestion)
- < 400 Mbps: Bad (serious problem)
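If you want to fold these thresholds into a script, one option is iperf3’s `--json` output piped through `jq` (a minimal sketch, assuming `jq` is installed and the server side is already running):

```bash
#!/bin/bash
# Classify a short iperf3 run against the rough thresholds above
MBPS=$(iperf3 -c 192.168.2.129 -t 10 --json | jq '.end.sum_received.bits_per_second / 1000000 | floor')
if   [ "$MBPS" -ge 900 ]; then echo "$MBPS Mbps: perfect"
elif [ "$MBPS" -ge 700 ]; then echo "$MBPS Mbps: good, maybe light interference"
elif [ "$MBPS" -ge 400 ]; then echo "$MBPS Mbps: poor - check duplex, cable, congestion"
else                           echo "$MBPS Mbps: bad - check link negotiation"
fi
```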
Test: Parallel Streams (Simulate Multi-User Load)#
iperf3 -c 192.168.2.129 -P 10 -t 30
- `-P 10`: 10 parallel streams (simulates multiple users)
Expected:
- Aggregate bandwidth should still hit 900+ Mbps
- If it drops significantly, switch/router is the bottleneck
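Duplex and NIC offload problems are often asymmetric, so it’s also worth running the same test in reverse; iperf3’s `-R` flag makes the server send to the client:

```bash
iperf3 -c 192.168.2.129 -R -t 30
```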
Test: UDP Bandwidth (Jitter and Packet Loss)#
iperf3 -c 192.168.2.129 -u -b 1G -t 30
Flags:
- `-u` - UDP mode
- `-b 1G` - Target bandwidth (1 Gbps)
Expected output:
[ 5] 0.00-30.00 sec 3.58 GBytes 1.03 Gbits/sec 0.012 ms 0/2619842 (0%)
What to watch:
- Jitter < 1 ms: Excellent
- Packet loss 0%: Perfect
- Packet loss > 1%: Problem (bad cable, switch dropping packets)
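If you do see loss, re-run at a lower target rate to separate congestion from a bad link; loss that persists at, say, 100 Mbps points at a cable or switch port rather than an overloaded network:

```bash
iperf3 -c 192.168.2.129 -u -b 100M -t 30
```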
Test: K8s Pod → Synology NAS#
Verify pods can reach full network speed (important for NFS-backed PVCs):
# Start iperf3 server on Synology (if not running)
ssh jlambert@192.168.2.129 "iperf3 -s -D"
# From K8s cluster
kubectl run iperf3-client --rm -it --image=mlabbe/iperf3 -- iperf3 -c 192.168.2.129 -t 30
Expected: Same ~940 Mbps
If lower:
- Check CNI overhead (Flannel/Calico add encapsulation)
- Check pod network policies (throttling?)
- Check NAS CPU usage during test (might be maxed)
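Two quick ways to check the last two points (the NAS command assumes its `top` supports batch mode, which stock Linux does):

```bash
# Any NetworkPolicies that could be affecting the test pod?
kubectl get networkpolicy -A

# Snapshot NAS CPU while the iperf3 test is running
ssh jlambert@192.168.2.129 "top -b -n 1 | head -20"
```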
NFS Performance#
Tests: Sequential read/write, random I/O, metadata operations. Critical for media servers and K8s PVCs.
Baseline: Local Disk on NAS#
SSH to Synology, test raw storage performance:
ssh jlambert@192.168.2.129
# Write test (create 10 GB file)
dd if=/dev/zero of=/volume1/test-write.img bs=1M count=10240 conv=fdatasync
# Note the MB/s
# Read test (read the file)
dd if=/volume1/test-write.img of=/dev/null bs=1M count=10240
# Note the MB/s
# Cleanup
rm /volume1/test-write.img
My results (Synology DS920+, RAID 5, 4x4TB WD Red):
- Write: 220 MB/s
- Read: 280 MB/s
This is the storage ceiling. NFS will be slower (network + protocol overhead).
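One caveat: the read number can be flattered by the NAS’s page cache, especially with smaller test files. Dropping caches first (as root on the NAS, assuming DSM exposes the standard Linux knob) gives a more honest read figure:

```bash
# Flush the page cache so cached data doesn't inflate the read test
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/volume1/test-write.img of=/dev/null bs=1M count=10240
```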
Test: NFS from Proxmox#
Mount NFS share temporarily:
# On Proxmox
mkdir /mnt/nfs-test
mount -t nfs 192.168.2.129:/volume1/nfs01 /mnt/nfs-test
# Write test
dd if=/dev/zero of=/mnt/nfs-test/test-write.img bs=1M count=10240 conv=fdatasync
# Read test
dd if=/mnt/nfs-test/test-write.img of=/dev/null bs=1M count=10240
# Cleanup
rm /mnt/nfs-test/test-write.img
umount /mnt/nfs-test
My results:
- Write: 110 MB/s (50% of local)
- Read: 115 MB/s (41% of local)
Overhead breakdown:
- Network: ~6% (TCP)
- NFS protocol: ~10%
- Synology CPU (NFS daemon): ~30%
What’s normal (1 Gbps network):
- 100-115 MB/s: Expected max (network bandwidth limit)
- 70-100 MB/s: Good (some NFS overhead)
- < 70 MB/s: Poor (investigate)
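If you land in the “investigate” range, one cheap experiment is remounting with explicit options and repeating the `dd` tests (a sketch; the NFS version and 1 MB buffer sizes are assumptions, check what your NAS exports support):

```bash
umount /mnt/nfs-test 2>/dev/null
mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576,hard \
  192.168.2.129:/volume1/nfs01 /mnt/nfs-test
```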
Test: Random I/O with fio#
fio (Flexible I/O Tester) simulates real workloads:
Install:
apt install fio
Random read (4K blocks, simulates database/VM workload):
fio --name=random-read \
--ioengine=libaio \
--rw=randread \
--bs=4k \
--size=1G \
--numjobs=4 \
--runtime=30 \
--group_reporting \
--directory=/mnt/nfs-test
Output:
random-read: (groupid=0, jobs=4): err= 0: pid=12345
read: IOPS=8432, BW=32.9MiB/s (34.5MB/s)(987MiB/30001msec)
Key metrics:
- IOPS: 8432 (operations per second)
- Bandwidth: 32.9 MiB/s
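The two metrics are tied together: 8,432 IOPS × 4 KiB per operation ≈ 32.9 MiB/s, which is why an impressive-sounding IOPS figure still translates into modest bandwidth at small block sizes.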
Random write (4K blocks):
fio --name=random-write \
--ioengine=libaio \
--rw=randwrite \
--bs=4k \
--size=1G \
--numjobs=4 \
--runtime=30 \
--group_reporting \
--directory=/mnt/nfs-test
My results (NFS on Synology RAID 5):
- Random read IOPS: 8,000-9,000
- Random write IOPS: 1,500-2,000 (RAID 5 write penalty)
Sequential read (simulates streaming media):
fio --name=sequential-read \
--ioengine=libaio \
--rw=read \
--bs=1M \
--size=2G \
--numjobs=1 \
--runtime=30 \
--directory=/mnt/nfs-test
Expected: 110-115 MB/s (matches network limit)
K8s PVC Performance#
Test NFS-backed persistent volumes (media stack scenario):
Deploy a test pod with fio:
# fio-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio-test
  namespace: default
spec:
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      command: ["sleep", "3600"]
      volumeMounts:
        - name: test-pvc
          mountPath: /data
  volumes:
    - name: test-pvc
      persistentVolumeClaim:
        claimName: fio-test-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-appdata
  resources:
    requests:
      storage: 5Gi
Apply and test:
kubectl apply -f fio-test.yaml
kubectl wait --for=condition=ready pod/fio-test
# Sequential write (simulates Plex recording)
kubectl exec fio-test -- fio --name=seq-write \
--ioengine=libaio --rw=write --bs=1M --size=2G \
--numjobs=1 --runtime=30 --directory=/data
# Random read (simulates database queries)
kubectl exec fio-test -- fio --name=rand-read \
--ioengine=libaio --rw=randread --bs=4k --size=1G \
--numjobs=4 --runtime=30 --directory=/data
# Cleanup
kubectl delete -f fio-test.yaml
Expected: Similar to direct NFS mount (100-115 MB/s sequential, 8k IOPS random read)
If slower:
- CSI driver overhead
- Pod CPU limits throttling
- NFS mount options (check `values.yaml` for the NFS CSI driver; see the StorageClass sketch below)
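For reference, a minimal sketch of where those options live with the NFS CSI driver (field names follow the upstream csi-driver-nfs StorageClass format; verify against your chart’s values.yaml before applying):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-appdata
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.2.129
  share: /volume1/nfs01
mountOptions:
  - nfsvers=4.1     # assumption: NAS exports NFSv4.1
  - rsize=1048576   # 1 MB read buffers
  - wsize=1048576   # 1 MB write buffers
```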
CPU and Compute Performance#
sysbench: CPU Benchmark#
Install:
apt install sysbench
Single-threaded performance:
sysbench cpu --threads=1 --time=30 run
Output:
CPU speed:
events per second: 1234.56
Multi-threaded performance:
sysbench cpu --threads=$(nproc) --time=30 run
My results:
| Host | Single-thread (events/s) | Multi-thread (events/s) |
|---|---|---|
| Proxmox (i7-6700) | 1850 | 7200 (4 cores) |
| Talos K8s Worker VM | 1200 | 2400 (2 vCPU) |
| Synology (Celeron J4125) | 650 | 2100 (4 cores) |
Use cases:
- Plex transcoding: ~1500 events/s per stream (software transcode)
- K8s etcd: Single-thread performance matters most
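For the Talos workers there’s no apt on the node, so one way to collect comparable numbers is to run sysbench from a pod (a sketch; `severalnines/sysbench` is an assumed image, any image that ships sysbench works, and expect the VM/pod overhead shown in the table):

```bash
kubectl run sysbench --rm -it --restart=Never \
  --image=severalnines/sysbench --command -- \
  sysbench cpu --threads=1 --time=30 run
```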
Stress Test: Sustained Load#
Install stress-ng:
apt install stress-ng
CPU + memory stress (10 minutes):
stress-ng --cpu $(nproc) --vm 2 --vm-bytes 50% --timeout 10m --metrics
Watch:
- CPU temp: `watch -n 1 sensors` (if `lm-sensors` is installed)
- Throttling: `dmesg | grep -i "cpu clock throttled"`
Expected:
- Temps stable under 80°C (desktop CPUs)
- No throttling messages
- System remains responsive
If temps spike above 90°C or you see throttling: airflow problem, thermal paste dried out, or insufficient cooling.
Results Summary: My Homelab Baselines#
Document these for future comparison:
| Test | Value | Notes |
|---|---|---|
| **Network** | | |
| Proxmox ↔ Synology TCP | 940 Mbps | Expected max for 1 Gbps |
| K8s Pod → Synology TCP | 935 Mbps | Negligible overhead |
| UDP packet loss | 0% | No congestion |
| **Storage** | | |
| Synology local write | 220 MB/s | RAID 5 ceiling |
| Synology local read | 280 MB/s | RAID 5 ceiling |
| NFS write (Proxmox) | 110 MB/s | Network-limited |
| NFS read (Proxmox) | 115 MB/s | Network-limited |
| NFS random read IOPS | 8,500 | Good for RAID 5 |
| NFS random write IOPS | 1,800 | RAID 5 write penalty |
| K8s PVC write | 108 MB/s | CSI driver overhead ~2% |
| **Compute** | | |
| Proxmox CPU (single) | 1850 events/s | Sufficient for etcd |
| Proxmox CPU (multi) | 7200 events/s | ~2 Plex transcodes |
| K8s Worker CPU (single) | 1200 events/s | VM overhead ~35% |
| K8s Worker CPU (multi) | 2400 events/s | ~1 Plex transcode |
Troubleshooting Playbook#
“My network is slow”#
- Baseline test: iperf3 between two nodes
  - < 400 Mbps → check link speed: `ethtool eth0 | grep Speed`
  - Duplex mismatch: `ethtool eth0 | grep Duplex` (should be “Full”)
- Switch stats: Check for errors
  - EdgeSwitch: Web UI → Ports → Look for “CRC Errors” or “Collisions”
- Cable test: Swap cable, re-test
“NFS is slow”#
- Network first: iperf3 to NAS (if network is slow, NFS will be slow)
- NFS vs local: Compare `dd` write speeds on NFS vs the NAS local disk
  - If local is also slow → disk problem (RAID rebuild? Failing drive?)
- NFS mount options: `mount | grep nfs` (look for `rsize=1048576,wsize=1048576`, i.e. 1 MB buffers)
- NAS CPU: SSH to the NAS and run `top` during a transfer (is the nfsd process at 100%?)
“Plex is buffering”#
Systematically test each layer:
- Network: iperf3 from Plex server to client device (should be > 50 Mbps for 1080p, > 100 Mbps for 4K)
- Storage: fio test on media PVC (should sustain > 50 MB/s read)
- CPU: Check Plex transcoding load: `kubectl top pod -n media -l app.kubernetes.io/name=plex` (is CPU at its limit?)
- Client: Is the client forcing a transcode? (Check Plex dashboard → Now Playing → Transcode reason)
Automating Benchmarks#
Create a script to run quarterly:
#!/bin/bash
# homelab-benchmark.sh
OUTPUT="benchmark-$(date +%Y%m%d).txt"
echo "=== Homelab Benchmark Report ===" | tee "$OUTPUT"
echo "Date: $(date)" | tee -a "$OUTPUT"
echo "" | tee -a "$OUTPUT"
echo "--- Network: Proxmox → Synology ---" | tee -a "$OUTPUT"
iperf3 -c 192.168.2.129 -t 10 | grep sender | tee -a "$OUTPUT"
echo "" | tee -a "$OUTPUT"
echo "--- NFS Write ---" | tee -a "$OUTPUT"
dd if=/dev/zero of=/mnt/nfs-test/benchmark.img bs=1M count=1024 conv=fdatasync 2>&1 | grep copied | tee -a "$OUTPUT"
echo "" | tee -a "$OUTPUT"
echo "--- NFS Read ---" | tee -a "$OUTPUT"
dd if=/mnt/nfs-test/benchmark.img of=/dev/null bs=1M 2>&1 | grep copied | tee -a "$OUTPUT"
rm /mnt/nfs-test/benchmark.img
echo "" | tee -a "$OUTPUT"
echo "--- CPU (single-thread) ---" | tee -a "$OUTPUT"
sysbench cpu --threads=1 --time=10 run | grep "events per second" | tee -a "$OUTPUT"
echo "" | tee -a "$OUTPUT"
echo "=== Benchmark Complete ===" | tee -a "$OUTPUT"
echo "Report saved: $OUTPUT"Run quarterly, compare results over time. Detect degradation before it causes outages.
What I Learned#
1. Baselines Prevent Wild Goose Chases#
Before benchmarking, every problem was a guess. “Is it the network? The disk? The CPU?” Now I check baselines first. If network benchmarks show 940 Mbps but Plex is buffering, I know network isn’t the problem. Saves hours.
2. 1 Gbps Is Fine for Most Homelabs#
I was convinced I needed 10 GbE. Benchmarks showed:
- Plex 4K streams: 60-80 Mbps
- NFS writes during backups: 110 MB/s (near 1 Gbps max)
- Total household usage: < 300 Mbps peak
10 GbE would be overkill. Saved $800 on NICs and switches.
3. NFS Overhead Is Real but Acceptable#
50% overhead vs local disk (220 MB/s → 110 MB/s). But 110 MB/s is still plenty for:
- Streaming multiple 4K movies simultaneously (60-80 Mbps each, under 10 MB/s per stream)
- Kubernetes PVCs (most apps bottleneck on CPU or code, not I/O)
- Backups (compress while writing, CPU-bound before I/O-bound)
4. CPU Performance Matters More Than You Think#
Upgraded from Celeron to i7 for Proxmox. Single-thread performance went from 650 → 1850 events/s. Kubernetes etcd operations got noticeably faster (consensus writes are single-threaded). Felt it before I measured it.
5. Random I/O Is the Real Bottleneck#
Sequential reads hit 115 MB/s (great). Random 4K reads hit 8,500 IOPS (sounds high, but translates to 33 MB/s). Databases on NFS are slow for a reason. If you run Postgres on your homelab, use local SSDs or iSCSI.
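If you want to see that effect directly, a rough database-style mixed workload against the NFS mount (8K blocks, 70/30 read/write split; illustrative only, not a real Postgres benchmark):

```bash
fio --name=db-like \
  --ioengine=libaio \
  --rw=randrw \
  --rwmixread=70 \
  --bs=8k \
  --size=1G \
  --numjobs=4 \
  --runtime=30 \
  --group_reporting \
  --directory=/mnt/nfs-test
```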
What’s Next#
You have performance baselines for your homelab. Network bandwidth, NFS throughput, storage IOPS, and CPU performance documented.
Optional enhancements:
- Grafana dashboards - Visualize trends over time (network throughput degrading?)
- 10 GbE upgrade - If benchmarks show you’re hitting 1 Gbps limits
- SSD caching - Add SSD read/write cache to Synology for IOPS boost
- Automated regression testing - Run benchmarks after every major change (DSM update, kernel upgrade)
The core workflow works. Baseline, test, compare. Find bottlenecks before they find you.