Homelab Network Performance Testing and Benchmarking

Author: Jourdan Lambert

Homelab Infrastructure - This article is part of a series.

Performance testing for your homelab. Measure network bandwidth, NFS throughput, storage IOPS, and CPU performance. Establish baselines before things break.

“Why is Plex buffering?” - the kind of question performance baselines answer


Why Benchmark Your Homelab
#

Ran a media stack for eight months. Plex started buffering during peak hours. Was it:

  • Network congestion?
  • NFS too slow?
  • Proxmox CPU bottleneck?
  • Storage IOPS maxed out?

Had no baseline. Spent three evenings guessing. Finally found the issue: 100 Mbps link negotiation on one interface (should’ve been 1 Gbps). Would’ve been obvious with network benchmarks.

Benchmarking gives you:

  • Performance baselines - Know what “normal” looks like
  • Bottleneck identification - Find weak links before they cause problems
  • Upgrade justification - Prove you need 10 GbE (or prove you don’t)
  • Troubleshooting data - “It used to get 900 Mbps, now it’s 100 Mbps” beats “it feels slow”

This post covers practical benchmarks for homelab scenarios: network throughput, NFS storage, disk IOPS, CPU performance.


Test Environment
#

My setup (adjust commands for yours):

┌─────────────────────────────────────────────────────────────┐
│  Network: 1 Gbps managed switch (EdgeSwitch 24)             │
│                                                              │
│  Nodes:                                                      │
│   • Proxmox VE (192.168.2.10)          - Intel i7, 32 GB   │
│   • Synology NAS (192.168.2.129)       - Celeron, 8 GB     │
│   • Talos K8s CP (192.168.2.70)        - 2 vCPU, 4 GB      │
│   • Talos K8s W1 (192.168.2.80)        - 2 vCPU, 4 GB      │
│   • Talos K8s W2 (192.168.2.81)        - 2 vCPU, 4 GB      │
│   • EdgeRouter X (192.168.2.1)         - Router            │
│                                                              │
│  Storage: Synology 4-bay NAS (RAID 5, NFS export)          │
└─────────────────────────────────────────────────────────────┘

Network Performance (iperf3)
#

Tests: TCP/UDP bandwidth between nodes. Identifies link-speed issues, switch bottlenecks, and NIC problems.

Install iperf3
#

Proxmox / Linux:

apt update && apt install iperf3

Synology:

Via SSH:

sudo synopkg install iperf3
# Or use Docker
docker run -d --name iperf3-server --network host mlabbe/iperf3 -s

Kubernetes (for testing pod → NAS bandwidth):

kubectl run iperf3-client -n default --rm -it --image=mlabbe/iperf3 -- sh

Test: Proxmox ↔ Synology
#

On Synology (server):

iperf3 -s

On Proxmox (client):

iperf3 -c 192.168.2.129 -t 30 -i 5

Flags:

  • -t 30 - Test duration (30 seconds)
  • -i 5 - Report interval (every 5 seconds)

Expected results (1 Gbps network):

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-30.00  sec  3.28 GBytes   940 Mbits/sec   sender
[  5]   0.00-30.00  sec  3.28 GBytes   939 Mbits/sec   receiver

What’s normal:

  • 900-940 Mbps: Perfect (TCP overhead ~6%)
  • 700-900 Mbps: Good (might have light interference)
  • 400-700 Mbps: Poor (duplex mismatch, cable issue, congestion)
  • < 400 Mbps: Bad (serious problem)
💡 Tip
Run tests at different times (peak vs. off-peak hours). If throughput drops during evenings, you have network congestion (kids streaming, backups running, etc.).
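
To capture that variation without babysitting a terminal, here's a rough sketch that samples throughput once an hour for a day (assumes jq is installed and the iperf3 server on the Synology stays running):

# Hourly throughput samples, logged with timestamps (adjust IP and duration)
for i in $(seq 1 24); do
  bps=$(iperf3 -c 192.168.2.129 -t 10 -J | jq '.end.sum_sent.bits_per_second')
  echo "$(date -Is)  ${bps} bits/sec" >> iperf3-hourly.log
  sleep 3600
done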

Test: Parallel Streams (Simulate Multi-User Load)
#

iperf3 -c 192.168.2.129 -P 10 -t 30

-P 10: 10 parallel streams (simulates multiple users)

Expected:

  • Aggregate bandwidth should still hit 900+ Mbps
  • If it drops significantly, switch/router is the bottleneck
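
It's also worth testing the reverse direction; asymmetric results (fast one way, slow the other) often point at a duplex mismatch or a failing NIC:

# -R reverses the test: the Synology sends, Proxmox receives
iperf3 -c 192.168.2.129 -R -t 30

# Reverse with parallel streams for the multi-user case
iperf3 -c 192.168.2.129 -R -P 10 -t 30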

Test: UDP Bandwidth (Jitter and Packet Loss)
#

iperf3 -c 192.168.2.129 -u -b 1G -t 30

Flags:

  • -u - UDP mode
  • -b 1G - Target bandwidth (1 Gbps)

Expected output:

[  5]   0.00-30.00  sec  3.58 GBytes  1.03 Gbits/sec  0.012 ms  0/2619842 (0%)

What to watch:

  • Jitter < 1 ms: Excellent
  • Packet loss 0%: Perfect
  • Packet loss > 1%: Problem (bad cable, switch dropping packets)
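
If you do see loss at line rate, step the target bandwidth down until loss reaches 0%; the highest clean rate is a reasonable ceiling for jitter-sensitive traffic like streaming:

# Step the offered rate down until packet loss hits 0%
iperf3 -c 192.168.2.129 -u -b 800M -t 15
iperf3 -c 192.168.2.129 -u -b 600M -t 15
iperf3 -c 192.168.2.129 -u -b 400M -t 15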

Test: K8s Pod → Synology NAS
#

Verify pods can reach full network speed (important for NFS-backed PVCs):

# Start iperf3 server on Synology (if not running)
ssh jlambert@192.168.2.129 "iperf3 -s -D"

# From K8s cluster
kubectl run iperf3-client --rm -it --image=mlabbe/iperf3 -- iperf3 -c 192.168.2.129 -t 30

Expected: Same ~940 Mbps

If lower:

  • Check CNI overhead (Flannel/Calico add encapsulation)
  • Check pod network policies (throttling?)
  • Check NAS CPU usage during test (might be maxed)
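
To separate CNI overhead from NAS-side limits, a pod-to-pod run takes the NAS out of the picture entirely. A rough sketch (pod names are arbitrary; check that the two pods land on different nodes, or same-host traffic will inflate the numbers):

# Server pod (note its IP and node from the -o wide output)
kubectl run iperf3-server --image=mlabbe/iperf3 -- iperf3 -s
kubectl get pod iperf3-server -o wide

# Client pod: substitute the server pod's IP
kubectl run iperf3-client --rm -it --image=mlabbe/iperf3 -- iperf3 -c <POD_IP> -t 30

# Cleanup
kubectl delete pod iperf3-server

If pod-to-pod comes in well under the Proxmox → Synology number, the overlay is your bottleneck; if it matches, look at the NAS.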

NFS Performance
#

Tests: Sequential read/write, random I/O, metadata operations. Critical for media servers and K8s PVCs.

Baseline: Local Disk on NAS
#

SSH to Synology, test raw storage performance:

ssh jlambert@192.168.2.129

# Write test (create 10 GB file)
dd if=/dev/zero of=/volume1/test-write.img bs=1M count=10240 conv=fdatasync
# Note the MB/s

# Read test (read the file)
dd if=/volume1/test-write.img of=/dev/null bs=1M count=10240
# Note the MB/s

# Cleanup
rm /volume1/test-write.img

My results (Synology DS920+, RAID 5, 4x4TB WD Red):

  • Write: 220 MB/s
  • Read: 280 MB/s

This is the storage ceiling. NFS will be slower (network + protocol overhead).
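
One caveat on the dd read numbers: if the file (or most of it) is still in RAM, the read gets served from the page cache and comes back unrealistically fast. Dropping caches before the read step keeps the comparison honest; as root this should work on both the Synology and Proxmox, since both run Linux:

# Flush dirty pages and drop the page cache before the read test
sync
echo 3 > /proc/sys/vm/drop_caches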

Test: NFS from Proxmox
#

Mount NFS share temporarily:

# On Proxmox
mkdir /mnt/nfs-test
mount -t nfs 192.168.2.129:/volume1/nfs01 /mnt/nfs-test

# Write test
dd if=/dev/zero of=/mnt/nfs-test/test-write.img bs=1M count=10240 conv=fdatasync

# Read test
dd if=/mnt/nfs-test/test-write.img of=/dev/null bs=1M count=10240

# Cleanup
rm /mnt/nfs-test/test-write.img
umount /mnt/nfs-test

My results:

  • Write: 110 MB/s (50% of local)
  • Read: 115 MB/s (41% of local)

Overhead breakdown:

  • Network: ~6% (TCP)
  • NFS protocol: ~10%
  • Synology CPU (NFS daemon): ~30%

What’s normal (1 Gbps network):

  • 100-115 MB/s: Expected max (network bandwidth limit)
  • 70-100 MB/s: Good (some NFS overhead)
  • < 70 MB/s: Poor (investigate)
ℹ️ Info
1 Gbps = 125 MB/s theoretical. NFS overhead + network protocol overhead = ~110 MB/s realistic max.

Test: Random I/O with fio
#

fio (Flexible I/O Tester) simulates real workloads:

Install:

apt install fio

Random read (4K blocks, simulates database/VM workload):

fio --name=random-read \
    --ioengine=libaio \
    --rw=randread \
    --bs=4k \
    --size=1G \
    --numjobs=4 \
    --runtime=30 \
    --group_reporting \
    --directory=/mnt/nfs-test

Output:

random-read: (groupid=0, jobs=4): err= 0: pid=12345
  read: IOPS=8432, BW=32.9MiB/s (34.5MB/s)(987MiB/30001msec)

Key metrics:

  • IOPS: 8432 (operations per second)
  • Bandwidth: 32.9 MiB/s
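
The two numbers are linked by block size (bandwidth ≈ IOPS × block size), which is a handy sanity check on any fio output:

# 8432 IOPS at 4 KiB per operation ≈ 32.9 MiB/s, matching the BW line above
awk 'BEGIN { printf "%.1f MiB/s\n", 8432 * 4 / 1024 }'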

Random write (4K blocks):

fio --name=random-write \
    --ioengine=libaio \
    --rw=randwrite \
    --bs=4k \
    --size=1G \
    --numjobs=4 \
    --runtime=30 \
    --group_reporting \
    --directory=/mnt/nfs-test

My results (NFS on Synology RAID 5):

  • Random read IOPS: 8,000-9,000
  • Random write IOPS: 1,500-2,000 (RAID 5 write penalty)

Sequential read (simulates streaming media):

fio --name=sequential-read \
    --ioengine=libaio \
    --rw=read \
    --bs=1M \
    --size=2G \
    --numjobs=1 \
    --runtime=30 \
    --directory=/mnt/nfs-test

Expected: 110-115 MB/s (matches network limit)


K8s PVC Performance
#

Test NFS-backed persistent volumes (media stack scenario):

Deploy a test pod with fio:

# fio-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio-test
  namespace: default
spec:
  containers:
  - name: fio
    image: nixery.dev/shell/fio
    command: ["sleep", "3600"]
    volumeMounts:
    - name: test-pvc
      mountPath: /data
  volumes:
  - name: test-pvc
    persistentVolumeClaim:
      claimName: fio-test-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-appdata
  resources:
    requests:
      storage: 5Gi

Apply and test:

kubectl apply -f fio-test.yaml
kubectl wait --for=condition=ready pod/fio-test

# Sequential write (simulates Plex recording)
kubectl exec fio-test -- fio --name=seq-write \
    --ioengine=libaio --rw=write --bs=1M --size=2G \
    --numjobs=1 --runtime=30 --directory=/data

# Random read (simulates database queries)
kubectl exec fio-test -- fio --name=rand-read \
    --ioengine=libaio --rw=randread --bs=4k --size=1G \
    --numjobs=4 --runtime=30 --directory=/data

# Cleanup
kubectl delete -f fio-test.yaml

Expected: Similar to direct NFS mount (100-115 MB/s sequential, 8k IOPS random read)

If slower:

  • CSI driver overhead
  • Pod CPU limits throttling
  • NFS mount options (check values.yaml for NFS CSI driver)
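
A quick way to see what the CSI driver is actually mounting with (assumes the storage class is named nfs-appdata, as in the PVC above):

# Mount options configured on the StorageClass
kubectl get storageclass nfs-appdata -o jsonpath='{.mountOptions}'; echo

# Mount options carried on the provisioned volumes
kubectl get pv -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName,OPTS:.spec.mountOptions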

CPU and Compute Performance
#

sysbench: CPU Benchmark
#

Install:

apt install sysbench

Single-threaded performance:

sysbench cpu --threads=1 --time=30 run

Output:

CPU speed:
    events per second:  1234.56

Multi-threaded performance:

sysbench cpu --threads=$(nproc) --time=30 run

My results:

Host                       Single-thread (events/s)   Multi-thread (events/s)
Proxmox (i7-6700)          1850                       7200 (4 cores)
Talos K8s Worker VM        1200                       2400 (2 vCPU)
Synology (Celeron J4125)   650                        2100 (4 cores)

Use cases:

  • Plex transcoding: ~1500 events/s per stream (software transcode)
  • K8s etcd: Single-thread performance matters most

Stress Test: Sustained Load
#

Install stress-ng:

apt install stress-ng

CPU + memory stress (10 minutes):

stress-ng --cpu $(nproc) --vm 2 --vm-bytes 50% --timeout 10m --metrics

Watch:

  • CPU temp: watch -n 1 sensors (if lm-sensors installed)
  • Throttling: dmesg | grep -i "cpu clock throttled"

Expected:

  • Temps stable under 80°C (desktop CPUs)
  • No throttling messages
  • System remains responsive

If temps spike above 90°C or you see throttling: airflow problem, thermal paste dried out, or insufficient cooling.


Results Summary: My Homelab Baselines
#

Document these for future comparison:

Test                        Value            Notes
Network
  Proxmox ↔ Synology TCP    940 Mbps         Expected max for 1 Gbps
  K8s Pod → Synology TCP    935 Mbps         Negligible overhead
  UDP packet loss           0%               No congestion
Storage
  Synology local write      220 MB/s         RAID 5 ceiling
  Synology local read       280 MB/s         RAID 5 ceiling
  NFS write (Proxmox)       110 MB/s         Network-limited
  NFS read (Proxmox)        115 MB/s         Network-limited
  NFS random read IOPS      8,500            Good for RAID 5
  NFS random write IOPS     1,800            RAID 5 write penalty
  K8s PVC write             108 MB/s         CSI driver overhead ~2%
Compute
  Proxmox CPU (single)      1850 events/s    Sufficient for etcd
  Proxmox CPU (multi)       7200 events/s    ~2 Plex transcodes
  K8s Worker CPU (single)   1200 events/s    VM overhead ~35%
  K8s Worker CPU (multi)    2400 events/s    ~1 Plex transcode

Troubleshooting Playbook
#

“My network is slow”
#

  1. Baseline test: iperf3 between two nodes
    • < 400 Mbps → Check link speed: ethtool eth0 | grep Speed
    • Duplex mismatch: ethtool eth0 | grep Duplex (should be “Full”)
  2. Switch stats: Check for errors
    • EdgeSwitch: Web UI → Ports → Look for “CRC Errors” or “Collisions”
  3. Cable test: Swap cable, re-test
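
To catch the kind of 100 Mbps negotiation that bit me, a rough sweep over the hosts you can SSH into helps; the interface names, users, and host list below are placeholders for my lab (the Talos nodes don't take SSH, so check those through talosctl or the Proxmox bridge instead):

# Check negotiated speed/duplex on each reachable Linux host
# (ethtool may not be present on the NAS; skip it if so)
for host in root@192.168.2.10 jlambert@192.168.2.129; do
  echo "== $host =="
  ssh "$host" "ethtool eth0 2>/dev/null | grep -E 'Speed|Duplex'"
done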

“NFS is slow”
#

  1. Network first: iperf3 to NAS (if network is slow, NFS will be slow)
  2. NFS vs local: Compare dd write speeds on NFS vs NAS local disk
    • If local is also slow → disk problem (RAID rebuild? Failing drive?)
  3. NFS mount options: Check mount options:
    mount | grep nfs
    # Look for: rsize=1048576,wsize=1048576 (1 MB buffers)
  4. NAS CPU: SSH to NAS, check CPU during transfer:
    top
    # Is nfsd process at 100%?
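
On the client side, nfsiostat (shipped with the NFS client utilities, nfs-common on Debian-based systems) breaks out per-mount latency and is often faster than eyeballing top on the NAS:

# Sample the NFS mount every 5 seconds during a transfer
nfsiostat 5 /mnt/nfs-test
# avg RTT is mostly network; avg exe includes server-side processing time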

“Plex is buffering”
#

Systematically test each layer:

  1. Network: iperf3 from Plex server to client device (should be > 50 Mbps for 1080p, > 100 Mbps for 4K)
  2. Storage: fio test on media PVC (should sustain > 50 MB/s read)
  3. CPU: Check Plex transcoding load:
    kubectl top pod -n media -l app.kubernetes.io/name=plex
    # Is CPU at limit?
  4. Client: Is the client forcing transcode? (Check Plex dashboard → Now Playing → Transcode reason)

Automating Benchmarks
#

Create a script to run quarterly:

#!/bin/bash
# homelab-benchmark.sh
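# Assumes: iperf3 server running on the Synology (192.168.2.129)
#          and the NFS share mounted at /mnt/nfs-test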

OUTPUT="benchmark-$(date +%Y%m%d).txt"

echo "=== Homelab Benchmark Report ===" | tee "$OUTPUT"
echo "Date: $(date)" | tee -a "$OUTPUT"
echo "" | tee -a "$OUTPUT"

echo "--- Network: Proxmox → Synology ---" | tee -a "$OUTPUT"
iperf3 -c 192.168.2.129 -t 10 | grep sender | tee -a "$OUTPUT"

echo "" | tee -a "$OUTPUT"
echo "--- NFS Write ---" | tee -a "$OUTPUT"
dd if=/dev/zero of=/mnt/nfs-test/benchmark.img bs=1M count=1024 conv=fdatasync 2>&1 | grep copied | tee -a "$OUTPUT"

echo "" | tee -a "$OUTPUT"
echo "--- NFS Read ---" | tee -a "$OUTPUT"
dd if=/mnt/nfs-test/benchmark.img of=/dev/null bs=1M 2>&1 | grep copied | tee -a "$OUTPUT"

rm /mnt/nfs-test/benchmark.img

echo "" | tee -a "$OUTPUT"
echo "--- CPU (single-thread) ---" | tee -a "$OUTPUT"
sysbench cpu --threads=1 --time=10 run | grep "events per second" | tee -a "$OUTPUT"

echo "" | tee -a "$OUTPUT"
echo "=== Benchmark Complete ===" | tee -a "$OUTPUT"
echo "Report saved: $OUTPUT"

Run quarterly, compare results over time. Detect degradation before it causes outages.
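
To make "quarterly" actually happen, a cron entry plus a quick diff of the last two reports is enough; the paths below are hypothetical, so adjust them to wherever the script and reports live:

# /etc/cron.d/homelab-benchmark - 03:00 on the 1st of Jan/Apr/Jul/Oct
0 3 1 1,4,7,10 * root /usr/local/bin/homelab-benchmark.sh

# Compare the two most recent reports side by side
diff -y $(ls -1 benchmark-*.txt | tail -n 2)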


What I Learned
#

1. Baselines Prevent Wild Goose Chases
#

Before benchmarking, every problem was a guess. “Is it the network? The disk? The CPU?” Now I check baselines first. If network benchmarks show 940 Mbps but Plex is buffering, I know network isn’t the problem. Saves hours.

2. 1 Gbps Is Fine for Most Homelabs
#

I was convinced I needed 10 GbE. Benchmarks showed:

  • Plex 4K streams: 60-80 Mbps
  • NFS writes during backups: 110 MB/s (near 1 Gbps max)
  • Total household usage: < 300 Mbps peak

10 GbE would be overkill. Saved $800 on NICs and switches.

3. NFS Overhead Is Real but Acceptable
#

50% overhead vs local disk (220 MB/s → 110 MB/s). But 110 MB/s is still plenty for:

  • Streaming multiple 4K movies simultaneously (60 MB/s)
  • Kubernetes PVCs (most apps bottleneck on CPU or code, not I/O)
  • Backups (compress while writing, CPU-bound before I/O-bound)

4. CPU Performance Matters More Than You Think
#

Upgraded from Celeron to i7 for Proxmox. Single-thread performance went from 650 → 1850 events/s. Kubernetes etcd operations got noticeably faster (consensus writes are single-threaded). Felt it before I measured it.

5. Random I/O Is the Real Bottleneck
#

Sequential reads hit 115 MB/s (great). Random 4K reads hit 8,500 IOPS (sounds high, but translates to 33 MB/s). Databases on NFS are slow for a reason. If you run Postgres on your homelab, use local SSDs or iSCSI.


What’s Next
#

You have performance baselines for your homelab. Network bandwidth, NFS throughput, storage IOPS, and CPU performance documented.

Optional enhancements:

  • Grafana dashboards - Visualize trends over time (network throughput degrading?)
  • 10 GbE upgrade - If benchmarks show you’re hitting 1 Gbps limits
  • SSD caching - Add SSD read/write cache to Synology for IOPS boost
  • Automated regression testing - Run benchmarks after every major change (DSM update, kernel upgrade)

The core workflow works. Baseline, test, compare. Find bottlenecks before they find you.

