A single USB stick that produces a fully configured Proxmox VE host with zero manual intervention.

“Automate yourself out of a job.” - Every SRE, probably


Why Automate?

Installing Proxmox manually works fine once. But manual installs are one-of-a-kind. They drift, they can’t be reproduced, and they don’t survive a drive failure without tribal knowledge.

Proxmox supports automated installation through answer files, and you can extend it with first-boot hooks to handle everything the answer file can’t. I put together a small repo that automates the full process: from ISO preparation to a configured host.

⚠️ Lab context
This configuration is built for a homelab environment, not production. Some choices here (passwordless sudo, no-subscription repos, patching the subscription nag) are conveniences that trade operational strictness for simplicity. I’ll call out the key differences where they come up. The automation patterns themselves transfer to production just fine.

Three files handle everything from building the installer to configuring the host:

File             Purpose
answer.toml      Drives the unattended Proxmox installer
first-boot.sh    Runs once after install to configure the host
prepare-usb.sh   Bakes everything into a bootable USB

The Answer File

The answer file is TOML that the Proxmox installer reads directly from the ISO. No human input required. The repo ships an answer.toml.example. Copy it and fill in your values.

Global Settings

[global]
keyboard = "en-us"
country = "us"
fqdn = "your-host.local"
mailto = "root@your-host.local"
timezone = "America/New_York"

# Generate with: mkpasswd --method=sha-512 'your-password-here'
root-password-hashed = "<YOUR_SHA512_HASH>"

root-ssh-keys = [
    "ssh-ed25519 <YOUR_PUBLIC_KEY>",
]

reboot-on-error = false
reboot-mode = "reboot"

root-password-hashed takes a SHA-512 hash, so no plaintext passwords live in the file. The SSH key grants immediate key-based root access after install.

Setting reboot-on-error = false keeps the installer visible on failure so you can inspect it.

💡 Tip
mkpasswd lives in the whois package on Debian/Ubuntu. Install it with apt install whois if the command isn’t found.
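
If mkpasswd isn’t available, openssl can generate the same crypt-style SHA-512 hash; either command’s output goes into root-password-hashed verbatim:

# Both produce a $6$... hash suitable for root-password-hashed
mkpasswd --method=sha-512 'your-password-here'
openssl passwd -6 'your-password-here'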

Network

[network]
source = "from-answer"
cidr = "<MGMT_IP>/24"
dns = "1.1.1.1"
gateway = "<GATEWAY_IP>"

filter.ID_NET_NAME = "eno*"

source = "from-answer" bypasses DHCP. The ID_NET_NAME filter matches the NIC by its predictable interface name: eno* for onboard, enp* for PCIe, eth* for legacy. This prevents the installer from binding to the wrong interface on multi-NIC machines.

💡 Tip
Not sure what your NIC is named? Boot any Linux live USB on the target machine and run ip link to see the interface names.
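
To see the exact udev properties the installer’s filter matches against, you can also query udevadm from the same live environment (eno1 below is just an example interface name):

# List the ID_NET_NAME* properties the filter key refers to
udevadm info -q property -p /sys/class/net/eno1 | grep '^ID_NET_NAME'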

Disk Setup

[disk-setup]
filesystem = "ext4"
disk-list = ["<TARGET_DISK>"]

lvm.maxroot = 100
lvm.swapsize = 8
lvm.minfree = 0

disk-list names the target device explicitly. This is critical on multi-drive hosts where you don’t want the installer guessing. If device names shift between boots, use a model filter instead: filter.ID_MODEL = "Your Drive Model*".

💡 Tip
Run lsblk -d -o NAME,SIZE,MODEL from a live USB to identify your target disk and its model string.

The LVM layout: 100 GB root, 8 GB swap, everything else becomes a thin pool for VM disks. minfree = 0 allocates all remaining space rather than leaving it unallocated in the VG.
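
If device names do shift between boots on your hardware, the same section can select the disk by model instead, as mentioned above (the model string here is illustrative; take yours from the lsblk output):

[disk-setup]
filesystem = "ext4"
# Select the target by model rather than device name
filter.ID_MODEL = "Samsung SSD 980*"

lvm.maxroot = 100
lvm.swapsize = 8
lvm.minfree = 0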

First-Boot Hook

[first-boot]
source = "from-iso"
ordering = "fully-up"

from-iso means the script is embedded in the prepared ISO. fully-up means it runs after Proxmox services are available. This is required because the script calls pveum and pvesm, which need the API stack running.


The First-Boot Script

Everything the answer file can’t do lives here: repository switching, user creation, security hardening, storage, and NTP. The script uses set -euo pipefail and logs every step to /var/log/first-boot-config.log.

💡 Tip
If something goes wrong after install, check this log first: cat /var/log/first-boot-config.log. It shows exactly which step failed and why.
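
A minimal sketch of what that header can look like (the repo’s actual script may differ in the details):

#!/usr/bin/env bash
# Fail on errors, unset variables, and broken pipes
set -euo pipefail

# Mirror everything the script prints into the log
LOG=/var/log/first-boot-config.log
exec > >(tee -a "$LOG") 2>&1

echo "[$(date -Is)] first-boot configuration starting"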

Switch to the No-Subscription Repository

# Disable enterprise repos — handles both .list (PVE 8) and .sources (PVE 9+)
for f in /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/ceph.list; do
    [ -f "$f" ] && sed -i 's/^deb/#deb/' "$f"
done
for f in /etc/apt/sources.list.d/pve-enterprise.sources /etc/apt/sources.list.d/ceph.sources; do
    if [ -f "$f" ]; then
        sed -i 's/^Enabled: yes/Enabled: no/' "$f"
        grep -q "^Enabled:" "$f" || sed -i '/^Types:/i Enabled: no' "$f"
    fi
done

# Add no-subscription repo
CODENAME=$(grep VERSION_CODENAME /etc/os-release | cut -d= -f2)
echo "deb http://download.proxmox.com/debian/pve ${CODENAME} pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

Proxmox ships with enterprise repos enabled. Without a subscription, apt update fails. This disables them and adds the community repo. It handles both .list (PVE 8) and .sources (PVE 9+) formats, so the script works across versions.

🏭 In production
Use the enterprise repository with a valid subscription. It provides tested, stable updates and access to Proxmox support.
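
For context, a deb822-style .sources entry looks roughly like this once the script has disabled it; the second loop simply flips or inserts the Enabled: line (field values here are approximate and vary by release):

Types: deb
URIs: https://enterprise.proxmox.com/debian/pve
Suites: trixie
Components: pve-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
Enabled: no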

Install Baseline Packages

DEBIAN_FRONTEND=noninteractive apt-get install -y -qq \
    curl wget vim htop iotop net-tools dnsutils \
    gnupg lsb-release ca-certificates sudo rsync \
    tmux unzip tree jq nfs-common ufw fail2ban chrony

These are the everyday sysadmin tools plus the services (ufw, fail2ban, chrony) configured later in the script.

Create an Admin User

useradd -m -s /bin/bash -G sudo "<ADMIN_USER>"

# SSH key
mkdir -p "/home/<ADMIN_USER>/.ssh"
echo "<SSH_PUBLIC_KEY>" > "/home/<ADMIN_USER>/.ssh/authorized_keys"
chmod 700 "/home/<ADMIN_USER>/.ssh"
chmod 600 "/home/<ADMIN_USER>/.ssh/authorized_keys"

# Passwordless sudo
echo "<ADMIN_USER> ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/<ADMIN_USER>"
chmod 440 "/etc/sudoers.d/<ADMIN_USER>"

# Proxmox access
pveum user add "<ADMIN_USER>@pam"
pveum aclmod / -user "<ADMIN_USER>@pam" -role PVEAdmin

A dedicated admin user with SSH key access, passwordless sudo, and PVEAdmin in the Web UI. Day-to-day work uses this named account (audit trail); root is reachable via sudo when needed.

🏭 In production
Avoid NOPASSWD sudo. Require password confirmation for privilege escalation and scope the PVE role more tightly than PVEAdmin based on actual needs.
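
A quick sanity check once the host is up, using standard tooling (substitute your placeholder user name):

# Account exists and is in the sudo group
id <ADMIN_USER>
# Passwordless sudo is in effect
sudo -l -U <ADMIN_USER>
# The matching PVE realm user was created
pveum user list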

Harden SSH

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?X11Forwarding.*/X11Forwarding no/' /etc/ssh/sshd_config

systemctl restart sshd

Keys only, no passwords. Root can still authenticate with a key (prohibit-password). X11 forwarding disabled because a headless hypervisor has no use for it.

💡 Tip
Before closing your current session, open a second SSH connection to verify key auth works. Locking yourself out of a headless machine means pulling out a monitor and keyboard.
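
One cheap safeguard worth adding just before the restart if you adapt the script: let sshd validate the edited config, so a bad edit aborts the run instead of breaking remote access (a suggested addition, not something the stock script necessarily does):

# Under set -e, a syntax error here stops the script before sshd restarts
sshd -t
systemctl restart sshd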

Configure NTP

cat > /etc/chrony/chrony.conf <<'CHRONY'
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
pool 2.pool.ntp.org iburst
pool 3.pool.ntp.org iburst

driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
CHRONY

systemctl enable chrony
systemctl restart chrony

Accurate time matters more on a hypervisor than almost anywhere else. VM clocks derive from the host, certificate validation depends on it, and cluster operations break without it. iburst speeds up the initial sync after a fresh boot.
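
Once the host is up, chrony’s own client confirms that sync actually happened:

# Show the selected source, current offset, and stratum
chronyc tracking
# List all pools/servers and their reachability
chronyc sources -v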

Configure the Firewall

ufw --force reset
ufw default deny incoming
ufw default allow outgoing

ufw allow 22/tcp comment 'SSH'
ufw allow 8006/tcp comment 'Proxmox Web UI'
ufw allow 3128/tcp comment 'SPICE Proxy'
ufw allow 5900:5999/tcp comment 'VNC for VMs'
ufw allow 111/udp comment 'NFS rpcbind'

ufw --force enable

Default deny with an explicit allowlist. Only management (SSH, Web UI), VM console (SPICE, VNC), and NFS traffic gets through.

🏭 In production
Consider using Proxmox’s built-in firewall (pve-firewall) instead of UFW. It integrates with the cluster config, supports per-VM rules, and is manageable through the Web UI and API. Also restrict source IPs for SSH and the Web UI to management VLANs.
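
If you do stay on UFW, scoping management traffic to a trusted subnet is a one-line change per rule (the subnet below is an example; use your management VLAN):

# Replace the open SSH and Web UI rules with source-scoped ones
ufw allow from 192.168.10.0/24 to any port 22 proto tcp comment 'SSH from mgmt'
ufw allow from 192.168.10.0/24 to any port 8006 proto tcp comment 'Web UI from mgmt'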

Configure fail2ban

cat > /etc/fail2ban/jail.local <<'F2B'
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log

[proxmox]
enabled = true
port = https,8006
filter = proxmox
backend = systemd
F2B

cat > /etc/fail2ban/filter.d/proxmox.conf <<'F2BF'
[Definition]
failregex = pvedaemon\[.*authentication (verification )?failure; rhost=<HOST> user=\S+ msg=.*
ignoreregex =
journalmatch = _SYSTEMD_UNIT=pvedaemon.service
F2BF

Two jails: SSH and the Proxmox Web UI. Five failures in 10 minutes triggers a one-hour ban. The Proxmox filter watches pvedaemon.service via the systemd journal.

💡 Tip
Locked yourself out? From the console, run fail2ban-client set sshd unbanip <YOUR_IP> to unban immediately. Run fail2ban-client status sshd to see who’s currently banned.
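
To confirm both jails are live and the custom filter actually parses pvedaemon’s journal entries:

# Jails should list sshd and proxmox
fail2ban-client status
# Dry-run the Proxmox filter against the systemd journal
fail2ban-regex systemd-journal /etc/fail2ban/filter.d/proxmox.conf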

Add NFS Storage

pvesm add nfs "<NFS_STORAGE_NAME>" \
    --server "<NFS_SERVER_IP>" \
    --export "<NFS_EXPORT_PATH>" \
    --path "/mnt/pve/<NFS_STORAGE_NAME>" \
    --content "images,iso,backup,snippets,vztmpl" \
    --options "vers=3,soft,intr"

Adds an NFS share as a Proxmox storage backend. The content flag controls what types of data can live there: disk images, ISOs, backups, snippets, and container templates. The soft mount option makes NFS operations time out and return errors rather than hanging indefinitely if the server goes offline (intr is accepted for compatibility but has been a no-op on modern kernels).

🏭 In production
Use NFS v4 with Kerberos authentication, or dedicated storage networks (iSCSI, Ceph) with redundancy. A single NAS is a single point of failure for all VM storage and backups.
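
Two quick checks, one against the NAS and one against Proxmox, confirm the storage is usable (placeholders match the ones above):

# The export should appear in the server's export list
showmount -e <NFS_SERVER_IP>
# The new storage should report as active with free space
pvesm status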

Remove the Subscription Nag

NAG_FILE="/usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js"
if [ -f "$NAG_FILE" ] && grep -q "Ext.Msg.show" "$NAG_FILE"; then
    cp "$NAG_FILE" "${NAG_FILE}.bak"
    sed -Ei "s/^\s*(Ext\.Msg\.show\(\{)$/void({ \/\/ \1/" "$NAG_FILE"
    systemctl restart pveproxy
fi

Patches out the subscription reminder dialog. A backup of the original file is created first. This is purely a lab convenience.

🏭 In production
Purchase a Proxmox subscription. You get stable enterprise repos, direct support, and you fund the project that makes all of this possible. Don’t patch this out in production environments.

The USB Prep Script

The glue. Takes a stock Proxmox ISO, embeds the answer file and first-boot script, and writes the result to a USB drive.

sudo ./prepare-usb.sh <proxmox-iso> <usb-device>
# Example: sudo ./prepare-usb.sh ~/Downloads/proxmox-ve_8.3-1.iso /dev/sda

The script:

  1. Validates dependencies: xorriso, dd, proxmox-auto-install-assistant
  2. Validates inputs: ISO exists, USB is a block device, password isn’t the placeholder
  3. Validates the answer file: runs proxmox-auto-install-assistant validate-answer
  4. Prepares the ISO: embeds answer file + first-boot script via prepare-iso
  5. Writes to USB: dd with 4M blocks, confirmation prompt before destroying data
  6. Cleans up: removes the temp ISO from /tmp

ℹ️ Info
proxmox-auto-install-assistant is the official Proxmox utility for preparing automated install media. The script provides installation instructions if it’s missing.

💡 Tip
Double-check which device is your USB before running the script. lsblk will show all block devices and their sizes. The script prompts for confirmation, but it’s worth verifying you’re not about to wipe the wrong drive.
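
Under the hood it is a thin wrapper around the assistant plus dd. A rough sketch of the equivalent manual steps (filenames, the output ISO name, and /dev/sdX are illustrative; confirm flag names with proxmox-auto-install-assistant prepare-iso --help on your version):

# Validate the answer file
proxmox-auto-install-assistant validate-answer answer.toml

# Embed the answer file and first-boot script into the ISO
proxmox-auto-install-assistant prepare-iso proxmox-ve_8.3-1.iso \
    --fetch-from iso \
    --answer-file answer.toml \
    --on-first-boot first-boot.sh

# Write the prepared ISO to the stick (destructive)
dd if=proxmox-ve_8.3-1-auto-from-iso.iso of=/dev/sdX bs=4M status=progress oflag=sync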

End-to-End Workflow

  1. Clone the repo
  2. cp answer.toml.example answer.toml
  3. Generate a root password hash: mkpasswd --method=sha-512
  4. Fill in your network, disk, and credential details
  5. sudo ./prepare-usb.sh <iso> <usb>
  6. Boot from USB (check BIOS for the boot menu key)
  7. Select Automated Installation (auto-selects after 10s)
  8. Wait for install + reboot
  9. First-boot script runs and configures everything
  10. Hit the Web UI at https://<MGMT_IP>:8006 and verify

💡 Tip
The browser will warn about a self-signed certificate. This is expected. Accept the warning to proceed. Log in as root with the password you hashed in answer.toml.
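
A quick post-install sanity pass over SSH, as the admin user created by first-boot.sh, ties it all together:

ssh <ADMIN_USER>@<MGMT_IP>
sudo tail -n 20 /var/log/first-boot-config.log   # every step should have logged cleanly
pveversion                                        # confirms the installed PVE version
sudo ufw status verbose                           # firewall rules are in place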

What’s Next

This gives you a fully configured Proxmox host from a single USB stick. If the host needs rebuilding, it’s the same USB and the same result. No runbooks, nothing to remember.

For ongoing management beyond the initial setup (backups, patching, security auditing), Ansible picks up where first-boot.sh leaves off. The script logs a next-steps reminder with the playbook command at the end of its run.


References