# Proxmox
## Switching Proxmox from static IP to DHCP
When you first install Proxmox, it will set whatever initial IP you give it to be that IP forever (wow is that annoying). If you SSH into the Proxmox box and have a look at `/etc/network/interfaces`, it probably looks something like this:
```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.20.210/24
    gateway 10.0.20.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

iface wlp0s20f3 inet manual

source /etc/network/interfaces.d/*
```
What you'll want to do is change `iface vmbr0 inet static` to `iface vmbr0 inet dhcp`, and then comment out a few lines (see below) that give the box the static IP:
```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet dhcp
#    address 10.0.20.210/24
#    gateway 10.0.20.1
    bridge-ports eno1
#    bridge-stp off
#    bridge-fd 0

iface wlp0s20f3 inet manual
```
Then reboot and DHCP should get picked up this time!
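If you'd rather not do a full reboot, reloading the network config may also pick up the change (Proxmox ships `ifupdown2`, which provides `ifreload`); a quick sketch:

```bash
# Reload all interfaces from /etc/network/interfaces
ifreload -a
# Confirm vmbr0 pulled a DHCP lease
ip addr show vmbr0
```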
I just learned that newer versions of Proxmox (9.0 and greater) seem to have "broken" the ability to set DHCP the way I describe above. I became aware of this when I ran `ifreload -a` and got this error:
```
error: vmbr0; cmd '/sbin/dhclient -pf /run/dhclient.vmbr0.pid -lf /var/lib/dhcp/dhclient.vmbr0.leases vmbr0' failed ([Error 2] no such file or directory: '/sbin/dhclient')
```
After trying a lot of things that didn't work, I was able to resolve this by installing `dhclient`:

```bash
apt update
apt install isc-dhcp-client -y
```
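To sanity-check the fix, you can confirm the binary the error complained about now exists and then re-run the reload; a quick sketch:

```bash
# The error above referenced /sbin/dhclient, so make sure it's there now
ls -l /sbin/dhclient
# Then reload the network config again
ifreload -a
```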
## Install QEMU guest agent (Windows)
- Mount the virtio-win.iso and install `D:\guest-agent\qemu-ga-x86_64.msi`
- Shut down the system.
- On the Proxmox system, head to Options > QEMU Guest Agent > Use QEMU Guest Agent (tick the box)
- Power up the VM
- Open `devmgmt.msc`, look for PCI Simple Communications Controller, right-click it, then click Update Driver
- Select Browse my computer for drivers and feed it the path of `D:\vioserial\YOUR-WINDOWS-VERSION\amd64\`
- Per Proxmox docs, check if the service is running in PowerShell: `Get-Service QEMU-GA`
- Additionally, from the Proxmox command line you can run `qm agent xxx ping` and make sure an empty prompt comes back (if you get `QEMU guest agent is not running`, start at step 1 and double-check everything).
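If you want a little more assurance than an empty `ping`, the agent also answers other query commands from the host; a quick sketch, assuming VM ID 100:

```bash
# Empty output means the agent responded
qm agent 100 ping
# Should return JSON describing the guest OS if the agent is healthy
qm agent 100 get-osinfo
```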
## Install QEMU guest agent (Linux)
Check out these instructions, but basically:

```bash
sudo apt-get install qemu-guest-agent -y
sudo systemctl start qemu-guest-agent
sudo systemctl enable qemu-guest-agent
```
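A quick sanity check inside the guest that the service actually came up:

```bash
# Should report "active (running)"
sudo systemctl status qemu-guest-agent
```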
## Network connectivity issues for Linux VMs tagged with VLANs
I had a super frustrating problem where, after moving Linux VMs from one host to another, they weren't pulling DHCP addresses, couldn't route traffic, or both. This post and ChatGPT nudged me towards making a `/etc/netplan/01-netcfg.yaml` file that looks something like this:
```yaml
network:
  ethernets:
    ens18:
      dhcp4: true
    ens19:
      dhcp4: true
  version: 2
```
Once I did that and ran `sudo netplan apply`, the DHCP addresses got pulled and routing worked!
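A couple of quick checks inside the VM to confirm the fix took:

```bash
# Both interfaces should show DHCP-assigned addresses
ip -br addr show
# And there should be a default route
ip route show
```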
## QM command cheat sheet
### Unlock a system (that might be locked from snapshot)

```bash
qm unlock XXX
```
### List machines

```bash
qm list
```
### Check whether a particular machine is booted

```bash
qm status xxx
```
### Get full info on one VM (like whether autoboot is set, hardware info, etc.)

```bash
qm config xxx
```
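If you only care about a couple of settings, you can filter the output; a quick sketch, assuming VM ID 100:

```bash
# Pull just the autoboot, memory, and NIC lines
qm config 100 | grep -E 'onboot|memory|net0'
```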
### Enable autoboot (if `onboot` is not `1` in the output of `qm config X`)

```bash
qm set X -onboot 1
```
### Change boot order

```bash
qm set X --startup order=2,up=60
```
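Here `order` is the VM's position in the startup sequence and `up` is how many seconds to wait after starting it before moving on to the next VM. A hypothetical two-VM sequence (VM IDs 101 and 102 are made up):

```bash
# Boot this VM first, then wait 30 seconds...
qm set 101 --startup order=1,up=30
# ...before booting this VM second, then wait 60 seconds
qm set 102 --startup order=2,up=60
```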
### Stop a VM

```bash
qm stop X
```
### Take a snapshot

```bash
qm snapshot xxx NAME-OF-SNAP --description "Description of my snapshot"
```
### List snapshots

```bash
qm listsnapshot xxx
```
### Restore a snapshot

```bash
qm rollback xxx NAME-OF-SNAP
```
### Start a VM

```bash
qm start X
```
### Bring network link up or down

```bash
qm set VMID -net0 virtio=THE:MAC:ADDRESS:OF:THE:PC,bridge=vmbr1,link_down=1,tag=10
```
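Note that `link_down=1` takes the link down; to bring it back up, set the same line with `link_down=0`:

```bash
# Same NIC definition, but with the link re-enabled
qm set VMID -net0 virtio=THE:MAC:ADDRESS:OF:THE:PC,bridge=vmbr1,link_down=0,tag=10
```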
### Run a command directly against a specific VM

```bash
qm guest exec xxx -- cmd /c "dir c:\users\administrator\desktop"
```
### Another example of downloading and then running a PowerShell script

```bash
qm guest exec 100 -- cmd /c "powershell invoke-webrequest https://somesite/script.ps1 -outfile c:\users\administrator\desktop\script.ps1"
qm guest exec 100 -- cmd /c "powershell.exe -ExecutionPolicy Bypass -File C:\Users\administrator\Desktop\script.ps1"
```
### Yet another example of installing a NIC driver on a fresh Windows build

```bash
qm guest exec 100 -- cmd /c "pnputil /add-driver d:\NetKVM\w11\amd64\netkvm.inf /install"
```
### Yet ANOTHER example of installing Chrome

```bash
qm guest exec 100 -- cmd /c "powershell Invoke-WebRequest 'https://dl.google.com/chrome/install/chrome_installer.exe' -OutFile c:\users\ttadmin\desktop\chrome_installer.exe && c:\users\ttadmin\desktop\chrome_installer.exe /silent /install"
```
### Then tailing the last 10 lines of a log file

```bash
qm guest exec 100 -- cmd /c "powershell.exe -ExecutionPolicy Bypass -Command Get-Content -Path 'C:\\some\\path\\install.log' -Tail 10"
```
### Check if the QEMU guest agent is running

```bash
qm agent xxx ping
```
### Delete/destroy a VM

```bash
qm destroy xxx
```
### Back up a VM and move it to another Proxmox node in a cluster

```bash
vzdump <vmid> --storage <storage_name> --mode snapshot
```

Then move it to another node:

```bash
scp /path/to/backup/vzdump-qemu-<vmid>.vma root@<destination_node_ip>:/var/lib/vz/dump/
```

Then restore:

```bash
qmrestore /var/lib/vz/dump/backup.vma <vmid> --storage <target-storage>
```
### Example

This will dump a snapshot of VM 121 to a file path like `/var/lib/vz/dump/vzdump-qemu-121-2024_09_13-10_25_00.vma`:

```bash
vzdump 121 --storage local --mode snapshot
```

Now move it to another node:

```bash
scp /var/lib/vz/dump/vzdump-qemu-121-2024_09_13-10_25_00.vma root@target.host.for.VM:/var/lib/vz/dump/
```

Now restore:

```bash
qmrestore /var/lib/vz/dump/vzdump-qemu-121-2024_09_13-10_25_00.vma 123 --storage local-lvm
```

In the example above, `123` is the VM ID you want to assign to the imported VM, and `local-lvm` is the storage pool to restore to.
### Resize a disk

Shut down the affected VM, then check the details of the VM you want to resize:

```bash
qm config xxx
```

This will tell you what kind of disk device (like `virtio0`) you have. To do the resize and grow the disk by 20G:

```bash
qm resize 100 virtio0 +20G
```
### Resize a disk at the Linux command line

Once the resizing is done, you can do the extending portion from the Linux command line. Using the latest Ubuntu OS as an example, here's what I did. First, find the partition that needs growing:

```bash
sudo fdisk -l
```

Then grow it (in this example the target is `/dev/vda1`; if `growpart` is missing, it ships in Ubuntu's `cloud-guest-utils` package):

```bash
sudo growpart /dev/vda 1
```

Finish the resizing:

```bash
sudo resize2fs /dev/vda1
```

Done!
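To double-check that the filesystem actually sees the new space:

```bash
# The size column should reflect the extra 20G
df -h /
```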
### Add memory (RAM) to a system

For example, to upgrade RAM to 8192 MB:

```bash
# Check how much memory is free
free -h
# Stop the VM that needs a RAM facelift
qm stop (VMID)
# Set to 8 gigs
qm set (VMID) -memory 8192
# Set to 16 gigs
qm set (VMID) -memory 16384
```
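And to verify the change took (assuming VM ID 100):

```bash
# Should now show: memory: 8192 (or 16384)
qm config 100 | grep memory
```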
### Reset a Windows VM password

```bash
qm guest passwd <vmid> <username>
```

So for example:

```bash
qm guest passwd 100 administrator
```
## Troubleshooting

Here are some issues/fixes I've seen in general when working with Proxmox.
### Network issues
I have now run into several instances where my pentest boxes suddenly appear to go offline - both VMs show down at the exact same date/timestamp. My assumption was that a hardware issue was creeping up. But on a recent pentest I remoted back into the Windows VM after the suspected crash, and found that Event Viewer showed the VM humming away but throwing many errors about DNS lookup and connectivity failures.

Fast-forward through a lot of Proxmox forum searching and ChatGPT chatting, and I found out that my NUC's NIC hardware and driver version seem to have a common issue where the card seizes up and basically the whole NUC needs a reboot to get back online.
This is how I figured out there was a problem:
```bash
# journalctl logs around the time in question
journalctl --since "2025-09-23 17:15:00" --until "2025-09-23 17:30:00" -p 3..6
```
Below are a temporary fix and a permanent fix that might help:
#### Temporary fix
Disable offloading features (runtime only; the change only lasts until reboot):

```bash
ethtool -K eno1 tso off gso off gro off
```

Make sure the change "stuck":

```bash
ethtool -k eno1 | egrep 'tso|gso|gro'
```
Your output will look something like:
```
tx-gso-robust: off [fixed]
tx-gso-partial: off [fixed]
tx-gso-list: off [fixed]
rx-gro-hw: off [fixed]
rx-gro-list: off
rx-udp-gro-forwarding: off
```
Resume using your VMs as normal, and then you can also watch the logs "live" on the NUC to see if issues pop up again:
```bash
journalctl -kf | grep -i e1000e
```
#### Permanent fix
At the time of this writing I have not tried this yet, but ChatGPT thinks that if the temporary fix does it for you, you can edit your `/etc/network/interfaces` to include the `tso`/offload fix. For example:
```
auto lo
iface lo inet loopback

iface eno1 inet manual
    post-up /sbin/ethtool -K $IFACE tso off gso off gro off
    post-down /sbin/ethtool -K $IFACE tso on gso on gro on

auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

iface wlp0s20f3 inet manual

source /etc/network/interfaces.d/*
```