A timeline infographic showing the history of Proxmox VE, an open-source virtualization platform. The timeline starts in 2008 with the 1st public release of Proxmox VE and ends in 2018 with Proxmox VE winning the Open Minds Award.

Roadmap


  • Enhancement and stabilization of the integrated Software-Defined Network solution
    • Foundation and CLI integration released with Proxmox VE 7.3
    • released with Proxmox VE 8.0
    • Stabilize VLAN and source NAT management as first parts of bringing Proxmox VE SDN out of tech preview.
  • Project "Cattle and Pets"
    • Improve the user and virtual machine management experience on big setups
    • Add cluster-wide update and status control center
    • Assist on Ceph upgrades with semi-automated restarts of services and OSDs
    • released with Proxmox VE 8.1
  • Text-based installer UI
    • initial version released with Proxmox VE 8.0
    • improved version released with Proxmox VE 8.1
  • Cluster Resource Scheduling improvements
    • Short/Mid-Term:
      • released with Proxmox VE 7.4
      • Account for non-HA virtual guests
    • Mid/Long-Term:
      • Add Dynamic-Load scheduling mode
      • Add option to schedule non-HA virtual guests too
  • Stabilizing the Software-Defined Network stack
    • done with Proxmox VE 8.1
    • Implement DHCP-based IP address management - tech-preview with Proxmox VE 8.1
    • Improve and polish the user interface experience

Release History

See also Announcement forum

Proxmox VE 8.1

Released 23 November 2023: See Downloads (updated ISO release 2 with current package set, including updated kernel and ZFS 2.2.2, released on 7 February 2024).

  • Based on Debian Bookworm (12.2)
  • Latest 6.5 Kernel as new stable default
  • QEMU 8.1.2
  • LXC 5.0.2
  • ZFS 2.2.0 with stable fixes backported (2.2.2 with ISO refresh)
  • Ceph Reef 18.2.0
  • Ceph Quincy 17.2.7
Highlights
  • Secure Boot support.

Proxmox VE now includes a signed shim bootloader trusted by most hardware's UEFI implementations. All necessary components of the boot chain are available in variants signed by Proxmox. The Proxmox VE installer can now be run in environments where Secure Boot is required and enabled, and the resulting installation can boot in such environments.

Existing Proxmox VE installations can be switched over to Secure Boot without reinstallation by executing some manual steps, see the documentation for details. How to use custom secure boot keys has been documented in the Secure Boot Setup wiki. For using DKMS modules with secure boot see the reference documentation.

  • The core of Proxmox VE's Software-Defined Network stack moved from experimental to supported and includes new features.

Proxmox VE SDN allows fine-grained control of virtual guest networks at the datacenter level.

The new automatic DHCP IP address management (IPAM) plugin can be used to transparently assign IPs to virtual guests in Simple zones. (tech-preview)

The web UI now allows inspecting and editing DHCP leases managed by the built-in IPAM plugin.

  • New flexible notification system.

Send notifications not only via the local Postfix MTA, but also via authenticated SMTP or to Gotify instances.

Flexible notification routing with matcher-based rules to decide which targets receive notifications about which events.

  • Proxmox Server Solution GmbH, the company behind Proxmox VE development and infrastructure, was assigned an official Organizationally Unique Identifier (OUI) BC:24:11 from the IEEE to use as default MAC prefix for virtual guests.

This OUI can be used for virtual guests inside private networks by all users and is set as new default MAC-Address prefix in the datacenter options.

  • Ceph Reef is now supported and the default for new installations.

Reworked defaults bring improved performance and increased reading speed out of the box, with less tuning required.


Changelog Overview

Enhancements in the web interface (GUI)

  • Improvements to bulk actions:
    • Add a new "Bulk Suspend" action to suspend a selection of guests in one action.
    • Add a new section above the guest list for configuring guest filters and add a button for clearing filters.
    • Allow filtering guests by their tags.
    • Reorder fields and drop the obvious warning about local storage to improve screen-space utilization.
    • Reword the message displayed for bulk actions in the task log to "Bulk Start/Stop/Migrate". The message shown previously was "Start/Stop/Migrate all" and could be misleading in case not all guests were affected (issue 2336).
    • The "Bulk Migrate" action is now hidden on standalone nodes, as there is no valid migration target in that case.
  • Improvements to the node summary panel:
    • The summary now indicates whether the node was booted in legacy (BIOS) mode, EFI mode, or EFI mode with Secure Boot enabled.
    • The currently running kernel is now reported more compactly by indicating only the version and the build date.
  • Allow to automatically decompress an ISO file when downloading from a URL to a storage. The decompression algorithm can be set in the GUI (issue 4849).
  • Allow moving VMs and containers from one pool to a different pool in one operation.
  • Avoid needlessly reloading the GUI after ordering a certificate via ACME for a different cluster node.
  • The permission editor now also shows the ACL paths for notifications and PCI/USB mappings.
  • The resource tree now displays the usage in percent when hovering over a storage.
  • If the configured tree shape for tags is not "Full", the resource tree now displays a tooltip with the tag name when hovering over the configured shape.
  • Ensure the SPICE config is downloaded with the correct file extension on Safari to avoid problems with macOS application association (issue 4947)
  • Fix an issue where the "Migrate" button stayed disabled even if selecting a valid target node.
  • Fix a bug where the backup job editor window would add an invalid entry for VMID 0 when using selection mode "all".
  • Improve error message when creating a VM with an invalid hostname: Clarify that a valid hostname, not a valid DNS name, is expected (issue 4874).
  • When uploading a subscription, ignore surrounding whitespace in the subscription key to avoid confusing errors.
  • Improve the focus handling when editing tags to allow tabbing through editable tag fields.
  • Allow adding tags directly when creating VMs and containers.
  • Increase height of the VM and container creation wizard to obtain a 4:3 ratio.
  • When creating an IP/CIDR inside an IPSet, the GUI now requires that an IP/CIDR is provided. Previously, the GUI accepted an empty field, but the API threw an error.
  • Update external links to proxmox.com that changed during the website redesign.
  • Fix an issue where the OK button would stay disabled when editing an ACME DNS challenge plugin (issue 4531).
  • Fix an issue where clicking "Reset" in the zpool creation window could cause an error when re-opening the window (issue 4951).
  • Fix an issue where users could write notes with links containing JavaScript code. This JavaScript code would be executed when a (different) user clicked on such a link.
  • HTML-encode API results before rendering as additional hardening against XSS.
  • Improved translations, among others:
    • Croatian (NEW!)
    • Georgian (NEW!)
    • Arabic
    • Catalan
    • German
    • Italian
    • Polish
    • Simplified Chinese
    • Traditional Chinese
    • Ukrainian
    • Several remaining occurrences of the GiB unit in the GUI can now be translated (issue 4551).

Virtual machines (KVM/QEMU)

  • New QEMU version 8.1.
  • Add clipboard support to the VNC console in the GUI. For now the feature cannot be enabled in the GUI and has to be manually enabled via API or CLI. After installing the SPICE guest tools, text can be copied from/to the guest clipboard using the noVNC clipboard button.
  • When creating a Windows VM, allow adding a second CD/DVD drive directly in the creation wizard. This makes it easier to add the Windows VirtIO drivers ISO before starting the VM.
  • Remove the 10-minute timeout for allocating VM disks when restoring from backup, as this timeout may be exceeded if disks are large or network storage is involved (issue 2817).
  • Log a warning when starting a VM with a deprecated machine version.
  • Fix issues where shutdown and reboot commands would time out (instead of failing immediately) on ACPI-suspended VMs.
  • Enabling or disabling CPU hotplug for a running VM did not work reliably and is not allowed anymore.
  • Avoid leaving potentially large amounts of memory assigned to the QEMU process after backup.
  • Fix an issue where heavy network traffic or connection issues during a backup to Proxmox Backup Server could cause an unsolicited write to the first sector of a backed-up SATA disk, which usually contains the boot-sector (issue 2874).
  • Fix an issue where a race condition could cause a VM crash during backup if iothread is enabled.
  • Fix an issue where each pause and resume operation (for example when taking a snapshot) would increase the number of open file descriptors of the QEMU process, which could eventually lead to crashes.
  • Fix an issue where starting a VM with machine type q35 and multiple IDE drives would fail.
  • cloud-init: Fix issues where non-root users could not regenerate the cloud-init drive or set the ciupgrade option.
  • Start VMs using PCI passthrough with a higher timeout that is calculated from the configured memory. Previously, the timeout was reported to be too short when using PCI passthrough.
  • Fix an issue where qmeventd failed to retrieve VMIDs from processes on hybrid cgroup systems and logged errors to the journal.
  • Fix an issue where remote migration would fail for certain combinations of source/target storage, for example from qcow2 on directory to LVM-thin.
  • Fix an issue where backup of a VM template with a TPM would fail (issue 3963).
  • Fix an issue where the VNC proxy would fail if the LC_PVE_TICKET was not set (issue 4522).
  • Backports of several upstream kernel patches:
    • Fix an issue where VMs with a restricted CPU type could get stuck after live-migration from a host with kernel 5.15 to a host with kernel 6.2.
    • Fix an issue where VMs could get stuck after several days of uptime if KSM, ballooning, or both, were enabled.
    • The FLUSHBYASID flag is now exposed to nested VMs when running on an AMD CPU. This fixes an issue where some hypervisors running in a VM would fail to start nested VMs.
    • Fix an issue with recovering potential NX huge pages that resulted in a warning logged to the journal (issue 4833).
    • Fix an issue where only one NVMe device would be recognized even though multiple are present (issue 4770).
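The new VNC clipboard support mentioned above currently has to be enabled via the API or CLI. As a hedged sketch of the CLI path (VMID 100 and the qxl display type are placeholders; verify the exact option syntax against the qm(1) man page):

```shell
# Enable the VNC clipboard for VM 100 by setting it on the display hardware.
# The 'clipboard=vnc' sub-option alongside the vga type is an assumption
# based on the release notes; check 'man qm' for the authoritative syntax.
qm set 100 --vga qxl,clipboard=vnc

# Verify the resulting configuration entry.
qm config 100 | grep '^vga'
```

After installing the SPICE guest tools inside the guest, the noVNC clipboard button can then be used to copy text in both directions.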
Containers (LXC)

  • Support device passthrough for containers. The new dev0/dev1/... options take the path of host device. Optionally, owner and permission settings for the device node inside the container can be given. For now, the option cannot be set in the GUI and has to be manually set via API or CLI.
  • Allow specifying multiple SSH keys in the container creation wizard (issue 4758).
  • Show privileged status as a separate row in the guest status view in the GUI.
  • Show distribution logo and name in the guest status view in the GUI.
  • Fix an issue where network would fail to come up for Fedora containers.
  • Add an API endpoint /nodes/{node}/lxc/{vmid}/interfaces for querying network interfaces of a running container.
  • Improve architecture detection for NixOS containers, which would previously produce a warning and default to x86_64 in case /bin/sh did not (yet) exist in the container.
  • The pct status command does not report guest CPU usage anymore, as there is currently no fast way to measure it (issue 4765).
  • Restoring a container from a PBS backup now honors the ignore-unpack-errors flag (issue 3460).
  • Fix an issue where Fedora containers would not have a container-getty on first boot.
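The new container device passthrough and interface query features above are API/CLI-only for now. A hedged sketch (VMID 100, the device path, and the ownership sub-options are illustrative assumptions; verify against the pct(1) man page):

```shell
# Pass a host device through to container 100 via the new dev0 option.
# The gid/mode sub-options for the device node inside the container are
# assumptions based on the description above; check 'man pct' to confirm.
pct set 100 -dev0 /dev/dri/renderD128,gid=104,mode=0660

# Query the network interfaces of the running container through the new
# API endpoint, using pvesh as the CLI wrapper around the REST API.
pvesh get /nodes/$(hostname)/lxc/100/interfaces
```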
General improvements for virtual guests
  • Show progress of offline disk migration in the migration task log by use of dd's status=progress argument (issue 3004).
  • Proxmox VE now has an officially assigned OUI from the IEEE, BC:24:11, to be used as product-specific MAC prefix. This is now used by default instead of assigning purely random MACs (issue 4764).
HA Manager
  • Notification for HA events, like fencing, are now configurable via the new modular notification system.
  • An issue with the target selection during service recovery, where a fenced node was selected as target, was fixed (issue 4984).
Improved management for Proxmox VE clusters
  • New flexible notification system.

Allows sending notifications to different targets. The local Postfix MTA, previously the sole notification option, is now one of several target types available.

Two new target types include: smtp allowing direct notification emails via authenticated SMTP, and gotify, which sends notifications to a Gotify instance.

Flexible notification routing is possible through matcher-based rules that determine which targets receive notifications for specific events.

Match rules can select events based on their severity, time of occurrence, or event-specific metadata fields (such as the event type). Multiple rules can be combined to implement more complex routing scenarios.

Email notifications now contain an Auto-Submitted header to avoid triggering automated replies (issue 4162)
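As an illustration of matcher-based routing, a hypothetical notification configuration might look roughly like the following; the section and field names are assumptions based on the description above, so consult the notification chapter of the reference documentation for the exact syntax:

```
# Sketch: an authenticated SMTP target plus a matcher that routes only
# error-severity events to it (names and keys are illustrative).
smtp: example-smtp
    server mail.example.com
    from-address pve@example.com

matcher: error-events
    match-severity error
    target example-smtp
```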

  • Name resolution to find an IP for a node's name now looks at all IPs associated with the name, only excluding loopback addresses. Additionally, a better warning is logged in case of a failed lookup.
  • pvecm updatecerts, which is used to ensure a consistent state of the certificates in a cluster, was reworked for increased robustness:

Files not being stored in the cluster filesystem are now created earlier.

The call now waits for the node to join the quorate partition of the cluster instead of failing. This is especially helpful during node-bootup, when running before starting pveproxy.service.

The error message in case the call fails due to missing quorum was reworded for better clarity.

  • The MAC addresses of the guests under SDN are now cached in the cluster filesystem for improved lookup speed in a cluster.
Backup/Restore
  • Backups and backup jobs can now be configured with a notification mode for a smooth migration to the new notification system.

The legacy-sendmail mode replicates the previous behavior of sending an email via the local Postfix MTA if an email is configured.

The notification-system mode sends notifications exclusively using the new notification system.

The default mode auto behaves like legacy-sendmail if an email address is configured, and like notification-system if no email address is configured.

Thus, existing backup jobs without a configured email address will default to sending notification emails to the root user after the upgrade to Proxmox VE 8.1.

  • Allow setting the pbs-entries-max parameter. Raising it above the default can help prevent failing container backups when a directory contains a huge number of files (issue 3069).
  • Improvements to the vma CLI tool that handles VMA backup files:
    • The vma extract command now optionally takes a filter to only extract specific disks from the backup (issue 1534).
    • Fix an issue where the vma create command could not write to tmpfs (issue 4710).
  • Improvements to file restore:
    • Fix an issue where the settings for ZFS ARC minimum and maximum were not properly set for the temporary file-restore VM.
    • Fix an issue where debug log messages were not printed even though the PBS_QEMU_DEBUG environment variable was set.
  • Fix an issue with backups of diskless VMs to Proxmox Backup Server: Even though encryption was enabled, such backups would not be encrypted. Since the backup contained no disks, this did not reveal any VM data, but the VM configuration was stored in plaintext (issue 4822).
  • File restore now allows downloading .tar.zst archives as an alternative to .zip archives.
  • Improved handling of backups with master key:
    • Abort the backup if the running QEMU binary does not support master keys, instead of just printing a warning. Master keys are supported in QEMU builds of Proxmox VE since version 6.4.
    • If no encryption key is configured, the backup task will explicitly warn that the backup will be unencrypted.
    • The backup log now prints only one message that encryption is enabled, instead of previously two messages.
  • Allow configuring whether restore should overwrite existing symlinks or hard links when directly invoking proxmox-backup-client restore (issue 4761).
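As a sketch, the pbs-entries-max parameter mentioned above can be raised node-wide in /etc/vzdump.conf; the value below is an arbitrary example, and the exact option spelling should be verified against the vzdump.conf(5) man page:

```
# /etc/vzdump.conf -- raise the PBS entries cache for container backups
# of directories with very many files (value is an arbitrary example)
performance: pbs-entries-max=2097152
```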
Storage
  • Improvements to the iSCSI storage backend:
    • Try to log into all discovered portals for a target, instead of just the single portal initially configured for the storage. This way, the storage can now become available in a multipath setup if at least one portal is online, even if the single configured portal is offline.
    • The backend is now usable immediately after installing Open-iSCSI. Previously, some services needed to be restarted first.
  • Fix an issue where a replication job could not be run or deleted if it referred to a storage that does not exist anymore.
  • SMB/CIFS: Fix connection check in case an empty domain is provided.
  • The BTRFS plugin received a fix for creating base templates when falling back to the standard directory variant.
Ceph
  • Support installing Ceph 18.2 Reef and make it the default release for new setups.
  • Allow creating multiple OSDs per physical device via API and CLI, and display such setups properly in the GUI. Multiple OSDs on one device can be useful when using fast NVMe drives that would be bottle-necked by a single OSD service (issue 4631).
  • When creating a pool, read the default values for size/min_size from the Ceph configuration instead of using the hard-coded defaults 3/2 (issue 2515). There are use cases where different values for size/min_size make sense, for example 4/2 if a cluster spans two rooms.
  • The pveceph install command now asks the user to confirm the Ceph version to be installed (issue 4364).
  • Improve discoverability of Ceph warnings by providing a tabular view and a button to copy warning details.
  • Report OSD memory usage more accurately by using the Proportional Set Size (PSS) of the OSD process. Previously, memory usage was read from the OSD service and thus included the page cache, leading to extremely high values shown in the GUI.
  • Use snake_case when setting options in Ceph config files to ensure consistency within that file (issue 4808).
  • Mark global pg_bits setting as deprecated and make it a no-op. The setting has been deprecated since Ceph 13.
  • Improve reporting of cluster health:
    • Replace "Error" category for PG states with "Warning" and "Critical" categories to allow more fine-grained assessment of the cluster state.
    • Rename "Working" state to "Busy" state to better convey its meaning.
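The multiple-OSDs-per-device feature above can be used from the CLI; a hedged sketch (the device path is a placeholder, and the parameter name is an assumption to be checked against the pveceph man page):

```shell
# Create four OSD services on a single fast NVMe drive, so a single OSD
# process does not become the bottleneck for the device.
pveceph osd create /dev/nvme0n1 --osds-per-device 4
```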
Access control
  • Support nested pools up to a nesting depth of 3 levels for greater flexibility in structuring VMs and containers (issue 1148).

Pool names can now contain at most two slashes (allowing to structure them as parent/child/grandchild).

Permissions are inherited along the path according to the usual inheritance rules.

  • Improvements to LDAP/AD realms:
    • When adding/updating an LDAP/AD realm, there is now the option to directly check if the bind works, instead of having to wait for the first sync. This check is enabled by default in the GUI and can be disabled in the advanced options if needed.
    • Forbid specifying a Bind DN without a password in the GUI, which is already forbidden by the API.
    • Expose the mode option in the GUI that allows switching between LDAP, LDAPS and LDAP via STARTTLS. This option was already supported by the backend and succeeds the secure option which allowed switching between LDAP and LDAPS only.
    • Fix an issue with enforced TFA where certain sync settings would cause the TFA restriction to not be enforced.
    • It is now possible to update only the password field for the bind-user of an LDAP realm; this failed previously.
    • Allow setting the case-sensitive option of AD realms, which was previously only editable via CLI, in the GUI.
  • Existing-but-disabled TFA factors can no longer circumvent realm-mandated TFA.
  • The list of SDN-related ACL paths now reflects all SDN objects, ensuring that there are no settings which remain root@pam only.
  • A mistyped entry of pools instead of pool in the default set of ACL paths was fixed.
  • Unlocking a user now also resets the TFA failure count.
Firewall & Software-Defined Networking
  • The core of Proxmox VE's Software-Defined Network stack has been lifted from experimental to supported.
  • New DHCP plugin for SDN (tech preview).

Enabling DHCP for a zone will start a DHCP server that can automatically assign IP addresses to associated virtual guests (VMs and containers).

Currently, only Simple zones are supported, and dnsmasq is the only supported DHCP server.

Each subnet of a Simple zone can now be configured with DHCP ranges.

When a virtual guest associated to the zone starts, the DHCP plugin queries the zone's IPAM for an IP address and offers it to the virtual guest.

If the built-in Proxmox VE IPAM is used, active DHCP leases can be viewed and edited conveniently on the web UI.

  • IS-IS was added as a further SDN controller, next to EVPN and BGP.
  • The interfaces section of the FRR configuration is now parsed in order to support multiple underlay networks (like IS-IS).
  • MAC learning on SDN bridges can now selectively be disabled for individual plugins. This is implemented for the EVPN plugin.
  • A warning is logged if the main network configuration (/etc/network/interfaces) does not source the SDN controlled configuration (/etc/network/interfaces.d/sdn), because the SDN configuration would be ignored in that case.
  • The error reporting for problems with vnet generation was improved, by pointing to the relevant task log.
  • The firewall log can now be also displayed for a specific timespan instead of showing the live-view (issue 4442).
  • Fix an issue where scoped alias resolution would fail with an error.
  • Enabling VLAN-awareness for an EVPN zone is unsupported and now fails instead of just printing a warning (issue 4917).
  • Fix an issue where an empty subnet could not be deleted if it has a gateway defined.
  • The IPAM selector, which is a required choice, is not hidden behind the Advanced checkbox in the UI anymore.
  • The identifying CIDR for a vnet is now named Subnet to improve clarity.
  • A systemd.link(5) configuration is now shipped to keep bridges up even if no port is connected, and to prevent a random MAC address from being assigned to bridges or bond interfaces.
  • ethtool is now a hard dependency of ifupdown2, matching the common need to disable offloading features of certain NICs.
  • Prevent a crash in ifupdown2 caused by an error in a third-party plugin in /etc/network/ifup.d/.
  • The accept_ra and autoconf sysctl settings are now also applied for bridge interfaces.
  • ifupdown2 now correctly recognizes when remote IPs for vxlan are configured by external sources and does not remove them on reconfiguration.
Improved management of Proxmox VE nodes
  • Secure Boot support.

Proxmox VE now ships a shim bootloader signed by a CA trusted by most hardware's UEFI implementation. In addition, it ships variants of the GRUB bootloader, MOK utilities and kernel images signed by Proxmox and trusted by the shim bootloader.

New installations support Secure Boot out of the box if it is enabled.

Existing installations can be adapted to Secure Boot by installing optional packages, and possibly reformatting and re-initializing the ESP(s), without the need for a complete reinstallation. See the wiki article for more details.

  • The kernel shipped by Proxmox is shared for all products. This is now reflected in the renaming from pve-kernel and pve-headers to proxmox-kernel and proxmox-headers respectively in all relevant packages.
  • The new proxmox-default-kernel and proxmox-default-headers meta-packages will depend on the currently recommended kernel series.
  • Avoid logging benign but confusing warnings about a segfault in pverados.
  • Many edge-cases encountered during the upgrade from PVE 7.4 to 8 by our user-base are now detected and warned about in the improved pve7to8 checks:
    • Warn if DKMS modules are detected, as many of them do not upgrade smoothly to the newer kernel versions in PVE 8.
    • Warn if the PVE 7 system does not have the correct grub meta-package installed, which is required to ensure that the installed bootloader is actually upgraded to the newest version.
    • The check for old cgroupv1 containers was adapted to not cause false positives on current containers (for example Fedora 38).
  • Support adding custom ACME-enabled CAs that require authentication through External Account Binding (EAB) on the command line (issue 4497).
  • Using the Console/Shell on a PVE node is now possible for all users with the appropriate permissions (Sys.Console). The restriction to the pam realm was removed. Users will still need to log in as a system user on the shell, though.
  • Since the Proxmox repositories now support fetching changelogs directly, the changelogs for new package versions shown in the UI are all gathered with apt changelog.
  • The pvesh debug tool now also supports yielding output for streaming API calls, like for example the syslog.
  • The documentation on firmware updates provided by the operating system has been extended and revised, helping administrators to identify if their setup is optimal.

Installation ISO

  • The ISO is able to run on Secure Boot enabled machines.
  • The text-based UI received significant improvements based on the feedback from its first release in PVE 8.0.
  • The current link-state of each network interface is now displayed in the network configuration view, helping in identifying the correct NIC for the management interface (issue 4869).
  • If provided by the DHCP server, the hostname field is pre-filled with the information from the lease.
  • The arc_max parameter for installations on ZFS can now be set in the Advanced Options. If not explicitly set by the user, it is set to a value targeting 10% of system memory instead of 50%, which is a better fit for a virtualization workload (issue 4829).
  • The correct meta-package of grub is now installed based on the boot mode (grub-pc or grub-efi-amd64). This ensures that the bootloader on disk gets updated when there is an upgrade for the grub package.
  • The text-based UI is now also available over a serial console, for headless systems with a serial port.
  • /var/lib/vz backing the local storage is now created as separate dataset for installations on ZFS (issue 1410).
  • The root dataset on ZFS installations now uses acltype=posixacl in line with upstream's recommendation.
  • Kernel parameters passed on the commandline during install are now also set in the target system (issue 4747).
  • Fix the warning that is shown in case the address family (IPv4, IPv6) of the host IP and DNS server do not match.
  • The text-based UI now sets the correct disk-size for the selected disk, instead of limiting the installation to the size of the first disk in the list (issue 4856).
  • For better UX, the text-based UI now also displays a count-down before automatically rebooting.
  • The screensaver in the graphical installer is now disabled.
  • The graphical installer now displays the units used for disk-based options.
  • The kernel command-line parameter vga=788 is now set for both the graphical debug and all text-based UI installation options. This improves compatibility of the installer with certain hardware combinations.
Other Notable changes
  • Existing backup jobs without a configured email address did not send email notifications before the upgrade, but will default to sending email notifications to the root user via the new notification system after the upgrade to Proxmox VE 8.1.

In order to disable notification emails, either change the job's notification mode to legacy-sendmail or configure the notification system to ignore backup job notifications.
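As an illustration only, pinning a job to the legacy behavior might look roughly like the following fragment of a backup job definition; the file location, job name, and surrounding options are assumptions, and the exact key spelling should be checked against the reference documentation:

```
# /etc/pve/jobs.cfg -- sketch of a backup job pinned to the legacy
# sendmail behavior (other job options elided)
vzdump: backup-daily
    notification-mode legacy-sendmail
```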

Known Issues & Breaking Changes
Kernel

  • With Kernel 6.5 and ZFS it can happen that the host hits a kernel bug when starting a VM with hugepages, and the host must be rebooted. 

More information can be found in the forum and in the bug reports for ZFS and Linux kernel.

  • Some users with Intel Wi-Fi cards, like the AX201 model, reported that initialization of the card failed with Linux kernel 6.5.

This is still being investigated. You should avoid booting into the new kernel if you have no physical access to your server and an Intel Wi-Fi device is used as its only connection. See the documentation for how to pin a kernel version.
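Pinning a kernel version as recommended can be sketched with proxmox-boot-tool; the version string below is an example from the 6.2 series and must be replaced with one actually installed on the host:

```shell
# Pin the boot kernel to a known-good 6.2 version so the host does not
# boot into the 6.5 series (version string is an example).
proxmox-boot-tool kernel pin 6.2.16-19-pve

# List installed kernels and verify the pin took effect.
proxmox-boot-tool kernel list
```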

  • Some SAS2008 controllers need a workaround to get detected since kernel 6.2, see the forum thread for details.
  • For certain Linux VMs with OVMF and guest kernels >= 6.5, there might be issues with SCSI disk hot(un)plug. This is a more general issue that is currently being investigated and will be fixed upstream in the Linux kernel.
  • The TPM (Trusted Platform Module) hardware random number generator (RNG) is now disabled on all AMD systems equipped with a firmware-based TPM (fTPM) device. This change was implemented due to such RNGs causing stutters in many systems. Affected systems should switch the RNG source from /dev/hwrng to an alternative, like /dev/urandom.
  • Reference: kernel commit "tpm: Disable RNG for all AMD fTPMs"
  • Some Dell models, which appear to include all those using a BCM5720 network card, have a compatibility issue with the tg3 driver in the kernel based on version 6.5.11.

From our current understanding 14th Generation Dell Servers (T140, R240, R640,...) are affected, while others (e.g., R630, R620, R610,...) do not seem to be affected. We are currently investigating this issue. In the meantime, we recommend pinning the kernel to version 6.2 on affected hosts.

Some users report that disabling the X2APIC option in the BIOS resolved this issue as a workaround.

Network Configuration

  • Systems installed on top of Debian or those installed before Proxmox VE 7.0 will be switched by default from the ifupdown network configuration implementation to the modern ifupdown2.

 This switch occurs because the stabilized SDN package is now marked as a recommendation for various Proxmox VE packages. Consequently, it will be installed on all systems that have kept the APT::Install::Recommends config at its default true value, leading to the inclusion of the ifupdown2 package.

While ifupdown2 aims to be backward compatible with the legacy ifupdown, some details may still differ. Currently, we are aware of one such difference, particularly regarding the default value for accepting IPv6 router advertisement requests (accept_ra). In the legacy ifupdown, accept_ra is set to 2 ("Accept Router Advertisements even if forwarding is enabled") as long as no gateway is configured. However, in ifupdown2, it always defaults to 0 ("Do not accept Router Advertisements") as a security measure, requiring administrators to actively opt-in.
If you rely on router advertisements being accepted, you can simply add accept_ra 2 to the respective interface section in /etc/network/interfaces.
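The opt-in described above can be sketched as an interface stanza in /etc/network/interfaces; the interface name and address method are illustrative:

```
# /etc/network/interfaces -- opt back in to accepting IPv6 router
# advertisements for this interface under ifupdown2
iface vmbr0 inet6 auto
    accept_ra 2
```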


Virtual Machines

  • The pve-edk2-firmware package, which provides (U)EFI firmware for virtual machines, has been split up in multiple packages. Users of the fully supported amd64/x86_64 architectures do not need to change anything.

The OVMF variants, used for amd64/x86_64 based virtual machines, got moved into pve-edk2-firmware-ovmf and pve-edk2-firmware-legacy, these will always be installed automatically on upgrade.

The AAVMF variants, used for the experimental ARM64 VM integration, have been moved to pve-edk2-firmware-aarch64. This package won't be installed automatically on upgrade; if you rely on the experimental ARM integration, you need to install it manually.
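For example, on a host that uses the experimental ARM64 integration:

```
apt update
apt install pve-edk2-firmware-aarch64
```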

  • With the 8.1 machine version, QEMU switched to using SMBIOS 3.0 by default, utilizing a 64-bit entry point. Since the SMBIOS 32-bit and 64-bit entry points can coexist, and most modern operating systems set up both, the general impact should be minimal.

However, certain operating systems or appliances, such as Juniper's vSRX, do not have a 64-bit entry point set up and might fail to boot with the new machine type.

For affected VMs, you can explicitly pin the machine version to 8.0 in the web interface. Note that the machine version of VMs with a Windows OS type is automatically pinned to the most recent version at the time of creation.
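Pinning is also possible on the CLI with qm; VMID 100 and the machine types are only examples:

```
# pin the machine version of VM 100 to 8.0 (q35 machine type)
qm set 100 --machine pc-q35-8.0
# or, for the default i440fx machine type
qm set 100 --machine pc-i440fx-8.0
```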

Upstream commit reference: QEMU commit

Proxmox VE 8.0

Released 22. June 2023: See Downloads

  • Based on Debian Bookworm (12.0)
  • Latest 6.2 Kernel as stable default
  • QEMU 8.0.2
  • LXC 5.0.2
  • ZFS 2.1.12
  • Ceph Quincy 17.2.6
Highlights
  • New major release based on the great Debian Bookworm.
  • Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8
  • Ceph Quincy enterprise repository.

Access the most stable Ceph repository through any Proxmox VE subscription.

  • Add access realm sync jobs.

Synchronize users and groups from an LDAP/AD server automatically at regular intervals.

  • Integrate host network bridge and VNet access when configuring virtual guests into the ACL system of Proxmox VE.

With the new SDN.Use privilege and the new /sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag> ACL object path, you can give out fine-grained usage permissions for specific networks to users.

  • Create, manage and assign resource mappings for PCI and USB devices for use in virtual machines (VMs) via API and web UI.

Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.

For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.

  • Add virtual machine CPU models based on the x86-64 psABI Micro-Architecture Levels and use the widely supported x86-64-v2-AES as default for new VMs created via the web UI.

The x86-64-v2-AES model provides important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1, and was slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.

See the Virtual Machines (KVM/QEMU) section for details.

  • Add new text-based UI mode for the installation ISO, written in Rust using the Cursive TUI (Text User Interface) library.

You can use the new TUI mode to workaround issues with launching the GTK based graphical installer, sometimes observed on both very new and rather old hardware.

The new text mode shares the code executing the actual installation with the existing graphical mode.

Changelog Overview
Enhancements in the web interface (GUI)

  • The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
  • Improved Dark color theme:

The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.

  • Set strict SameSite attribute on the Authorization cookie
  • The Markdown parser, used in notes, has been improved:
    • it allows setting the target for links, to make any link open in a new tab or window.
    • it allows providing URLs with a scheme different from HTTP/HTTPS;
      • You can now directly link to resources like rdp://<rest-of-url>, providing convenience links in the guest notes.
    • tag-names and protocols are matched case-insensitive.
  • The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
  • The generated CSR used by the built-in ACME client now sets the correct CSR version (0 instead of 2).
  • Uploading files now computes the MD5 sum of the uploaded file only if it can be used for comparison with the user-provided one.
  • Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
  • Explicitly disallow internal-only tmpfilename parameter for file uploads.
  • Fix multipart HTTP uploads without Content-Type header.
  • Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
  • You can now set the subdir option of the CIFS storage type in the web interface, not only via API/CLI.
  • Improved translations, among others:
    • Ukrainian (NEW)
    • Japanese
    • Simplified Chinese
    • Traditional Chinese
    • The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    • The language selection is now localized and displayed in the currently selected language
Virtual machines (KVM/QEMU)
  • New QEMU version 8.0:
    • The virtiofsd codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
    • QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
    • Many more changes, see the upstream changelog for details.
  • Add virtual machine CPU models based on the x86-64 psABI Micro-Architecture Levels.

The x86-64 levels provide a vendor-agnostic set of supported features and reported CPU flags.

Models like x86-64-v2-AES provide important extra features over qemu64/kvm64, like SSE3, SSE4 and SSE4.1, and were slightly adapted to also provide AES support, dramatically improving the performance of many computing operations.

This model is well-supported by all x86-64 hardware released in the last decade, specifically since Intel Westmere (launched in 2010) and the AMD Opteron 6200-series "Interlagos" (launched in 2011), enabling Proxmox VE to use it as the default CPU model for creating new VMs via the web UI.

  • Create, manage and assign resource mappings for PCI and USB devices for use in VMs via API and web UI.

Mappings allow you to give out access to one or more specific device(s) to a user, without them requiring root access.

For setups that require VMs with hardware passthrough to be able to (offline-)migrate to other nodes, mappings can be used to ensure that the VM also has a valid device for passthrough on the target node.

New ACL object paths: /mapping/pci/<id> and /mapping/usb/<id> refer to the defined PCI and USB mappings.

New privileges: Mapping.Audit allows to view resource mappings, Mapping.Modify allows to create or edit resource mappings, and Mapping.Use allows to pass through devices to VMs using the mapping.

New roles: PVEMappingUser, with the privilege to view and use mappings, and PVEMappingAdmin with the additional privilege to edit mappings.
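Putting this together, granting a user the use of a specific mapping could look like the following sketch (the mapping ID "gpu1" and the user are made up for illustration):

```
# allow alice@pve to use the PCI mapping with ID "gpu1" in her VMs
pveum acl modify /mapping/pci/gpu1 --users alice@pve --roles PVEMappingUser
```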

  • Avoid invalid smm machine flag for aarch64 VM when using serial display and SeaBIOS.
  • Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
  • Fix an issue with the improved code for volume size information gathering for passed through disks during backup preparation.
  • Work around breaking driver changes in newer Nvidia GRID drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
  • Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.

If a uuid command line parameter is present, it will be preferred over the default auto-generated UUID containing the VMID and mdev index.

This fixes an issue with software inside the guest that relies on a specific and unique UUID setting.

  • Improved gathering of current setting for live memory unplugging.
  • Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
  • When resizing a disk, spawn a worker task to avoid HTTP request timeout (issue 2315).
  • Allow resizing qcow2 disk images with snapshots (issue 517).
  • cloud-init improvements:
    • Introduce ciupgrade option that controls whether machines should upgrade packages on boot (issue 3428).
    • Better align privilege checks in the web UI with the actual privileges required in the backend.
    • Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the fqdn option.
    • Fix an issue where displaying pending changes via qm and pvesh caused an error.
    • Allow setting network options with VM.Config.Cloudinit privileges, instead of requiring the more powerful VM.Config.Network privilege.
  • Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
  • Replace usages of deprecated -no-hpet QEMU option with the hpet=off machine flag.
Containers (LXC)

  • Improve handling of /etc/machine-id on clone operations - the file is now only truncated, if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide if they want first-boot semantics or not (see machine-id (5)).
  • Set memory.high cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on lxc.conf generation and on hot-plugging memory to a running container.
  • Warn users on conflicting, manual, lxc.idmap entries.

Custom uid/gid map entries can become quite complicated and cause overlaps fast.

By issuing a warning upon container start, the user should find the wrong entry directly.

  • When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
  • General code improvements, adhering to best practices for Perl code.
General improvements for virtual guests
  • When cloning guests, the validation of the provided name of the clone is now happening in the frontend, improving UX.
HA Manager
  • Stability improvements of manual maintenance mode:
    • Fix an issue where a request for enabling maintenance mode on a node is lost, in case the rebooted node is the current active Cluster Resource Manager (CRM).
    • Fix an issue where a shutdown policy other than migrate could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    • Fix an issue where ha-rebalance-on-start could cause a newly added and already-running service to be shut down and migrated to another node.

Now, ha-rebalance-on-start ignores services that are already running.

  • When enabling or disabling maintenance mode via the CLI, the ha-manager command now checks whether the provided node exists.

This avoids misconfigurations, e.g., due to a typo in the node name.

Improved management for Proxmox VE clusters

  • The rsync invocation used when joining nodes via SSH (a deprecated join method) has been adapted to changes in rsync CLI argument parsing in Bookworm.
Backup/Restore
  • Improve performance of backups that use zstd on fast disks, by invoking zstd without the --rsyncable flag (issue 4605).
  • Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
  • When restoring from backups via the web interface, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
  • The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order (issue 4678).
  • Fix an issue where the backup job editor window occasionally did not show the selected guests (issue 4627).
  • The fs-freeze-on-backup option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
  • Improve permission model for backup jobs: Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • Clarify description of the ionice setting.
Storage
  • The file-based storage types have two new config options, create-base-path and create-subdirs. They replace the mkdir option and separate two different concepts:
    • create-base-path decides if the path to the storage should be created if it does not exist.
    • create-subdirs decides if the content-specific sub-directories (guest images, ISO, container template, backups) should be created.
    • Conflating both settings in the single mkdir option caused a few unwanted effects in certain situations (issue 3214).
  • The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
  • The subdir option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of a SMB/CIFS share and was previously only accessible via the API/CLI.
  • Improve API documentation for the upload method.
  • The API now also allows querying replication jobs that are disabled.
  • Allow @ in directory storage path, as it is often used to signify Btrfs subvolumes.
  • When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.
Ceph
  • Add support for new Ceph enterprise repositories. When installing Ceph via pveceph install or the web UI, you can now choose between the test, no-subscription and enterprise (default) repositories. The -test-repository option of the pveceph install command was removed.
  • Add pveceph osddetails command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
  • Drop support for Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
  • Remove overly restrictive validation of public_network during monitor creation. Configuring a public network like 0::/0 or 0::/1 caused a superfluous "value does not look like a valid CIDR network" error.
  • The Ceph installation wizard in the web UI does not create monitors and managers called localhost anymore and uses the actual node name instead.
Access control
  • Add the possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
  • Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
    • If TFA failed too many times in a row, lock this user account out of TFA for an hour. If TOTP failed too many times in a row, disable TOTP for the user account. Using a recovery key will unlock a user account.
    • Add pveum tfa unlock command and /access/users/{userid}/unlock-tfa API endpoint for manually unlocking users.
    • Add TFA lockout status to responses of /access/tfa and /access/users endpoints.
  • Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces (issue #4609).
  • When authenticating via PAM, pass the PAM_RHOST item. With this, it is possible to manually configure PAM such that certain users (for example root@pam) can only log in from certain hosts.
  • Add pveum tfa list command for listing second factors on the command line.
  • The access/ticket API endpoint does not support the deprecated login API (using new-format=0) anymore.
  • Remove the Permission.Modify privilege from the PVESysAdmin and PVEAdmin roles and restrict it to the Administrator role. This reduces the chances of accidentally granting privilege modification privileges.
  • Login with TFA: In order to improve UX, fix wording of messages related to recovery keys.
  • Forbid creating roles with names starting with PVE to reserve these role names for use in future upgrades.
  • SDN.Use is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
    • Use /sdn/zones/localnetwork or /sdn/zones/localnetwork/<bridge> to allow usage of all or specific local bridges.
    • Use /sdn/zones/<zone> or /sdn/zones/<zone>/<bridge> to allow usage of all or specific vnets in a given SDN zone.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.
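For example, to let a user configure guest vNICs on one specific local bridge (the user and bridge names are illustrative):

```
# allow alice@pve to use the local bridge vmbr0 in guest network devices
pveum acl modify /sdn/zones/localnetwork/vmbr0 --users alice@pve --roles PVESDNUser
```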
Firewall & Software Defined Networking
  • Allow to distinguish IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix (issue 4556). Previously, the innermost IP set/alias took precedence, which is still the default behavior if no prefix is provided.
  • Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
  • Fix setting the correct vlan-protocol for QinQ zones if the bridge is vlan-aware (issue 4683).
  • Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone (issue 4389).
  • Correctly order vrf and router bgp vrf entries by vrf name in the frr configuration. (issue 4662)
  • For setups where a node is primary exit node for one vrf and secondary exit for a different vrf, the configuration now also adds the second vrf's default route. (issue 4657)
  • Allow specifying a custom vxlan-tunnel port per interface.
  • Update the frr configuration generation to the version of frr shipped in Debian Bookworm.
  • Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
  • Add support for IPv6 SLAAC and router advertisement configuration in /etc/network/interfaces to ifupdown2.
  • Fix live reloading when changing VLAN and VXLAN specific attributes.
  • Add support to ifupdown2 for creating an OVS bridge which tags traffic with a specific VLAN tag, matching what was already possible with ifupdown.
Improved management of Proxmox VE nodes
  • pve7to8 compatibility check script added. As with previous major upgrades, Proxmox VE 7 ships a script that checks for issues with the current node/cluster and points out anything that might prevent a successful major upgrade.
  • Outdated pve6to7 compatibility check script was removed.
  • Fix an issue where the web UI would display no APT repositories during a major upgrade.
  • The new version of grub2 provided by Debian Bookworm (2.06-13) fixes an issue where a host using LVM would fail to boot with a message disk `lvmid/...` not found, even though the LVM setup is healthy.
Installation ISO
  • Add new text-based UI mode for the installation ISO, written in Rust using the Cursive TUI (Text User Interface) library.

You can use the new TUI mode to workaround issues with launching the GTK based graphical installer, sometimes observed on both very new and rather old hardware.

  • The new text mode shares the code executing the actual installation with the existing graphical mode.
  • The version of BusyBox shipped with the ISO was updated to version 1.36.1.
  • The Ceph Quincy repository provided by Proxmox is configured by default to deliver updates for the Ceph client, even if no Proxmox Ceph hyper-converged server is set up.
  • Detection of unreasonable system time: if the system time is older than the time the installer was created, the installer notifies the user with a warning.
  • ethtool is now shipped with the ISO and installed on all systems.
  • systemd-boot is provided by its own package instead of systemd in Debian Bookworm and is installed with the new ISO.
Notable bugfixes and general improvements
  • Most git repositories now have a dsc Makefile target to create a Debian Source Package and additionally a sbuild target to create the source package and build it using sbuild.
Known Issues & Breaking Changes
Storage
  • Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.

This breaks setups in which the content-dirs option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.

  • The mkdir option is considered deprecated; it was split into create-base-path and create-subdirs as a fine-grained replacement.

While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching to the new options now.

Kernel

  • Previous 6.2 kernels had problems with incoming live migrations when all of the following were true:
    • VM has a restricted CPU type (e.g., qemu64) – using CPU type host or Skylake-Server is ok!
    • the source host uses an Intel CPU from Skylake Server, Tiger Lake Desktop, or equivalent newer generation.
    • the source host is booted with a kernel version 5.15 (or older) (e.g. when upgrading from Proxmox VE 7.4)

In this case, the VM could hang and use 100% of the CPU of one or more vCPUs.

This was fixed with pve-kernel-6.2.16-4-pve in version 6.2.16-5. So make sure your target host is booted with this (or a newer) kernel version if the above points apply to your setup.

  • Kernels based on 6.2 have a degraded Kernel Samepage Merging (KSM) performance on multi-socket NUMA systems.
    • Depending on the workload this can result in a significant amount of memory that is not deduplicated anymore.
    • This issue went unnoticed for a few kernel releases, making a clean backport of the fixes made for 6.5 hard to do without some general fall-out.

Until we either find a targeted fix for our kernel, or change the default kernel to a 6.5 based kernel (planned for Q4 2023), the current recommendation is to keep multi-socket NUMA systems that rely on KSM on Proxmox VE 7 with its 5.15 based kernel.

QEMU

  • QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 won't use the -chardev tty and -chardev parport aliases anymore, and no other removed features were used by the Proxmox VE stack. Thus, only installations using args inside their guest configs need to check compatibility. See the QEMU changelog on the topic for details.
    • The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands get_link_status, snapshot-drive and delete-drive-snapshot.
  • Only root@pam is now allowed to clone and restore guests with passed through PCI/USB devices that are not using the new mapping feature. To allow regular users to clone and restore with PCI/USB devices, create a mapping and give the user 'Mapping.Use' on that.
  • Trying to pass through the same PCI device multiple times in a single guest now fails earlier. For example, qm showcmd no longer generates any output in that case.
  • When a passed-through device is configured as multifunction (or 'All Functions' in the web UI) together with a mediated device (mdev), this now generates an error instead of a warning. Use the specific function instead.
  • cloud-init: If the VM name is not a FQDN and no DNS search domain is configured, the automatically-generated cloud-init user data now contains an additional fqdn option. This fixes an issue where the hostname was not set properly for some in-guest distributions. However, the changed user data will change the instance ID, which may cause the in-guest cloud-init to re-run actions that trigger once-per-instance. For example, it may regenerate the in-guest SSH host keys.
  • Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.
Container
  • The lxc.id_map configuration key has been deprecated for a long time by lxc and was replaced by lxc.idmap. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
  • The lxcfs is now built with fuse 3. This upgrade is done on a major release, since all running containers need to be restarted afterwards.
  • Migration doesn't scan all local storages for orphaned volumes anymore. Instead, only the volumes referenced in the configuration (including snapshots) are picked up. This avoids unexpected errors where an unavailable local storage could fail migration even if no volume referenced that storage.
Authentication & Permission System
  • There is a new SDN.Use privilege (and corresponding PVESDNUser role) that is required to configure virtual NICs in guests. See "Access control" section above for details!
  • The Permission.Modify privilege has been removed from the PVESysAdmin and PVEAdmin roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of PVESysAdmin/PVEAdmin.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
  • Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.
Node Management
  • Systems booting via UEFI from a ZFS on root setup should install the systemd-boot package after the upgrade.

The systemd-boot package was split out from the systemd package for Debian Bookworm based releases. It won't get installed automatically upon upgrade from Proxmox VE 7.4, as it can cause trouble on systems not booting from UEFI with a ZFS on root setup created by the Proxmox VE installer.

Systems which have ZFS on root and boot in UEFI mode will need to manually install it if they need to initialize a new ESP (see the output of proxmox-boot-tool status and the relevant documentation).

Note that the system remains bootable even without the package installed (the bootloader that was copied to the ESPs during initialization remains untouched), so you can also install it after the upgrade has finished.

It is not recommended to install systemd-boot on systems which don't need it, as it would replace grub as the bootloader in its postinst script.
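On an affected system, a sketch of the check-and-install steps could be:

```
# check which boot setup proxmox-boot-tool manages on this host
proxmox-boot-tool status
# only on systems booting via UEFI with ZFS on root:
apt install systemd-boot
```

Only install the package if the status output confirms the system actually boots in UEFI mode from a proxmox-boot-tool managed ESP.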

API

  • The API can handle array-type data differently, while trying to stay backward compatible.

Instead of passing the individual elements separated by null bytes, you can pass the data directly as an array.

  • Before Proxmox VE 8, the API endpoint to trigger a QEMU guest agent (QGA) command execution (/nodes/{node}/qemu/{vmid}/agent/exec) allowed passing a command as a single string, which would then be automatically split at whitespace. This was deemed too brittle and is not supported anymore. You must now send the command and all its arguments as a proper array of strings.
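For example, what was previously sent as the single string "sh -c 'echo hello'" must now be a JSON array in the request body (node and VMID are placeholders):

```
POST /api2/json/nodes/{node}/qemu/{vmid}/agent/exec
{
    "command": ["sh", "-c", "echo hello"]
}
```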


Proxmox VE 8.0 beta1

Released 9. June 2023: See Downloads
Note: this is a test version not meant for production use yet.

  • Based on Debian 12 Bookworm (testing)
  • Latest 6.2 Kernel as stable default
  • QEMU 8.0.2
  • LXC 5.0.2
  • ZFS 2.1.11
  • Ceph Quincy 17.2.6
Highlights
  • New major release based on the great Debian Bookworm.
  • Seamless upgrade from Proxmox VE 7.4, see Upgrade from 7 to 8
  • Ceph Quincy enterprise repository.

Access the most stable Ceph repository through any Proxmox VE subscription.

  • Add access realm sync jobs.

Synchronize users and groups from an LDAP/AD server automatically at regular intervals.

  • Integrate host network bridge and VNet access when configuring virtual guests into Proxmox VE's ACL system.

With the new SDN.Use privilege and the new /sdn/zones/<zone>/<bridge-or-vnet>/<vlan-tag> ACL object path, one can give out fine-grained usage permissions for specific networks to users.

Changelog Overview
Enhancements in the web interface (GUI)

  • The Ceph repository selection now takes into account the subscription status of the complete cluster and recommends the optimal version for the cluster.
  • Improved Dark color theme:

The Dark color theme, introduced in Proxmox VE 7.4, received a lot of feedback from our community, which resulted in further improvements.

  • Set strict SameSite attribute on the Authorization cookie
  • The Markdown parser, used in notes, has been improved:
    • it allows setting the target for links, to make any link open in a new tab or window.
    • it allows providing URLs with a scheme different from HTTP/HTTPS;

You can now directly link to resources like rdp://<rest-of-url>, providing convenience links in the guest notes.

    • tag-names and protocols are matched case-insensitive.
  • The mobile UI code was refactored to not suffer from incompatible changes made for the web-based GUI.
  • The generated CSR used by the built-in ACME client now sets the correct CSR version (0 instead of 2).
  • Uploading files now computes the MD5 sum of the uploaded file only if it can be used for comparison with the user-provided one.
  • Firewall settings: Improve the alignment of permissions checked by the web UI with the permissions actually required by the API.
  • Explicitly disallow internal-only tmpfilename parameter for file uploads.
  • Fix multipart HTTP uploads without Content-Type header.
  • Show Ceph pool number in the web UI, as it is often mentioned in Ceph warnings and errors.
  • Improved translations, among others:
    • Ukrainian (NEW)
    • Japanese
    • Simplified Chinese
    • Traditional Chinese
    • The size units (Bytes, KB, MiB,...) are now passed through the translation framework as well, allowing localized variants (e.g., for French).
    • The language selection is now localized and displayed in the currently selected language
Virtual Machines (KVM/QEMU)
  • New QEMU Version 8.0:
    • The virtiofsd codebase was replaced by a new and improved implementation based on Rust, which is packaged separately.
    • QEMU Guest Agent now has initial support for NetBSD and OpenBSD.
    • Many more changes, see the upstream changelog for details.
  • Avoid invalid smm machine flag for aarch64 VM when using serial display and SeaBIOS.
  • Warn if a network interface is not connected to a bridge on VM startup. This can happen if the user manually edited the VM config.
  • Fix an issue with the improved code for volume size information gathering for passed through disks during backup preparation.
  • Work around breaking driver changes in newer Nvidia GRID drivers, which prevented mediated devices (mdev) from being reclaimed upon guest exit.
  • Prefer an explicitly configured SMBIOS UUID for Nvidia vGPU passthrough.

If a uuid command line parameter is present, it will be preferred over the default auto-generated UUID containing the VMID and mdev index.

This fixes an issue with software inside the guest that relies on a specific and unique UUID setting.

  • Improved gathering of current settings for live memory unplugging.
  • Avoid sending a temporary size of zero to QEMU when resizing block devices. Previously, this was done when resizing RBD volumes, but it is not necessary anymore.
  • When resizing a disk, spawn a worker task to avoid HTTP request timeout (issue 2315).
  • Allow resizing qcow2 disk images with snapshots (issue 517).
  • cloud-init improvements:
    • Introduce ciupgrade option that controls whether machines should upgrade packages on boot (issue 3428).
    • Better align privilege checks in the web UI with the actual privileges required in the backend.
    • Fix an issue where the hostname was not properly set on Fedora/CentOS machines, by passing the hostname via the fqdn option.
    • Fix an issue where displaying pending changes via qm and pvesh caused an error.
    • Allow setting network options with VM.Config.Cloudinit privileges, instead of requiring the more powerful VM.Config.Network privilege.
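
The new ciupgrade option can be toggled per VM; a hypothetical invocation might look like this (VMID 100 is an example, and the VM must use cloud-init):

```
# Disable automatic package upgrades on first boot for VM 100 (example VMID)
qm set 100 --ciupgrade 0
```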
  • Drop unused QMP commands for getting the link and creating/deleting internal snapshots.
  • Replace usages of deprecated -no-hpet QEMU option with the hpet=off machine flag.
Containers (LXC)
  • Improve handling of /etc/machine-id on clone operations - the file is now only truncated if the source did not explicitly set it to 'uninitialized' or remove it. Thus, the admin can decide if they want first-boot semantics or not (see machine-id (5)).
  • Set memory.high cgroup limit to 99.6% of configured memory. This setting gives the container a chance to free memory before directly running into an Out-of-Memory (OOM) condition. It is applied on lxc.conf generation and on hot-plugging memory to a running container.
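
As a rough sketch (the exact computation during lxc.conf generation may differ), the 99.6% memory.high value can be derived from the configured memory limit like this:

```python
def memory_high_bytes(configured_mib: int) -> int:
    # memory.high is set slightly below the hard limit (99.6% here), giving
    # the container a chance to reclaim memory before hitting an OOM condition
    limit = configured_mib * 1024 * 1024
    return limit * 996 // 1000

print(memory_high_bytes(2048))  # for a 2 GiB container → 2138893713
```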
  • Warn users about conflicting manual lxc.idmap entries.

Custom mappings can become quite complicated and can quickly cause overlaps.
A warning issued upon container start helps the user find the offending entry directly.
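
A minimal sketch of such an overlap check (the entry format mirrors lxc.idmap lines like "u 0 100000 65536"; the function name and exact logic are illustrative, not the actual pve-container code):

```python
def overlapping_idmap(entries):
    # entries: (type, container_id, host_id, count), e.g. ("u", 0, 100000, 65536)
    ranges = sorted((t, host, host + count) for t, _cid, host, count in entries)
    overlaps = []
    for (t1, s1, e1), (t2, s2, e2) in zip(ranges, ranges[1:]):
        if t1 == t2 and s2 < e1:  # next host range starts inside the previous one
            overlaps.append(((s1, e1), (s2, e2)))
    return overlaps

# The second mapping's host range 165000..165010 falls inside 100000..165536
print(overlapping_idmap([("u", 0, 100000, 65536), ("u", 1000, 165000, 10)]))
```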

  • When resizing a disk, perform plausibility checks already before spawning the worker task. This allows invalid requests to fail earlier.
  • General code improvements, adhering to best practices for Perl code.
General improvements for virtual guests

  • When cloning guests, the provided name for the clone is now validated in the frontend, improving UX.
  • Add config files in /etc/pve/mapping and privileges Mapping.* in preparation for cluster-wide mapping of PCI/USB devices.
HA Manager
  • Stability improvements of manual maintenance mode:
    • Fix an issue where a request for enabling maintenance mode on a node is lost, in case the rebooted node is the current active Cluster Resource Manager (CRM).
    • Fix an issue where a shutdown policy other than migrate could cause a node in maintenance mode to leave maintenance mode too early or fence itself.
    • Fix an issue where ha-rebalance-on-start could cause a newly added and already-running service to be shut down and migrated to another node.

Now, ha-rebalance-on-start ignores services that are already running.

  • When enabling or disabling maintenance mode via the CLI, the ha-manager command now checks whether the provided node exists.

This avoids misconfigurations, e.g., due to a typo in the node name.

Improved management for Proxmox VE clusters

  • The rsync invocation used when joining nodes via SSH (a method that is deprecated) has been adapted to changes in rsync CLI argument parsing in Bookworm.
Backup/Restore
  • Improve performance of backups that use zstd on fast disks, by invoking zstd without the --rsyncable flag (issue 4605).
  • Suppress harmless but confusing "storing login ticket failed" errors when backing up to Proxmox Backup Server.
  • When restoring from backups via the web UI, the VM/CT name is now validated client-side before sending an API request. This helps catch invalid names early.
  • The web UI now sorts backups by date, whereas it previously sorted backups first by VMID and then by date. The VMID is added as an extra column for users who would like to restore the previous sorting order (issue 4678).
  • Fix an issue where the backup job editor window occasionally did not show the selected guests (issue 4627).
  • The fs-freeze-on-backup option of the QEMU guest agent, which controls whether the filesystem should be frozen for backups, can now be set in the web UI.
  • Improve permission model for backup jobs: Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • Clarify description of the ionice setting.
Storage
  • The file-based storage types have two new config options, create-base-path and create-subdirs. They replace the mkdir option and separate two different concepts:

create-base-path controls if the path to the storage should be created if it does not exist,

create-subdirs controls if the content-specific subdirectories (e.g., guest images, ISO images, container templates, or backups) should be created.

Conflating both settings in the single mkdir option caused a few unwanted effects in certain situations (issue 3214).
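
For example, a directory storage entry in /etc/pve/storage.cfg using the new options might look like this (the storage name and path are placeholders):

```
dir: backup-store
        path /mnt/backup
        content backup
        create-base-path 1
        create-subdirs 1
```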

  • The CIFS storage type can now be configured with custom mount options, as it was already possible for the NFS storage type.
  • The subdir option of the CIFS storage type can now be configured in the web interface. The option can be used to mount a subdirectory of an SMB/CIFS share and was previously only accessible via the API/CLI.
  • Improve API documentation for the upload method.
  • The API now also allows querying replication jobs that are disabled.
  • Allow @ in directory storage path, as it is often used to signify Btrfs subvolumes.
  • When resizing RBD volumes, always round up sizes to the nearest integer. This avoids errors due to passing a floating-point size to the RBD tooling.
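
The rounding can be sketched as follows (function name is illustrative; the point is that an integer byte count reaches the RBD tooling, rounded up so the volume is never smaller than requested):

```python
import math

def rbd_size_bytes(requested: float) -> int:
    # Round a possibly fractional byte count up to the nearest integer,
    # avoiding errors from passing a floating-point size to `rbd resize`
    return math.ceil(requested)

print(rbd_size_bytes(10.5 * 1024**3))  # → 11274289152
```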
Ceph
  • Add support for new Ceph enterprise repositories. When installing Ceph via pveceph install or the web UI, you can now choose between the test, no-subscription and enterprise (default) repositories. The -test-repository option of the pveceph install command was removed.
  • Add pveceph osddetails command to show information about OSDs on the command line, with a level of detail that is comparable to the web UI/API.
  • Drop support for hyper-converged Ceph Octopus and Pacific, as they are not supported in Proxmox VE 8.
  • Proxmox VE 8 will support managing Quincy and newer Ceph server releases; setups still using Pacific can upgrade to Ceph Quincy before upgrading Proxmox VE from 7 to 8.
  • The Ceph 17.2 Quincy client will still support accessing older Ceph server setups.
  • Remove overly restrictive validation of public_network during monitor creation. Configuring a public network like 0::/0 or 0::/1 caused a superfluous "value does not look like a valid CIDR network" error.
  • The Ceph installation wizard in the web UI does not create monitors and managers called localhost anymore and uses the actual node name instead.
Access Control
  • Add possibility to define realm sync jobs in the web UI. These allow synchronizing users and groups from an LDAP server automatically at regular intervals.
  • Add TFA/TOTP lockout to protect against an attacker who has obtained the user password and attempts to guess the second factor:
    • If TFA fails too many times in a row, the user account is locked out of TFA for an hour. If TOTP fails too many times in a row, TOTP is disabled for the user account. Using a recovery key unlocks the user account.
    • Add pveum tfa unlock command and /access/users/{userid}/unlock-tfa API endpoint for manually unlocking users.
    • Add TFA lockout status to responses of /access/tfa and /access/users endpoints.
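
The lockout policy can be sketched roughly as follows (the failure threshold, class, and method names are assumptions for illustration; the real limits live in the Proxmox TFA implementation):

```python
import time

TFA_FAILURE_LIMIT = 8   # assumed threshold, for illustration only
LOCK_SECONDS = 3600     # locked out of TFA for an hour

class TfaLockout:
    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0

    def record_failure(self, now=None):
        now = time.time() if now is None else now
        self.failures += 1
        if self.failures >= TFA_FAILURE_LIMIT:
            self.locked_until = now + LOCK_SECONDS

    def unlock(self):
        # e.g. after a successful recovery-key login or `pveum tfa unlock`
        self.failures = 0
        self.locked_until = 0.0

    def is_locked(self, now=None):
        now = time.time() if now is None else now
        return now < self.locked_until
```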
  • Fix validity check for LDAP base DNs that was overly strict starting from Proxmox VE 7.4. For example, the check rejected base DNs containing both dashes and spaces (issue 4609).
  • When authenticating via PAM, pass the PAM_RHOST item. With this, it is possible to manually configure PAM such that certain users (for example root@pam) can only log in from certain hosts.
  • Add pveum tfa list command for listing second factors on the command line.
  • The access/ticket API endpoint does not support the deprecated login API (using new-format=0) anymore.
  • Remove the Permission.Modify privilege from the PVESysAdmin and PVEAdmin roles and restrict it to the Administrator role. This reduces the chances of accidentally granting privilege modification privileges.
  • Login with TFA: In order to improve UX, fix wording of messages related to recovery keys.
  • Forbid creating roles with names starting with PVE to reserve these role names for use in future upgrades.
  • SDN.Use is required on a bridge/vnet (or its zone) in order to configure it in a guest vNIC.
    • use /sdn/zones/localnetwork or /sdn/zones/localnetwork/<bridge> to allow usage of all or specific local bridges.
    • use /sdn/zones/<zone> or /sdn/zones/<zone>/<bridge> to allow usage of all or specific vnets in a given SDN zone.
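
For instance, granting a user permission to use one specific local bridge could look like this (the user and bridge names are placeholders):

```
pveum acl modify /sdn/zones/localnetwork/vmbr0 --roles PVESDNUser --users alice@pve
```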
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths, instead of arbitrary roles.
Firewall & Software Defined Networking
  • Allow distinguishing IP sets and aliases with the same name defined on the datacenter level and on the guest level by providing an explicit prefix (issue 4556). Previously, the innermost IP set/alias took precedence, which is still the default behavior if no prefix is provided.
  • Fix an issue where an allowed special ICMP-type could accidentally be added as destination port for a layer 4 protocol, breaking firewall rule loading.
  • Fix setting the correct vlan-protocol for QinQ zones if the bridge is vlan-aware (issue 4683).
  • Fix an issue where routing between zones was enabled by default in exit nodes. This has been fixed by adding null-routes for each other zone prefix to each zone (issue 4389).
  • Correctly order vrf and router bgp vrf entries by vrf name in the frr configuration. (issue 4662)
  • For setups where a node is primary exit node for one vrf and secondary exit for a different vrf, the configuration now also adds the second vrf's default route. (issue 4657)
  • Allow specifying a custom vxlan-tunnel port per interface.
  • Update the frr configuration generation to the version of frr shipped in Debian Bookworm.
  • Fix an issue where reloading the network configuration on a remote node created an error, which hid the actual issue with the network configuration.
  • Add support for IPv6 SLAAC and router advertisement configuration in /etc/network/interfaces to ifupdown2.
  • Fix live reloading when changing VLAN and VXLAN specific attributes.
  • Add support to ifupdown2 for creating an OVS bridge that tags traffic with a specific VLAN tag, matching the existing capability of ifupdown.
Improvements for the management of Proxmox VE Nodes
  • pve7to8 compatibility check script added.

As with previous major upgrades, Proxmox VE 7 ships a script checking for issues with the current node/cluster. It should point out any issues that might prevent a successful major upgrade.

  • Outdated pve6to7 compatibility check script was removed.
  • Fix an issue where the web UI would display no APT repositories during a major upgrade.
  • The new version of grub2 provided by Debian Bookworm (2.06-13) fixes an issue where a host using LVM would fail to boot with a message disk `lvmid/...` not found, even though the LVM setup is healthy.
Installation ISO
  • The version of BusyBox shipped with the ISO was updated to version 1.36.1.
  • The Proxmox-provided Ceph Quincy repository will be set up by default, providing updates for a modern Ceph client even if no hyper-converged Ceph setup is in use.
  • Detection of unreasonable system time: if the system time is older than the time the installer was created, the installer notifies the user with a warning.
  • ethtool is now shipped with the ISO and installed on all systems.
  • In Debian Bookworm, systemd-boot is provided by its own package instead of being part of systemd, and is installed with the new ISO.
Notable bug fixes and general improvements
  • Most git repositories now have a dsc Makefile target to create a Debian Source Package and additionally a sbuild target to create the source package and build it using sbuild.
Known Issues & Breaking Changes
Storage

  • Storage activation now checks that every content type uses a different directory, in order to prevent unexpected interactions between different content types.

This breaks setups in which the content-dirs option was set up to map different content types to the same directory, and setups in which some content directories were manually set up as symlinks to a common directory.

  • The mkdir option is considered deprecated; it was split into create-base-path and create-subdirs as a fine-grained replacement.

While Proxmox VE will continue to support this option during the 8.x release series, we recommend switching over to the new options already now.

QEMU

  • QEMU 8.0 removed some previously deprecated features. Proxmox VE 8 no longer uses the -chardev tty and -chardev parport aliases, and no other removed features were used by the Proxmox VE stack. Thus, only installations using args inside their guest configs need to check compatibility. See the QEMU changelog on the topic for details.
    • The removed features in QEMU 8.0 also include the Proxmox VE-specific, but unused/deprecated QMP commands get_link_status, snapshot-drive and delete-drive-snapshot.

Container

  • The lxc.id_map configuration key has been deprecated for a long time by lxc and was replaced by lxc.idmap. With this release, its presence is considered an error. The key can only be present if it was manually added to a guest configuration.
  • lxcfs is now built against FUSE 3. This upgrade is done with a major release, since all running containers need to be restarted afterwards.
Authentication & Permission System
  • There is a new SDN.Use privilege (and corresponding PVESDNUser role) that is required to configure virtual NICs in guests. See SDN section above for details!
  • The Permission.Modify privilege has been removed from the PVESysAdmin and PVEAdmin roles, in order to reduce the chances of accidentally granting the privilege to modify privileges. If a particular setup requires a role with this privilege, it is necessary to define a new custom role and use that instead of PVESysAdmin/PVEAdmin.
  • Users with VM.Allocate/Datastore.Allocate/Pool.Allocate privileges, but without the Permissions.Modify privilege, can now only assign a subset of their own privileges to specific VM/storage/pool paths. Previously they could assign any role to specific VM/storage/pool paths. As the privileges usable on specific VM/storage/pool paths were quite limited, this did not allow privilege escalation, but restricting the capabilities now allows adding more powerful privileges in future versions without breaking changes.
  • Editing backup jobs now generally requires the Datastore.Allocate privilege on the target storage, and editing backup jobs with dumpdir requires root privileges.
  • User accounts will now be locked after too many attempts to authenticate with a second factor. This is intended to protect against an attacker who has obtained the user password and attempts to guess the second factor. Unlocking requires either a successful login with a recovery key or a manual unlock by an administrator.
Others
  • The API can handle array-type data differently, while staying backward compatible.

Instead of passing the individual elements separated by null bytes, you can now pass the data directly as an array.
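
A small sketch of what accepting both forms amounts to on the server side (the function name and handling are illustrative, not the actual Proxmox API code):

```python
def normalize_array_param(value):
    # Legacy clients send one string with elements joined by null bytes;
    # newer clients send a native array. Accept both forms.
    if isinstance(value, str):
        return value.split("\x00")
    return list(value)

print(normalize_array_param("eth0\x00eth1"))  # → ['eth0', 'eth1']
```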



