    proxmox
    multi-tenant
    sdn
    rbac
    self-service

    How a Single Proxmox Node Became a Multi-Tenant Self-Service Cloud (Without Giving Anyone Root)

    December 26, 2025
    9 min read
There's a very specific moment every small infra setup eventually hits. It's when one Proxmox node quietly turns into the Proxmox node.

It starts harmlessly. One strong box. Plenty of cores. Way too much RAM. A couple of VMs for internal services. Maybe a test environment. Then another team asks for "just one VM." Then another project spins up. Suddenly, that single node is hosting half a company's development universe—and everyone wants access.

That's where things usually get uncomfortable. Because Proxmox, by default, is brutally honest about who's in charge. You're either root, or you're basically a spectator. And once you give someone real access to the GUI, they can see everything. Other teams' VMs. Storage. Network objects. The entire Datacenter tree. The blast radius is obvious.

So people fall back to bad compromises:

- Handing out root because "it's just internal."
- Running nested virtualization to fake isolation.
- Spinning up separate hosts for every team and watching utilization crater.
- Or keeping everything locked down and becoming the human API for start/stop/snapshot requests.

A recent deep dive from the Proxmox community shows there's a better option—one that feels obvious in hindsight, but is surprisingly underused. The result looks like this: one physical Proxmox node, sliced into fully isolated per-project labs, where teams self-manage their own VMs through the Proxmox GUI—without ever getting root, and without seeing anything that isn't theirs.

No magic. No forks. Just SDN, RBAC, and some discipline.

## The Core Idea: Treat Projects Like Tenants (But Don't Pretend You're a Cloud)

The setup starts with a simple mental shift. The "tenants" here are not random hosting customers. They're internal project teams, PoCs, training groups, or short-lived environments. People who need autonomy, but not sovereignty.

Think:

- App teams inside a small software company
- MSP-run dev labs
- Training or workshop environments
- Security labs with strict separation

In all of those cases, giving each team a full root server is wasteful. But giving everyone root on a shared box is reckless. So instead, this approach builds per-project isolation along two axes:

- **Network isolation** – each project lives in its own SDN zone and VNet
- **GUI isolation** – RBAC ensures users only see and control their own resources

And critically, both of these are enforced inside Proxmox itself.

## Building the Illusion: One Node, Many Invisible Worlds

At the center of this setup is a single Proxmox VE node, running multiple fully isolated "project labs." Each project gets:

- Its own SDN zone (devpj01, devpj02, etc.)
- Its own VNet inside that zone
- Its own VPN entry point
- Its own Proxmox pool

The SDN layer ensures that traffic simply cannot cross project boundaries. VPN users land directly inside their project's VNet, and that's it. No routing tricks. No firewall gymnastics later.

This is where Proxmox SDN quietly shines. With zones and VNets, isolation stops being a best-effort promise and becomes a structural fact. But network isolation alone doesn't solve the human problem. Because users don't just want connectivity. They want buttons.

## Turning the Proxmox GUI Into a Self-Service Portal

Here's the bold question this setup answers: can you let VPN users log into the Proxmox GUI—and not regret it?

The answer turns out to be yes. Very yes. But only if you lean into Proxmox RBAC the way it was intended to be used.

The trick is pools.
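Before unpacking how that works, here is roughly what the per-project scaffolding can look like when driven through the Proxmox API. This is a minimal sketch using the community proxmoxer Python client, not the article's own tooling: the host, credentials, example user, VNet name, and storage IDs are placeholders, while the zone, pool, and group names (devpj01, pj01, Pj01Admins) come from the setup described here. The sections below explain why each piece matters.

```python
from proxmoxer import ProxmoxAPI  # community Proxmox VE API client

# Placeholder connection details -- point this at your own node.
pve = ProxmoxAPI("pve.example.lan", user="root@pam",
                 password="CHANGEME", verify_ssl=False)

PROJECT = "pj01"
ZONE = "devpj01"      # per-project SDN zone, as described above
VNET = "vnpj01"       # assumed VNet name (the article doesn't name it)
GROUP = "Pj01Admins"

# 1. Per-project SDN zone and VNet, so traffic stays inside the project.
pve.cluster.sdn.zones.post(zone=ZONE, type="simple")
pve.cluster.sdn.vnets.post(vnet=VNET, zone=ZONE)
pve.cluster.sdn.put()  # apply pending SDN changes

# 2. A pool to hold the project's VMs (and, later, its storage).
pve.pools.post(poolid=PROJECT)

# 3. A per-project admin group, plus one example user in the pve realm.
pve.access.groups.post(groupid=GROUP)
pve.access.users.post(userid="alice@pve", groups=GROUP, password="CHANGEME")

# 4. The key ACLs: PVEAdmin scoped to the pool, and SDN usage scoped to the
#    project's zone. Nothing is granted on / or on the node itself.
pve.access.acl.put(path=f"/pool/{PROJECT}", roles="PVEAdmin",
                   groups=GROUP, propagate=1)
pve.access.acl.put(path=f"/sdn/zones/{ZONE}", roles="PVESDNUser",
                   groups=GROUP, propagate=1)  # adjust if your PVE version names the SDN role differently

# 5. Storage the project may use has to become a pool member too
#    (see the storage section below); the IDs here are placeholders.
pve.pools(PROJECT).put(storage="local-lvm,iso-store")
```

The pool-scoped ACL is what makes the GUI collapse down to just the project's resources, which is exactly what the next few paragraphs walk through.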
Each project gets its own pool:

- pj01
- pj02
- and so on

Then:

- A group is created per project (Pj01Admins)
- Users are added to that group
- The group is granted a role only on the pool, not on the Datacenter or node

That role? PVEAdmin. This sounds dangerous until you realize the scope matters more than the role. PVEAdmin inside a pool means:

- Start/stop VMs
- Open consoles
- Change VM settings
- Take snapshots
- Run backups
- Create and delete VMs

But only for resources that live inside that pool. Outside the pool, those objects might as well not exist.

When a project admin logs in, the Proxmox tree collapses into something almost suspiciously clean:

- No Datacenter overview
- No other projects
- No cluster-wide settings
- Just their pool, their VMs, and the storage explicitly assigned to them

It feels like a dedicated Proxmox instance—without actually being one.

## Storage and the Subtle Ways RBAC Can Break

This is where most "RBAC experiments" quietly fail. If you don't explicitly add storage to the pool, VM creation will fail in confusing ways. No disks. No ISOs. No EFI storage. Just errors.

The fix is simple but non-obvious:

- VM disk storage must be added to the pool
- ISO storage must be added
- Boot/EFI-related storage must be added

Only then does the illusion hold. Once that's done, project admins can create VMs end-to-end without ever touching anything global.

## SDN Permissions: The Make-or-Break Detail

Network selection during VM creation is the sharp edge. If you skip SDN permissions, users will hit a wall when they try to attach a NIC. If you do it wrong, they might get access to other projects' networks.

The clean solution is per-project SDN zones. Each project gets its own zone, and the project admin group is granted permissions only on that zone. That way:

- They can attach NICs to their own VNet
- They cannot see or modify other projects' VNets
- They cannot create new zones or VNets globally

There's a workaround for setups that already use a shared zone—by assigning permissions directly to hidden VNet paths—but the direction is clear: per-project zones scale better, and fail safer. This is the part that turns "nice demo" into "actually multi-tenant."

## VPN Access Without Opening the World

Finally, access. VPN users are allowed to reach the Proxmox GUI only via their project's SDN gateway IP, on port 8006. Firewall rules tie this together using IP sets, so access is constrained both by network and by identity.

You don't need to expose the GUI publicly. You don't need jump hosts. You don't need brittle SSH tunnels. You log into the VPN, open a browser, and Proxmox is just… there. But only your slice of it. It feels modern in a way Proxmox doesn't always get credit for.
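The article doesn't spell out the exact firewall configuration, but the shape is simple: one IP set per project, and a rule that only lets that set reach tcp/8006 on the project's SDN gateway. Below is a minimal sketch, again via the proxmoxer client, at the datacenter firewall level; the IP set name, VPN client range, and gateway address are assumptions, and where the rule actually lives (datacenter, node, or host firewall) depends on your layout.

```python
from proxmoxer import ProxmoxAPI  # community Proxmox VE API client

# Placeholder connection details -- point this at your own node.
pve = ProxmoxAPI("pve.example.lan", user="root@pam",
                 password="CHANGEME", verify_ssl=False)

# Assumed values: the project's VPN client range and its SDN gateway IP.
IPSET = "pj01-vpn"
VPN_RANGE = "10.77.1.0/24"
GATEWAY_IP = "10.10.1.1"

# IP set holding the project's VPN clients (a datacenter firewall object).
pve.cluster.firewall.ipset.post(name=IPSET, comment="pj01 VPN clients")
pve.cluster.firewall.ipset(IPSET).post(cidr=VPN_RANGE)

# Allow only that IP set to reach the GUI (tcp/8006) on the project's
# SDN gateway address; everything else is left to the default policy.
pve.cluster.firewall.rules.post(
    type="in",
    action="ACCEPT",
    source=f"+{IPSET}",  # '+' references the IP set by name
    dest=GATEWAY_IP,
    proto="tcp",
    dport="8006",
    enable=1,
    comment="pj01: GUI only via the project gateway",
)
```

The same three objects repeat per project, which is what keeps the GUI reachable only from inside each project's own VPN.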
## What This Still Doesn't Solve (And Why That's Okay)

This setup is powerful, but it's not pretending to be AWS. There are no hard quotas on CPU, RAM, or VM count. VMIDs are global. Naming conventions still rely on human discipline. Per-user GUI access is tricky without static VPN IPs.

And yet, for its target audience, none of that is a deal-breaker. Because the real win here is operational clarity:

- Infra owners keep root and the Datacenter
- Project teams get autonomy
- Isolation is enforced by design, not trust
- Hardware utilization stays high

It's a sweet spot that many small shops never realize is available.

## The Bigger Picture

What's striking about this pattern is how little custom machinery it requires. No patches. No external control planes. No hacks that collapse under upgrades. Just using Proxmox as it was designed—pools, RBAC, SDN—applied consistently.

It's also a reminder that multi-tenancy doesn't have to mean hyperscale. Sometimes it just means letting people do their jobs without stepping on each other. On a single node. Without giving anyone root.

And once you see it working, it's very hard to go back. If you're running Proxmox today and juggling multiple teams, this is one of those setups that quietly asks an uncomfortable question:

Why are we still doing this the hard way?