Tags: Proxmox · SSD · Storage · ZFS · Hardware · Performance

# No Budget for Enterprise Drives? Here's How Proxmox Users Are Fighting SSD Wearout Anyway

November 8, 2025 · 11 min read
You don't need enterprise SSDs to run Proxmox. But if you're running consumer-grade NVMe or SATA drives in a tight, single-slot setup, and watching your wearout percentage climb after just a few months, you're not imagining things — that wear is real. It's happening. And you're not the only one staring at SMART stats wondering if your drives are going to melt.

The original poster, running Proxmox on a Lenovo M720Q, had this exact problem: mirrored consumer drives used for local-zfs, housing VMs and LXC containers. The drives were barely six months old, and wearout percentages were already a concern. With little room (physically or financially) to toss in fresh enterprise hardware, the question was simple: what are the real-world options to make consumer SSDs last longer?

Turns out, the Proxmox community had answers. A lot of them. Here's what they said.

## 1. Figure Out What's Eating Your SSD

Before you panic-buy, inspect what's actually happening. Is it ZFS? Logging? Temporary file churn? One user summed it up well: "Are the disks being flogged to death with local logging?"

That's not just a colorful phrase. Excessive verbosity in logs, scratch disk activity, or unused system services can rack up writes. If you're storing temp files, swap, or doing a lot of ephemeral disk I/O, that's prime wear territory.

The OP confirmed logging was left on default — probably more verbose than necessary — and every container and VM was sharing the same mirrored pool. That's a recipe for premature drive aging.

## 2. ZFS? Tune It or Rethink It

ZFS is powerful, but it's also notoriously write-heavy. One of the more detailed commenters offered a concrete tip: turn off ZFS logging, or redirect logs to RAM with tools like log2ram.

Others doubled down: "Use log2ram wherever appropriate," one user advised, while warning that it's not plug-and-play. You'll need to tune the RAM disk size and write frequency, and monitor memory usage — or risk crashing services when the RAM disk fills up.

The nuance? ZFS expects you to know what you're doing. It's not a beginner's storage engine. Unless you've planned, benchmarked, and tuned it for your specific workload, you might be doing more harm than good. As one reply put it bluntly: "ZFS expects you know what you're going to use it for… most people on here are just screwing around trying to learn how to manage VMs & containers."

If your needs are basic and you can live without ZFS-native snapshots and compression, switching to ext4 with lvm-thin could reduce unnecessary wear without sacrificing much in the way of performance or reliability.

## 3. Don't Mirror NVMe and SATA Drives — Split Them

Here's a surprise twist: mirroring different drive types might be hurting more than helping.

One user highlighted a subtle issue — mirroring a fast NVMe with a slower SATA drive limits performance to the slowest link. Not only does that kneecap your NVMe's advantage, but it also doubles your writes, since every block lands on both drives.

Instead, consider breaking the mirror and using each drive for a different role — NVMe for VMs, SATA for backups or less I/O-heavy containers. Better yet, offload backups to another system entirely. If you've got an Unraid box lying around, as the OP mentioned, consider running Proxmox Backup Server (PBS) in a VM over there. It's an elegant way to split the load and reduce SSD stress.

## 4. Bigger Drives = Slower Wear

Larger SSDs don't just hold more data — they spread writes across more NAND cells, which reduces wear per cell. So even if you stick to consumer hardware, choosing a 2TB SSD instead of a 512GB one can extend lifespan significantly.

As one user noted: "Larger drives do have more cells to level over and also a higher TBW. Just buy larger drives."

It's not always intuitive, but in SSD land, size is endurance. More space equals more room to breathe.

## 5. Watch Your Writes — And Trim the Fat

Another user chimed in with real-world stats: over 660 MB/hour of data written on a system with just three idle containers. That's a massive hidden write load.

If you're seeing similar patterns, it's time to dig deeper. Run nvme smart-log (from nvme-cli) or check SMART stats in the GUI. Even default Proxmox installs can be surprisingly chatty. And if you're using ZFS on both the host and inside the VMs, that's double or triple write amplification.

One advanced tip: tweak zfs_txg_timeout (exposed at /sys/module/zfs/parameters/zfs_txg_timeout on Linux). The default is usually 5 seconds, but raising it to 180 seconds or more delays how often ZFS commits data, reducing write frequency. Just be cautious — this increases the risk of data loss during a power failure, so only do it if you've got UPS coverage.
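Before and after any of these tweaks, it helps to have a number of your own. The quickest check is to sample how fast the "sectors written" counters in /proc/diskstats grow. Here's a minimal sketch of that idea; the device names and the sampling window are just examples, so point it at whatever disks back your pool:

```python
#!/usr/bin/env python3
"""Estimate the sustained write rate of block devices.

Minimal sketch: samples the 'sectors written' counter in
/proc/diskstats twice and reports the delta as MB/hour.
Device names below are examples -- adjust for your system.
"""
import time

DEVICES = ["nvme0n1", "sda"]  # assumption: the drives backing your pool
SECTOR_BYTES = 512            # /proc/diskstats always counts 512-byte sectors
INTERVAL_S = 600              # sampling window; longer = smoother estimate

def sectors_written() -> dict[str, int]:
    """Return cumulative sectors written per device (field 10 of diskstats)."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] in DEVICES:
                stats[parts[2]] = int(parts[9])
    return stats

before = sectors_written()
time.sleep(INTERVAL_S)
after = sectors_written()

for dev in DEVICES:
    delta_bytes = (after[dev] - before[dev]) * SECTOR_BYTES
    mb_per_hour = delta_bytes / 1e6 / (INTERVAL_S / 3600)
    print(f"{dev}: ~{mb_per_hour:,.0f} MB/hour written")
```

Run it during a quiet window: if a supposedly idle box is still pushing hundreds of MB per hour, the wear is coming from the stack, not the workload.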
## 6. Used Enterprise Drives Are Still an Option — If You Look

Enterprise drives don't need to be new to be good. A few savvy homelabbers pointed out that used enterprise SSDs — particularly SATA models like the Intel DC S3620 or S4610 — are often dirt cheap on eBay, with plenty of life left. These drives were built to run 24/7 under load in datacenters. Many are retired just because they've reached the end of their warranty, not because they're worn out.

The challenge, as the OP noted, is form factor. Most affordable enterprise NVMe drives are 22110, while the Lenovo M720Q only fits 2280. That's a real limitation — and one that affects many small-form-factor homelabs. Still, if you can work around it (via U.2-to-PCIe adapters, for instance), you've got options.

## 7. Backups > Mirrors

Let's get philosophical: is drive redundancy better than having actual backups? One commenter put it best: "Backups are more important than a mirror."

Instead of mirroring two different drives — especially if they don't match — consider using one for performance and the other for Proxmox Backup Server or nightly snapshots. This way, you reduce duplicate writes, gain restore options, and make better use of your hardware.

It's not a popular take, but it's pragmatic: mirrors don't protect against accidental deletions, OS-level failures, or ransomware. Backups do.

## The Verdict: You Don't Need Enterprise SSDs — But You Do Need a Strategy

This thread was packed with one unifying theme: awareness matters more than gear.

Know what's writing to your disks. Tune your stack (especially ZFS) if you use it. Use your SSDs strategically — not everything needs to live on the same volume. And when you can't upgrade your hardware, optimize your usage.

It's tempting to throw money at the problem, but smart configuration buys you time — and peace of mind. In the homelab world, that's often the difference between burning out your SSD in 8 months and running it for five years straight.

And if you do eventually upgrade? You'll at least know you squeezed every last cell out of your current setup. Like a true hacker would.
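One practical footnote: whatever you tune, the only way to know it's working is to watch the trend. Below is a minimal sketch that appends an NVMe drive's SMART wear counters to a CSV, assuming smartmontools (smartctl) is installed; the device and log paths are examples. Run it daily from cron and see whether the slope flattens.

```python
#!/usr/bin/env python3
"""Append NVMe wear counters to a CSV for trend-watching.

Minimal sketch, assuming smartmontools is installed.
Device and log paths are examples -- adjust for your system.
"""
import csv
import json
import subprocess
import time

DEVICE = "/dev/nvme0"            # assumption: your NVMe drive
LOGFILE = "/var/log/ssd-wear.csv"

# smartctl's JSON mode (-j) includes the NVMe health log. Note that
# smartctl uses bit-flag exit codes, so don't treat nonzero as fatal.
raw = subprocess.run(
    ["smartctl", "-A", "-j", DEVICE],
    capture_output=True, text=True,
).stdout
health = json.loads(raw)["nvme_smart_health_information_log"]

# data_units_written is in units of 1000 x 512 bytes (NVMe spec).
tb_written = health["data_units_written"] * 512_000 / 1e12

with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        time.strftime("%Y-%m-%d"),
        health["percentage_used"],   # roughly the "wearout" Proxmox reports
        round(tb_written, 2),
    ])
```

If the line flattens after a tweak, it worked. If it doesn't, keep digging.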