# ZFS Configuration Guide for Optimal Performance
ZFS (Zettabyte File System) is a revolutionary filesystem that combines the roles of a traditional filesystem and volume manager. This guide covers everything you need to know to configure ZFS for optimal performance in your homelab.
## What Makes ZFS Special?
ZFS offers unique features that set it apart:
- **Data Integrity**: Built-in checksumming detects silent data corruption and repairs it automatically when redundancy is available
- **Snapshots**: Instant, space-efficient point-in-time copies
- **RAID-Z**: Software RAID that closes the RAID-5 "write hole" without battery-backed hardware
- **Compression**: Transparent data compression to save space
- **Deduplication**: Eliminate duplicate data blocks
- **ARC Cache**: Intelligent caching system for better performance
## System Requirements
### Minimum Requirements
- **RAM**: 8GB (16GB+ recommended)
- **CPU**: Any 64-bit processor
- **Disks**: At least 2 drives for redundancy
### Recommended Configuration
- **RAM**: 1GB per TB of storage (for deduplication: 5GB per TB)
- **CPU**: Multi-core processor for better performance
- **Disks**: Enterprise-grade drives with power loss protection
## Installing ZFS
### Ubuntu/Debian
```bash
sudo apt update
sudo apt install zfsutils-linux
```
### Proxmox
ZFS ships with Proxmox VE by default; verify the installed version:
```bash
zfs version
```
## Creating ZFS Pools
### Single Disk (No Redundancy)
⚠️ **Not recommended for production data**
```bash
sudo zpool create tank /dev/sdb
```
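The examples in this guide use `/dev/sdX` names for brevity, but those names can change between boots. Stable `/dev/disk/by-id/` paths are safer for pool creation (the drive identifier below is a placeholder for your own device):
```bash
sudo zpool create tank /dev/disk/by-id/ata-WDC_WD40EFRX-XXXXXXXX
```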
### Mirror (RAID-1)
Two disks with complete redundancy:
```bash
sudo zpool create tank mirror /dev/sdb /dev/sdc
```
### RAID-Z1 (Similar to RAID-5)
Minimum 3 disks, survives 1 disk failure:
```bash
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
```
### RAID-Z2 (Similar to RAID-6)
Minimum 4 disks, survives 2 disk failures:
```bash
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```
### RAID-Z3
Minimum 5 disks, survives 3 disk failures:
```bash
sudo zpool create tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```
## ZFS Pool Recommendations
### Home/Small Office
- **3-4 drives**: Use RAID-Z1
- **5-8 drives**: Use RAID-Z2
- **9+ drives**: Consider RAID-Z3 or multiple RAID-Z2 vdevs (see the sketch below)
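Striping several RAID-Z2 vdevs in one pool increases IOPS while keeping two-disk fault tolerance per vdev. A minimal sketch with ten example drives:
```bash
# Two 5-disk RAID-Z2 vdevs striped into a single pool
sudo zpool create tank \
  raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
```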
### Enterprise/Critical Data
- **Always use RAID-Z2 or RAID-Z3**
- **Add hot spares**
- **Use enterprise drives**
## Optimizing ZFS Performance
### ARC Cache Tuning
By default the ARC can grow to roughly half of system RAM on Linux, so cap it if other services need the memory. Limit the ARC size in `/etc/modprobe.d/zfs.conf`:
```bash
# Limit ARC to 8GB
options zfs zfs_arc_max=8589934592
# Minimum ARC size
options zfs zfs_arc_min=4294967296
```
Apply changes:
```bash
sudo update-initramfs -u
sudo reboot
```
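To test a new limit without rebooting, you can also write to the module parameter at runtime (takes effect immediately but does not persist across reboots):
```bash
# Set zfs_arc_max to 8GB for the current boot only
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```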
### Enable Compression
LZ4 compression is fast and saves space:
```bash
sudo zfs set compression=lz4 tank
```
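Once data has been written, you can check how much space compression is actually saving:
```bash
zfs get compression,compressratio tank
```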
### Configure Record Size
For databases (small random I/O):
```bash
sudo zfs set recordsize=16K tank/database
```
For media files (large sequential I/O):
```bash
sudo zfs set recordsize=1M tank/media
```
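Note that `recordsize` only applies to blocks written after the change; existing data keeps its old block size until it is rewritten. Verify the settings with:
```bash
zfs get recordsize tank/database tank/media
```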
### Disable Access Time Updates
Improves performance:
```bash
sudo zfs set atime=off tank
```
### Enable ZFS Auto-Trim
For SSDs:
```bash
sudo zpool set autotrim=on tank
```
## Working with Datasets
### Create Dataset
```bash
sudo zfs create tank/documents
sudo zfs create tank/photos
sudo zfs create tank/videos
```
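Properties can also be set at creation time, which avoids a separate `zfs set` step afterwards (the dataset name here is illustrative):
```bash
sudo zfs create -o compression=lz4 -o recordsize=1M tank/music
```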
### Set Quotas
```bash
sudo zfs set quota=100G tank/documents
```
### Set Reservations
```bash
sudo zfs set reservation=50G tank/database
```
### List Datasets
```bash
zfs list
```
## Snapshots
### Create Snapshot
```bash
sudo zfs snapshot tank/documents@backup-2024-01-15
```
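A common pattern is to date-stamp snapshot names and to snapshot a dataset tree recursively:
```bash
# Date-stamped snapshot name
sudo zfs snapshot tank/documents@$(date +%Y-%m-%d)
# Recursive snapshot of a dataset and all of its children
sudo zfs snapshot -r tank@nightly-$(date +%Y-%m-%d)
```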
### List Snapshots
```bash
zfs list -t snapshot
```
### Rollback to Snapshot
```bash
sudo zfs rollback tank/documents@backup-2024-01-15
```
### Delete Snapshot
```bash
sudo zfs destroy tank/documents@backup-2024-01-15
```
### Automated Snapshots
Install Sanoid for automated snapshot management:
```bash
sudo apt install sanoid
```
Configure `/etc/sanoid/sanoid.conf`:
```ini
[tank/documents]
use_template = production
recursive = yes
[template_production]
frequently = 0
hourly = 24
daily = 7
monthly = 12
yearly = 0
autosnap = yes
autoprune = yes
```
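Sanoid only takes snapshots when the `sanoid` command runs on a schedule; the Debian/Ubuntu package typically ships a systemd timer for this, which you can confirm is active:
```bash
systemctl status sanoid.timer
```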
## Send and Receive (Replication)
### Send Snapshot to File
```bash
sudo zfs send tank/documents@backup | gzip > backup.gz
```
### Receive Snapshot from File
```bash
gunzip -c backup.gz | sudo zfs receive tank/restore
```
### Incremental Replication
Initial full send:
```bash
sudo zfs send tank/documents@initial | ssh user@backup-server \
sudo zfs receive backup/documents
```
Subsequent incremental sends:
```bash
sudo zfs send -i @initial tank/documents@today | ssh user@backup-server \
sudo zfs receive backup/documents
```
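If the destination dataset has been mounted or written to since the last replication, an incremental receive will fail. Receiving with `-F` rolls the destination back to the matching snapshot first; use it with care, since it discards any changes made on the destination:
```bash
sudo zfs send -i @initial tank/documents@today | ssh user@backup-server \
  sudo zfs receive -F backup/documents
```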
## Monitoring ZFS
### Pool Status
```bash
sudo zpool status
```
### Pool I/O Statistics
```bash
sudo zpool iostat -v 2
```
### ARC Statistics
```bash
cat /proc/spl/kstat/zfs/arcstats
```
Or use `arc_summary`:
```bash
sudo arc_summary
```
### Dataset Usage
```bash
zfs list -o name,used,avail,refer,mountpoint
```
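To see how much of that usage is held by snapshots versus live data, the space columns break it down per dataset:
```bash
zfs list -o space -r tank
```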
### Check for Errors
```bash
sudo zpool status -v
```
## Maintenance
### Scrub Pool
Regular scrubs verify data integrity:
```bash
sudo zpool scrub tank
```
Check scrub progress:
```bash
sudo zpool status
```
Schedule monthly scrubs in root's crontab (`sudo crontab -e`):
```bash
# Run on the first Sunday of each month at 2 AM
0 2 1-7 * * [ "$(date +\%u)" = "7" ] && /sbin/zpool scrub tank
```
### Replace Failed Drive
```bash
sudo zpool replace tank /dev/sdb /dev/sdf
```
Monitor rebuild:
```bash
watch -n 5 zpool status
```
### Add Hot Spare
```bash
sudo zpool add tank spare /dev/sdg
```
## Troubleshooting
### Pool Won't Import
Force import:
```bash
sudo zpool import -f tank
```
### Degraded Pool
Check status:
```bash
sudo zpool status -v
```
Clear errors (if transient):
```bash
sudo zpool clear tank
```
### Out of Space
Check space usage:
```bash
zfs list -o name,used,avail,refer
zfs list -t snapshot
```
Delete old snapshots:
```bash
sudo zfs destroy tank/documents@old-snapshot
```
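ZFS can also destroy a whole range of snapshots in one command using the `%` range syntax (the snapshot names below are placeholders). Preview with `-n` before running for real:
```bash
# Dry run (-n) with verbose output (-v) to preview what would be destroyed
sudo zfs destroy -nv tank/documents@oldest-snap%newest-snap
# Then destroy the range for real
sudo zfs destroy -v tank/documents@oldest-snap%newest-snap
```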
### Performance Issues
Check ARC hit rate:
```bash
sudo arc_summary | grep "Hit Rate"
```
Check pool fragmentation:
```bash
sudo zpool list -o name,fragmentation
```
## Best Practices
### Do's
✅ Use ECC RAM for data integrity
✅ Regular scrubs (monthly)
✅ Enable compression (lz4)
✅ Take frequent snapshots
✅ Test your backups
✅ Monitor pool health
✅ Use whole disks, not partitions
### Don'ts
❌ Never exceed 80% pool capacity
❌ Don't use RAID-Z with different sized drives
❌ Avoid deduplication unless you have 5GB RAM per TB
❌ Don't plan on expanding RAID-Z vdevs (expansion is not supported in current stable releases)
❌ Never use consumer drives without testing
❌ Don't ignore scrub errors
## Advanced Features
### Encryption
Create encrypted dataset:
```bash
sudo zfs create -o encryption=aes-256-gcm \
-o keyformat=passphrase tank/secure
```
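Passphrase-protected datasets are not mounted automatically after a reboot; the key must be loaded first:
```bash
sudo zfs load-key tank/secure
sudo zfs mount tank/secure
```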
### Deduplication
⚠️ **Requires significant RAM**
```bash
sudo zfs set dedup=on tank/vms
```
### NFS Shares
```bash
# Requires an NFS server (e.g. nfs-kernel-server) on the host
sudo zfs set sharenfs=on tank/share
# Or restrict access to a subnet (this replaces the setting above)
sudo zfs set sharenfs="rw=@192.168.1.0/24" tank/share
```
### SMB Shares
```bash
sudo zfs set sharesmb=on tank/share
```
## Performance Benchmarks
Test sequential write. With compression enabled, `/dev/zero` compresses to almost nothing, so use random data (or a test dataset with `compression=off`) and flush to disk for meaningful numbers:
```bash
dd if=/dev/urandom of=/tank/testfile bs=1M count=10000 conv=fdatasync
```
Test sequential read (repeat runs largely measure the ARC cache rather than the disks):
```bash
dd if=/tank/testfile of=/dev/null bs=1M
```
Use `fio` for comprehensive testing:
```bash
sudo apt install fio
fio --name=seqwrite --rw=write --bs=1M --size=10G \
--filename=/tank/testfile
```
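For random I/O, which is closer to database and VM workloads, a 4K random-read job is a useful complement (job name and runtime are illustrative):
```bash
fio --name=randread --rw=randread --bs=4k --size=10G \
    --filename=/tank/testfile --runtime=60 --time_based
```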
## Conclusion
ZFS provides enterprise-grade storage features for your homelab. With proper configuration and maintenance, it delivers exceptional data integrity and performance.
Key takeaways:
- Use RAID-Z2 for important data
- Enable LZ4 compression
- Schedule regular scrubs
- Take frequent snapshots
- Monitor pool health
- Never exceed 80% capacity
## Resources
- [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/)
- [FreeBSD ZFS Handbook](https://docs.freebsd.org/en/books/handbook/zfs/)
- [ZFS on Linux GitHub](https://github.com/openzfs/zfs)