How-to

Tune compression, dedup, and chunking.

Set compression and chunk sizes that fit your datasets so backups finish faster without wasting CPU.

Steps

  1. Pick a compression level. Start with the zstd default (level 3); drop to a lower level (e.g., 1) when CPU is scarce, or raise it for better reduction when CPU is free. A benchmarking sketch follows this list.
  2. Set chunk size per dataset type. Smaller chunks (1–2 MiB) improve dedupe for VMs with small changes; larger (4–8 MiB) reduce CPU overhead for big sequential datasets.
  3. Test on a representative VM/CT. Run a backup and record throughput, CPU load, and the dedupe/compression ratios.
  4. Adjust concurrency. Limit concurrent backups to avoid CPU saturation, and match the number of parallel streams to what your storage and network can sustain.
  5. Re-test after tuning. Compare throughput and ratios; keep configs that meet your backup window.
  6. Document defaults. Standardize chosen chunk size and compression across jobs/datastores.
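
To compare zstd levels on your own data before settling on one (steps 1 and 3), here is a minimal, single-core sketch. It assumes the third-party zstandard Python package (pip install zstandard) and a representative sample file you choose; it is not part of any Proxmox tooling, and real backup throughput also depends on chunking, I/O, and parallelism, so read the numbers as relative guidance.

    # Rough comparison of zstd levels on a sample of your own backup data.
    # Requires the third-party "zstandard" package; standalone sketch only.
    import sys
    import time

    import zstandard as zstd

    def benchmark(path, levels=(1, 3, 9)):
        with open(path, "rb") as f:
            data = f.read()  # use a sample small enough to fit in RAM
        original = len(data)
        for level in levels:
            compressor = zstd.ZstdCompressor(level=level)
            start = time.perf_counter()
            compressed = compressor.compress(data)
            elapsed = time.perf_counter() - start
            ratio = original / len(compressed)
            speed = original / elapsed / (1024 * 1024)  # MiB/s on one core
            print(f"level {level}: ratio {ratio:.2f}x, {speed:.0f} MiB/s")

    if __name__ == "__main__":
        benchmark(sys.argv[1])  # e.g. a tar of a CT or a slice of a disk image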

Prereqs

  • Representative VM/CT to test
  • Backup window target (a quick fit check follows this list)
  • CPU and I/O headroom known
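
To turn the backup window target into a concrete throughput check, a few lines of arithmetic are enough. The numbers below are placeholders; note that after an initial full run, incremental backups typically move far less data, so the full run is usually the worst case.

    # Back-of-the-envelope check: does a full run fit the backup window at a
    # given sustained throughput? All numbers are placeholders.
    def fits_window(data_gib, throughput_mib_s, window_hours):
        hours_needed = data_gib * 1024 / throughput_mib_s / 3600
        print(f"estimated duration: {hours_needed:.1f} h "
              f"(window: {window_hours:.1f} h)")
        return hours_needed <= window_hours

    if __name__ == "__main__":
        # Example: 2 TiB of source data, 180 MiB/s sustained, 8-hour window.
        fits_window(data_gib=2048, throughput_mib_s=180, window_hours=8)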

Quick checks

  • Throughput meets or beats prior runs.
  • CPU utilization stays within safe limits.
  • Dedupe/compression ratios align with expectations (a chunk-size comparison sketch follows this list).
  • Backup finishes within the scheduled window.
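
To see how chunk size affects dedupe on your own data, and to sanity-check the ratio expectation above, you can chunk two snapshots of the same disk image at a few candidate sizes and count unique chunk hashes. The sketch below uses Python's standard library with fixed-size chunks and SHA-256; it illustrates the idea but is not the chunker any particular backup tool uses, and the file paths are placeholders.

    # Estimate dedupe between two snapshots of the same image at several
    # fixed chunk sizes. Stdlib only; an illustration, not a real backup chunker.
    import hashlib

    def count_chunks(paths, chunk_size):
        seen, total = set(), 0
        for path in paths:
            with open(path, "rb") as f:
                while chunk := f.read(chunk_size):
                    total += 1
                    seen.add(hashlib.sha256(chunk).digest())
        return total, len(seen)

    if __name__ == "__main__":
        snapshots = ["vm-disk-monday.raw", "vm-disk-tuesday.raw"]  # placeholder paths
        for mib in (1, 2, 4, 8):
            total, unique = count_chunks(snapshots, mib * 1024 * 1024)
            dedupe = total / unique if unique else 0.0
            print(f"{mib} MiB chunks: {total} total, {unique} unique, "
                  f"dedupe {dedupe:.2f}x")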

If something fails

  • Lower compression level to free CPU.
  • Increase chunk size for sequential workloads.
  • Reduce concurrent jobs to relieve contention (a small scheduling sketch follows this list).
  • Check network/storage metrics for bottlenecks.
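
If concurrency is the problem, capping how many jobs run at once usually relieves CPU and I/O pressure. The sketch below bounds parallelism with a standard-library thread pool; run_backup and the job names are hypothetical placeholders for whatever command your scheduler actually invokes, not a Proxmox API.

    # Cap how many backup jobs run at once so they don't saturate CPU or I/O.
    # run_backup() is a hypothetical placeholder for your real job command.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    MAX_CONCURRENT = 2  # tune to the CPU and I/O headroom you measured

    def run_backup(job):
        # Placeholder command; replace with the one your jobs really use.
        return subprocess.run(["echo", f"backing up {job}"], check=True).returncode

    if __name__ == "__main__":
        jobs = ["vm-101", "vm-102", "ct-201", "ct-202"]  # placeholder job names
        with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
            for job, code in zip(jobs, pool.map(run_backup, jobs)):
                print(f"{job}: exit {code}")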

Official docs

See Proxmox guidance on compression and chunking: PBS compression.

Hosted PBS at $7.95/TB.

No storage limits—$7.95/TB with compute and RAM included. We run the infrastructure; you keep control.