I recently decided to rebuild my homelab after a nasty double hard drive failure (no important files were lost, thanks to ddrescue). The new setup uses one SSD as the PVE root drive, and two Ironwolf HDDs in a RAID 1 MD array (which I’ll probably expand to RAID 5 in the near future).

Previously the storage array had a simple ext4 filesystem mounted to /mnt/storage, which was then bind-mounted into LXC containers running my services. It worked well enough, but figuring out permissions between the host, the container, and potentially nested containers was a challenge. Now I'm starting with brand-new hard drives and I want to get the first steps right.

The host is an old PC living a new life: i3-4160 with 8 GB DDR3 non-ECC memory.

  • Option 1 would be to do what I did before: format the array as an ext4 volume, mount it on the host, and bind-mount it into the containers. I don’t use VMs much because the system is memory constrained, but if I did, I’d probably have to use NFS or something similar to give the VMs access to the disk.

  • Option 2 is to create an LVM volume group on the RAID array, then use Proxmox to manage LVs. This would be my preferred option from an administration perspective since privileges would become a non-issue and I could mount the LVs directly to VMs, but I have some concerns:

    • If the host were to break irrecoverably, is it possible to open LVs created by Proxmox on a different system? If I need to back up some LVM config files to make that happen, which files are those? I’ve tried following several guides to mount the LVs, but have never been successful.
    • I’m planning to put things on the server that will grow over time, like game installers, media files, and Git LFS storage. Is it better to use thinpools or should I just allocate some appropriately huge LVs to those services?
  • Option 3 is to forget mdadm and use Proxmox’s built-in ZFS support to set up redundancy. My main concern here, on top of everything in option 2, is that ZFS needs a lot of memory for caching. Right now I can dedicate 4 GB to it, which is less than the recommendation – is it responsible to run a ZFS pool with that?
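For comparison, option 1 is only a few commands. A rough sketch, assuming the array comes up as /dev/md0 and the container ID is 101 (both hypothetical):

```shell
# Format the md array and mount it on the host (example device name /dev/md0).
mkfs.ext4 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Persist the mount across reboots.
echo '/dev/md0 /mnt/storage ext4 defaults 0 2' >> /etc/fstab

# Bind-mount the host directory into an LXC container using Proxmox's pct tool.
pct set 101 -mp0 /mnt/storage,mp=/mnt/storage
```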
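On the portability question in option 2: LVM keeps its metadata on the physical volumes themselves, so a replacement host can usually discover and activate the volumes without any files from the old system (the text copies under /etc/lvm/backup are a convenience, not a requirement). A sketch of the recovery, assuming the array reassembles as /dev/md0 and the VG is named pve (names are examples):

```shell
mdadm --assemble --scan     # reassemble the underlying RAID array from its superblocks
vgscan                      # scan block devices for LVM metadata
vgchange -ay                # activate every volume group that was found
lvs                         # list the logical volumes, e.g. pve/vm-100-disk-0

# If the LV held a container or plain filesystem, it mounts directly:
mount /dev/pve/vm-100-disk-0 /mnt/recovery

# A VM disk LV often contains a partition table rather than a bare filesystem,
# which is a common reason direct mounts fail; expose the partitions first:
kpartx -av /dev/pve/vm-100-disk-0
```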
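On the thin-vs-thick question in option 2, a thin pool lets the growing datasets share one chunk of free space instead of each getting a huge preallocated LV. A minimal sketch with hypothetical names (VG storage, pool tpool, volume media):

```shell
# Carve a thin pool out of the volume group, then a thin volume inside it.
lvcreate --type thin-pool -L 3.5T -n tpool storage
lvcreate -V 500G --thinpool storage/tpool -n media
mkfs.ext4 /dev/storage/media

# Thin volumes can be grown later, online:
lvextend -L +200G /dev/storage/media
resize2fs /dev/storage/media
```

The usual caveat is that thin pools can be overcommitted, so the pool's actual usage needs monitoring (`lvs` shows Data%).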
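If it helps to see what option 3 involves, a mirror pool plus an ARC cap is only a couple of steps. Device paths and the pool name are placeholders (stable /dev/disk/by-id paths are generally preferred over /dev/sdX):

```shell
# Create a ZFS mirror from the two drives (example device names).
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Cap the ARC at 4 GiB so ZFS doesn't compete with containers for memory.
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # apply on next boot (Debian/Proxmox)
```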

My primary objective is data resilience above all. Obviously nothing can replace a good backup solution, but that’s not something I can afford at the moment. I want to be able to reassemble and mount the array on a different system if the server falls to pieces. Option 1 seems the most conducive to that (I’ve had to do it once), but if LVM on RAID or ZFS can offer the same resilience without any major drawbacks (like difficulty mounting LVs, or other issues I might encounter), I’d consider them. I’d like to know what others use or recommend.
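For reference, reassembling a plain md array on a rescue system (the scenario described above) usually comes down to this; device names are examples:

```shell
# The RAID superblocks live on the member disks, so no files from the old
# host are strictly needed; a saved /etc/mdadm/mdadm.conf only makes the
# array's name predictable.
mdadm --assemble --scan      # auto-detect arrays from on-disk superblocks
cat /proc/mdstat             # confirm it came up, often as /dev/md127
mount /dev/md127 /mnt/recovery
```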

    • rtxn@lemmy.world (OP) · 23 hours ago

      ZFS uses RAM intensively for caching operations – far more than traditional filesystems. The commonly cited recommendation for the cache is 2 GB plus 1 GB per terabyte of capacity. For my server, that would mean dedicating three quarters of the RAM entirely to the filesystem.
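      As a quick sanity check, that rule of thumb pencils out like this (4 TB is an assumed pool size for illustration):

      ```shell
      # 2 GB base + 1 GB per TB of pool capacity (pool_tb is an example figure).
      pool_tb=4
      arc_gb=$((2 + pool_tb))
      echo "suggested ARC size: ${arc_gb} GB"   # prints: suggested ARC size: 6 GB
      ```

      6 GB out of 8 GB total is where the "three quarters" above comes from.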