• 32 Posts
  • 1.13K Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • It probably can be packaged as a flatpak, but that would be more of a challenge than using the docker package. You could implement your use case today with the default docker compose setup and be up and running in minutes. Start it with -d and it will even start automatically on reboot. It won’t consume any more resources than a flatpak version would.

    Just try this in a directory somewhere: https://immich.app/docs/install/docker-compose/

    As for docker itself, if you’re on Ubuntu or Debian, you can use the version from the stock repos: the package is docker.io, and for compose you want docker-compose-v2.
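
    If it helps, the compose flow from the linked page boils down to something like this (a sketch only; take the exact file URLs and the .env settings from the docs page itself):

    mkdir immich-app && cd immich-app
    # download docker-compose.yml and example.env from the page linked above,
    # save example.env as .env, then set the upload location and DB password in it
    docker compose up -d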


  • Yes, it prevents bit rot. It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.

    The instructions seem correct but there’s some room for improvement.

    Instead of using logical device names like this:

    sudo zpool create zfspool raidz1 sda sdb sdc sdd sde -f
    

    You want to use hardware IDs like this:

    sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
    

    You can discover the mapping between your disks’ hardware IDs and their logical names like this:

    ls -la /dev/disk/by-id/*
    
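    The output is a set of symlinks from the stable IDs to the kernel device names, along these lines (illustrative values):

    lrwxrwxrwx 1 root root 9 Jul  5 10:00 ata-ST8000VN0022-2EL112_ZA2FERAP -> ../../sda
    lrwxrwxrwx 1 root root 9 Jul  5 10:00 wwn-0x5000cca27dc48885 -> ../../sdb
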

    Then you also want to add these options to the command:

    sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...
    

    These do useful things like setting the optimal block size, enabling compression (basically free performance), and applying a bunch of settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).
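
    For reference, roughly what each one does:

    # -o ashift=12            4 KiB sectors (2^12); right for most modern drives
    # -o autotrim=on          periodic TRIM, mainly relevant for SSDs
    # -O acltype=posixacl     POSIX ACLs, like ext4/XFS
    # -O compression=lz4      transparent LZ4 compression, basically free
    # -O dnodesize=auto       lets dnodes grow as needed, pairs well with xattr=sa
    # -O normalization=formD  Unicode-normalizes file names
    # -O relatime=on          relative atime updates, the usual Linux behaviour
    # -O xattr=sa             stores extended attributes in the dnode, much faster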

    Your final create command should look like:

    sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
    

    You can experiment until you settle on your final creation command, since creation and destruction are nearly instant. Don’t hesitate to create and destroy the pool multiple times until you get it right.
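
    For example (pool name as above; destroy wipes the pool, so only do this while experimenting):

    sudo zpool destroy zfspool
    sudo zpool create ...                   # re-run the create command with tweaked options
    zpool status zfspool                    # verify the layout and health
    zpool get ashift,autotrim zfspool
    zfs get compression,xattr,relatime zfspool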





  • Basically the equivalent of RAID 5 in terms of redundancy.

    You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant set of disks to the existing pool. E.g. you have a 5-disk RAIDz1, which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror, which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added set has adequate redundancy of its own, it can be seamlessly added to the existing pool, “magically” increasing the available storage space. No fuss.
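
    For example, adding a 2-disk mirror to an existing pool is a single command (pool name and device IDs here are placeholders):

    # note: mixing vdev types (a mirror next to a raidz1) makes zpool ask for -f
    sudo zpool add zfspool mirror /dev/disk/by-id/ata-DISK1_SERIAL /dev/disk/by-id/ata-DISK2_SERIAL
    zpool status zfspool   # the new mirror vdev shows up alongside the raidz1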





  • This is why the most foolproof solution I’ve found is to use a docker container that has the VPN and the torrent client built in. It’ll have the networking configuration done by someone who knows better. The most popular ones, like this, permit no internet access out of the container except to the VPN host. Then it doesn’t matter whether the torrent client binds to a specific interface or not, or what its configuration is. It’s trapped, or sandboxed, and the only way out is via the VPN tunnel.

    Once you have set up one of these, you can also reuse it from other containers with other apps, like your Usenet client, or even outside of containers via the built-in HTTP proxy.

    I know there’s also a qBit-based container, but I haven’t read into it or used it, so I can’t vouch for it. The Transmission-OpenVPN based one is rock solid. Have used it for many, many years.
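
    A rough sketch of how that kind of container is started, assuming the haugene/transmission-openvpn image (provider, credentials, network range and paths below are placeholders; check the image’s docs for your VPN provider):

    # NET_ADMIN lets the container manage its own VPN tunnel device;
    # LOCAL_NETWORK keeps the web UI on port 9091 reachable from your LAN.
    docker run -d --name transmission-vpn \
      --cap-add=NET_ADMIN \
      -e OPENVPN_PROVIDER=PIA \
      -e OPENVPN_USERNAME=user \
      -e OPENVPN_PASSWORD=pass \
      -e LOCAL_NETWORK=192.168.0.0/16 \
      -p 9091:9091 \
      -v /path/to/downloads:/data \
      haugene/transmission-openvpn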







  • Well, you gotta start it somehow. You could rely on compose’s built-in service management, which will restart containers on system reboot if they were started with -d and have the right restart policy. But you still have to start them at least once. How would you do that? Unless you plan to start it manually, you have to use some service startup mechanism, which leads us to a systemd unit: I have to write a systemd unit that does docker compose up -d.

    But then I’m splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd; I have to go find where the compose file is and issue docker compose down. Not great. Instead I’d write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that’s kinda what I’m doing already, isn’t it? Except that if I start it with docker compose up without -d, I don’t need a separate stop line and systemd can directly monitor the process (see the sketch below). As a result I get logs in journald too, and I can use systemd’s restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It’s way more powerful than compose’s restart policy.

    Finally, I like to clean up any data I haven’t explicitly intended to persist across service restarts, so that I don’t end up debugging an issue that manifests because of some persisted piece of data I’m completely unaware of.
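
    A minimal sketch of the kind of unit I mean (service name and paths are made up for the example; compose runs in the foreground so systemd owns the process and no separate stop line is needed):

    # /etc/systemd/system/myservice.service
    [Unit]
    Description=myservice (docker compose)
    Requires=docker.service
    After=docker.service network-online.target
    Wants=network-online.target

    [Service]
    WorkingDirectory=/opt/myservice
    ExecStart=/usr/bin/docker compose up
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target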