Updated ☝️ 👇
It probably can be packaged as a Flatpak, but that would be more of a challenge than using the Docker package. You could implement your use case today with the default docker compose setup and be up and running in minutes. Start it with -d and it will even start automatically on reboot. It won’t consume any more resources than a Flatpak version would.
Just try this in a directory somewhere: https://immich.app/docs/install/docker-compose/
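If memory serves, the gist of that page is just a few commands (check the page for the current file URLs and the .env values you need to edit):

mkdir immich-app && cd immich-app
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
docker compose up -d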
As for docker itself, if you’re on Ubuntu or Debian, you can use the docker version from the stock repos. The package is docker.io, and for compose you want docker-compose-v2.
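In other words, something like:

sudo apt install docker.io docker-compose-v2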
Yes, it prevents bit rot. It’s why I switched to it from the standard mdraid/LVM/Ext4 setup I used before.
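To be precise, the protection comes from checksums that are verified on every read and during scrubs; a periodic scrub plus a status check is what surfaces (and, with redundancy, repairs) silent corruption. Using the pool name from the commands below:

sudo zpool scrub zfspool
zpool status zfspool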
The instructions seem correct but there’s some room for improvement.
Instead of using logical device names like this:
sudo zpool create -f zfspool raidz1 sda sdb sdc sdd sde
You want to use hardware IDs like this:
sudo zpool create zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
You can discover the mapping of your disks to their logical names like this:
ls -la /dev/disk/by-id/*
Then you also want to add these options to the command:
sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool ...
These do useful things: ashift=12 sets the optimal block size for modern 4K-sector disks, lz4 compression is basically free performance, and the rest are settings that make ZFS behave like a typical Linux filesystem (its defaults come from Solaris).
Your final create command should look like:
sudo zpool create -o ashift=12 -o autotrim=on -O acltype=posixacl -O compression=lz4 -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa zfspool raidz1 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA2FERAP /dev/disk/by-id/wwn-0x5000cca27dc48885 ...
You can experiment until you settle on your final create command, since pool creation and destruction are nearly instant. Don’t hesitate to create and destroy the pool multiple times until you get it right.
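For reference, tearing a test pool down for another attempt is a single command (it irreversibly destroys the pool, so only run it on one you’ve just created):

sudo zpool destroy zfspool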
Yes exactly.
Yes. It’s what I do. In fact, some of the servers I’m running use my own VPN, which allows me to connect to them securely.
They can’t. Competition for capital is forcing them to extract the everliving profit out of people. Their competitors would not be far behind on this train if it increases profitability.
Basically the equivalent of RAID 5 in terms of redundancy.
You don’t even need to do RAIDz expansion, although that feature could save some space. You can just add another redundant set of disks to the existing one. E.g. have a 5-disk RAIDz1 which gives you the space of 4 disks. Then maybe slap on a 2-disk mirror which gives you the space of 1 additional disk. Or another RAIDz1 with however many disks you like. Or a RAIDz2, etc. As long as the newly added space has adequate redundancy of its own, it can be seamlessly added to the existing one, “magically” increasing the available storage space. No fuss.
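As a sketch, adding a 2-disk mirror to the pool from above would look something like this (the device IDs are hypothetical placeholders):

sudo zpool add zfspool mirror /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 /dev/disk/by-id/ata-EXAMPLE_SERIAL_2

The extra space shows up in the pool immediately.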
This. Also it’s not difficult to expand at all. There are multiple ways. Just ask here. You could also ask for hypothetical scenarios now if you like.
I outlined some differences that could make it worth it over plain interface binding for some. Another is that it makes it impossible for another application to accidentally exit through the tunnel and leak your identity, like a browser logged into gmail.com. You have to explicitly set the container as the proxy in the browser for that to even become possible. It also allows using a separate VPN connection, provider, or region for the torrent client, while the desktop user is free to use a different VPN connection or none at all.
This is why the most foolproof solution I’ve found is to use a docker container that has the VPN and torrent client built in. It’ll have the networking configuration done by someone who knows better. The most popular ones, like this, permit no internet access out of the container except through the VPN host. Then it doesn’t matter whether the torrent client binds to a specific interface or not, or what its configuration is. It’s trapped, or sandboxed, and the only way out is via the VPN tunnel. Once you have set up one of these, you can also reuse it from other containers with other apps, like your Usenet client, or even outside of containers via the built-in HTTP proxy. I know there’s also a qBit-based container but I haven’t read into it or used it so I can’t vouch for it. The Transmission-OpenVPN-based one is rock solid. Have used it for many, many years.
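From memory, a minimal compose service for the Transmission-OpenVPN container looks roughly like this (provider, credentials, and paths are placeholders; check the image’s README for the exact variable names and values):

services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN                      # needed to manage the tunnel
    environment:
      - OPENVPN_PROVIDER=PIA           # placeholder; set to your provider
      - OPENVPN_USERNAME=user          # placeholder
      - OPENVPN_PASSWORD=pass          # placeholder
      - LOCAL_NETWORK=192.168.0.0/16   # so the web UI is reachable from your LAN
    ports:
      - 9091:9091                      # Transmission web UI
    volumes:
      - ./data:/data
    restart: unless-stopped

The HTTP proxy I mentioned is also in there; it’s disabled by default and switched on via an environment variable documented in the README.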
Wow, that was some supreme both-sides shit.
If the cost of panels drops significantly, there would be more capital available to spend on inverters, even if they stay at the current prices, still decreasing the cost of deployment. But yes. 😄
Whatever the repo is set up with.
Nice. So this model is perfectly usable on lower-end x86 machines.
I discovered that the Android app shows results a bit slower than the web app. For the majority of the wait, the request hasn’t even reached Immich; I’m not sure why. When searching from the web app, the request is received by Immich immediately.
All-in, I wanted something on the order of 1MB for client app, server, all dependencies, everything.
Okay that’s gotta be radically different!
Well, you gotta start it somehow. You could rely on compose’s built-in service management, which will restart containers upon system reboot if they were started with -d and have the right restart policy. But you still have to start those at least once. How would you do that? Unless you plan to start it manually, you have to use some service startup mechanism. That leads us to a systemd unit: I have to write a systemd unit to do docker compose up -d. But then I’m splitting the service lifecycle management across two systems. If I want to stop the service, I can no longer do that via systemd. I have to go find where the compose file is and issue docker compose down. Not great. Instead I’d write a stop line in my systemd unit so I can start/stop from a single place. But wait 🫷 that’s kinda what I’m doing already, isn’t it? Except if I start it with docker compose up without -d, I don’t need a separate stop line and systemd can directly monitor the process. As a result I get logs in journald too, and I can use systemd’s restart policies. Having the service managed by systemd also means I can use systemd dependencies such as fs mounts, network availability, you name it. It’s way more powerful than compose’s restart policy. Finally, I like to clean up any data I haven’t explicitly intended to persist across service restarts, so that I don’t end up debugging an issue that manifests because of some persisted piece of data I’m completely unaware of.
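Concretely, the kind of unit I’m describing looks something like this (the name and paths are examples):

[Unit]
Description=Immich via docker compose
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/opt/immich
ExecStart=/usr/bin/docker compose up
ExecStop=/usr/bin/docker compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target

Because ExecStart stays in the foreground, systemd supervises the process directly, the container logs land in journald, and start/stop/status all work from one place.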
Let me know how the search performs once it’s done. Speed of search, subjective quality, etc.
Why start anew instead of forking or contributing to Jellyfin?
When many who still aren’t in cities move into them. Urbanization in China has yet more runway and that drives rail utilization.