/home because you want to save the user files if you need to reinstall.
/var and /tmp because /var holds log files and /tmp temporary files that can easily eat all your disk space. So it’s best to let them fill up a separate partition instead, to prevent your whole system from locking up because the root disk is full.
/usr and /bin… this I don’t know
Not just log files, but any variable/dynamic data used by packages installed on the system: caches, databases (like /var/lib/mysql for MySQL), Docker volumes, etc.
Traditionally, /var and /home are parts of a Linux server that use the most disk space, which is why they used to almost always be separate partitions.
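If you want to see how this plays out on your own box, here’s a minimal sketch, assuming /home, /var, and /tmp are separate mount points (adjust the list to your actual layout):

```python
import shutil

# Each mount point reports its own capacity, so runaway logs or caches
# can only exhaust /var's partition, never the root filesystem.
# Assumes /home, /var, and /tmp are separate mounts; adjust as needed.
for mount in ("/", "/home", "/var", "/tmp"):
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue
    pct = usage.used / usage.total * 100
    print(f"{mount:6} {usage.free / 2**30:7.1f} GiB free ({pct:3.0f}% used)")
```

If any of those directories aren’t separate mounts, the numbers just repeat the containing filesystem’s, which is itself a quick way to spot a single-partition setup.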
Also /tmp is often a RAM disk (tmpfs mount) these days.
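Easy to verify on your own machine; a small Linux-only sketch that reads /proc/mounts:

```python
# Linux-only sketch: check whether /tmp is a RAM-backed tmpfs mount.
# Each /proc/mounts line is: device mountpoint fstype options ...
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if mountpoint == "/tmp":
            print(f"/tmp is mounted as {fstype}")  # "tmpfs" if RAM-backed
            break
    else:
        print("/tmp is not a separate mount (it lives on the root filesystem)")
```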
And in immutable distros, it’s one of the few writable areas.
True.
I would think putting /bin and /lib on the fastest thing possible would be nice 🤷
I do not think that matters so much. I guess it affects the speed at which you load your software into RAM, but once it is loaded, the difference in running the software should be pretty small. That’s unless you call a command thousands of times per second; in that case it may improve performance.

The fastest drive should generally be reserved for storing software input and output, as that’s generally where drive speed affects execution time the most. Meaning if your software does a blocking read, that read will be faster, and thus the software will proceed running quicker after reading.

Moreover, software input in general tends to be larger than the binaries; unless we’re talking about an Electron-based text editor.
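To make the blocking-read point concrete, here’s a toy sketch (input.dat is a hypothetical placeholder; point it at any large file). The time spent inside each read() comes straight out of the program’s runtime, so a faster drive under the input shortens it directly:

```python
import time

# Toy illustration of a blocking read loop: the program can't proceed
# until each read() returns, so the drive under the *input* file
# directly bounds runtime. "input.dat" is a placeholder path.
total = 0
start = time.perf_counter()
with open("input.dat", "rb") as f:
    while chunk := f.read(1 << 20):  # blocking 1 MiB reads
        total += len(chunk)          # stand-in for real processing
elapsed = time.perf_counter() - start
print(f"read {total / 2**20:.0f} MiB in {elapsed:.3f}s")
```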
Could you not just use subdirectories?
They are subdirectories?!
OK, technically, but why couldn’t we keep a stable, explicit hierarchy without breaking compatibility or relying on symlinks and assumptions?
In other words:
Why not /system/bin, /system/lib, /apps/bin?
Or why not keep /bin as a real directory forever?
Or why make /usr mandatory so early?
Because someone in the 1970s–80s (who is smarter than we are) decided that single-user mode files should be located in the root and multi-user files should be located in /usr. Somebody else (who is also smarter than we are) decided that it was a stupid-ass solution, because most of those files are identical and it’s easier to just symlink them to the multi-user directories (because nobody runs single-user systems anymore) than to make sure that every search path contains the correct versions of the files, while also preserving backwards compatibility with systems that expect to run in single-user mode.

Some distros, like Debian, also have separate executables for unprivileged sessions (/bin and /usr/bin) and privileged sessions (i.e. root: /sbin and /usr/sbin). Other distros, like Arch, symlink all of those directories to /usr/bin to preserve compatibility with programs that refer to executables using full paths.

But for most of us young whippersnappers, the most important reason is that it’s always been done like this, and changing it now would make a lot of developers and admins very unhappy, and lots of software very broken.
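You can check which camp your distro falls into with a quick sketch (the exact set of merged directories varies by distro):

```python
import os

# On usr-merged distros (e.g. Arch, Fedora, recent Debian releases),
# these top-level directories are symlinks into /usr; on older or
# unmerged systems they are real directories.
for path in ("/bin", "/sbin", "/lib"):
    if os.path.islink(path):
        print(f"{path} -> {os.path.realpath(path)}")
    else:
        print(f"{path} is a real directory")
```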
The only thing better than perfect is standardized.