Pretty sure everybody is missing the joke. The joke is that Debian packages are so stable and stale that you likely will need a reboot before an update.
Also, it’s a joke…please patch your boxes, k?
On my Gentoo server, uptime:
- 21:47:56 up 2455 days, 15:09, 1 user, load average: 0.00, 0.01, 0.00
Solid.
Would have been double that by now if not for the fire.
I got obsessed with uptime in the early 2000s, but for my desktop Slackware box. It ran a bunch of servers and services and crap but only for me, not heavy loads of public users. Anyway, I reached 6 years of uptime without a UPS and was aiming for 7 when a power outage got me.
Skill issue. Next time you can open up the computer’s power supply while it’s running, splice in a second power cable, and attach a UPS without powering down or getting electrocuted.
For legal reasons, /s
Not sure what your signature is supposed to do here but now I have 3rd degree burns and a fireball has engulfed my office wall
But more importantly, did your uptime get reset?
step 1: sudo apt install sl
step 2: fuck up
step 3: ???
step 4: profit!!!
Won’t work for me, I am more of a “;l” guy. I have the right direction, but the wrong starting point 90% of the time 🥲
A must have IMO
Also ‘fuck’
I have LXQt on Termux, running on my Android phone, and I am really happy with it (even though I use it for jack shit)
“Uptime” — aka the anxiety meter for every sysadmin.
Does NixOS apply kernel updates live? I can’t recall from when I used it.
Or if you have a UPS and a backup generator or a house battery (do these still need a UPS as well?), it will tell you how long it’s been since you set up the system.
I would suspect you would still want a UPS. I don’t think house “power” setups have the switchover speed even if they’re automatic. Most home generator setups are manual; not sure about battery setups.
My home generator is automatic, but you still need a UPS because the transfer switch and the power-on process for the generator aren’t instant. It takes like 10-30 seconds depending on how cold it is and how recently I serviced the generator.
You also ideally need a higher-quality UPS that can handle the shitty power coming from a generator, although the UPS doesn’t need to be as “hefty” overall as a result. Mine is the kind that has extra filtering and stabilization of incoming power. My old UPS was a cheaper CyberPower and it died after a few months of generator usage (we lose power here roughly every 4-6 weeks, hence the auto generator). The cheaper CyberPower would be fine in the majority of home circumstances otherwise, tbh.
Pretty sure batteries can be set up as a UPS. One of the companies I worked for had part of the building’s power on a UPS. They used orange outlets to mark those circuits, and they constantly had to keep telling people not to overload it.
A battery should switch over automatically, but yeah, not certain how quick the switchover is. At least you could go for the smallest-capacity UPS there is, as long as it can handle the wattage you are going with.
At some point when I am less busy again I think I am gonna swap back to a Debian-based system, because my experience on Arch and Red Hat systems just hasn’t been as good (this may be because I started on Debian-based systems and keep trying to use commands that don’t work on the other ones out of muscle memory).
I get bored every so often and move all the important stuff to an external drive or a separate internal one and completely change my os
I am on Manjaro, but I have also run Arch, Red Hat, Void, Mint, Debian, Ubuntu and a bunch of others that I either put on laptops or something similar while messing around with devices.
Tails and SliTaz have to be my favorites to run from a USB, but Peppermint isn’t the worst.
I just did the opposite and moved from Debian to Arch. After the upgrade to Trixie my network stack completely died somehow, so I’m going back to Arch.
I have had minimal issues with my Manjaro desktop, but I just don’t like it as much as my Mint-based systems because everything feels wrong. I can barely remember how to update my graphics drivers on Manjaro, versus Mint where I am confident I could run my entire system mostly from the command line, from installs to updates and random other shit that I just can’t remember how to do on Arch systems, because I don’t run them as hard for some reason.
Debian admin here. Even Debian gets regular kernel upgrades that’d like a reboot afterwards. Security updates are more important than uptime. Also, regular testing for clean recovery after a reboot is a must, so a power outrage doesn’t bring any new surprises with it. Also test your backup restores regularly.
power outrage
New fear unlocked.
The sun was angry that day, my friend…
What do you use for backup restores ?
The same tools.
Even Debian gets regular kernel upgrades that’d like a reboot afterwards
As someone running a UPS on my Ubuntu server, “uptime” represents the time since the last kernel release, and not much else.
Novice homelabber here: is this just a case of apt update & upgrade, or are there different commands for security and kernel updates? Also, what’s your preferred backup/restore software? Thanks!
Kernel updates are usually held back and need to be selected manually. E.g. apt-get install linux-image-amd64.
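Roughly, the flow looks like this (linux-image-amd64 assumes amd64; pick the metapackage for your architecture):
apt list --upgradable                    # shows what’s waiting, including held-back kernels
sudo apt-get install linux-image-amd64   # pulls in the newest kernel via the metapackage
sudo reboot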
I prefer rsync for private backups and employ bareos in my company for all servers.
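For rsync, a bare-bones mirror to an external disk is something like this (the paths are just placeholders):
rsync -aHAX --delete --info=progress2 /home/ /mnt/backup/home/   # -a keeps perms/times, --delete prunes removed files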
I think you can do apt upgrade --update now. Or the short form: apt upgrade -U
Incredible that it’s not written everywhere. I always wanted to use something like this instead of the “update && upgrade” combo, which often looks like it’s not working.
Is it really not written? I saw apt upgrade --update and knew the standard shortcut would be -u, but that didn’t work so I tried -U, bingo bongo off I went.
WHAT. Does this do both sudo apt update and sudo apt upgrade?
Yup
see also --autoremove
Your note about the difference between the commands, and how autoremove will remove stuff before or after the upgrade is performed, is very interesting. Should it always be done after, or are there instances when running it before is more beneficial? Is there any need to do both, like this:
sudo apt --update --autoremove upgrade -y && sudo apt autoremove -y
I can’t really imagine a benefit to --autoremove except for keeping old packages a bit longer before removing them. E.g. if you run apt --update --autoremove upgrade -y once a day, you’ll keep your prior-to-currently-running-version kernel packages a day longer than if you ran autoremove immediately after each upgrade.
To make things more confusing: the new-ish apt full-upgrade command seems to remove most of what apt autoremove wants to… but not quite everything. 🤷
I think so. I read it a few months back, but I don’t use any apt based systems to check on.
🤯
Nope, it’s just apt update & upgrade. IIRC apt tells you when the kernel was updated and needs a reboot as well.
Only if you installed the package needrestart
full-upgrade probably a better pick
Not if you use Proxmox! One has to be careful.
Also worth checking out restic. It’s more command line oriented and is generally stateless
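A minimal restic flow is roughly this (the repository path and password file are placeholders, not anything official):
export RESTIC_REPOSITORY=/mnt/backup/restic-repo      # where the snapshots live
export RESTIC_PASSWORD_FILE=~/.restic-pass            # encryption passphrase
restic init                                           # one-time repo setup
restic backup /home /etc                              # run from cron or a systemd timer
restic forget --keep-daily 7 --keep-weekly 4 --prune  # thin out old snapshots
restic snapshots                                      # sanity check that backups actually exist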
I configured restic once, forgot about it, and it saved my files because it had been making backups the whole time.
Oh, never heard of it. A quick bit of research showed me that restic is a very viable solution. Thanks for mentioning it, I added it to my comment.
While researching, I also came across a fancy WebUI, which is mostly what non-CLI users want: backrest
I am using restic and backrest on my YunoHost server and I really like it! It is really set-and-forget for me. Only the uploads to Backblaze B2 are still triggered manually. I also did a full recovery from the Backblaze repo (downloaded locally) without problems.
Thanks, I just installed Immich and I need a quality backup system.
I appreciate the link!
I’m not the person you asked the question of; I’m a fellow novice homelabber.
I use Kopia to back up my data folders and Docker container data. Works really well. The project for this weekend is to set up offsite backups to be uploaded to iDrive.
When I update I use this:
sudo apt update && \
sudo apt upgrade -y && \
sudo apt full-upgrade -y && \
flatpak update -y 2>/dev/null; \
sudo apt autoremove -y && \
sudo apt autoclean && \
sudo journalctl --vacuum-time=7d
You can get rid of upgrade if you also use full-upgrade.
Yeah, people that brag about uptimes are just bragging about the fragility of their infrastructure. If designed correctly you should be able to patch and reboot infrastructure while application availability stays up.
With an uptime of greater than 5 years I’m going to be concerned about the system potentially not coming back up after a reboot/power outage, especially for physical hardware
At a bank I worked at, we had an old IBM Power server which was at that point purely used for historical data. It had multiple years of uptime and was of course a good 10+ years old. When we went to take it offline, we actually just disabled the NIC on the switch so we could reduce the number of power cycles it would see, in fear that it would not power on anymore. Theoretically the data on it is purely historical, backed up and not needed, but there were enough question marks on each of those fronts that we just played it safe.
I haven’t had a kernel update on Debian that triggered the “you should restart” message in quite some time. I was under the impression that most newer systems now use splicing (live patching) at the kernel level so they don’t require periodic reboots.
Check for the existence of the flag file that packages which recommend a reboot leave behind. Debian does not do live patching like Ubuntu does, not least because firmware updates are usually not applied until reboot. And even if it did, regular checks for healthy reboots still make sense.
I haven’t seen it in a while either, but also, if there is a kernel update, uname -r always reports the old kernel until a reboot.
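So a quick way to spot the mismatch is to compare the running kernel with what’s on disk; the reboot-required flag file is an assumption here (it needs needrestart or similar on plain Debian):
uname -r                               # kernel you’re actually running
dpkg -l 'linux-image-*' | grep '^ii'   # kernel packages installed on disk
[ -f /run/reboot-required ] && cat /run/reboot-required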
This is why we have UPS ;-)
Seriously, one blackout and suddenly you see the need for a UPS. Now my desktop is on a UPS, my work laptop and monitors are on a UPS, my homelab is on a UPS, even my modem and router are on a UPS. I just wish I could get a backup generator, but that’s not happening anytime soon.
I sometimes have power outages in winter (snow storms, ice, etc.), and working from home I need a UPS; the cable modem, router, PC, and monitors are on it, and it can last ~5h.
I’ve had good luck with APC. Just be ready to pay a bit more upfront. But so far in the last 6 or 7 years, I’ve only had to replace one battery.
I bought a used APC Back-UPS Pro BR1500G for $100 on Marketplace, it was a good deal. I replaced the 2 batteries inside and added 4 outside (it is supported). I’m ready!
My experience with using a UPS is that they have caused an outage every few years, which is more often than we get power outages where I live, so I didn’t replace the batteries last time the UPS took down my server, and I’m just running straight from the wall. It might be better with a more expensive UPS, but it’s not worth it for me.
Yeah I read up on them before I bought, APC seemed like the best. I test them at least once a year and so far I’ve only had to replace one battery. Depending on the application I paid between $80 and $180 for each, but the higher upfront cost seems to have paid off for me at least. I am a sample size of 1, your results may vary.
A hail storm earlier this year and the power outage it caused created some bizarre issue with my home server that I have yet to diagnose. All of my containers and VMs got corrupted in some way, so I had to restore from backup, but my file server container has some sort of permissions issue on top of that.
Honestly, the brownout before the outage is almost definitely what did it, but the cost of a UPS that also protects against brownouts is well outside my usual hobby budget, so it’s hard to justify for e-waste hardware that I got a pallet of for less than what the UPS would cost used.
I got tired of my network puking every time the power went out for 5 seconds.
Edit- My NAS really dislikes having the power cut off.
Yep, the blackouts have stopped now, but for a while it was a daily occurrence. My NAS took a beating and so did my desktop. I spent a ton on UPSes to make sure that stuff was protected and, bonus, I wouldn’t lose connection while on phone calls with government officials at work… they get pissy when you suddenly drop off.
Can I ask, what is the advantage of a Debian server over a TrueNAS one? Asking because I set up TrueNAS and I’m wondering if I should switch it to Debian.
TrueNAS is NAS software that moonlights as a server. Debian is a Linux distro commonly used as the operating system for servers due to its incredible stability and reliability, among other things. So reliable, in fact, that it’s used as the operating system for TrueNAS SCALE! So unless you’re using the CORE version (which runs BSD), you’re already using it. As far as rawdogging Debian on your hardware goes, I’d recommend against it unless you’re looking to seriously up your admin game: no web interfaces, lots of time in the terminal (command line), and more configuration files than is in any way reasonable. And we haven’t even started on virtual machine platforms like Proxmox (also Debian based!) or container critters like Docker and Kubernetes. (IIRC TrueNAS uses Kubernetes under the hood.)

[Image alt-text: The “I lied, I don’t have netflix” meme template. The girl with heavy dark rings around her eyes points a gun at the observer, with various images inserted in the background. The images include references to Debian, libreboot, rsync, sed & awk, and cron. The text reads: “I lied, I don’t have netflix - Take off your shoes, we’re going to learn to set up a NAS with Debian customized and automated to the bone and also automate the deployment process with Kubernetes. Everything will have 3-2-1 backups and controls will be networked to the volume slider in the radio of your car. We will use the motherboard of your calculator because it’s supported by libreboot.”]
cool, then we can chill?
Small correction: since the newest version there is only TrueNAS SCALE, i.e. the Debian derivative, which they now call Community Edition. The BSD variant has been discontinued as far as I know.
You seem like the right person to ask this:
What route do I go if I want to up my admin game slowly, so I eventually feel able to run pure Debian? Currently running Docker on Unraid with two minor VMs, but looking to migrate away from Unraid with the intention to only run FOSS (and get a deeper understanding of everything under the hood).
I know that’s little information, all I need is a nudge in the right direction so I can figure things out by consulting documentation and forums.
Realistically, comfort comes from experience. The more you use it the more you’ll feel comfortable.
If you want to get a lot of exposure without dedicating too much time to it, and to limit the risk, I would say: spin up a Debian VM and try to configure it into the server you want the old-school way. Set up SSH keys, a RAID pool, and a Samba share, all via SSH (rough commands sketched below). Try to do it like you’re actually deploying it. This will give you real-world exposure to the command line and the commands you’d run. Next, maintain that server like it’s production: SSH in every couple of weeks to run updates and reboot. Just the muscle memory of logging in and reviewing updates will help you feel more comfortable. Do it again with another service (a VPN server would be an easy choice; a Minecraft server is also a fun one but requires a lot more memory; DNS would be good if you’re feeling brave, but that’s really just because DNS architecture is more complex than most realize) and maintain those servers too.
Once you’ve set up a couple of servers and spent a couple of months monitoring and updating them, your comfort level should be much higher and you might feel ready to set up some actual home production servers on Debian or the like.
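For the first pass, the bare bones of those steps look roughly like this (the user, host and package names are made up):
ssh-keygen -t ed25519           # on your desktop: make a key…
ssh-copy-id user@debian-vm      # …and push it to the new VM
ssh user@debian-vm              # the maintenance loop, every couple of weeks:
sudo apt update && sudo apt upgrade
sudo reboot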
You mentioned running TrueNAS and wanting to learn Debian and other FLOSS software; the easy-button answer is to run Proxmox. It’s free and open source, with paid enterprise support plans available, and it has been rapidly improving just in the handful of years I’ve been running it. Proxmox is really just a modified version of Debian. They have some tweaks and custom kernels over stock Debian, but impressively they actually have a supported method of installing on top of an existing Debian install, and apparently some Proxmox employees actually run it as their workstation operating system.
I’ve actually been working my way through the Proxmox-on-top-of-Debian guide recently, but after installing the proxmox-ve kernel and rebooting, I was left with SSH disabled (connection refused) and no local console (more precisely, no monitor output past “loading initial ramdisk”). I have so little time on my hands that troubleshooting sometimes takes the fun out of it. Probably just going to reinstall using the Proxmox ISO.
If you can afford it, it’s a good idea to buy a Raspberry Pi, since Raspbian is basically just Debian. Then replicate your current setup on it and just try to tinker with it without any risk of breaking things or losing data.
If you’re using a lot of Docker I would recommend learning the command line since you’ll be able to use Docker on basically any real OS at that point.
Welcome to home labbing… you poor fool!
Honestly, figuring out Docker is 50% of the journey, with the other 50% mostly being networking. For instance, if you’re looking to start your own Jack Sparrow themed streaming service, you’d want to grab a domain name, point it at your IP, open up ports 80 and 443 on your router, install a reverse proxy via Docker and set up SSL (hint: Caddy makes this easy), and point it at your Jellyfin Docker container, and voila, your very own streaming service you can access from anywhere! Notice that the complicated part of all this is mostly the networking and Docker setups, not so much the OS you’re running. (Note: don’t open ports without knowing the risks.)
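To make that concrete, the Caddy + Jellyfin part boils down to something like this (the domain and media path are made-up examples; Jellyfin’s web UI defaults to port 8096):
docker run -d --name jellyfin -p 8096:8096 -v /srv/media:/media jellyfin/jellyfin
sudo caddy reverse-proxy --from media.example.com --to localhost:8096   # grabs the TLS cert and proxies; 80/443 must reach this box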
Debian is a fine OS, but most homelab stuff can be done on anything you can install Docker on, even a Windows computer! That’s not to say you shouldn’t learn some Linux server stuff, but it isn’t wholly necessary. That being said…
My best advice for getting into Linux servers would be to grab an old PC, laptop, or even a Raspberry Pi and install Debian, Raspbian, or any other distro on it. Figure out how to log in via SSH and get the thing running headless (no keyboard or monitor), and just learn to navigate and do things via the terminal. Some of the basics would be learning to use the package manager to install software, mounting the filesystem remotely, and figuring out how to set up static IPs and such. When you’re ready, go ahead and install Docker, follow some tutorials, learn some YAML, and you’re off to the races!
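Concretely, the day-one basics look something like this (the hostname and paths are guesses, yours will differ):
ssh pi@raspberrypi.local                    # log in to the headless box
sudo apt update && sudo apt install htop    # package manager basics
# back on your desktop: mount the box's filesystem over ssh
sudo apt install sshfs
mkdir -p ~/pi && sshfs pi@raspberrypi.local:/home/pi ~/pi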
Configurability? I mean, TrueNAS SCALE is also based on Debian, but it’s appliance software; if you want a NAS, it’s purpose-made for that. You need to configure Debian yourself if you want a functioning NAS.
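For example, a basic Samba share on plain Debian is roughly this (the share name, path and user are placeholders):
sudo apt install samba
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[media]
   path = /srv/media
   read only = no
   valid users = alice
EOF
sudo smbpasswd -a alice        # give the user a Samba password
sudo systemctl restart smbd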
I still remember when TrueNAS didn’t have native Tailscale apps/Docker yet, and every time there was a TrueNAS update I needed to reinstall and set up Tailscale from scratch.
If you just need a NAS with basic apps/Docker, there is no reason not to just use TrueNAS.
I use both, but run a Technitium DNS and Frigate on bare Debian.
Debian is well known for maintaining established packages in its repos. This means that all of the software is thoroughly tested, and therefore (usually) stable; however, the software in question is generally older, so it also means that sometimes you’ll have to find your own approach if you want to run any newer services.
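If you do need something newer than stable ships, backports is usually the first stop; a sketch assuming Debian 12 “bookworm” (swap in your release and the package you actually want):
echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports some-newer-package   # some-newer-package is a stand-in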
At least in my experience the chances that I move or replace hardware are much higher than the chances for a power outage.
For me, I won’t be replacing any video cards or RAM sticks for the foreseeable future.
Same… but I also remember a single outage in the last 15 or so years.
Heard of tuptime? I’ve been using it for a while now, I think I like it.
System startups: 151 since 18:00:05 10/11/15
System shutdowns: 137 ok + 13 bad
System life: 9yr 223d 1h 27m 47s
Longest uptime: 106d 5h 34m 28s from 14:17:10 26/03/22
Average uptime: 23d 4h 32m 0s
System uptime: 99.81% = 9yr 216d 12h 31m 51s
Longest downtime: 4d 23h 30m 48s from 10:36:53 14/09/23
Average downtime: 1h 2m 46s
System downtime: 0.19% = 6d 12h 55m 56s
Current uptime: 25d 0h 34m 25s since 20:25:37 15/11/25
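If you want to try it, it should just be this (tuptime is packaged in Debian and Ubuntu as far as I know):
sudo apt install tuptime
tuptime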
Heard of it for the first time (as far as I can remember) a couple days ago, on Lemmy.
TIL, Lemmy’s educational.
Does it work retrospectively?
Nope, it creates a little database, which you could manually edit I suppose.
it doesn’t appear to
My father was an HP-UX admin who had a server with an uptime of >12 years.
I was introduced to homelabbing by trying to figure out my uncle’s setup. It ran for 4 years after he died, 11 years of uptime. The estate probate prevented anyone from touching the equipment during the legal fights, and I get a kick out of thinking of how smug he would have been about it.