

Proton doing another shady thing?
Colour me surprised!
Mailbox, formerly mailbox.org.
Tuta, which is often recommended, is sadly another vendor lock-in, while Mailbox uses industry standards.
I use an SXT, as I got it cheap, but the wAP LTE kit, the LtAP mini or the hAP ax lite should do as well - software-wise they are all the same anyway. (Just watch out for hardware without an LTE modem card, and be aware of the difference between LTE-M and LTE, as in the KNOT.)
Sometimes you find decent older ones on eBay as well.
I use a cheap Mikrotik LTE router as a second route. It has the smallest data plan my provider offers - but it's enough for maintenance, and if I need more because the main line is faulty, it's the same provider's fault and they pay the bill anyway.
It mainly goes into the OPNsense as a second gateway, but it also allows me to VPN in and reboot the OPNsense if needed.
If the OPNsense were totally fucked, in theory I could run the network directly over the LTE router, but that would be nasty.
A friend of mine actually has a pretty nifty solution, but he is an absolute pro at these things. He has a small device (don't ask me which SBC exactly) ping and check (I think DNS and an HTTP check are included as well) various stages of his network, including his core switch, firewall and DSL modem. If one of them freezes, the device sends a data packet via LoRaWAN. He can then send a downlink command to reboot the devices.
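The idea is simple enough to sketch. Roughly, something like this (a hypothetical Python sketch - device addresses, ports and the LoRaWAN transport are all made-up placeholders, since I don't know his exact setup):

```python
import socket
import urllib.request

def tcp_check(host, port, timeout=3):
    """Return True if a TCP connect succeeds (stand-in for a ping check)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(url, timeout=3):
    """Return True if the device answers an HTTP request at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

def failed_stages(stages):
    """Return the names of all stages whose probe failed."""
    return [name for name, probe in stages.items() if not probe()]

def report(failures, send_uplink):
    """Send one LoRaWAN uplink naming the frozen stages (transport injected)."""
    if failures:
        send_uplink(("DOWN:" + ",".join(failures)).encode())

# Hypothetical stages; all addresses and ports are placeholders.
STAGES = {
    "core-switch": lambda: tcp_check("192.168.1.2", 22),
    "firewall":    lambda: tcp_check("192.168.1.1", 443),
    "dsl-modem":   lambda: http_check("http://192.168.100.1/"),
}
```

The actual uplink and downlink would go through whatever LoRaWAN stack the SBC uses; the point is just that the checks and the alert path are independent of the network being monitored.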
Proton claiming shit that they don't actually do or can't actually do?
Consider me shocked!
I have central (water-circuit-based) heating with individual control per room. Additionally I have a weather station on my roof that tracks the sun, wind, temperature, etc., plus presence detectors in almost all rooms and electric blinds. The components are all KNX-based; the logic part is Home Assistant.
Basically what we do: I have a "normal mode" that is supported by two add-on modules. Normal mode means:
On school days the system tracks when school starts. If no one is present in a kid's room for more than 30 min, it assumes the kid is gone and goes into energy-saving mode for that room (18°C instead of 21°C). The system then looks at when the kid is likely to come back and brings the room temperature back up in time.
Our offices are always at the energy-saving temperature and only go to normal temperature once someone has been there for 15 min or one of our computers is switched on - both my wife and I work from home full time, but travel a fair bit.
The system tracks whether our mobile phones are "pingable" locally. If they aren't for 30 min, it assumes we are all gone and puts the whole house into "away" mode, including reducing the temperatures. Then it looks at our Outlook calendars (and the school schedule) and brings the temperature back up as required.
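In pseudo-ish Python, the away/reheat logic amounts to something like this (a sketch only - the 30 min window and the 18/21 setpoints are from above, while the preheat lead time is an assumption):

```python
from datetime import datetime, timedelta

AWAY_AFTER = timedelta(minutes=30)     # no phone pingable for this long => away
ECO_TEMP, NORMAL_TEMP = 18.0, 21.0     # setpoints mentioned for the rooms

def house_mode(last_phone_seen, now):
    """'away' once no phone has answered a local ping for 30 minutes."""
    return "away" if now - last_phone_seen >= AWAY_AFTER else "home"

def target_temp(mode, next_return, now, preheat=timedelta(hours=1)):
    """Heat back up ahead of the next calendar/school return time.

    The one-hour preheat lead time is an assumption; the real system
    presumably models how long each room takes to warm up.
    """
    if mode == "home":
        return NORMAL_TEMP
    if next_return is not None and now >= next_return - preheat:
        return NORMAL_TEMP
    return ECO_TEMP
```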
Additionally, a room that has a window open is always cut off from heating, and the system sends a message when the outside temperature is too hot or too cold after the window has been open for a certain time.
Additionally we have two prediction-based modules. The system looks at three different weather forecasts (my area is a bit of a problem for these) and builds a mean expected minimum and maximum day temperature.
If the expected max and min are below a certain point, it switches on "winter mode" - the system tries to keep the shutters up as much as possible and opens them as early as possible (based on the sun's position) so the house absorbs as much sun as possible. Doesn't help that much, but at least a bit. Additionally, the time before "open window" notifications is reduced.
If the expected max is above a certain degree, the system goes into "summer mode". Then it's basically vice versa: the system tries to keep the blinds/shutters down as much as possible according to the position of the sun and opens them only after the sun has passed. That works fairly well and reduces the room temperature significantly - around 3.8°C on average in the worst room. It also reminds the inhabitants to open windows in the morning while it's still cool and to close them in time.
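The mode selection itself boils down to comparing the forecast means against thresholds - a rough sketch (the threshold values are invented; I'm not stating the real ones):

```python
def season_mode(forecasts, winter_max=8.0, winter_min=2.0, summer_max=26.0):
    """Pick a house mode from several day forecasts.

    `forecasts` is a list of (expected_min, expected_max) pairs, one per
    weather provider; the thresholds are made-up example values.
    """
    mean_min = sum(lo for lo, hi in forecasts) / len(forecasts)
    mean_max = sum(hi for lo, hi in forecasts) / len(forecasts)
    if mean_max >= summer_max:
        return "summer"   # blinds down while the sun hits the window
    if mean_max <= winter_max and mean_min <= winter_min:
        return "winter"   # shutters up early to absorb as much sun as possible
    return "normal"
```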
Syncthing and Nextcloud are not a good backup solution. Like, ever. Arguably they aren't a backup solution at all - they can even cause data loss.
You sadly didn't tell us much about what you are actually trying to back up and what your infrastructure looks like.
If I understand you correctly, you want to centralise the files that are currently hosted on a diverse set of devices into a central file storage on your server and back up from there. Right? That's a fair goal and something I absolutely do myself - and both Nextcloud and Syncthing will help you make the files accessible to your devices.
Now, back to the backup part.
You basically want three things from backups: they need to be reliable (it doesn't help if you can't access your files anymore because they are corrupted), you want them to be as unaffected by any potential risks as possible, and, let's face it, you probably want them cheap. The second part basically dictates that for an online backup you want something that can do versioning, so corrupted data (e.g. from ransomware) is not simply written over.
My current approach is: I have an internal backup server (see below), an external backup in the cloud, and a cold storage backup in a bank safe. Sounds like a lot? We will see.
Let's look at cloud storage first. There is a multitude of solutions available for free, with Duplicati, UrBackup or goMFT being some fairly popular ones - I personally use Duplicati. These periodically scan the folders for changes, encrypt the files and send them to a cloud provider of your choice (e.g. an S3 bucket), and to some extent they can also do the versioning. (Although it's safer to regulate that via a bucket policy, as otherwise the application needs delete rights - which means it could in theory delete all the data when compromised.) The main benefit is ease of access - you need to restore a single file? Done fast and easily. Not so much for a whole setup - restoring everything can get quite expensive.
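For illustration, this is the kind of S3 lifecycle rule that enforces version retention on the bucket side instead of trusting the backup tool (the 90-day retention is an example value; you would apply it via your provider's console or CLI, and versioning itself must also be enabled on the bucket):

```python
import json

# Keep old object versions for 90 days after they are overwritten or
# deleted, so ransomware-mangled uploads can't silently destroy history.
lifecycle = {
    "Rules": [
        {
            "ID": "keep-old-backup-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```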
If you use ZFS there is also the option to use ZFS send for backups, but as there is currently no reliable European Union ZFS send provider I am aware of (rsync.net does this, but is US-based), I legally cannot use them. So no experience with that.
To back up clients completely, and for VMs/LXCs, it might also make sense to use a dedicated backup server, e.g. the Proxmox Backup Server. These do require local (as in "where the PBS is running") storage, though, so a local PBS with cloud storage behind it doesn't work. (There is a hosted PBS service available from Tuxis, though. It works really well.) But it can make sense to let a ZimaBlade run a few old hard drives for a few hours a day for that.
For an offsite and offline backup - as a full restore from the cloud is always expensive and time-consuming - I also use two USB hard drives. One is always stored in a locker in a bank vault, and every few months I swap the drives - so in case of a full server loss I would only need to restore the state of an (at most) 4-month-old server via USB and then update things from the cloud for the time after that.
Now, to be extra sure, I also burn the most important files on archival Blu-ray M-Discs (important: archival M-Discs, not normal discs!) - documents about the house, insurances, degrees, financial and tax data, healthcare records, photos of lifetime events (weddings, birthdays, births, graduations), as well as "emergency data restore" how-tos and password files - basically all the stuff I want to make sure my heirs/kids have access to if I die. M-Discs are supposed to last far longer than normal Blu-rays and most consumer-accessible media. These are stored locally, in the safe and at the court that holds our will. The reason? Powered-off hard drives lose data quite fast, and if my wife and I perish at the same time, e.g. in a car crash or a house fire, the issue is time: the cloud backup might no longer be available because our bank accounts are frozen and the backup is therefore no longer paid for. The bank safe is not accessible for a long time for the same reason. By the time someone accesses the USB drive, it might be of no use. The server might be powered off or damaged. And sadly the legal system here can take years (up to 7 years is my planning horizon) before anyone can actually access the data.
Wasn't there some industry rumour that the official merger plan with Honda was meant to go much further and include Nissan, Mazda, Toyota and Mitsubishi, as wished for by the Japanese government, in an effort to create a player large enough to withstand pressure from China, Korea and Europe? It obviously failed, though.
Where I wish it would be:
Somewhere with more customisation of how it actually stores files.
Somewhere where it’s possible to sync instances.
Because at the moment we don't have a "hostile" job market yet - as written in the article, the market is only rapidly cooling down. As the market before was massively undersaturated, this just means that people currently have fewer choices - but they still have their share of opportunities. But tbh, purely anecdotally, it pretty much reflects what I hear from graduates atm. The market for fresh graduates has definitely cooled down, unless they have an ITsec background or already a fair amount of experience.
Nope, same story here, just not as extreme: https://www.heise.de/news/Wirtschaftsinstitut-IT-Fachkraefte-sind-in-Deutschland-deutlich-weniger-gefragt-10544518.html
Yeah, to be honest, WG out of the box is really nice for tunneling and static-IP road warriors. For larger deployments it's a bit of a PITA without DHCP.
Sadly.
But things like Netbird make it a bit easier.
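For context, a minimal road-warrior client config shows the issue: every peer's tunnel address is pinned by hand (all keys, addresses and the endpoint below are placeholders):

```ini
# Minimal WireGuard client config. Note the statically assigned tunnel
# address -- WireGuard has no DHCP, which is exactly what gets tedious
# once you manage many peers.
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/32
DNS = 10.10.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

Tools like Netbird essentially automate exactly this peer/address bookkeeping on top of WireGuard.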
Just saw that my way of doing this isn’t actually needed anymore, there is an integration now:
Yeah, absolutely - nevertheless it can inspire that reaction - which is a shame, because it’s indeed fairly easy.
I absolutely second Technitium as well. That thing is rock solid, can be used for basically everything, has blocking with a multitude of options and provides a nice GUI.
I have it running in a dual DNS setup (main server+a Zimablade nowadays) and that shit just works - it’s the container that has caused the least amount of problems in the last 3 years.
The API is fairly handy and quite easy - I have it integrated into HomeAssistant so I have a “Disable DNS Blocking” button in my “Network control” tab in the app.
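The button boils down to one HTTP call. A rough sketch (the endpoint and parameter names are from memory and may differ between Technitium versions - check the API docs before relying on them; the host and port are placeholders):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://dns.local:5380/api"  # placeholder Technitium address

def blocking_url(token, enabled):
    """Build the (assumed) settings call that toggles DNS blocking."""
    query = urlencode({"token": token, "enableBlocking": str(enabled).lower()})
    return f"{BASE}/settings/set?{query}"

def set_blocking(token, enabled):
    """Fire the request; Home Assistant would wrap this in a rest_command."""
    with urlopen(blocking_url(token, enabled)) as resp:
        return resp.status == 200
```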
The only downside is that it can initially be quite overwhelming, especially if you are not a DNS guru and just made the jump from AdGuard/Pi-hole - but you soon realise that you actually only need a few fields for basic operations.
Yeah. It basically can only do guest and non-guest, sadly. There used to be a hack allowing proper VLANs, but that's long gone.
I am using it and tbh haven't had too many issues with it. It runs as an LXC on my Proxmox server.
With that it's a fairly comfortable setup - it has API access to the Proxmox node and therefore automatically discovers all LXCs, even the ones you add after the installation.
For other machines I use a fairly simple bash script to download agent 2 and then overwrite the config file with the right parameters, but that's just me being lazy - it's not that much work to do it by hand either.
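The config-overwrite step is trivial either way. My script is bash, but a rough equivalent sketch in Python looks like this (the file path and server address are placeholders; Server, ServerActive and Hostname are standard zabbix_agent2.conf keys):

```python
import socket

def render_agent_conf(server, hostname=None):
    """Render a minimal zabbix_agent2 configuration pointing at one server."""
    hostname = hostname or socket.gethostname()
    return (
        f"Server={server}\n"        # allowed for passive checks
        f"ServerActive={server}\n"  # target for active checks
        f"Hostname={hostname}\n"    # must match the host name in Zabbix
    )

def write_agent_conf(path, server, hostname=None):
    """Overwrite the agent config file, e.g. /etc/zabbix/zabbix_agent2.conf."""
    with open(path, "w") as fh:
        fh.write(render_agent_conf(server, hostname))
```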
And for everything else there is always SNMP which is fairly well supported and there are tons of templates nowadays.
Tbh, I had Prometheus/Grafana before and found it to be much more complicated, especially when you need active and passive nodes. The fact that Zabbix is “All in one” is fairly nice sometimes.
Dashboards lag a bit behind Grafana at times, but I can live with that.
Tbh, it's not the worst thing when a service does that. There are cases where it is warranted - cartels, CSAM, etc. do not deserve a safe haven. The bad part about the France issue is that the Swiss court system willfully allowed a case to proceed that was not per se illegal in Switzerland and rested on rather controversial legal grounds in France. This is very similar to the cases 15 years earlier where Switzerland simply ignored its own laws on bank accounts under pressure from the US government.
This is rather concerning, and many Swiss legal experts did not share Proton's opinion that there was nothing Proton could have done.
And this is somehow better?
There is a lot of room between “BigTech” and “Joe Average” doing it for his neighbours. Mailbox.org, etc. (see my other post here)
Yeah, came here to say that. I second that.