

Supposedly. Says 148 in About.


Maybe the article was written by AI that hallucinated the setting.


all you have to do is click on Settings > AI Controls. You’ll then see a very bold and prominent option called ‘Block AI Enhancements.’
I don’t see it on mobile though.


I used to be on the China-bad bandwagon when I bought our current Sony. I didn’t understand nearly as well how production works as I do now. The next time I have to buy one, I’d evaluate on the basis of quality regardless of origin. I’d also avoid sets with a lot of custom software, to avoid annoyances and security holes, just like I did when I purchased our current one.


They’ve always made very good TVs: tube, plasma, LCD.


Yup, that’s what I was referring to.


Aaand another one bites the dust. So who’s left that still makes non-PRC TVs? Just the Koreans, I think. When I last shopped for TVs, I looked for non-PRC options and the only ones available in Canada were LG, Samsung and Sony. Sony already bit the dust a short while ago.
The acl, norm, xattr and dnodesize options are all there to make ZFS behave like a Linux filesystem, and they’re pretty standard for using ZFS on Linux. They aren’t the defaults because ZFS’s defaults are for Unix.
I do:
sudo zpool create \
-o ashift=12 -O acltype=posixacl -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa \
mypool \
raidz2 \
wwn-0x5000cca284c06395 \
wwn-0x5000cca295e115f3 \
wwn-0x5000cca2a1ef9c90 \
wwn-0x5000cca295c03910 \
wwn-0x5000cca29dd216b0
I’m then going to optimize recordsize per dataset depending on the workload. E.g. the Immich DB dataset might use an 8K or 16K recordsize, while the library dataset where the files live might use a larger one so that search is faster. Etc.
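A sketch of that per-dataset tuning, with made-up dataset names; note that recordsize only applies to blocks written after it’s set:

```shell
# Small records for the DB-style workload (dataset names are hypothetical).
sudo zfs create -o recordsize=16K mypool/immich-db
# Large records for big media files: better sequential throughput.
sudo zfs create -o recordsize=1M mypool/immich-library
# Verify what each dataset ended up with.
zfs get recordsize mypool/immich-db mypool/immich-library
```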


Install Ollama on a machine with a fast CPU or GPU and enough RAM. I currently use Qwen3, which takes 8GB of RAM. Runs on an NVIDIA GPU; running it on CPU is also fast enough. There’s a 4GB version which is also decent for device control. Add the Ollama integration in Home Assistant and connect it to the Ollama on the other machine. Add Ollama as a conversation agent for Home Assistant’s voice assistant. Expose the HA devices you want it to control. That’s about it at a high level.
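The Ollama side of those steps can be sketched like this; the model tag is an assumption (pick the size your RAM allows), and 11434 is Ollama’s default port:

```shell
# Pull a Qwen3 build; the exact tag depends on how much RAM/VRAM you have.
ollama pull qwen3:8b
# Listen on all interfaces so Home Assistant on another machine can reach it.
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

In Home Assistant you’d then point the Ollama integration at http://that-machine:11434 and pick the model as the conversation agent.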


HA with a local LLM on Ollama. You can integrate the Android app as the default phone assistant. I don’t think it can use a wake word on the phone though. I invoke it by holding the power button, like a walkie.


Using Home Assistant with Qwen locally. It functions better than any version of Google Home I’ve had. It understands me without my having to think about how I say things. I can ask it for one or multiple things at the same time. I can even make it pretend to be Santa Claus while responding. My wife was ecstatic when she heard the “Ho-ho-ho” after asking it to turn on the coffee machine on Christmas.


You gotta hook it to a local LLM. Then it’s boss.


I haven’t tried funnel, but it works using an internal Tailscale IP/host and port. E.g. http://the-immich-host:1234/ if the-immich-host is a Tailscale machine.


Works outside too. I set a standard DNS record at a standard DNS provider, pointing to an internal TS IP. The record resolves everywhere, but the IP is only reachable when TS is on, whether I’m on the local net or outside.
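In zone-file terms that looks roughly like this; the hostname and the 100.x address are made-up placeholders for a real Tailscale node IP:

```
; Public record at a normal DNS provider, pointing at a Tailscale-only IP.
; Resolves for everyone, but only routes when the Tailscale client is up.
immich.example.com.  300  IN  A  100.101.102.103
```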


Are you running out of RAM? Second hand servers sold in your area may help. :D
I’m picking up a 128GB server from 2019 for a spare tomorrow.


I had something similar when I used to mount an NFS share. I had a bash one-liner that would loop ping and then mount once ping succeeded. Having a separate service that pings, and making the mount depend on it, is probably the better thing to do. It should also work when put in Requires= in a .mount file.
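A sketch of that approach with systemd; the host nas.lan, the export path and the mount point are all hypothetical names, not from the comment:

```
# /etc/systemd/system/nas-online.service: waits until the NAS answers ping.
[Unit]
Description=Wait for nas.lan to answer ping
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Retry every 2s, give up after ~60s so boot can't hang forever.
ExecStart=/bin/sh -c 'for i in $(seq 30); do ping -c1 -W2 nas.lan && exit 0; sleep 2; done; exit 1'
RemainAfterExit=yes

# /etc/systemd/system/mnt-media.mount: pulls the ping unit in via Requires=
# and orders itself after it.
[Unit]
Requires=nas-online.service
After=nas-online.service

[Mount]
What=nas.lan:/export/media
Where=/mnt/media
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```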


That’s one way to look at it. I used to look at paid VC-funded services like that. I no longer do, as I’ve observed services I paid good money for get more expensive much faster than inflation while decreasing in quality and features at the same time. It’s one reason I self-host many services I used to pay third parties for. I now look at alternatives from the get-go and derisk existing dependencies. To be clear - profitability isn’t the only problem. The ownership and its profit growth strategy (and expectations) are. Those are not the same in a decades-old ISP and a VC-funded startup.
Merely being profitable today isn’t a good predictor of stable prices and function over the long run for VC-funded services. I’m not planning to do major surgery on my setup every few years as yet another service shits the bed. The workstation/server where my self-hosted services run was last reinstalled in 2014. Most of my config-as-code was written in 2019. I support a few families with this and I aim at maximum stability with minimal maintenance. So I use open source whenever I can and I often pay for development. I only integrated Tailscale into my setup because the clients are open source and because there’s an open source server option.
I’m not saying to people - don’t use Tailscale. In fact I often recommend it to new self-hosters. But I do that because there’s a way out. So here I’m reminding people who care about a way out to check if this feature is escapable. :D
Halp the slow of us pls.