

Literally no one I know in real life has any problem whatsoever reading analog clocks, no matter their “brain capacity”, neuro-typicality, state of drunkenness, … It is an extremely simple “skill”.


Yeah, “vibe of the time” is a good description.


Because it’s not! Glad to help you clear that up.


Disagree - it rarely matters to me if it’s 13:24:56 or 13:25:05, but I do find the instant and intuitive gauging of time deltas super useful (as in, how long it’s going to be to the full hour / to quarter past / … ). Not saying you can’t get that info from a digital clock as well, of course you can; but the physicality of analog clocks lends a good bit of intuition to this, I feel.


I feel like I’m going insane reading these comments about how difficult it is to read analog clocks, how it needs too much understanding of maths, how it takes too long,…
Can someone please confirm: you just glance at the clock face for a fraction of a second and know the time, right?
Learning to read the clock was like… a couple of lessons and some homework in the 2nd grade, and everyone got it.


Actually… Just tried it. I am on 2025.10, so newer than what was mentioned there. It still does not understand any better than what I remember. Bummer.
But hey, at least they acknowledge that there’s a need for something between dumb pattern matching and an LLM.


Holy shit YES!
That article is from yesterday, and the relevant section is: https://www.home-assistant.io/blog/2025/10/22/voice-chapter-11/#improved-sentence-matching
Awesome to see improvements there. Thanks a lot for linking!


Oh wow, awesome!


Thank you for your sacrifice :D


While I don’t like it, it’s not hidden either:
https://bentopdf.com/privacy.html
There should definitely be an option to disable this for self-hosting, but if it’s just a counter for how often each tool is used by all users combined… Eh…
(Stirling also has something similar)


Why not open a PR to make it configurable? The maintainer is super active and friendly.


Thanks for the recommendation! That looks interesting indeed.
This entire topic is probably a sinkhole of complexity. It’s great to have somewhere to look for inspiration!


Yeah, those are good points. Also noticed the CDN thing; it’s a bit annoying for a privacy-first project, but should be an easy fix 😄
Stirling’s backend is Java. So, yeah, heavy and slow sounds about right.


The one exception here: it’s great to have it installed on your parents’ PC when you’re the one doing the updates once in a while, whenever you’re around. Rock solid in between, no nagging, and if something did break, it’s easy to roll back.


Ah, thanks for mentioning it. Yep, they have a Docker image; as mentioned, a nixpkg will be available soon™; and frankly, you can just build / download the release artifacts and put them on any static host.
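(For the “any static host” bit, even Python’s built-in http.server is enough for a quick local test; `dist` below is just a placeholder for wherever the build output ends up, and any real static server like nginx or Caddy does the same job.)

```python
# Minimal static hosting of the built artifacts for a quick local test.
# "dist" is a placeholder path for the build output directory.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

handler = partial(SimpleHTTPRequestHandler, directory="dist")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```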


Please read the title of the post again. I do not want to use an LLM. A self-hosted one is bad enough, but feeding my data to OpenAI is worse.


Yep, that’s the idea! This post basically boils down to “does this exist for HASS already, or do I need to implement it?” and the answer, unfortunately, seems to be the latter.


Thanks, I had not heard of this before! From skimming the link, it seems that the integration with HASS mostly focuses on providing Wyoming endpoints (STT, TTS, wake word), right? (Un)fortunately, that’s the part that’s already working really well 😄
However, the idea of just writing a stand-alone application with Ollama-compatible endpoints, but not actually putting an LLM behind it, is genius; I had not thought of that. That could really simplify things if I decide to write a custom intent handler. So, yeah, thanks for the link!!
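Roughly what I have in mind, as an untested sketch: a tiny Flask app that speaks just enough of Ollama’s documented API (`/api/tags` to list a “model”, `/api/chat` to answer) but runs plain rule matching instead of an LLM. The model name and the matching rules below are made up, streaming responses are ignored, and whatever fields the HASS Ollama integration actually requires would still need checking.

```python
# Hypothetical stand-in for an Ollama server: rule-based replies, no LLM.
# Endpoint paths and response shapes follow Ollama's documented API
# (/api/tags, /api/chat); verify against whatever client you point at it.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
MODEL_NAME = "rules-intent-handler"  # made-up model name


def handle_intent(text: str) -> str:
    """Purely illustrative pattern matching -- replace with real intent logic."""
    lowered = text.lower()
    if "light" in lowered and "off" in lowered:
        return "Turning the lights off."
    if "temperature" in lowered:
        return "It is 21 degrees inside."
    return "Sorry, I did not understand that."


@app.get("/api/tags")
def tags():
    # Advertise one fake "model" so clients that list models see something.
    return jsonify({"models": [{"name": MODEL_NAME, "model": MODEL_NAME}]})


@app.post("/api/chat")
def chat():
    body = request.get_json(force=True)
    # Pull the most recent user message out of the chat history.
    user_text = next(
        (m["content"] for m in reversed(body.get("messages", []))
         if m.get("role") == "user"),
        "",
    )
    # Non-streaming response shape; streaming (NDJSON) is not handled here.
    return jsonify({
        "model": body.get("model", MODEL_NAME),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "message": {"role": "assistant", "content": handle_intent(user_text)},
        "done": True,
    })


if __name__ == "__main__":
    app.run(port=11434)  # Ollama's default port
```

The nice part: you’d point the integration at `http://<host>:11434` as if it were a normal Ollama instance, and the whole custom intent handler lives in `handle_intent()`.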


FWIW, I went to school in the mid-2000s. My sibling even later. They still taught it back then, and at least here, I am pretty sure they still do. (And why would they not, after all…)