In addition to what the others said, some apps let you link to an LLM for additional features.
For example, Immich has prebuilt models you can choose depending on how powerful your PC is, which give you facial recognition and powerful natural-language search for your library. So if they think this model is good, they could make a new prebuilt one using it as a base. Software like Microsoft Teams uses machine-learning models for better background blurring in video calls, so maybe an open-source equivalent could make use of it.
You could also use it for other things, like image generation.
What could this be used for?
Local LLMs, probably even ones you can host on phones. But they won't be as powerful, of course.
Yeah, I get that, but does anyone have any practical ideas for a local LLM?
Literature summarization, data analysis, not being a pawn in corporate data harvesting.
As long as you don’t care if the summaries and analyses are wrong!
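For the summarization idea, wiring it up is pretty simple. Here's a minimal sketch assuming an Ollama server on `http://localhost:11434` with a model named `llama3` already pulled (both are assumptions, swap in whatever you run locally):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local Ollama server


def build_request(text: str, model: str = "llama3") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Summarize the following in three sentences:\n\n{text}",
        "stream": False,  # get one complete response instead of a token stream
    }


def summarize(text: str) -> str:
    """Send the text to the local model and return its summary."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing leaves your machine, which is the whole point. Just keep the caveat above in mind and spot-check the output against the source.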
Home Assistant is the big one IMO; voice control for a private smart home is useful and low-stakes, so hallucinations won't be the end of the world.
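And you can keep the stakes even lower by never letting the model act freely: prompt it to answer in JSON, then validate against a whitelist before anything touches the house. A rough sketch (the action names and schema here are made up for illustration):

```python
import json

# Only these actions are ever allowed, no matter what the model says.
ALLOWED_ACTIONS = {"turn_on", "turn_off", "set_brightness"}


def parse_llm_action(llm_output: str) -> dict:
    """Validate the JSON an LLM produced for a voice command.

    Raises ValueError on anything outside the whitelist, so a
    hallucinated action can never reach the smart home.
    """
    action = json.loads(llm_output)
    if action.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action.get('action')!r}")
    return action


# e.g. the model replied: '{"action": "turn_on", "entity": "light.kitchen"}'
```

Worst case the model emits garbage and the command just fails, which beats a hallucinated answer being acted on.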
I’m eagerly waiting for a locally run phone assistant. Just for voice control while driving.