• brucethemoose@lemmy.world · 2 days ago

    Ughhh, I could go on forever, but to keep it short:

    Basically, the devs are Tech Bros. They’re scammer-adjacent. I’ve been in local inference for years and wouldn’t touch ollama if you paid me to. I’d trust the Gemini API over them any day.

    I’d recommend base llama.cpp, ik_llama.cpp, or kobold.cpp, but if you must have a popular, “turnkey” UI, LM Studio is way better.

    But the problem is, if you want a performant local LLM, nothing about local inference is really turnkey. It’s just too hardware-sensitive, and the field moves too fast.
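
    To make “not turnkey” concrete: even a bare-bones llama.cpp setup makes you pick offload and context settings for your exact machine. Here’s a minimal sketch using the llama-cpp-python bindings; the model path and every number are placeholders you’d tune to your own GPU and CPU:

    ```python
    # Minimal llama.cpp run via the llama-cpp-python bindings.
    # Every value below is hardware-dependent, which is the whole point.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./model.gguf",  # placeholder: whatever GGUF quant fits your VRAM
        n_gpu_layers=-1,            # offload all layers to GPU; lower this if you run out of VRAM
        n_ctx=8192,                 # context window; longer costs more memory
        n_threads=8,                # roughly match your physical CPU cores
    )

    out = llm("Why is local inference hardware-sensitive?", max_tokens=128)
    print(out["choices"][0]["text"])
    ```

    Swap the quant, the GPU, or the context length and the right numbers change, which is exactly why one-click wrappers end up shipping defaults that are wrong for half the hardware out there.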