• Riskable@programming.dev
    18 hours ago

    I literally said I’m using qwen3.5:122b for coding. I also use GLM-5, but it’s slightly slower, so I generally stick with qwen.

    It’s right there, in ollama’s library: https://ollama.com/library/qwen3.5:122b

    The weights and everything else for it are on Huggingface: https://huggingface.co/Qwen/Qwen3.5-122B-A10B

    This is not speculation. That’s what I’m actually using nearly every day. It’s not as good as Claude Code with Opus 4.6, but it’s about 90% of the way there (if you use it right). When GLM-5 came out, I cancelled my Claude subscription and just stuck with Ollama Cloud.

    I can use gpt-oss:20b on my GPU (a 4060 Ti with 16GB), and it works well, but for $20/month, access to qwen3.5 and GLM-5 is the better option.
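    The local-vs-cloud split above can be sketched with the ollama CLI. This is just an illustration: the `pick_model` helper and the 16GB threshold are my own invention, not part of ollama; only the model names come from the thread.

```shell
# Toy helper: run the small model locally if the GPU has enough VRAM,
# otherwise fall back to a bigger model served from Ollama Cloud.
# (pick_model and the 16000 MB cutoff are hypothetical illustrations.)
pick_model() {
    local vram_mb=$1
    if [ "$vram_mb" -ge 16000 ]; then
        echo "gpt-oss:20b"      # fits on a 16GB card like the 4060 Ti
    else
        echo "qwen3.5:122b"     # too big for local VRAM; use Ollama Cloud
    fi
}

# Usage (real ollama CLI syntax):
#   ollama run "$(pick_model 16000)" "explain this stack trace"
```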

    I still use my GPU for (serious) image generation though. ChatGPT (DALL-E) and Gemini (Nano Banana) are OK for one-offs, but they’re slow AF compared to FLUX 2 and qwen’s image models running locally. I can give it a prompt and generate 32 images in no time, pick the best one, then iterate from there (using some sophisticated ComfyUI setups). The end result is a better image than what you’d get from Big AI.

    • XLE@piefed.social
      9 hours ago

      Both of those models appear to be proprietary, closed-source freeware. To be open source, they would need to provide the source for the blobs.

      I don’t blame you if the AI industry deceived you, because it’s gotten to the point where people who review this stuff have to say “actual open source” to differentiate.

      So… Do you actually use open-source models?