My rack is finished for now (because I’m out of money).

Last time I posted I had some jank cables running through the rack, and now we’re using patch panels with color-coordinated cables!

But as is tradition, I’m thinking about upgrades, and I’m looking at that 1U filler panel. A mini PC with a 5060 Ti 16 GB or maybe a 5070 12 GB would be pretty sick to move my AI slop generation into my tiny rack.

I’m also thinking about the Pi cluster at the top. Currently that’s running a Kubernetes cluster that I’m trying to learn on. They’re all Pi 4 4 GB, so I was going to start replacing them with Pi 5 8/16 GB. Would those be better price/performance for mostly coding tasks? Or maybe a discord bot for shitposting.

Thoughts? Mini PC recs? Wanna bully me for using AI? Please do!

  • teslasdisciple@lemmy.ca
    1 day ago

    I’m running AI on an old 1080 Ti. You can run AI on almost anything, but the less memory you have, the smaller (i.e. dumber) your models will have to be.
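    A rough rule of thumb for the memory side: weights take roughly (parameters × bits per weight ÷ 8) of VRAM, plus some overhead for the KV cache and runtime. A minimal sketch (the 20% overhead factor is my own assumption, not an exact figure):

    ```python
    # Rough VRAM estimate for a local LLM. Assumes weights dominate memory;
    # the 1.2 overhead factor (KV cache, runtime buffers) is a guess.
    def vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
        """Approximate VRAM in GB: params * (bits / 8), plus ~20% overhead."""
        return params_billion * (bits_per_weight / 8) * overhead

    # A 13B model at 4-bit quantization squeezes into a 1080 Ti's 11 GB:
    print(round(vram_gb(13, 4), 1))  # ~7.8 GB
    # The same model at 16-bit would not:
    print(round(vram_gb(13, 16), 1))  # ~31.2 GB
    ```

    That’s why quantized (4–8 bit) models are the usual choice on older or smaller cards.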

    As for the “how”, I use Ollama and Open WebUI. It’s pretty easy to set up.
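    If it helps, the setup is basically two steps: install Ollama for the model runtime, then run Open WebUI in Docker pointed at it. A sketch of the commands I mean (model name and port mapping are just examples; check the official docs for current flags):

    ```shell
    # Install Ollama (Linux) and pull a small model to start with.
    curl -fsSL https://ollama.com/install.sh | sh
    ollama pull llama3

    # Run Open WebUI in Docker; it talks to Ollama on the host.
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main
    ```

    Then browse to http://localhost:3000 and pick your model in the UI.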

    • kata1yst@sh.itjust.works
      9 hours ago

      Similar setup here with a 7900 XTX; works great, and the 20–30B models are honestly pretty good these days. Magistral, Qwen 3 Coder, and GPT-OSS are most of what I use.

    • ZeDoTelhado@lemmy.world
      1 day ago

      I tried a couple of times with Jan AI and local LLaMA models, but somehow it doesn’t work that well for me.

      But at the same time I have a 9070 XT, so, not exactly optimal