

This is what basically anyone in the ML research/dev industry will tell you, heh.


Sigh.
I swear, Epic would probably have a decent reputation (and storefront) if Sweeney would just shut his trap and delete all his social media.


Interested. Also a little interested in the MC server, per your website.
Never played Starbound; how many players are typically on a server concurrently? Or at least on y'all's server?


Sometimes. As a tool, not an outsourced human, oracle, or some transcendent companion con artists like Altman are trying to sell.
See how grounded this interview is, from a company with a model trained on peanuts compared to ChatGPT, and that takes even less to run:
> …In 2025, with the launch of Manus and Claude Code, we realized that coding and agentic functions are more useful. They contribute more economically and significantly improve people’s efficiency. We are no longer putting simple chat at the top of our priorities. Instead, we are exploring more on the coding side and the agent side. We observe the trend and do many experiments on it.
https://www.chinatalk.media/p/the-zai-playbook
They talk about how the next release will be very small/lightweight, and more task focused. How important gaining efficiency through architecture (not scaling up) is now. They even touch on how their own models are starting to be useful utilities in their workflows, and specifically not miraculous worker replacements.


Heh, so does mine.
All our parents’ book hoarding may end up saving us. And the internet, if they become the new standard?


You joke, but that’s horrifying.
This is already an SEO technique, apparently, and I could see Amazon book sellers finding a way to fudge it: https://yoast.com/help/date-appears-search-results/


100%.
But I was pondering more what the general population might do. People are going to figure out slop recipes don’t work, but the question is what’s the next most accessible thing to replace it with?


…So are we going back to print cookbooks? Published before 2024?
Honestly, that feels like the practical solution.


Plantains are a fickle plant. Ripeness is a huge factor, and even aside from that, some are just more fibrous/less sweet and don’t cook as fast.


In theory, Google should fight all attempts at SEO.
But they infamously stopped doing that to bump quarterly results (since sifting through junk results generates more clicks), and here we are.


A post that long?
Eh, well, it could definitely be an unmarked bot on X. That’s good attention bait, and it has a feeling of temporal implausibility, kinda like a ‘cheapest API LLM’ story.


Where’d you see it?
There are GPT-2 bots on Reddit and Lemmy that make a lot of posts like this. And they aren’t hidden; the community is explicitly labeled.


vLLM is a bit better with parallelization. All the KV cache sits in a single “pool”, and it uses as many slots as will fit. If it gets a bunch of short requests, it does many in parallel. If it gets a long context request, it kinda just does that one.
You still have to specify a maximum context though, and it is best to set that as low as possible.
…The catch is it’s quite VRAM inefficient. But it can split over multiple cards reasonably well, better than llama.cpp can, depending on your PCIe speeds.
You might try TabbyAPI exl2s as well. It’s very good with parallel calls, though I’m not sure how well it supports MI50s.
Another thing to tweak is batch size. If you are actually making a bunch of 47K context calls, you can increase the prompt processing batch size a ton to load the MI50 better, and get it to process the prompt faster.
EDIT: Also, now that I think about it, I’m pretty sure ollama is really dumb with parallelization. Does it even support paged attention batching?
The llama.cpp server should be much better, e.g. use less VRAM for each of the “slots” it can utilize.
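To make that concrete, here’s a sketch of a vLLM launch tuned along those lines. The flags are from vLLM’s CLI; the model name and the specific numbers are placeholders you’d tune for your own setup:

```shell
# Cap per-request context low so the shared KV-cache pool fits more parallel slots.
# --max-model-len: maximum context per request (set as low as you can get away with)
# --max-num-seqs: cap on how many requests get batched in parallel
# --gpu-memory-utilization: how much VRAM vLLM grabs for weights + KV-cache pool
vllm serve some-org/some-model \
  --max-model-len 8192 \
  --max-num-seqs 16 \
  --gpu-memory-utilization 0.90
```

The tradeoff is exactly what’s described above: a lower `--max-model-len` means each slot’s KV cache is smaller, so more short requests run concurrently, but a single long-context request can still hog the pool.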


I’ll save you the searching!
For max speed with parallel calls, vLLM: https://hub.docker.com/r/btbtyler09/vllm-rocm-gcn5
Generally, the built in llama.cpp server is the best for GGUF models! It has a great built in web UI as well.
For a more one-click RP focused UI, and API server, kobold.cpp rocm is sublime: https://github.com/YellowRoseCx/koboldcpp-rocm/
If you are running big MoE models that need some CPU offloading, check out ik_llama.cpp. It’s specifically optimized for MoE hybrid inference, but the caveat is that its Vulkan backend isn’t well tested. They will fix issues if you find any, though: https://github.com/ikawrakow/ik_llama.cpp/
mlc-llm also has a Vulkan runtime, but it’s one of the more… exotic LLM backends out there. I’d try the other ones first.
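For the llama.cpp server option, a minimal launch looks something like this (the binary and flag names are upstream llama.cpp’s; the model path and sizes are placeholders):

```shell
# -m:   path to your GGUF model
# -c:   total context, split across slots
# -np:  number of parallel slots (here 4 slots of 4096 tokens each)
# -ngl: layers to offload to GPU (99 = effectively all of them)
llama-server -m ./model.gguf -c 16384 -np 4 -ngl 99 --host 0.0.0.0 --port 8080
```

Then point any OpenAI-compatible client at port 8080, or just open the built-in web UI in a browser.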


AFAIK some outputs are made with a really tiny/quantized local LLM too.
And yeah, even that aside, GPT-3.5 is really bad these days. It’s obsolete.


Bloefz has a great setup. Used MI50s are cheap.
An RTX 3090 + a cheap HEDT/Server CPU is another popular homelab config. Newer models run reasonably quickly on them, with the attention/dense layers on the GPU and sparse parts on the CPU.
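One way to get that GPU/CPU split with llama.cpp is tensor overrides. A sketch, with the caveat that the tensor-name regex below is an assumption about typical MoE naming and the model path is a placeholder:

```shell
# -ngl 99 puts all layers "on the GPU", then -ot (--override-tensor)
# forces tensors matching the pattern back to CPU memory.
# Here: keep attention + dense layers on the 3090, push the sparse
# expert FFN weights (names matching ffn_*_exps) to system RAM.
llama-server -m ./big-moe.gguf -ngl 99 -ot "ffn_.*_exps=CPU" -c 8192
```

Check your model’s actual tensor names first (e.g. with a verbose load) before trusting any particular pattern.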


This is the way.
…Except for ollama. It’s starting to enshittify and I would not recommend it.


The iPhone models are really bad. They aren’t representative of the usefulness of bigger ones, and it’s inexplicably stupid that Apple doesn’t let people pick their own API as an alternative.
M chips are super expensive. They’re optimized for low clockspeed/idle efficiency and pay through the nose for cutting edge processes, whereas most gaming hardware is optimized for pure speed/$, with the smallest die area and cheapest memory possible, at the expense of power efficiency.
And honestly the CPU/GPU divide over traces is more economical. “Unified memory” programming isn’t strictly needed for games at the moment.
And, practically, Apple demands very high margins. I just can’t see them pricing a console aggressively.