I like how there’s no fucking code repo or even a white paper or any evidence that this system ever actually existed 🤦♂️
This is basically meaningless. You can already run gpt-oss-120b across consumer-grade machines. In fact, I’ve done it with open-source software under a proper open-source licence, offline, at my house. It’s called llama.cpp and it’s one of the most popular projects on GitHub. It’s the basis of Ollama and the engine behind LM Studio, a popular LLM app.
The only thing you need is around 64 GB of free RAM and you can serve gpt-oss-120b as an OpenAI-compatible API endpoint. VRAM is preferred, but llama.cpp can run in system RAM or on top of several different GPU backends. It has a built-in server that can pool resources from multiple machines…
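To give a concrete idea of what that endpoint looks like, here’s a minimal client sketch, assuming you’ve already started llama-server with a gpt-oss-120b GGUF on localhost:8080 (the model filename and port here are just placeholders):

```python
# Minimal client for llama.cpp's OpenAI-compatible server.
# Assumes something like: llama-server -m gpt-oss-120b.gguf --port 8080
# (model filename and port are placeholders for illustration)
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-oss-120b",  # largely ignored by llama-server; kept for OpenAI compatibility
        "messages": [{"role": "user", "content": "Summarize why local inference is useful."}],
        "max_tokens": 200,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Any client that already speaks the OpenAI API can be pointed at that base URL instead of api.openai.com.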
I bet you could even do it over a network of high-RAM phones.
So I ask: is this novel, or is it an advertisement packaged as a press release?
I think you’re missing the point or not understanding.
What you’re talking about is just running a model on consumer hardware with a GUI. We’ve been running models for a decade like that. Llama is just a simplified framework for end users using LLMs.
The article is essentially describing a map-reduce system over a number of machines for model workloads, meaning it’s batching the token work, distributing it amongst a cluster, then combining the results into a coherent response.
They aren’t talking about just running models as you’re describing.
> I think you’re missing the point or not understanding.

Let me see if I can clarify.
> What you’re talking about is just running a model on consumer hardware with a GUI
The article talks about running models on consumer hardware. I’m making the point that this is not a new concept. The GUI is optional, but, as I mentioned, llama.cpp and other open-source tools provide an OpenAI-compatible API just like the product described in the article.
> We’ve been running models for a decade like that.
No. LLMs as we know them aren’t that old, and they were harder to run and required some coding knowledge and environment setup until three-ish years ago, give or take, which is when these more polished tools started coming out.
> Llama is just a simplified framework for end users using LLMs.
Ollama matches that description. Llama is a model family from Facebook. Llama.cpp, which is what I was talking about, is an inference and quantization tool suite made for efficient deployment on a variety of hardware including consumer hardware.
> The article is essentially describing a map-reduce system over a number of machines for model workloads, meaning it’s batching the token work, distributing it amongst a cluster, then combining the results into a coherent response.
Map-reduce, in very simplified terms, means spreading compute work out to highly parallelized workers. This is, conceptually, how all LLMs are already run at scale. You can’t map-reduce or parallelize LLMs much more than they already are. The article doesn’t imply map-reduce beyond talking about using multiple computers.
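To make the terminology concrete, here’s a toy map-reduce in Python (word counting, deliberately nothing to do with transformers), just to show the “scatter work, then merge results” pattern being discussed:

```python
# Toy map-reduce: count words across "documents" in parallel, then merge.
# Purely illustrative of the pattern; this is NOT how transformer inference
# gets parallelized.
from collections import Counter
from multiprocessing import Pool

docs = [
    "local models are fun",
    "local models are heavy",
    "bandwidth is the bottleneck",
]

def map_count(doc: str) -> Counter:
    """Map step: each worker counts words in its own chunk."""
    return Counter(doc.split())

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        partials = pool.map(map_count, docs)   # scatter the work
    total = sum(partials, Counter())           # reduce: merge partial results
    print(total.most_common(3))
```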
> They aren’t talking about just running models as you’re describing.
They don’t talk about how the models are run in the article. But I know a tiny bit about how they’re run. LLMs require very simple, consistent math operations on extremely large matrices of numbers. The bottleneck is almost always data transfer, not compute. Basically, every LLM deployment tool already tries to use as much parallelism as possible while reducing data transfer as much as possible.
The article talks about gpt-oss-120b, so we aren’t talking about novel approaches to how the data is laid out or how the models are used. We’re talking about transformer models, and how they’re huge and require a lot of data transfer. So the preference is to keep your model on the fastest-transfer part of your machine. On consumer hardware, which was the key point of the article, you are best off keeping your model in your GPU’s memory. If you can’t, you’ll run into bottlenecks with PCIe, RAM, and network transfer speeds. But consumers don’t have GPUs with 63+ GB of VRAM, which is how big gpt-oss-120b is, so they MUST contend with those speed bottlenecks. The article doesn’t address that. That’s what I’m talking about.
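Some back-of-the-envelope numbers, since this is the crux. The bandwidth figures below are rough ballparks I’m assuming for typical consumer parts, not measurements:

```python
# Back-of-the-envelope: why a ~120B-parameter model is awkward on consumer hardware.
# Bandwidth numbers below are rough, assumed ballparks, not benchmarks.
PARAMS = 120e9  # ~120B parameters

for label, bits in [("fp16", 16), ("8-bit", 8), ("~4-bit quant", 4.25)]:
    gb = PARAMS * bits / 8 / 1e9
    print(f"{label:>13}: ~{gb:.0f} GB just for weights")

# Where those bytes have to live, and roughly how fast they move:
bandwidth_gb_s = {
    "GPU VRAM (high-end consumer card)": 1000,
    "dual-channel DDR5 system RAM": 80,
    "PCIe 4.0 x16 link": 32,
    "10 GbE network": 1.25,
}
for where, bw in bandwidth_gb_s.items():
    print(f"{where}: ~{bw} GB/s")
# A ~64 GB quant doesn't fit in 16-24 GB of consumer VRAM, so every token ends up
# waiting on the slower links above -- that's the bottleneck being discussed.
```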
If you have a single commodity machine, the cited solutions are useful for deploying small models. If you have several commodity machines, you can’t combine them efficiently with the cited solutions to deploy a large model, and even if you could, it would require a team to manage and maintain the system.
You’re wrong and OP is right. Llama.cpp can already do exactly what “Anyway System” claims to do, without the bullshit. And the bit about “even if you could, it would require a team to manage and maintain the system” is so stupid.
There are several open-source frameworks besides llama.cpp that allow this; they’ve been around for a while and are actively maintained.
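To be concrete about the multi-machine part: llama.cpp has an RPC backend for pooling boxes. A rough sketch of the moving parts, driven from Python purely for illustration; the binary and flag names (rpc-server, --rpc, -ngl) are from recent llama.cpp builds and may differ in your version, and the IPs, ports, and model filename are placeholders:

```python
# Rough sketch: pooling two boxes with llama.cpp's RPC backend.
# Binary and flag names (rpc-server, --rpc, -ngl) come from recent llama.cpp
# builds and may differ by version; IPs, ports, and model path are placeholders.
import subprocess

WORKERS = ["192.168.1.11:50052", "192.168.1.12:50052"]  # run `rpc-server -p 50052` on each box first

# Head node: serve the model, offloading work to the RPC workers.
subprocess.run([
    "llama-server",
    "-m", "gpt-oss-120b-Q4_K_M.gguf",   # placeholder GGUF filename
    "--rpc", ",".join(WORKERS),          # pool the remote backends
    "-ngl", "99",                        # offload as many layers as possible
    "--port", "8080",
])
# Once it's up, the same OpenAI-compatible endpoint from the earlier sketch applies.
```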
Also, this seems like a stunt to get VC money. Good on the founders, I guess. But from what I read, the solution is a nothingburger. My money is on llama.cpp, ktransformers, or ik_llama.cpp.
So what do you get with an LLM run at home? How capable is it, and what can you use it for?
I still think AI is mostly a toy and a corporate inflation device. There are valid use cases, but I don’t think they’re the majority of the bubble.
- For my personal use, I used it to learn how models work from a compute perspective. I’ve been interested and involved with natural language processing and sentiment analysis since before LLMs became a thing. Modern models are an evolution of that.
- A small, consumer-grade model like gpt-oss-20b is around 13 GB and can run on a single mid-grade consumer GPU and maybe some RAM. It’s capable of parsing and summarizing text, troubleshooting computer issues, and some basic coding or code review for personal use. I built some bash and Home Assistant automations for myself using these models as crutches. Also, there is software that can index text locally to help you have conversations with large documents. I use this with the documentation for my music keyboard, which is a nightmare to program, and with complex APIs.
- A mid-size model like Nemotron3 30B is around 20 GB, can run on a larger consumer card (like my 7900 XTX with 24 GB of VRAM, or two 5060 Tis with 16 GB of VRAM each), and will have vaguely the same usability as the small commercial models like Gemini Flash or Claude Haiku. These can write better, more complex code. I also use these to help me organize personal notes: I dump everything in my brain to text and have the model give it structure.
- A large model like GLM4.7 is around 150 GB and can do all the things ChatGPT or Gemini Pro can do, given web access and a pretty wrapper. This requires big RAM and some patience, or a lot of VRAM. There is software designed to run these larger models in RAM faster, namely ik_llama, but at this scale you’re throwing money at AI. (A rough sizing sketch follows this list.)
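Here’s that sizing sketch: a rough “which tier can I run” helper using the file sizes above. The thresholds are just the numbers I quoted, and the headroom factor for context/KV cache is an assumption:

```python
# Rough "which tier can I realistically run" helper, using the file sizes above.
# Thresholds are approximate; real use needs extra headroom for KV cache/context.
TIERS = [
    ("small (~13 GB, e.g. gpt-oss-20b)", 13),
    ("mid   (~20 GB, e.g. a 30B-class model)", 20),
    ("large (~150 GB, e.g. a GLM-class model)", 150),
]

def pick_tiers(vram_gb: float, ram_gb: float, headroom: float = 1.2) -> None:
    """Print whether each tier fits in VRAM, spills to system RAM, or doesn't fit."""
    for name, size in TIERS:
        need = size * headroom
        if need <= vram_gb:
            print(f"{name}: fits in VRAM (fast)")
        elif need <= vram_gb + ram_gb:
            print(f"{name}: runs with CPU/RAM offload (slower)")
        else:
            print(f"{name}: doesn't fit on this box")

pick_tiers(vram_gb=24, ram_gb=64)  # e.g. a 7900 XTX plus 64 GB of system RAM
```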
I played around with image creation and there isn’t anything there other than a toy for me. I take pictures with a camera.
> Now, EPFL researchers… have released new software that allows users to download open-source AI models and use them locally, with no need for the cloud to answer questions or complete tasks.
It’s cool that they got LLMs running on local clusters of computers, but the way it’s written makes it sound like people haven’t already been running local LLMs (including gpt-oss-120b) for a long time.
OFC you can… I can run the 70B DeepSeek on my 16GB RX 6800 XT with 64GB RAM already…
That last amount you mentioned could be a little problematic, at the moment…





