Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 4 Posts
  • 351 Comments
Joined 3 years ago
Cake day: June 23rd, 2023




  • Ah, the good old days when your “dumb” refrigerator would kill children playing hide and seek because the latch wouldn’t open from the inside. When it was lined with asbestos, because that’s literally the best insulation that exists except aerogel. When the mercury thermostat would fail—leaking mercury onto your food (and aerosolizing some, which you’d breathe in as soon as you opened the door)—and it would freeze everything inside, complete with an interior wall of snow that could take days to defrost. It ran on old-school Freon, destroying the ozone layer. Or, before that, fun, highly toxic gases like methyl chloride!

    Those were the days! When a breeze through the house on a day with wonderful weather could blow out the pilot light in your oven, slowly leaking gas into your house until it exploded late at night, destroying the entire home while everyone slept.

    Then the wonders of electricity came along to produce ovens hooked up to 220V lines without a grounding wire, with wiring that would slowly fail over time, eventually making contact with the metal frame and electrocuting anyone who touched the device—or anyone who touched the person touching it.

    Ovens were built different “back in the day”! They didn’t have anti-tip brackets, resulting in loads of children sitting on the open oven door, tipping the whole stove forward and spilling boiling liquids down upon themselves.

    The best were those old washing machines, though! You could lift up the lid and look inside to see your laundry spinning at high speeds! Just don’t reach your hand in, or you could find out what the term “degloving” means.

    Ah yes, the good old days of appliances.






  • She didn’t dial anywhere near enough numbers

    Not necessarily! This particular phone had a feature that let you set shortcut numbers. It was an advanced form of the “long press a single number to dial a particular contact” feature that came before it. So you could go into your contacts and—via a series of absurdly complicated menus for such a simple device—set “7752” as the shortcut number to dial, say, your bank of fax machines that somehow deliver the equivalent of 100Base-T Ethernet speeds.

    “Tell me how old you are without telling me how old you are” 😢


  • I recently stayed at a rental property that had this (actual photo):

    Photo of a NuTone Intercom with a built-in CD player and FM tuner

    I tried to get it working but none of the remote panels worked. They were all disconnected somehow (the owner probably cut the wires to prevent shenanigans from guests cranking the volume and leaving it like that). The CD player worked (central panel only) but, oddly, it couldn’t pick up any FM stations. It would tune to them (the “scan” feature worked) but they only ever produced static. I suspect the capacitors in the amplification circuit dried out, or something corroded after sitting in regular ocean salt spray (it was on a beach) for so long 🤷


  • Wow! This brings back memories… It was a Soul Crusher: A primitive technology used to commune with the dead over long distances. I’ll explain…

    These devices used the “Afterlife Toll” (AT) command set, invented by someone named “Hayes”, which I believe was just a nickname or mistranslation of Hades. With the correct invocation, you could whisper into the great beyond. Here’s an example:

    ATDT 6665551234

    Translated: “Afterlife Toll, Death Touch <helliphone number>”. After this invocation, the user would hear the pleasant sound of souls being crushed in order to make the afterlife connection.

    Of course—due to the popularity of such devices—crushing souls over long distances could get expensive so a number of Incorporeal Service Providers (ISP) sprang up to make it cheaper and easier than ever to crush souls from anywhere.

    Cool fact: This is where the term “soul-crushing machines” comes from! These days, soul crushing is fully automated and operates far beyond what can be measured in Beings Per Seance (BPS). Nearly every computer ships with an Ethernet connection and practically everyone is walking around with devices that can commune over WIFI (Wailing Incorporeal Fidelity).

    In fact, our Incorporeal Technology (IT) is so advanced, you can have a soul crushing experience from anywhere in the world at all hours of the day, every day!
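
    (For the technically inclined: here’s a minimal sketch of performing the dread invocation above from Python with pyserial. The port name and baud rate are my own guesses; every soul crusher was wired a little differently.)

    # Minimal sketch: send the dread ATDT invocation to a Hayes-compatible
    # soul crusher. Port name and baud rate are assumptions.
    import serial  # pip install pyserial

    ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=10)
    ser.write(b"ATDT6665551234\r")                  # Afterlife Toll, Death Touch
    print(ser.readline().decode(errors="replace"))  # hopefully "CONNECT"
    ser.close()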



  • I literally said I’m using qwen3.5:122b for coding. I also use GLM-5 but it’s slightly slower so I generally stick with qwen.

    It’s right there, in ollama’s library: https://ollama.com/library/qwen3.5:122b

    The weights and everything else for it are on Huggingface: https://huggingface.co/Qwen/Qwen3.5-122B-A10B

    This is not speculation. That’s what I’m actually using nearly every day. It’s not as good as Claude Code with Opus 4.6 but it’s about 90% of the way there (if you use it right). When GLM-5 came out, I cancelled my Claude subscription and just stuck with Ollama Cloud.

    I can use gpt-oss:20b on my GPU (4060 Ti 16GB)—and it works well—but for $20/month, qwen3.5 and GLM-5 are better options.
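
    If you want to see what “actually using it” looks like, this is roughly how I drive it from scripts (a minimal sketch using the ollama Python client; the prompt is just an example):

    # Minimal sketch: chat with qwen3.5:122b through Ollama's Python client.
    # Assumes `pip install ollama` and a running Ollama server (local or cloud).
    import ollama

    response = ollama.chat(
        model="qwen3.5:122b",
        messages=[{"role": "user", "content": "Refactor this loop into a list comprehension: ..."}],
    )
    print(response["message"]["content"])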

    I still use my GPU for (serious) image generation though. Using ChatGPT (DALL-E) or Gemini (Nano Banana) is OK for one-offs but they’re slow AF compared to FLUX 2 and qwen’s image models running locally. I can give it a prompt and generate 32 images in no time, pick the best one, then iterate from there (using some sophisticated ComfyUI setups). The end result is a superior image to anything you’d get from Big AI.
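
    My actual setup is ComfyUI, which doesn’t condense into a snippet, but the batch-then-pick idea looks roughly like this with diffusers (the model id, step count, and seed range here are assumptions; FLUX.1-schnell stands in for whatever your VRAM can hold):

    # Rough sketch of "generate a batch, pick the best": render 32 seeds of
    # one prompt with a FLUX model via diffusers, then eyeball the results.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    ).to("cuda")

    prompt = "macro photo of a brass gear on weathered oak, soft morning light"
    for seed in range(32):
        image = pipe(
            prompt,
            num_inference_steps=4,  # schnell is tuned for very few steps
            generator=torch.Generator("cuda").manual_seed(seed),
        ).images[0]
        image.save(f"candidate_{seed:02}.png")  # pick a winner, iterate from there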


  • I just added up how much it would cost (in theory—assuming everything is in stock and ready to ship) to build out a data center capable of training something like qwen3.5:122b from scratch in a few months: $66M. That’s how much it would cost for 128 NVIDIA B200 nodes (8 GPUs each), InfiniBand networking, all-flash storage (SSDs), and 20 racks (the hardware).
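
    Breaking that down (back-of-envelope, using only the numbers above):

    # Back-of-envelope split of the $66M estimate across the hardware above.
    nodes = 128                 # NVIDIA B200 nodes, 8 GPUs each
    gpus = nodes * 8            # 1,024 GPUs total
    total_usd = 66_000_000      # all-in: nodes, InfiniBand, flash storage, racks
    print(f"${total_usd / nodes:,.0f} per node")  # ~$515,625 per node
    print(f"${total_usd / gpus:,.0f} per GPU")    # ~$64,453 per GPU, amortized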

    If OpenAI went bankrupt, that would result in a glut of such hardware which would flood the market, so the cost would probably drop by 40-60%.

    Right now, hardware like that is all being bought up and monopolized by Big AI. This has resulted in prices going up for all these things. In a normal market, it would not cost this much! Furthermore, the reason why Big AI is spending sooooo much fucking money on data centers is because they’re imagining demand that doesn’t exist yet. It’s not for training. Not anymore. They’re assuming they’re going to reach AGI any day now and, when they do, they’ll need all that hardware to be the world’s “virtual employee” provider.

    BTW: Anthropic has a different problem than the other AGI dreamers… Claude (for coding) is in such high demand that their biggest cost is inference. They can’t build out hardware fast enough to meet that demand. For every dollar they make, they’re spending a dollar to build out infrastructure. Presumably—some day—they’ll actually be able to meet demand with what they’ve got, and on that day they’ll basically be printing money. Assuming they can outrun their debts, of course.


  • I personally love glm-5 and qwen3.5, specifically: https://ollama.com/library/qwen3.5:122b

    I’ve used them both for coding and they work really well (way better than you’d think). They’re also perfectly capable of the usual LLM chat stuff (e.g. checking my grammar), but even older, smaller models can handle that these days.

    For a treat: Have someone show you what it’s like to use some of these models to search the web! It’s amazing. You don’t see ads, you don’t have to comb through 12 pages of search results, and they read the pages that moment (not cached) to give you summaries of the content. So when you click the link to go to the content, you know it’s the thing you were looking for.

    They’re not using a local index of the Internet, either; they’re searching on your behalf using whatever search engines you configured. It’s waaaaay better than ChatGPT (which uses Bing behind the scenes whether you like it or not) or Gemini (which uses Google, obviously). The (self-hosted) LLM will literally be running curl for you against Google, DuckDuckGo, Bing, or whatever TF else you want (simultaneously), then reading each of the search results and using your prompt to figure out which ones are most relevant. It’s sooooo nice!
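
    If you’re curious what that pattern looks like under the hood, here’s a toy sketch (the search URL and the result-link CSS class are assumptions on my part, and real setups use proper tool-calling rather than scraping):

    # Toy sketch of LLM-driven web search: query a search engine, fetch the
    # result pages, then let a local model judge relevance and summarize.
    import requests
    from bs4 import BeautifulSoup
    import ollama

    query = "how do anti-tip brackets for ovens work"
    html = requests.get(
        "https://duckduckgo.com/html/", params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"}, timeout=10,
    ).text
    links = [a["href"] for a in BeautifulSoup(html, "html.parser").select("a.result__a")][:5]

    pages = []
    for url in links:
        try:
            pages.append((url, requests.get(url, timeout=10).text[:4000]))
        except requests.RequestException:
            pass  # dead links happen; skip them

    answer = ollama.chat(model="qwen3.5:122b", messages=[{
        "role": "user",
        "content": f"Question: {query}\n\nRank these pages by relevance and "
                   f"summarize the best one:\n\n"
                   + "\n\n".join(f"{u}\n{t}" for u, t in pages),
    }])
    print(answer["message"]["content"])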

    FYI: Ollama.com’s library page is actually a great resource for finding info on all the models that can be self-hosted: https://ollama.com/library


  • You seem to be unaware that it only takes about four NVIDIA HGX H100 nodes (32 GPUs) to train something like qwen3.5:122b. That model is about as good as ChatGPT was six months to a year ago (for the usual use cases). It would take a long-ass time though (over a year), so you’d probably want 50-100 HGX H100 nodes (or lots of the newer, cheaper ARM-based hardware).

    The weights for qwen3.5:122b are open. That means that if you’ve got the hardware (loads of universities and non-profits have waaaay TF more than 4 HGX H100 nodes), you can continue modern AI development. Everything you need is right there on Huggingface! DeepSeek’s stuff is also open, I think, but I forget. Aside: In my head I hold the qwen models as “the gold standard” based on many articles I’ve read about them, but AI moves so fast there might be better stuff out on any given day! I haven’t read AI news in like a week, so I could be all wrong and qwen3.5 is now sooo obsolete, hehe (that’s how it feels to follow AI news, anyway 🤣).

    Even more interesting: qwen3.5:122b isn’t just an LLM. It does visual reasoning (e.g. give it a picture of a plant and ask it to identify it, count the number of screws in an image, estimate distances, etc) as well as the usual LLM stuff. You can read all about it here:

    https://ollama.com/library/qwen3.5:122b

    …and if you install ollama and spend $20 on ollama.com’s cloud service, you can actually try it out without having to own enough GPUs to cover the 245+GB requirement. I highly recommend that service! You can try out all the latest & greatest models on your local PC (or phone!) for any purpose you want, for $20/month. Whenever a new model comes out they usually have it up on their servers within a day or two, and it’s fast, too.

    FYI: I’ve used ollama cloud to evaluate models for coding (web dev with Python back end) and qwen3.5:122b is fantastic. It’s not as good as Claude Opus 4.6 but it’s close (and cheap) enough that you can just make up for the mistakes with extra instances that check the output with a critical eye (the latest trick in AI-based coding to get good output).

    For reference, the University of Texas at Austin has data centers with 4,000 NVIDIA Blackwell (B200/GB200) GPUs, Harvard has 1,144 GPUs, and the Universities of Cambridge and Bristol (in the UK) have some monstrous mix of Intel and AMD GPUs. All of them are perfectly capable of training new models from scratch or continuing development on existing open-weight models like DeepSeek and Qwen.

    Generative AI isn’t going anywhere. Furthermore, advancements in that space happen so fast that it’s likely that in a few years we won’t need so many GPUs/VRAM to train models. Especially if ternary models (and similar, like Google’s TurboQuant tech) take off.

    I know this is a long comment but I want to point something else out: If OpenAI and Anthropic go bust, that would flood the market with cheap GPUs. It would be a total price collapse and you can bet your ass that clever universities and service providers (like Amazon compute, but 3rd party) would snap those up and bring down prices across the board.


  • Same places as usual: Academia and open source foundations.

    That’s where 99% of all advancements in AI come from. You don’t actually think Big AI is paying as many people to do computer science and mathematics research as all the universities in the world (with computer science programs), do you?

    It’s the same shit as always: Big companies commercialize advancements and discoveries made by scientists and researchers in academia (mostly) and give almost nothing back.

    Big AI has partnerships with tons of schools, and if it weren’t for that, they wouldn’t be advancing the technology as fast as they are. In fact, the only reason why many of these discoveries are made public at all is because of agreements with the schools that require the discoveries/papers be published (so the school, its professors, researchers, and students can get credit).

    Like I was saying before: You don’t need a trillion dollars in data centers to do this stuff. Almost all the GPUs and special chips bought (and preordered, sigh) by Big AI are being used to serve their customers (at great expense). Not for training.

    Training used to be expensive but so many advancements have been made that this is no longer the case. Instead, most of the resources in “AI data centers” (and research) are going toward making inference more efficient. That’s the step that comes after you give an AI a prompt.

    Training a super modern AI model can be done with a university’s data center or a few hundred thousand to a few million dollars of rented GPUs/compute. It doesn’t even take that long!

    Generative AI improves at a ridiculously fast rate, in nearly all the ways you could think of: training, inference (e.g. figuring out user intent), knowledge, understanding, and weirder, fluffier stuff like “creativity” (the benchmarks for which are dubious, BTW).




  • Assume all the big AI firms die: Anthropic, OpenAI, Microsoft, Google, and Meta. Poof! They’re gone!

    Here would be my reaction: “So anyway… have you tried GLM-7? It’s amazing! Also, there’s a new workflow in ComfyUI I’ve been using that works great to generate…”

    Generative AI is here to stay. You don’t need a trillion dollars worth of data centers for progress to continue. That’s just billionaires living in an AGI fantasy land.