• 4 Posts
  • 1.53K Comments
Joined 2 years ago
Cake day: March 22nd, 2024

  • That’s what gets upvotes on Lemmy, sadly.

    This is how Voat (another Reddit clone) died. Political shitposts and clickbait tabloids crowded out every niche, so all the interesting content left.

    As it turns out, doomscrolling Twitter troll reposts with the same few comments on each one is quite depressing.

    I don’t know a good solution, either. Clickbait works. Maybe some structural changes could help, though?



  • As a hobby mostly, but it’s useful for work. I found LLMs fascinating even before the hype, when everyone was trying to get GPT-J finetunes named after Star Trek characters to run.

    Reading my own quote, I was being a bit dramatic. But at the very least, it’s super important to grasp some basic concepts (like MoE CPU offloading, quantization, and the specs of your own hardware) and to watch for new releases in LocalLlama or whatever. You kinda do have to follow and test things, yes, as there’s tons of FUD in open-weights AI land.


    As an example, stepfun 2.5 seems to be a great model for my hardware (a single Nvidia GPU + 128GB of CPU RAM), and it could easily have flown under the radar if I weren’t following things. I also wouldn’t have known to run it with ik_llama.cpp instead of mainline llama.cpp, for a considerable speed/quality boost over (say) LM Studio; there’s a rough launch sketch at the end of this comment.

    If I were to Google all this now, I’d probably still get links from Tech Bro YouTubers for setting up the DeepSeek distillations. That series is now dreadfully slow and long obsolete.
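
    For reference, here’s roughly what a launch looks like on that kind of box. Treat it as a sketch, not a recipe: the model filename and the -c/-t values are placeholders, -fmoe is ik_llama.cpp-specific as far as I know, and flag spellings drift between builds, so check ./llama-server --help for yours.

    # -ngl 99        : offload all layers to the GPU...
    # -ot "exps=CPU" : ...then override the MoE expert tensors back to system RAM
    # -fa            : flash attention
    # -fmoe          : fused MoE kernels (ik_llama.cpp only, I believe)
    # -c / -t        : context size and CPU threads, tune for your machine
    ./llama-server -m ./some-big-moe-model-IQ4_K.gguf \
        -ngl 99 -ot "exps=CPU" -fa -fmoe -c 16384 -t 16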



  • Chinese electric cars were always going to take off, and RAM is similar: it’s just a commodity. If you sell the most bits at the lowest price and at sufficient speed, it works.

    If you’re in edge machine learning, or you write your own software stack for niche stuff, Chinese hardware will be killer.

    But if you’re trying to run Steam games? Or CUDA projects? That’s a whole different story. It doesn’t matter how good the hardware is; it’s always going to be handicapped by the software for “legacy” code, and not just on performance but on driver bugs and quirks.

    Proton (and focusing everything on a good Vulkan driver) is not a bad path forward, but still. They’re working against decades of dev work targeting AMD/Nvidia/Intel, up and down the stack.


  • Also, this has been the case (or at least planned) for a while.

    Pascal (the GTX 10 series) and Ampere (the RTX 30 series) used the same architecture for datacenter and gaming parts; the big gaming dies were dual-use and datacenter-optimized. This habit sort of goes back to ~2008, but Ampere and the A100 are really where “datacenter first” took off.

    AMD announced a plan to unify their datacenter/gaming architectures a while ago, and prioritized the MI300X before that. And EPYC has always been the priority, too.

    Intel wanted to do this, but had some roadmap trouble.

    These companies have always put datacenter first; it just took this much drama for the consumer segment to largely notice.