• brucethemoose@lemmy.world · 6 hours ago

    Not at scale. Even on the new architecture, one really needs some kind of accelerator to make it economical for servers.

    Bitnet-like models might change the calculus, but no major trainer has tried that yet.

    • [object Object]@lemmy.ca · 6 hours ago

      Even with a bitnet, it’s almost certainly better to train at high float precision and then refine down to bits.

      I would expect bitnet to require more layers for equivalent quality too.
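      The train-in-float-then-collapse step can be sketched numerically. The absmean recipe below follows the BitNet b1.58 idea (full-precision weights collapsed to {-1, 0, +1} plus one scale per matrix); all shapes and constants here are illustrative, not anyone's production settings:

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """BitNet-b1.58-style absmean quantization: weights trained in
    full precision are collapsed to codes in {-1, 0, +1} plus a
    single per-matrix scale."""
    scale = np.mean(np.abs(w)) + 1e-8          # absmean scale
    w_q = np.clip(np.round(w / scale), -1, 1)  # ternary codes
    return w_q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)
w_q, s = ternary_quantize(w)

# Dequantized approximation that inference would actually use:
w_hat = w_q.astype(np.float32) * s
err = float(np.abs(w - w_hat).mean())
print(sorted(set(w_q.flatten().tolist())), round(err, 4))
```

      The refinement the comment describes would then continue training (or finetuning) against this ternary forward pass rather than quantizing once at the end.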

        • brucethemoose@lemmy.world · 6 hours ago

        I just meant for mass inference serving.

        Yeah, I haven’t seen much in the way of bitnet training savings yet, like regular old QAT. It does appear that Deepseek is finetuning their MoEs in a 4-bit format now, though.
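        For contrast with post-training conversion, QAT keeps a fake-quantization step in the forward pass so the network learns around the rounding (gradients pass through unchanged via a straight-through estimator). This is a generic sketch of that forward pass; the symmetric 4-bit grid and names are illustrative, not DeepSeek's actual recipe:

```python
import numpy as np

def fake_quant(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """QAT forward pass: snap weights onto a symmetric int grid,
    then dequantize. Training would backprop straight through
    this rounding (straight-through estimator)."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for int4
    scale = np.max(np.abs(w)) / qmax + 1e-12        # per-tensor scale
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

w = np.linspace(-1, 1, 9, dtype=np.float32)
print(fake_quant(w))  # weights snap onto a 15-level int4 grid
```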

    • ag10n@lemmy.world · 6 hours ago

      Yes, you can run it at scale, which is why it uses Huawei hardware.

      You can run it on anything, scaled or not.

      • theunknownmuncher@lemmy.world · 4 hours ago

        Nope! You don’t know what you’re talking about. At all. But you can have fun running a 1.6 trillion parameter model on CPU at basically 0 tokens per second at scale, MoE or not.

          • theunknownmuncher@lemmy.world · 25 minutes ago

            You’ve proved my point that you don’t know what you’re talking about by blindly linking to the git repo. Couldn’t find any source that supports your claim? I wonder why.

            Sure you can serve one request at a time to one patient user at a slow token per second rate, which makes running locally viable, but there is no RAM that has the bandwidth to run this model at scale. Even flash would be incredibly slow on CPU with multiple requests. You’d need the high bandwidth of VRAM and to run across multiple GPUs in a scalable way, it requires extremely high bandwidth interconnects between GPUs.
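            The bandwidth ceiling is easy to put rough numbers on: at decode time every active weight must be streamed from memory once per token, so bandwidth divided by bytes-per-token caps throughput. Every figure below is an illustrative assumption (a hypothetical ~30B-active-parameter MoE at 8-bit weights), not a measured spec:

```python
def max_tokens_per_sec(active_params_b: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode throughput for a
    memory-bandwidth-bound model: each token streams every
    *active* weight once."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical MoE, ~30B active params at 1 byte/param:
for name, bw in [("dual-channel DDR5 (~80 GB/s)", 80),
                 ("8-channel server DDR5 (~300 GB/s)", 300),
                 ("one HBM3 GPU (~3000 GB/s)", 3000)]:
    print(f"{name}: ~{max_tokens_per_sec(30, 1, bw):.1f} tok/s ceiling")
```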

      • brucethemoose@lemmy.world · 6 hours ago

        Just not power/cost efficiently on CPU only, is what I meant. CPUs don’t have the compute for batching (running generation requests in parallel). You need an accelerator, like Huawei’s, to be economical.

        It’s fine for local inference, of course.
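        A quick roofline-style estimate shows why batching is what makes CPUs fall behind: weights stream from memory once per step regardless of batch size, but the FLOPs grow linearly with the batch, so aggregate throughput flattens once compute saturates. Every hardware number below is an assumption for illustration:

```python
def step_time_s(batch: int, params_b: float = 30, bytes_per_param: float = 1,
                bandwidth_gb_s: float = 300, compute_tflops: float = 5) -> float:
    """One batched decode step: weights stream once regardless of
    batch size, while FLOPs (~2 * active params per token) scale
    linearly with the batch. Roofline: the slower limit wins."""
    mem_s = params_b * 1e9 * bytes_per_param / (bandwidth_gb_s * 1e9)
    flop_s = 2 * params_b * 1e9 * batch / (compute_tflops * 1e12)
    return max(mem_s, flop_s)

# Assumed server CPU: ~300 GB/s bandwidth, ~5 TFLOPS of matmul.
for b in (1, 8, 64):
    print(f"batch {b:3d}: ~{b / step_time_s(b):6.0f} tok/s aggregate")
```

        With these assumed numbers, throughput grows with batch size only until the compute roof is hit, after which it plateaus; an accelerator with hundreds of TFLOPS keeps scaling much further, which is the economics being argued here.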

        • ag10n@lemmy.world · 4 hours ago

          An ecosystem that can run on any hardware, efficiently or not, is still an ecosystem developed for the Chinese market.