• NuXCOM_90Percent@lemmy.zip · 20 hours ago

    Ehhhh.

    Yes, there are some fairly revolutionary(-ish) chips. Those are few and far between because they tend to be hyper-specialized: inference but not training, or optimized only for very small input matrices (common for edge computing, like cameras).

    By and large? They really ARE “traditional” GPGPUs that have been optimized to hell and back for vector operations and linear algebra. A lot of the gains come from multiplying floating-point throughput by 2-4x, depending on whether half or quarter precision is used. They aren’t as good at double precision as hardware optimized for it, but only a very small subset of users needs that. There will be no issues repurposing the hardware in these data centers.
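
    To make the 2-4x concrete, here’s a back-of-the-envelope sketch (Python, with made-up round numbers rather than any particular card’s specs):

        # Why dropping precision multiplies throughput: the same vector
        # hardware can pack twice the fp16 lanes and four times the fp8
        # lanes of fp32. All figures are hypothetical round numbers.
        BASE_FP32_TFLOPS = 100

        # fp64 is shown at the best-case 1:2 ratio of compute-oriented
        # parts; consumer cards are often far lower (1:32 or worse).
        for precision, packing in [("fp64", 0.5), ("fp32", 1), ("fp16", 2), ("fp8", 4)]:
            print(f"{precision}: ~{BASE_FP32_TFLOPS * packing:.0f} TFLOPS")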

    And the rest is data movement, which has always been the real problem.
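
    That’s essentially the roofline model. A rough sketch of the arithmetic, again with hypothetical numbers:

        # Roofline-style sanity check: a kernel is memory-bound whenever
        # its arithmetic intensity (FLOPs per byte moved) falls below the
        # ratio of compute throughput to memory bandwidth.
        peak_flops = 200e12   # hypothetical 200 TFLOPS compute ceiling
        bandwidth = 2e12      # hypothetical 2 TB/s memory bandwidth

        ridge = peak_flops / bandwidth  # FLOPs per byte to stay compute-bound
        print(f"need ~{ridge:.0f} FLOPs per byte moved to stay compute-bound")

        # An elementwise fp32 vector add does 1 FLOP per 12 bytes (two
        # reads, one write) -- hopelessly memory-bound. Large matrix
        # multiplies reuse each operand many times, which is why they
        # are the workload these chips chase.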

    • Leon@pawb.social · 19 hours ago

      I don’t think most companies will find much value in that, though. None of the infrastructure I work with uses heavy computation, and if we tried to jam it in, we’d be making solutions looking for problems.

      An email server doesn’t need a GPU, and neither does a file server, a website, or an e-commerce platform.

      I suppose they could rent it out as supercomputer capacity, but I don’t think the return on cost would be that good.