• Riskable@programming.dev · 10 hours ago (edited)

      I’ve been researching this a bit… I’ve come to the conclusion that there is no AI bubble. In fact, we’re only just getting started down this road. Unless there’s some massive 100x efficiency breakthrough in AI training and inference, the entire world is going to be building seemingly endless AI data centers (and the normal compute kind, e.g. for stuff like AWS, Google/YouTube, Meta, banks) for at least a decade. Probably a little longer (12-15 years before demand levels out).

      Everyone thinks “AI data center” means ChatGPT, Claude, Gemini, etc., but there’s 10,000x more demand for AI than those services. Think: pharmaceutical companies trying to find proteins, scientists (and big agriculture!) trying to model the weather, and other businesses trying to automate stuff. Not just software, either: robots and things like conveyor belts.

      Another example: Ever use one of those self-checkouts that’s mostly just a camera pointing down, where you place the stuff you’re purchasing? That uses AI too.
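      For the curious, a system like that typically matches a camera image’s feature vector against a product catalog. A toy sketch of that idea (the item names, vectors, and `identify` helper are all made up for illustration, not any real checkout system):

```python
import numpy as np

# Hypothetical product catalog: item name -> embedding that a vision
# model would produce (here just made-up 4-d vectors for illustration).
catalog = {
    "apple":  np.array([0.9, 0.1, 0.0, 0.2]),
    "banana": np.array([0.1, 0.9, 0.1, 0.0]),
    "milk":   np.array([0.0, 0.2, 0.9, 0.3]),
}

def identify(feature_vec):
    # Nearest-neighbour lookup by cosine similarity: the camera frame's
    # embedding is compared against every catalog embedding and the
    # closest item wins.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(catalog, key=lambda name: cos(catalog[name], feature_vec))

print(identify(np.array([0.85, 0.15, 0.05, 0.1])))  # prints "apple"
```

      The expensive part in a real deployment is the vision model that turns pixels into those feature vectors; that’s where the AI compute demand comes from.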

      Having said that, there is a great big bubble in AI: OpenAI, specifically. That will definitely pop one day. And hopefully, the DRAM bullshit will go along with it.

      • girsaysdoom@sh.itjust.works · 2 hours ago

        The types of AI you mention at the start of your comment have been around for years and aren’t exactly the problem we’re facing, as far as I’ve researched. The AI bubble is a result of the hype around transformer-based generative AI, not AI itself. Neither data centers nor AI are new, and up until 2020 they weren’t nearly as much of a problem as they are today, thanks to the hype around and growing demands of these large models.

        The problem is literally a scaling issue for generative AI, and those who decide to build new data centers just for this use are ignoring the environmental and socioeconomic issues that should be acting as limiters.

      • FauxLiving@lemmy.world · 9 hours ago

        Yeah, the LLM and picture-generation bubble will burst, but that isn’t ‘AI’; it’s a tiny subset of tasks that happen to be easy to train because the companies involved have helped themselves to all the text and images humanity has created.

        The other uses of AI are harder to train, because we don’t have centuries’ worth of robotic-motion data or a YouTube’s worth of folded-protein data. Those are the uses that will have the most impact in the future, as they’re developed.

        LLMs are a bubble, AI is not.

        • benjirenji@slrpnk.net · 6 hours ago

          LLMs are the only thing that’s hyped. The other models and applications already existed when ChatGPT first hit the public, and they haven’t had any special breakthrough that would explain exponential growth in investment or a need for compute power. Language models had that with the transformer architecture; everything else just develops iteratively.

          The bubble we see now is because of language models. We can try to conflate it with other deep models and call it all AI, but that doesn’t change the fact that the generative models are the only ones requiring these resources while still searching for a problem to solve.
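          The “transformer structure” mentioned above boils down to scaled dot-product attention, where every token attends to every other token, so compute and memory grow quadratically with sequence length. A minimal NumPy sketch of that operation (an illustration of the idea, not any library’s actual implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Core transformer operation: each query row attends to all key
    # rows, producing a (seq, seq) weight matrix -- this quadratic
    # blow-up is one driver of LLM compute demand.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy example: 4 tokens, 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # prints (4, 8)
```

          Because each softmax row sums to 1, the output rows are convex combinations of the value rows; doubling the sequence length quadruples the size of the `scores` matrix.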

      • Grass@sh.itjust.works · 9 hours ago

        I agree it’s not all the ChatGPT type, but considering that even Nvidia was hyping it up as war tech, I think this is a bit of wishful thinking.