• finitebanjo@piefed.world · 3 days ago

    Because training has diminishing returns: the small improvement between (for example purposes) GPT-3 and GPT-4 would need exponentially more power to produce the same effect again for GPT-5. In 2022 and 2023, OpenAI and DeepMind both published predictions that reaching human accuracy could never be done, with the latter concluding it could not happen even with infinite power.

    So in order to get as close as possible, in the future they will need to secure as much power as possible. Academic papers identify power as the one true bottleneck. To see why the returns diminish, look at the toy scaling curve sketched below.
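    A minimal sketch of the diminishing-returns idea, assuming the kind of power-law scaling curve reported in the scaling-law literature plus an irreducible loss floor; every constant here is an illustrative assumption, not a fitted value:

    ```python
    # Toy power-law scaling curve: L(C) = L_floor + A * C**(-alpha).
    # Loss falls as training compute C grows, but each equal gain costs
    # multiplicatively more compute, and the curve never drops below L_floor.
    # All constants are made-up illustrative values, not from any paper.

    L_FLOOR = 0.10   # hypothetical irreducible loss, never reached
    A = 2.0
    ALPHA = 0.05

    def loss(compute: float) -> float:
        return L_FLOOR + A * compute ** -ALPHA

    prev = None
    for exp in range(20, 29, 2):        # 1e20 .. 1e28 FLOPs of training compute
        cur = loss(10.0 ** exp)
        gain = "" if prev is None else f"  (improvement {prev - cur:.4f})"
        print(f"compute=1e{exp}  loss={cur:.4f}{gain}")
        prev = cur
    # Each 100x jump in compute buys a smaller loss reduction, flattening
    # toward L_FLOOR rather than ever reaching it.
    ```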

    • Valmond@lemmy.world · 1 day ago

      And academia will work on that problem. It reminds me of Intel processors that were “projected” to need kilowatts of power, until smart people built other kinds of chips and now they don’t need 2000 watts.

      • finitebanjo@piefed.world · 17 hours ago

        Academia literally got its funding cut by more than a third, and Microsoft is planning to revive breeder reactors.

        You might think academia will work on the problem, but the people running these things absolutely do not.