Altman’s remarks in his tweet drew an overwhelmingly negative reaction.

“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”

Others called him a “f***ing psychopath” and “scum.”

“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.

  • MartianRecon@lemmus.org · 1 day ago

    AI is absolutely not here to stay. This kind of nonsense needs to be nipped in the bud. The capital investment in AI simply can’t be recouped without major, fantastical leaps in business.

    The revenue coming in is a shell game. Those investment numbers can’t be recouped without charging more than actual people cost.

    So yeah. It absolutely can go away.

    • Jakeroxs@sh.itjust.works · 1 day ago

      Lmfao and computers are just for nerds

      Edit: OpenAI, Anthropic, etc. can all die, but LLMs won’t. You can run a local model (rough sketch below).

      Now, I completely agree that the hype train is completely out of control and that it’s a financial bubble, but the tool itself is not going away.

      Edit2: I think the dot-com bubble is a good analogy: the underlying idea of the internet, online ordering, and everything else it could do was solid; there was just an insane amount of hype piled on top that simply couldn’t be lived up to at the time. But now the biggest companies ever are mainly internet/tech companies.
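
      As a rough illustration of the “you can run a local model” point above, here is a minimal sketch using the llama-cpp-python bindings. The model path, prompt, and generation settings are placeholders of my own, not anything from this thread.

          # Minimal sketch, assuming llama-cpp-python is installed
          # (pip install llama-cpp-python) and some GGUF-format model
          # file has already been downloaded locally.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./models/local-model.gguf",  # placeholder path
              n_ctx=2048,                              # context window size
          )

          result = llm(
              "Summarise the case for running LLMs locally in one sentence.",
              max_tokens=64,    # cap the completion length
              temperature=0.7,  # sampling temperature
          )

          print(result["choices"][0]["text"])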

      • AnarchistArtificer@slrpnk.net · 17 hours ago

        When people complain about AI, it’s often the scale of it that they have a beef with: the fact that it’s being shoved in their faces everywhere they look and mandated for use in their jobs by management, even when it doesn’t make them more productive. A consequence of it being shoved everywhere is the larger set of problems that make people angry, such as the excessive resource use of AI data centres.

        I agree that LLMs are here to stay; I understand enough about how the tech works to know that there is tremendous potential in them. (I originally got into machine learning because I wanted to better understand AlphaFold, a protein structure prediction model made by Google DeepMind; I’m not sure I’d count it as an LLM, but under the hood it works pretty similarly.) However, the problem with AI is less a purely technological one than a question of how the technology is functioning at a societal level.

        I believe that the current societal impact of the AI boom far exceeds the actual technological impact of LLMs. Whilst I get your point about the dot-com bubble analogy, I think that in that case the ratio of “harms caused by the bubble” to “genuine societal impact of the technology once the bubble popped” was much smaller. I grant that we have the benefit of hindsight with the internet, because the tech has had so much time to mature and become integrated with society, whereas we’re still in the middle of the AI hype bubble, but I don’t believe that LLMs/AI are capable of being anywhere near as transformative to society as the internet. There may be niche fields that are overturned or even functionally destroyed, but there are few genuine use cases for LLMs. They’ll still exist after the bubble has popped, and they’ll have their uses, but I don’t believe they’ll be anywhere near as ubiquitous as they are now.

        Regardless of whether you agree with me on this, one thing we are in accord on is that the bubble is bullshit and harmful. Personally, something that frustrates me about it is that I am genuinely curious to see real progress on the actual use cases for LLMs; I’m open to the possibility that in 10-20 years’ time my predictions in the previous paragraph will have been proven wrong. However, the bubble is just delaying that kind of meaningful integration into society, as well as hindering areas of research that could improve LLMs

        (as well as crowding out other areas of AI research that are based on different architectures and methods, which may get us much closer to the sci-fi sense of AI than LLMs ever could. Song-Chun Zhu is an example of a researcher who used to work in this field of AI but got burnt out by how the economic pressures on research made it hard to do work that wasn’t based on the one dominant method. He’s one of many who are nowadays more interested in researching AI in a “small-data for big tasks” paradigm).

      • MartianRecon@lemmus.org · 1 day ago

        Computers are input-output devices. You put things into a computer and it does what you tell it to do.

        LLMs do not do this; they just give you a facsimile of what they believe you want (toy contrast sketched below).

        LLMs will not go away, but their functionality is extremely limited, as has been proven by their failure to ‘change business forever.’

        And no, ‘but the tech isn’t there yet’ isn’t an argument right now. This is economics. The investment relative to its current capabilities is far outsized, and there will be a massive contraction.
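
        To make the contrast being argued here concrete, a toy sketch (my own illustration, not from either commenter): a conventional program maps the same input to the same output every time, while an LLM samples from a probability distribution over possible outputs, so repeated runs can differ.

            import random

            def add(a: int, b: int) -> int:
                # Conventional computation: same inputs, same output, every time.
                return a + b

            def toy_llm(prompt: str) -> str:
                # Stand-in for token sampling: the answer is drawn from a weighted
                # distribution, so repeated calls with the same prompt can vary.
                candidates = ["4", "four", "2 + 2 = 4", "roughly 4"]
                weights = [0.5, 0.2, 0.2, 0.1]
                return random.choices(candidates, weights=weights, k=1)[0]

            print(add(2, 2), add(2, 2))                  # always: 4 4
            print(toy_llm("2 + 2?"), toy_llm("2 + 2?"))  # may differ between runs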

        • Jakeroxs@sh.itjust.works · 23 hours ago

          Realistically, we are so far beyond “a computer is just an input-output device.” There are thousands of layers built on top of one another to produce what we know as a computer, and anywhere along that chain things can break or fail to perform as expected because some other layer didn’t do what it was supposed to.

          Realistically, what’s the difference between a thing and the facsimile of a thing when the result is the same?

          • MartianRecon@lemmus.org · 18 hours ago

            Semantics.

            A person creates something. LLMs just blurt out an approximation of what they think you might want.