• wonderingwanderer@sopuli.xyz · +48/−2 · 4 days ago

    That’s fucking hilarious. How many instances of this have there been now? And companies keep doubling down on AI? Fucking idiots. I’m not even savvy enough to call myself an amateur, and I know better than to make such a series of obvious mistakes that predictably led to this outcome.

    One possible concern, amid the amusement, is whether Anthropic programmed Claude to punish companies it sees as potential competition. Or is this just a completely bonkers, off-the-rails LLM making terrible decisions because it’s just a probabilistic model and not actually capable of abstract cognition?

    Either way, these people are idiots for giving a machine program enough permissions to wipe their drives, they’re idiots for storing their backups on the same network as their main drives, and they’re idiots for trusting a commercial LLM API, when it would be cheaper to self-host their own.

    • rumba@lemmy.zip · +9 · 4 days ago

      AI writes code

      User vets code

      User runs code

      If you’re not lock-step watching that shit, you need to just be doing it yourself.
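      That "vet before run" gate can be sketched as a minimal wrapper. This is only an illustration of the workflow described above, not any real tool's API: `run_with_approval` and `reviewer` are hypothetical names, and a plain callback stands in for the human review step.

```python
import subprocess
import sys

def run_with_approval(code: str, approve) -> str:
    """Run AI-generated code only if the review callback approves it.

    `approve` stands in for the human "vets code" step; a hypothetical
    helper, not any vendor API.
    """
    if not approve(code):
        return "rejected: not executed"
    # Execute in a separate interpreter process (never eval() in-process),
    # with a timeout so a runaway script can't hang the pipeline.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout

# Toy reviewer: reject anything that looks like it deletes files.
def reviewer(code: str) -> bool:
    banned = ("shutil.rmtree", "os.remove", "rm -rf")
    return not any(token in code for token in banned)

print(run_with_approval("print('hello')", reviewer))
print(run_with_approval("import shutil; shutil.rmtree('/')", reviewer))
```

      The string filtering is purely illustrative; the real lesson from the thread is not to grant the tool destructive permissions in the first place.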

      • Landless2029@lemmy.world · +7/−1 · 4 days ago

        The problem is the owning class wants to cut out the human element so badly that they keep letting tools run wild.

      • wonderingwanderer@sopuli.xyz · +1 · 4 days ago

        The point of what? The push for AI in industry?

        You’d have to ask someone else. I can only make conjectures, but I’d say it has something to do with companies feeling the need to justify to their shareholders that their investments in AI were worth it, so they double down on the sunk cost fallacy. Or maybe those shareholders also own stock in big-name AI companies. It’s hard to say exactly…

    • dream_weasel@sh.itjust.works · +2 · edited · 4 days ago

      It’s just negligence. Power tools injure people, and people are stupid. The technology is alluring and people make dumb mistakes. There’s no deeper motive here. And since you admit you’re not even an amateur, I’ll just tell you that you’re giving these models way less credit than they deserve by calling them purely probabilistic, and way more credit than they deserve by trying to assert some kind of malicious incentive on Anthropic’s part.

      These bastards are hard to make, and they have a lot of layers (not like NN layers, but training steps). They are, however, definitely better at programming than you or your buddy or any commenter here, and they lure you into a false sense of security before making a colossal fuck up.