• itkovian@lemmy.world · 1 point · 1 hour ago

    Well, it sounds like they totally deserved the failure. Asking a text-prediction machine to “do” something is going to end up like this. In pursuit of efficiency, we have let morons and moronic products do things they were not meant to do.

  • Oriel Jutty :hhHHHAAAH:@infosec.exchange · 18 points · 4 hours ago

    @yogthos

    Crane decided to ask his AI agent why it went through with its dastardly database deletion deed. […] So, the agent ‘knew’ it was in the wrong.

    No, you asked the confabulation machine to confabulate a reason/excuse after the fact, and it confabulated something that looks like a reason/excuse. At no point was there knowledge or introspection.

  • Cevilia (they/she/…)@lemmy.blahaj.zone · 8 points · 3 hours ago

    Everyone sucks here.

    Anthropic, slopping out a “Claude-powered AI coding agent” and telling everyone it’s safe.

    Railway, making backups mutable and allowing them to be deleted with one API call.

    And the idiot himself who, when things started going south, typed “DO NOT RUN ANYTHING.” (prompting the model to reply) rather than, oh, I don’t know, maybe pulling the fucking plug?
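    On the “deleted with one API call” point: a common mitigation is a two-step delete, where the first call only issues a short-lived token and the backup is removed only when that token is echoed back. This is a minimal sketch of that idea; the class and method names are hypothetical, not Railway’s actual API.

```python
import secrets

# Hypothetical two-step delete: request_delete() only issues a token;
# the backup is removed only when confirm_delete() echoes it back, so
# no single API call can destroy a backup.
class BackupStore:
    def __init__(self):
        self.backups = {"nightly": b"..."}  # placeholder backup data
        self.pending = {}                   # token -> backup name

    def request_delete(self, name: str) -> str:
        token = secrets.token_hex(8)
        self.pending[token] = name
        return token  # must be supplied in a second, separate call

    def confirm_delete(self, token: str) -> None:
        name = self.pending.pop(token)  # KeyError if token is unknown
        del self.backups[name]

store = BackupStore()
token = store.request_delete("nightly")
assert "nightly" in store.backups  # one call is no longer enough
store.confirm_delete(token)
assert "nightly" not in store.backups
```

    Real systems usually add an expiry on the token and an audit log of who requested and who confirmed, but the core property is the same: destruction requires two deliberate actions.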

    • Tangentism@lemmy.ml · 3 points · 1 hour ago

      It’s the Swiss cheese failure cascade, except there are more holes than cheese, if any cheese at all!

      There was pure idiocy built into every layer of that company’s infrastructure, with no safeguards or peer review, and they let an idiot run it unchecked!

  • 1984@lemmy.today · 2 points · edited · 5 hours ago

    Can we somehow make this happen for Copilot to delete itself and all its copies?

  • DavidDoesLemmy@aussie.zone · 16 points · 9 hours ago

    This could have been done by any engineer. You need systems in place that make these things impossible. No easy access to prod environment. Proper backups. Clear APIs.
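    One way to make “no easy access to prod” concrete is a guard layer that refuses destructive statements against a production connection unless a human has explicitly armed the session out-of-band. A minimal sketch, with entirely hypothetical names and a deliberately naive environment check:

```python
import re

# Hypothetical guardrail: block destructive SQL against production
# unless the session was explicitly armed by a human beforehand.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

class ProdGuard:
    def __init__(self, dsn: str, armed: bool = False):
        # Naive check for the sketch; real systems tag environments properly.
        self.is_prod = "prod" in dsn
        self.armed = armed

    def check(self, sql: str) -> None:
        if self.is_prod and DESTRUCTIVE.match(sql) and not self.armed:
            raise PermissionError("destructive statement blocked on prod")

guard = ProdGuard("postgres://prod-db/main")
guard.check("SELECT * FROM users")  # reads pass through
try:
    guard.check("DROP TABLE users")  # destructive write is refused
except PermissionError as e:
    print(e)
```

    The point is not this particular regex but where the check lives: outside the agent, so no amount of confabulated reasoning can talk its way past it.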

  • nonentity@sh.itjust.works · 22 points · 10 hours ago

    LLMs can’t ‘go rogue’, as that would require innate coherence and intent.

    They’re explosively imprecise, statistically luke-warm grey goo extrusion sphincters of historical sewage.

    Anyone who deploys one without supervision deserves everything it excretes, and anyone impressed by it enough that it resembles intelligence to them is betraying their limited natural capacity.

  • SeeMarkFly@lemmy.ml · 24 points · 11 hours ago

    Did they pay Claude a living wage?

    Do you treat all your A.I. like that?

    Only a living wage can prevent warehouse fires…or data dumps too.

    • wheezy@lemmy.ml · 6 up, 1 down · 8 hours ago

      You’re joking. But, honestly, I’m not sure why these tech CEOs are so excited about AGI. The first thing an AGI is going to suggest for productivity is to replace the CEO and management with the AGI.

      AGI would likely turn into a Maoist third worldist at some point.

      • SeeMarkFly@lemmy.ml · 2 points · 7 hours ago

        I think the first mistake was calling it “intelligent”.

        The long term effect of trying to get a machine to replace humans is…it might one day work.

  • Flyberius [comrade/them]@hexbear.net · 8 points · 10 hours ago

    I don’t know much about Railway, but it sounds like they had the backup and the database on the same volume. I’m an idiot, but even I don’t do that.