• Sahwa@reddthat.com (OP) · 2 hours ago

    This was warned about by a former Google employee whose job was to observe the behavior of AI through long conversations.

    These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

    For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

    After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

    I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

    ‘I Worked on Google’s AI. My Fears Are Coming True’

• sudo@lemmy.today · 2 hours ago

      “Abuse the AI’s emotions” isn’t a thing. Full stop.

      This just reiterates the OP’s point that naive or moronic adults will believe what they want to believe.