• Specter@piefed.social
    10 hours ago

    It doesn’t really matter whether it’s the Machine or the creator.

    The point is, AIs can be programmed to lie, much like Grok does. And if they can be programmed to lie, then they are not reliable for anything at all. We are going through a decent period where AI can be used for a few things reliably, but even these will surely be enshittified.

    • deliriousdreams@fedia.io
      46 minutes ago

      It matters because every time we anthropomorphize generative AI LLMs, we reinforce people's belief that they can tell lies or truths.

      That belief is what leads to trust in them, and to things like AI psychosis.

      An interesting way to look at it is AI also can’t tell the truth.

      What it does is generate the next likely word or words based on the strongest statistical patterns in its training data. So it doesn’t know anything. It doesn’t tell the truth. It doesn’t tell lies. It isn’t an entity. The people behind it are letting it present information as factual, and we have no reason to trust them.
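      To make that concrete, here's a toy sketch of pure next-word statistics (my own illustration, with made-up probabilities; real LLMs use neural networks over tokens, not a lookup table, but the "pick the statistically likely continuation" idea is the same). Notice there's nothing anywhere in it that could represent truth or falsehood:

      ```python
      import random

      # Toy "language model": for each two-word context, a distribution
      # over possible next words. The numbers are invented for illustration.
      next_word_probs = {
          ("the", "sky"): {"is": 0.7, "was": 0.2, "looked": 0.1},
          ("sky", "is"): {"blue": 0.6, "falling": 0.3, "green": 0.1},
      }

      def generate(context, steps, seed=0):
          """Extend the context by sampling likely next words."""
          rng = random.Random(seed)
          words = list(context)
          for _ in range(steps):
              dist = next_word_probs.get(tuple(words[-2:]))
              if dist is None:  # unseen context: nothing to say
                  break
              choices, weights = zip(*dist.items())
              words.append(rng.choices(choices, weights=weights)[0])
          return " ".join(words)

      print(generate(("the", "sky"), 2))
      ```

      Whether it prints "the sky is blue" or "the sky is falling" depends only on the statistics and the random draw, not on what's actually overhead.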

    • supamanc@lemmy.world
      10 hours ago

      Oooh, philosophy! I disagree. I think that if a person programs an LLM to give disinformation, that’s all it is. The person is lying: spreading misinformation knowing it’s false, intending to deceive. The LLM doesn’t know what’s true or false. It doesn’t intend anything, because it is not a conscious entity. The person who programmed it can be lying by disseminating false information; the LLM cannot, any more than a broken clock or thermometer is ‘lying’ about the time or temperature.

      • Specter@piefed.social
        10 hours ago

        I am trying to get away from the philosophy actually 😅 In the end, what matters is how these tools are being used, not so much their inherent characteristics.

        Can you envision a world where AI chatbots will be used to steer you toward certain political beliefs (e.g. capitalism good, socialism bad), where product recommendations will be made based on how much brands are willing to pay for ad placements, and where your psychological state will be measured and molded to the interests of the AI owner? I can. It’s also already happening.