• Crackhappy@lemmy.world · 11 minutes ago

      I can’t keep up. Did you know that ostriches bury their head in the sand to avoid vipers? Vipers can’t see prey if their heads are obscured.

  • supamanc@lemmy.world · 12 hours ago

    Not technically lies, as to lie there has to be an intent to deceive. LLMs don’t have any intentions.

    • 8oow3291d@feddit.dk · 5 minutes ago

      LLMs don’t have any intentions.

      Eh. The output from LLMs is usually pretty goal-oriented, so it arguably has intentions.

      The LLM is not designed to deceive though, so in that sense it is correct that it is not lies.

      • Leon@pawb.social · 11 hours ago

        Don’t know if I’d call that an intention of the machine but rather the creator. Hate to be that kind of person but it’s similar to the whole thing of “guns don’t kill, people do.”

        LLMs aren’t people. They’re not self-aware and don’t have any inner complexities like say, a dog, or a sheep has. There’s no drive or motivation. It’s just maths.

        If you tie someone to a train track, and a train comes along killing them, it’s not like the train or the track intended to kill the person. That was the intent of you, who “programmed” the scenario.

        Similar to guns, strict control is what will be needed to fix these kinds of things. Megalomaniac billionaires who see people as nothing but numbers running amok with narcissistic manipulator systems isn’t a recipe for anything good.

        • Specter@piefed.social · 9 hours ago

          It doesn’t really matter whether it’s the Machine or the creator.

          The point is, AIs can be programmed to lie, much like Grok does. And if they can be programmed to lie, then they are not reliable for anything at all. We are going through a decent period where AI can be used for a few things reliably, but even these will surely be enshittified.

          • supamanc@lemmy.world · 8 hours ago

            Oooh, philosophy! I disagree. I think that if a person programs an LLM to give disinformation, that’s all it is: a lie, spreading misinformation while knowing it’s disinformation, intending to deceive. The LLM doesn’t know what’s true or false. It doesn’t intend anything, because it is not a conscious entity. The person who programmed it can be lying by disseminating false information; the LLM cannot, any more than a broken clock or thermometer is ‘lying’ about the time or temperature.

            • Specter@piefed.social · 8 hours ago

              I am trying to get away from the philosophy actually 😅 in the end what matters is how these tools are being used, not so much their inherent characteristics.

              Can you envision a world where AI chatbots will be used to lead you toward certain political beliefs (e.g. capitalism good, socialism bad), product recommendations will be made based on how much brands are willing to pay for ad placement, and your psychological state will be measured and molded to serve the interests of the AI’s owner? I can. It’s also already happening.

        • HairyHarry@lemmy.world · 10 hours ago

          Ok, technically you are correct. Still, they are lies, or let’s call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.

          • WhatAmLemmy@lemmy.world · 8 hours ago (edited)

            What you’re calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don’t know what anything is; all they “know” is mathematical patterns in 1s and 0s.

            They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs are is weaponized incompetence, for profit. Capitalists know they output false information, and they don’t care, because their only goal is profit and power.

            • hesh@quokk.au · 6 hours ago

              If Google knows it outputs falsehoods and lets it continue, it becomes purposeful. That makes them lies in my book.