• XLE@piefed.social · 1 day ago

      In AI, a “hallucination” is just as much “there” as a non-“hallucination”: the model produces both by exactly the same process. The term is a way for scientists to stamp their feet and insist that wrong output is the computer’s fault rather than a natural consequence of how LLMs work.

    • [deleted]@piefed.world · 1 day ago

      Hallucination requires perception. LLMs are just statistical models; they do not perceive anything.

      It was a cute name early on; now it is used to deflect blame when the output is just plain wrong.