• cub Gucci@lemmy.today
    17 hours ago

    If I understand it correctly, a hallucination is not just any mistake. LLMs make mistakes, and that is the primary reason I don’t use them for my coding job.

    About a year ago, ChatGPT made up a Python library, with a made-up API, to solve the particular problem I had asked about. The most recent hallucination I can recall was its claim that manual is a keyword in PostgreSQL, which it is not.
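
    For what it’s worth, that last claim is easy to check against the server itself: PostgreSQL exposes its keyword list through the pg_get_keywords() function. A minimal Python sketch of that check, assuming a reachable local database and the psycopg2 driver (neither is mentioned above; both are just illustrative choices):

        # Check whether "manual" appears in PostgreSQL's keyword list.
        # Assumes a local PostgreSQL instance and psycopg2 (pip install psycopg2-binary).
        import psycopg2

        conn = psycopg2.connect("dbname=postgres")  # hypothetical connection string
        with conn, conn.cursor() as cur:
            cur.execute(
                "SELECT word, catdesc FROM pg_get_keywords() WHERE word = %s",
                ("manual",),
            )
            # Prints [] because "manual" is not a PostgreSQL keyword.
            print(cur.fetchall())
        conn.close()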

    • Holytimes@sh.itjust.works
      12 hours ago

      It’s more that the hallucinations come from the fact that we have trained them to be unable to admit failure or incompetence.

      Humans produce the exact same “hallucinations” if you give them a job and then tell them they are never allowed, for any reason, to admit to not knowing something.

      You end up with only the people who are willing to lie, bullshit, and sound incredibly confident.

      We literally reinvented the politician with LLMs.

      None of the big models are trained to be actually accurate, only to give results no matter what.

    • DireTech@sh.itjust.works
      16 hours ago

      What is a hallucination, if not an AI being confidently mistaken and making up something that is not true?