• DireTech@sh.itjust.works · 18 hours ago

    Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation, and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well documented, how is it doing on everything else?
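
    The comment doesn’t say what exactly the AI got wrong about sets, so the sketch below (TypeScript, runnable with ts-node or Deno) only illustrates behaviour that is commonly misstated: duplicates of primitives are collapsed, membership uses reference equality for objects, and the built-in set-operation methods only exist in very recent engines.

        // Duplicate primitives are collapsed.
        const nums = new Set([1, 2, 2, 3]);
        console.log(nums.size); // 3

        // Membership uses SameValueZero equality, so two object literals with
        // identical contents are still two distinct members.
        const objs = new Set([{ id: 1 }, { id: 1 }]);
        console.log(objs.size); // 2

        // Portable intersection (Set.prototype.intersection only exists in
        // very recent engines).
        const a = new Set([1, 2, 3]);
        const b = new Set([2, 3, 4]);
        const both = new Set([...a].filter((x) => b.has(x)));
        console.log([...both]); // [2, 3]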

    • IsoKiero@sopuli.xyz · 5 hours ago

      Just a few days ago I tried to feed my home automation logs to Copilot in hopes that it might find a reason why my controller jams randomly multiple times per hour. It confidently claimed that, as the noise level reported by the controller is -100dB (so basically there’s absolutely nothing else on that frequency around, pretty much as good as it can get), noise is the problem and I should physically move the controller to a less noisy area. Decent advice in itself, and it might actually help in a lot of cases, but in my scenario it’s a completely wrong rabbit hole to dig into. I might still move the thing around to get better reception on some devices, but it doesn’t explain why the whole controller freezes for several minutes at random intervals.
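
      If the whole controller freezes, the log itself should show silent gaps at those moments. A minimal sketch of how one might surface them, assuming timestamped lines; the filename controller.log, the ISO-style timestamp format, and the 3-minute threshold are all placeholder assumptions:

          import { readFileSync } from "node:fs";

          const GAP_MINUTES = 3; // flag silences longer than this

          const lines = readFileSync("controller.log", "utf8").split("\n");
          let prev: Date | null = null;

          for (const line of lines) {
            // Expect a timestamp like 2025-01-02 03:04:05 at the start of the line.
            const m = line.match(/^(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2})/);
            if (!m) continue;
            const ts = new Date(m[1].replace(" ", "T"));
            if (prev) {
              const gap = (ts.getTime() - prev.getTime()) / 60000;
              if (gap > GAP_MINUTES) {
                console.log(`${gap.toFixed(1)} min of silence before ${m[1]}`);
              }
            }
            prev = ts;
          }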

    • SocialMediaRefugee@lemmy.world · 14 hours ago

      I use them at work to get instructions on running processes, and no matter how detailed I am (“It is version X, the OS is Y”), they still give me commands that don’t work on my version, bad error-code analysis, etc.

    • cub Gucci@lemmy.today · 17 hours ago

      A hallucination is not just any mistake, if I understand it correctly. LLMs make plenty of mistakes, and that’s the primary reason I don’t use them for my coding job.

      About a year ago, ChatGPT made up a Python library with a made-up API to solve the particular problem I asked about. The most recent hallucination I can recall was it claiming that manual is a keyword in PostgreSQL, which it is not.
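
      For a claim like that, PostgreSQL can answer directly via its pg_get_keywords() catalog function. A hedged sketch using the pg npm client; the connection string is a placeholder:

          import { Client } from "pg";

          async function isPgKeyword(word: string): Promise<boolean> {
            const client = new Client({ connectionString: process.env.DATABASE_URL });
            await client.connect();
            // pg_get_keywords() lists every keyword the server recognises.
            const res = await client.query(
              "SELECT 1 FROM pg_get_keywords() WHERE word = $1",
              [word.toLowerCase()],
            );
            await client.end();
            return (res.rowCount ?? 0) > 0;
          }

          isPgKeyword("manual").then((found) => {
            console.log(found ? "keyword" : "not a keyword"); // expected: not a keyword
          });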

      • Holytimes@sh.itjust.works · 12 hours ago

        It’s more that the hallucinations come from the fact that we’ve trained them to be unable to admit failure or incompetence.

        Humans produce the exact same “hallucinations” if you give them a job and then tell them they’re never allowed, for any reason, to admit they don’t know something.

        You end up with only the people willing to lie, bullshit, and sound incredibly confident.

        We literally reinvented the politician with LLMs.

        None of the big models are trained to be actually accurate, only to give results no matter what.

      • DireTech@sh.itjust.works · 16 hours ago

        What is a hallucination if not the AI being confidently wrong by making up something that isn’t true?