• lIlIlIlIlIlIl@lemmy.world · +1 / −18 · 3 days ago

    Wish we could go back to web search when every answer was 100% correct and this would never ever happen

    Curse you AI for allowing lies on the internet!

  • Glide@lemmy.ca · +20 / −1 · 3 days ago

      This sarcasm is completely unwarranted.

      People recognized that random answers on the internet were likely to come from questionable sources. AI answers lend an air of authority to what is being said, and then back up that authority with confident speech patterns we are trained to trust.

      Web searches were naturally met with scrutiny, but LLMs lean into habits and patterns that subtly convince us they are right.

    • FosterMolasses@leminal.space · +5 · 2 days ago

        “AI answers provide a sense of authority to what is being said”

        I think that’s the real issue: people have consumed, with zero discernment, the marketing ploy that AI is infallible, when in actuality its output is more questionable than that of any other mechanism so far.

        If people would simply grasp this, then it wouldn’t be such a big deal. No one had to ban Photoshop for us to eventually catch on that not all photos shared online are authentic lol

    • lIlIlIlIlIlIl@lemmy.world · +2 / −12 · 3 days ago

        This sarcasm is completely and wholly warranted.

        Did people recognize random answers on the internet as lies when the internet was new? Of course not. We collectively grew that skill organically as we all came online.

        The next version is completely different - and exactly the same.

      • [deleted]@piefed.world · +8 / −1 · 3 days ago

          With web searches, we learned which sources were likely to be reliable and were able to dismiss the obviously shitty sites.

          How do you learn to identify which answers from the same LLM are likely to be wrong?