• anomnom@sh.itjust.works · 6 hours ago

    Yeah, thinking that these things have actual knowledge is wrong. I’m pretty sure that even if an LLM had only ever ingested (heh) data saying these things were deadly, once it has ingested (still funny) other information about controversially deadly things, it might apply that model to unrelated data, especially if you ask whether it’s controversial.

    • luciferofastora@feddit.org · 5 hours ago

      They have knowledge: the probability of words and phrases appearing in a larger context of other phrases. They probably have knowledge of language patterns far more extensive than most humans’. That’s why they’re so good at coming up with texts for a wide range of prompts. They know how to sound human.

      That in itself is a huge achievement.
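
      To make that concrete, here’s a minimal sketch of what that kind of “knowledge” looks like under the hood, using GPT-2 via the Hugging Face transformers library (my choice of model and prompt, just for illustration): given some text, all the model produces is a probability distribution over possible next tokens.

      ```python
      # Rough illustration: the model's "knowledge" is a probability
      # distribution over the next token, given the text so far.
      import torch
      from transformers import AutoTokenizer, AutoModelForCausalLM

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      prompt = "The capital of France is"
      inputs = tokenizer(prompt, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

      # Probability of each possible next token after the prompt.
      next_token_probs = torch.softmax(logits[0, -1], dim=-1)
      top = torch.topk(next_token_probs, k=5)

      for prob, token_id in zip(top.values, top.indices):
          print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
      ```

      Typically “ Paris” tops the list, not because the model checked a fact anywhere, but because it’s by far the most probable continuation of that string of words.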

      But they don’t know the semantics, the world-context outside of the text, or why it’s critical that a certain section of the text refer to an actually extant source.

      The pitfall here is that users might not be aware of this distinction. Even if they are, they might not have the knowledge themselves to verify the output. It feels obvious that this machine is smart enough to understand me and respond appropriately, but we have to be aware of just which kind of smart we’re talking about.