• tal@lemmy.today
    6 hours ago (edited)

    You can get a wrong answer with 100% token confidence, and a correct one with 0.000001% confidence.

    If everything that I’ve seen in the past has said that 1+1 is 4, then sure, I’m going to say that 1+1 is 4, and I’ll be confident in that.

    But if I’ve seen multiple sources of information that state differing things, say, half of what I’ve seen says that 1+1 is 4 and the other half says that 1+1 is 2, then I can expose that disagreement to the user.
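
    Something like this, conceptually. The numbers below are invented for illustration, and real prompts and tokenizations are obviously messier, but the idea of surfacing a split next-token distribution instead of silently picking a side is the same:

    ```python
    # Toy sketch, not a real model API: a hand-written next-token distribution
    # for the prompt "1+1=", reflecting conflicting training data.
    next_token_probs = {
        "4": 0.49,   # half of what the model has seen said this
        "2": 0.48,   # the other half said this
        "3": 0.02,
        "5": 0.01,
    }

    top_token, top_p = max(next_token_probs.items(), key=lambda kv: kv[1])

    if top_p > 0.9:
        print(f"1+1 is {top_token}.")
    else:
        # Expose the disagreement instead of silently committing to one answer.
        top_two = sorted(next_token_probs.items(), key=lambda kv: -kv[1])[:2]
        print("My sources disagree: " +
              " vs ".join(f"{tok} ({p:.0%})" for tok, p in top_two))
    ```

    Run as written, that prints “My sources disagree: 4 (49%) vs 2 (48%)”, which is the sort of hedged answer I’m talking about.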

    I do think that Aceticon raises a fair point: fully capturing uncertainty probably needs a higher level of understanding than an LLM directly generating text from its knowledge store is going to have. For example, having many ways of phrasing a response will also spread probability across tokens and reduce the apparent confidence, even when the phrasings are semantically compatible. Being on the edge between saying that, oh…an object is “white” or “eggshell” will also reduce the confidence derived from token probability, even though the two responses are more-or-less semantically identical in the context of the given conversation.
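
    To make that concrete, here is a toy sketch with invented numbers: the raw top-token confidence looks low because “white” and “eggshell” split the probability, and it only recovers if something groups the tokens by meaning, which is the part the model doesn’t get for free:

    ```python
    # Invented next-token distribution for "The wall is ...".
    next_token_probs = {
        "white":    0.45,
        "eggshell": 0.40,
        "blue":     0.10,
        "green":    0.05,
    }

    # Raw token-level confidence: looks uncertain even though the model
    # "agrees with itself" about the colour.
    raw_confidence = max(next_token_probs.values())           # 0.45

    # Meaning-level confidence: group tokens that are equivalent in this
    # context. The grouping is simply assumed here; producing it is the
    # hard, non-trivial part.
    equivalence_classes = {
        "white": "off-white", "eggshell": "off-white",
        "blue": "blue", "green": "green",
    }

    meaning_probs: dict[str, float] = {}
    for tok, p in next_token_probs.items():
        cls = equivalence_classes[tok]
        meaning_probs[cls] = meaning_probs.get(cls, 0.0) + p

    semantic_confidence = max(meaning_probs.values())         # 0.85

    print(f"token-level confidence:   {raw_confidence:.2f}")
    print(f"meaning-level confidence: {semantic_confidence:.2f}")
    ```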

    There’s probably enough information available to an LLM to apply heuristics for whether two different sentences are semantically equivalent, but you wouldn’t be able to do that efficiently with a trivial change.
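
    Roughly the shape such a heuristic might take: sample several answers, cluster the ones that appear to mean the same thing, and report how much of the mass the biggest cluster gets. The similar() check below is a toy word-overlap stand-in that I made up; it can’t tell that “white” and “eggshell” mean the same thing here, which is exactly why a trivial change won’t cut it:

    ```python
    # Toy agreement check over sampled responses. A real system would need
    # embeddings or an entailment model for similar(), which is the expensive,
    # non-trivial part.
    def similar(a: str, b: str, threshold: float = 0.8) -> bool:
        # Crude stand-in: fraction of shared words (Jaccard similarity).
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1) >= threshold

    def agreement(samples: list[str]) -> float:
        # Greedy clustering: put each sample into the first cluster whose
        # representative it resembles, then measure the biggest cluster.
        clusters: list[list[str]] = []
        for s in samples:
            for c in clusters:
                if similar(s, c[0]):
                    c.append(s)
                    break
            else:
                clusters.append([s])
        return max(len(c) for c in clusters) / len(samples)

    samples = [
        "The wall is white.",
        "The wall is eggshell.",   # the toy check misses this equivalence
        "The wall is white.",
        "The wall is blue.",
    ]
    print(f"agreement: {agreement(samples):.0%}")   # 50%, understating it
    ```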

    • ThirdConsul@lemmy.zip
      5 hours ago

      You do realise that prompts to and responses from the LLM are not as simple as what you wrote, “1+1=?”. The context window is growing for a reason. And LLMs don’t have a two-dimensional probability of the next token?