• dream_weasel@sh.itjust.works · 2 hours ago

    The embedding layer after tokenization is not just a probability machine in the way you're suggesting. You can argue that it's probabilistic when it comes to inferred sentiment, but too many people assume it works the way text prediction on your phone does, and that is just factually inaccurate.

    Verify the output, of course, but saying "it doesn't understand anything" and calling it a "probability machine" is a borderline erroneous short sell. At the level of tokens it "understands" relationships, and those relationships are not themselves probabilistic, though they are fundamentally approximated from a training corpus.
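
    To make that concrete, here's a minimal sketch (toy, made-up vectors, not weights from any real model) of what "relationships at the level of tokens" means: tokens that occur in similar contexts end up with embedding vectors that point in similar directions, which you can measure with cosine similarity.

    ```python
    # Toy illustration of relationships in embedding space.
    # These 4-dimensional vectors are made up for the example; real models
    # learn embeddings with hundreds or thousands of dimensions.
    import numpy as np

    embeddings = {
        "king":  np.array([0.9, 0.8, 0.1, 0.3]),
        "queen": np.array([0.9, 0.7, 0.2, 0.9]),
        "apple": np.array([0.1, 0.2, 0.9, 0.4]),
    }

    def cosine(a, b):
        """Cosine similarity: close to 1.0 means same direction, near 0.0 means unrelated."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high: related tokens
    print(cosine(embeddings["king"], embeddings["apple"]))  # lower: unrelated tokens
    ```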

    • hesh@quokk.au · 2 hours ago

      Can you explain how it’s more than probability? It’s using a neural network to guess the most likely next token, isn’t it?

      • Canigou@jlai.lu · 2 hours ago (edited)

        You could also say that it chooses which word it will say to you next. It has a few candidate words to choose from, selected in relation to the previously spoken words, your question, and previous interactions (the context). The probability you're talking about (a number) could also be seen as its preference among those words. I'm not sure the probability vocabulary/analogy is necessarily the best one. The best might be to not use any analogy at all, but then you have to dig deeper into the subject to form an informed opinion. This series of videos explains it better than I can: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
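
        If it helps, here is a minimal sketch (made-up scores and a tiny four-word vocabulary, purely for illustration) of that last step: the network produces a score for every candidate token, a softmax turns those scores into the probabilities we're talking about, and the next token is sampled from that distribution.

        ```python
        # Toy illustration of turning raw scores into a next-token choice.
        import numpy as np

        vocab = ["cat", "dog", "car", "the"]        # pretend candidate tokens
        logits = np.array([2.1, 1.9, -0.5, 0.3])    # made-up raw scores from the network

        # Softmax: exponentiate and normalise so the scores sum to 1.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()

        for token, p in zip(vocab, probs):
            print(f"{token}: {p:.2f}")

        # The "preference" is realised by sampling one token from the distribution.
        next_token = np.random.choice(vocab, p=probs)
        print("chosen:", next_token)
        ```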