• AbouBenAdhem@lemmy.world · edited · 15 hours ago

I think it does accurately model the part of the brain that forms predictions from observations—including predictions about what a speaker is going to say next, which lets human listeners focus on the surprising/informative parts. But with an LLM, we just keep feeding the model its own output as if it were a third party whose next words it’s trying to predict.

    It’s like a child describing an imaginary friend, if you keep repeating “And what does your friend say after that?”